Rebecca Petras
A system of collective action is necessary to help tech workers safely speak out about concerns
Mikolaj Kniejski
Conduct ACE-style cost-effectiveness analyses of technical AI safety orgs.
Murray Buchanan
Leveraging AI to enable coordination without demanding centralization
PauseAI US
SFF main round did us dirty!
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
Michel Justen
Help turn the video from an amateur side-project into an exceptional, animated distillation
Tyler Johnston
AI-focused corporate campaigns and industry watchdog
Ekō
Case Study: Defending OpenAI's Nonprofit Mission
Piroska Rakoczi
I would investigate whether the LLMs used in AI agents show any signs of personhood.
Dr Waku
CANCELLED Cover anticipated costs for making videos in 2025
David Conrad
We are fostering the next generation of AI Policy professionals through the Talos Fellowship. Your help will directly increase the number of places we can offer
Joep Meindertsma
Help the largest AI activist group grow
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Jordan Braunstein
Combining Kickstarter-style functionality with transitional anonymity to decrease the risk and raise the expected value of participating in collective action.
AI Safety and Governance Fund
Educating the general public about AI and its risks in the most efficient ways, and leveraging this to achieve good policy outcomes
Constance Li
Field building on AI's impact on nonhumans
Alex Lintz
Mostly retroactive funding for prior work on AI safety comms strategy as well as career transition support.
Gaurav Yadav
Explainer videos for ideas in AI governance
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Apart Research
Support the growth of an international AI safety research and talent program