Remmelt Ellen
Cost-efficiently support new careers and new organisations in AI Safety.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven, and other Lightcone projects
ampdot
Community exploring and predicting potential risks and opportunities arising from a future that involves many independently controlled AI systems
Ryan Kidd
Help us support more research scholars!
Michaël Rubens Trazzi
How California became ground zero in the global debate over who gets to shape humanity's most powerful technology
PIBBSS
Fund unique approaches to research, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team
Jørgen Ljønes
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Apollo Research
Hire 3 additional AI safety research engineers / scientists
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Dan Hendrycks
Kunvar Thaman
Apart Research
Incubate AI safety research and develop the next generation of global AI safety talent via research sprints and research fellowships
6-month funding for a team of researchers to assess a novel AI alignment research agenda that studies how structure forms in neural networks
Joseph Bloom
Trajectory Models and Agent Simulators
Lucy Farnik
6-month salary for interpretability research focusing on probing for goals and "agency" inside large language models
Rubi Hudson
Fazl Barez
Siao Si Looi
12-month funding for 3 people to work full-time on projects supporting AI safety efforts
Garrett Baker