Main points in favor of this grant
I want to see more public impact analysis of AI safety organizations. To my knowledge, there has been very little of this to date.
Donor's main reservations
Mikolaj has some experience with impact analysis, but I suspect that this project will be quite hard and might need more funding.
Process for deciding amount
I decided to partially fund this project to leave room for other donors to contribute and to mitigate my conflict of interest.
Conflicts of interest
I am Co-Director at MATS, a Board Member at LISA, and an advisor to several AI safety projects, including Catalyze Impact and AI Safety ANZ. I often work out of FAR Labs. As a Manifund Regrantor, I regularly fund AI safety projects, and their success is probably a determining factor in my future status as a Regrantor. To mitigate these obvious conflicts of interest, I ask that Mikolaj share his key results in a public database and post, so that any discrepancies can be identified.