Vincent Weisser

@vincentweisser

Focused on open/decentralized AGI, alignment, and scientific progress.

vincentweisser.com
Total balance: $0
Charity balance: $0
Cash balance: $0
Pending offers: $0

Outgoing donations

Luthien · $200 · 2 months ago
Luthien · $200 · 2 months ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents · $250 · 3 months ago
Investigating and informing the public about the trajectory of AI · $250 · 3 months ago
human intelligence amplification @ Berkeley Genomics Project · $100 · 3 months ago
Attention-Guided-RL for Human-Like LMs · $100 · 3 months ago
human intelligence amplification @ Berkeley Genomics Project · $500 · 3 months ago
AI-Driven Market Alternatives for a post-AGI world · $115 · 4 months ago
AI-Driven Market Alternatives for a post-AGI world · $100 · 4 months ago
MATS Program · $200 · 4 months ago
Lightcone Infrastructure · $100 · 4 months ago
Next Steps in Developmental Interpretability · $200 · 4 months ago
10th edition of AI Safety Camp · $200 · 4 months ago
Biosecurity bootcamp by EffiSciences · $100 · 4 months ago
SafePlanBench: evaluating a Guaranteed Safe AI Approach for LLM-based Agents · $200 · 4 months ago
Investigating and informing the public about the trajectory of AI · $200 · 4 months ago
Making 52 AI Alignment Video Explainers and Podcasts · $50 · over 1 year ago
AI Safety Research Organization Incubator - Pilot Program · $200 · over 1 year ago
AI Safety Research Organization Incubator - Pilot Program · $277 · over 1 year ago
AI Safety Research Organization Incubator - Pilot Program · $500 · over 1 year ago
Scaling Training Process Transparency · $150 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $100 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $10 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $100 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $790 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $1,000 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $210 · over 1 year ago
Cadenza Labs: AI Safety research group working on own interpretability agenda · $500 · over 1 year ago
Exploring novel research directions in prosaic AI alignment · $200 · over 1 year ago
MATS Program · $300 · over 1 year ago
MATS Program · $500 · over 1 year ago
Empirical research into AI consciousness and moral patienthood · $50 · over 1 year ago
Empirical research into AI consciousness and moral patienthood · $70 · over 1 year ago
Run five international hackathons on AI safety research · $100 · over 1 year ago
Avoiding Incentives for Performative Prediction in AI · $50 · over 1 year ago
AI Alignment Research Lab for Africa · $150 · over 1 year ago
AI Alignment Research Lab for Africa · $100 · over 1 year ago
AI Alignment Research Lab for Africa · $150 · over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $100 · over 1 year ago
Avoiding Incentives for Performative Prediction in AI · $100 · over 1 year ago
Discovering latent goals (mechanistic interpretability PhD salary) · $150 · over 1 year ago
Introductory resources for Singular Learning Theory · $50 · over 1 year ago
Holly Elmore organizing people for a frontier AI moratorium · $100 · over 1 year ago
Recreate the cavity-preventing GMO bacteria BCS3-L1 from precursor · $50 · over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $150 · over 1 year ago
Activation vector steering with BCI · $150 · over 1 year ago
Avoiding Incentives for Performative Prediction in AI · $50 · over 1 year ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $70 · almost 2 years ago
Alignment Is Hard · $70 · almost 2 years ago
Introductory resources for Singular Learning Theory · $70 · almost 2 years ago
WhiteBox Research: Training Exclusively for Mechanistic Interpretability · $100 · almost 2 years ago
Compute and other expenses for LLM alignment research · $100 · almost 2 years ago
Optimizing clinical Metagenomics and Far-UVC implementation · $100 · almost 2 years ago
Apollo Research: Scale up interpretability & behavioral model evals research · $160 · almost 2 years ago
Apollo Research: Scale up interpretability & behavioral model evals research · $250 · almost 2 years ago
Run five international hackathons on AI safety research · $250 · almost 2 years ago
Holly Elmore organizing people for a frontier AI moratorium · $100 · almost 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary) · $400 · almost 2 years ago
Discovering latent goals (mechanistic interpretability PhD salary) · $40 · almost 2 years ago
Scoping Developmental Interpretability · $45 · almost 2 years ago
Scoping Developmental Interpretability · $1,000 · almost 2 years ago
Scoping Developmental Interpretability · $455 · almost 2 years ago
Joseph Bloom - Independent AI Safety Research · $250 · almost 2 years ago
Joseph Bloom - Independent AI Safety Research · $100 · almost 2 years ago
Joseph Bloom - Independent AI Safety Research · $50 · almost 2 years ago
Agency and (Dis)Empowerment · $250 · almost 2 years ago
Isaak Freeman · $100 · almost 2 years ago
Medical Expenses for CHAI PhD Student · $43 · almost 2 years ago
Long-Term Future Fund · $50 · almost 2 years ago

Comments

Ozempic for Sleep: Research for Safely Reducing Sleep Needs
Vincent Weisser · 5 months ago

Important research project! Isaak and Helena are awesome and are assembling a great team that should make progress on it.

Cadenza Labs: AI Safety research group working on own interpretability agenda
Vincent Weisser · over 1 year ago

Awesome work! One of the most exciting areas of alignment in my view!

AI Safety Research Organization Incubator - Pilot Program
Vincent Weisser · over 1 year ago

Very excited about this effort; I think it could have great impact. I personally know Kay and think he has a good chance of delivering on this with his team!

Empowering AI Governance - Grad School Costs Support for Technical AIS Research
Vincent Weisser · over 1 year ago

Is this project still seeking funding, or is it unrelated to this one? https://manifund.org/projects/gabriel-mukobi-summer-research

AI Alignment Research Lab for Africa
Vincent Weisser · almost 2 years ago

Glad to hear, and awesome to see this initiative!

Compute and other expenses for LLM alignment research
Vincent Weisser · almost 2 years ago

Might be worth keeping it open for more donations if requested?