@matiroy Thanks Mati!
Tsvi Benson-Tilsen
21 days ago
@Kaarel Thanks for your offer!
I'm unsure whether I agree with this strategically or not.
One consideration is that it may be more feasible to go really fast with [AGI alignment good enough to end acute risk] once you're smart enough than to go really fast with convincing the world to effectively stop AGI creation research. The former is a technical problem you could, at least in principle, solve in a basement with 10 geniuses; the latter is a big messy problem involving myriads of people. I put substantial probability on "no successful AGI slowdown, but AGI is hard to make". In those worlds, where algorithmic progress continually burns the fuse on the intelligence explosion, a solution remains urgent: the sooner it comes, the more doom it prevents.
But maybe good-enough AGI alignment is really extra super hard, which is plausible, and maybe effective world-coordination isn't as hard.
But I do mostly agree with this in terms of long-term vision.