
TemetN t1_jd5c7lt wrote

This seems to imply some sort of foom, if I'm reading it right, in which case alignment would be the only really significant thing you could do in preparation, besides making sure you live that long. Honestly, I tend to consider this the least probable of the major proposed run-ups to the singularity, given the number of potential bottlenecks and the current focus of research.

On the plus side, if aligned, a foom would also likely deliver by far the fastest results, with the world effectively revolutionized overnight.

5

awcomix OP t1_jd5h7ls wrote

Thanks for teaching me a new term, FOOM. I had to look it up. I'm curious about the other run-up scenarios you mentioned.

2

TemetN t1_jd5jmsq wrote

Off the top of my head? Apart from that, the other two big ones are the argument that the rate of progress is exponential in general and that AI's integration will accelerate it further, and Vinge's superhuman-agents idea, which posits that we can't predict the results of AI R&D once it passes the point of exceeding human capabilities.

I tend to think either of those is more likely (or rather, the first is inevitable and the second is hard to predict), and that we're in the run-up to a soft takeoff now.

1