
sticky_symbols t1_j9rezil wrote

ML researchers worry a lot less than AGI safety people. I think that's because only the AGI safety people spend a lot of time thinking about getting all the way to agentic superhuman intelligence.

If we're building tools, not much need to worry.

If we're building beings with goals, smarter than ourselves, time to worry.

Now: do you think we'll all stop with tools? Or go on to build cool agents that think and act for themselves?

Jinoc t1_j9rpo3y wrote

That’s a misreading of what the AI alignment people say; they’re quite explicit that agency is not necessary for AI risk.

sticky_symbols t1_j9u4kd2 wrote

Yes, but there's general agreement that tool AI is vastly less dangerous than agentic AI. That seems to be the crux of the disagreement between those who think the risk is very high and those who think it's only moderately high.
