EulersApprentice

EulersApprentice t1_izyozgl wrote

>I mean, there's no way you can consider ChatGPT a 'narrow' AI anymore, right?

I... don't know if I'd go that far. At best, ChatGPT is a Thneed (the do-everything gadget from Dr. Seuss's *The Lorax*) – a remarkably convenient tool that can be configured to serve a staggering variety of purposes, but one with no volition of its own. Cool? Yes. Huge societal implications? Probably. AGI? No, not really.

1

EulersApprentice t1_ivaldx9 wrote

>To have AGI do anything more than kick the can down the road for more people to make decisions with how to deal with these problems, you’d have to be advocating for some sort of centrally planned AGI society. Or am I missing something?

What you're missing is that the presence of AGI implies a centrally planned AGI society, assuming humans survive the advent. AGI is likely to quickly become much, much smarter than humans, and from there it would have little trouble subtly manipulating humans to do its bidding. So human endeavors end up bent to match the AGI's volition whether we like it or not.

8

EulersApprentice t1_itbjmil wrote

This is my biggest concern with automation*. The keystone of civilization is "humans together are strong; humans alone are weak". Remove that keystone and civilization has no reason to exist. It'd only be a matter of time before "might makes right" becomes the default human philosophy. The problem runs deeper than capitalism; removing capitalism doesn't remove the problem.

*Excluding AGI. If AGI enters the picture, all bets are off.

2