
t1_je0mint wrote

Goofy take. It's true that we don't yet fully understand consciousness, but calling official Microsoft papers clickbait is some next-level dogshit take.

Also, we do have some idea of what improvements "could" be made to LLMs for them to get better and eventually gain consciousness. Those improvements were discussed in the "clickbait Microsoft papers."


It seems to me the only one who hasn't actually read those papers is you.

7

t1_je0wqe9 wrote

Why does AGI need to be conscious?

In fact, why does it have to be general? A bunch of specialized networks that can speed up human science or make discoveries on their own will advance progress in a way that is indistinguishable from an AGI acting on its own.

If we build a machine intelligence capable of improving other AIs and the hardware they run on, then specialized "dumb" AIs will still outpace human development faster than we can keep up.

2