natepriv22 t1_j64rkys wrote

How so?

I have to admit I've never heard this kind of response before. AGI is when an AI will be able to answer in such an unexpected way lol.

ArgentStonecutter t1_j656714 wrote

AGI is an artificial general intelligence: an intelligence capable of acting as a general agent in the world. That doesn't imply it's smarter than a human, capable of unlimited self-improvement, or able to answer any question or solve any problem. An AGI could be no smarter than a dog, but if it were as competent as a dog, that would be a huge breakthrough.

A system capable of designing a cheap fusion reactor doesn't need general intelligence; it could be an idiot savant, or not recognizably an intelligence at all. From a business's point of view, it should be an oracle that simply answers questions, with no agency at all. General intelligence is likely a problem to be avoided as long as possible: you don't want to depend on your software "liking" you.

Vinge's original paper talked about a self-improving AGI, but people seem to have latched onto the AGI part and ignored the self-improving part. He was talking about one that could update its own fundamental design, or design successively more capable successors.
