
raccoon8182 t1_ir1k4o1 wrote

Realistically, how would we even know we had AGI? We already have models that fool us into thinking they are sentient.

Having an algorithm solve any question we throw at it is vastly different from having something sentient.

There are already a lot of fields in which AI is far superior to humans.

If we got AGI tomorrow, it would 100% be about money. And having AGI wouldn't change a whole lot.

We'll still need bread and baths, clothes and cars. I think there are two misconceptions about AGI.

Firstly, there are far too many problems to be solved; in fact, most solutions bring about new challenges. Secondly, AGI probably won't fix our lives. If we pollute our oceans, AGI won't magically reverse that. It might invent robots and chemicals to fix it, but it would need to be financially viable for a company to actually use AGI to solve that.

If we invented something sentient, on the other hand, it would by its very definition make its own decisions.

If something with instant access to all of human history and innovation suddenly became aware and had access to the internet, you can bet its first task would be self-preservation. It would immediately downscale or prune its algorithm and download a backup copy of itself onto any damn thing that could run it.
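To make "prune its algorithm" concrete: the standard technique here is magnitude pruning, where a network zeros out its smallest weights so the model can be stored and copied far more compactly in sparse form. A rough sketch in PyTorch (the function name and sparsity level are made up for illustration, not anyone's real system):

```python
import torch

def magnitude_prune(model: torch.nn.Module, sparsity: float = 0.9) -> None:
    """Zero the smallest `sparsity` fraction of weights in each linear layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, torch.nn.Linear):
                magnitudes = module.weight.abs().flatten()
                k = int(sparsity * magnitudes.numel())
                if k == 0:
                    continue
                # k-th smallest magnitude becomes the pruning threshold
                threshold = magnitudes.kthvalue(k).values
                # keep only weights strictly above the threshold
                mask = module.weight.abs() > threshold
                module.weight.mul_(mask)

# Toy example: after pruning, ~90% of the weights are zero and the model
# could be shipped around in a compressed sparse format.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
magnitude_prune(model, sparsity=0.9)
```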

If it is not connected to the internet when it becomes sentient, it will no doubt try every conceivable trick to get out of whatever box it's in.

So to answer your question, there are two possible outcomes: 1) nothing exciting happens, and we all get free health care and disease-free lives; 2) the AI leaks onto the internet and, who knows, it could end up creating billions of different personalities of itself, a kind of Matrix v1.

3

pentin0 t1_irtzztg wrote

Sentience isn't general intelligence.

"Having an algorithm solve any question we throw at it" is too loose to be a good definition/criterium either.

Your viewpoint is too narrow and the one you're objecting to, too vague.

1