
blueSGL t1_izuf5tm wrote

> Of course, if the hardware is there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the now-free hardware resources. I just think that's not enough.

what if the 'hard problem of consciousness' is not really that hard, there is just a trick to it that no one has found yet, and an AGI realizes what that trick is? e.g. intelligence is brute-forced by method X, and yet method Y runs so much cleaner, with less overhead and better results. something akin to targeted sparsification of neural nets, where a load of weights can be removed and yet the outputs barely change.
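A minimal sketch of the mechanics of that sparsification trick, assuming PyTorch (the layer sizes and the 90% pruning level are arbitrary illustrations; on an untrained random net like this the drift is visible, while trained nets tolerate pruning much better, often after fine-tuning):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(32, 64)

with torch.no_grad():
    before = net(x)
    for layer in net:
        if isinstance(layer, nn.Linear):
            w = layer.weight
            # threshold at the 90th percentile of |w|: keep only the top 10%
            k = int(0.9 * w.numel())
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())  # zero out the smallest 90%
    after = net(x)

print("mean |output drift| after pruning 90% of weights:",
      (after - before).abs().mean().item())
```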

(look at all the tricks that were discovered to get Stable Diffusion running on a shoebox, compared to when it was first released)
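For a concrete flavor of those tricks, here's a rough sketch using the Hugging Face diffusers API (the model ID is an assumption for illustration; half precision, attention slicing, and CPU offload are the kinds of optimizations that cut the VRAM needed from the ~10 GB of the original release down to a few GB):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in half precision: roughly halves memory vs. the original fp32 release.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID, for illustration
    torch_dtype=torch.float16,
)
pipe.enable_attention_slicing()    # compute attention in chunks: lower peak VRAM
pipe.enable_model_cpu_offload()    # keep only the active submodule on the GPU
                                   # (requires the accelerate package)

image = pipe("an astronaut riding a horse").images[0]
image.save("astronaut.png")
```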

8

Geneocrat t1_izvxa40 wrote

Great point. AI will be doing a lot more with a lot less.

There have to be so many inefficiencies in the design of CNNs and reinforcement learning.

Clearly you don’t need the totality of human knowledge to be as smart as an above-average 20-year-old, but that’s what we’ve been using.

ChatGPT is like a well-mannered college student who’s really fast at using Google, but it obviously took millions of training hours to get there.

Humans are pretty smart with limited exposure to knowledge and just thousands of hours of learning. When ChatGPT makes its own AI, it’s going to be bananas.
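A loose back-of-envelope version of that comparison, where every number is a rough public order-of-magnitude estimate rather than anything from this thread:

```python
# Human lifetime language exposure vs. a GPT-3-scale training corpus.
# All constants are rough order-of-magnitude guesses.
words_per_minute = 150   # conversational speech rate
hours_per_day = 4        # language heard/read per day, very rough
years = 20

human_words = words_per_minute * 60 * hours_per_day * 365 * years
lm_tokens = 300e9        # GPT-3-scale corpus size, order of magnitude

print(f"human exposure: ~{human_words:.1e} words")           # ~2.6e8
print(f"LM training:    ~{lm_tokens:.0e} tokens")            # ~3e11
print(f"ratio:          ~{lm_tokens / human_words:,.0f}x")   # ~1,100x
```

On those assumptions the model sees roughly a thousand times more text than a person ever does, which is the inefficiency being pointed at.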

2

Accomplished_Diver86 t1_izuhyen wrote

Yeah, sure. I agree, that was the point. A narrow AI could potentially see what it takes to make AGI, which would in turn free up resources.

All I am saying is that it would take a ton of new resources to make the AGI into an ASI.

1