
Double0Peter t1_jal3zak wrote

I'm just gonna drop this from another post I've commented on, because I feel like very few people understand that the current large language model AI we have today is not AGI, nor is it necessarily on the path to it. It MIGHT be a step towards AGI, but anyone saying it IS the path to it is speaking with false confidence.

So, no one has mentioned yet that the AI you and Sam Altman are talking about isn't the AI we have today. You are talking about Artificial General Intelligence (AGI). And sure, it could absolutely revolutionize how the entire world works. Maybe it could solve all of our problems: end disease, eliminate poverty and hunger, free us from having to work.

But that is Artificial General Intelligence, not the predictive-text-based AI everyone's losing their minds about today. Don't get me wrong, I think current products like GPT and Replika might really change some INDUSTRIES, but they're not AGI. This stuff doesn't think for itself; hell, it doesn't even understand what it's saying. It predicts what it should say based on the data it was trained on, which is terabytes of text from the web. So yes, it can give a pretty reasonable response to almost anything, but it doesn't understand any of it. It's just a really, really strong autocomplete mixed with some chatbot capabilities so that it can answer and respond in a conversational manner.

If the data we trained it on said the sun wasn't real, it would tell you that in full confidence. What it says has no truth value; it's just an extremely complex algorithm spitting out the most probable "answer" based on what it was trained on. It probably won't replace creative work in the sense of innovative new machines, products, designs, inventions, or engineering. Art it might, but that's more a cultural shift than a work-revolutionizing one.
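You can see both points ("it just predicts the next word" and "what it says has no truth value") in miniature with a toy bigram model. This is a minimal sketch, nothing like a real LLM, and the corpus (including its false claim about the sun) is made up purely for illustration:

```python
from collections import defaultdict

# Tiny made-up training corpus containing a false statement.
corpus = "the sun is not real . the sun is not real . the sky is blue .".split()

# "Training": count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the most frequent next word seen after `prev` in training."""
    following = counts[prev]
    return max(following, key=following.get)

# Autocomplete "the sun is ..." — the model just replays its training
# data, with no notion of whether the statement is true.
sentence = ["the", "sun", "is"]
while sentence[-1] != ".":
    sentence.append(predict(sentence[-1]))
print(" ".join(sentence))  # → the sun is not real .
```

Real models predict over much longer contexts with billions of parameters instead of a count table, but the principle is the same: the output is whatever is most probable given the training data, not whatever is true.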

There's also no reason to believe these models will ever evolve into AGI without some other, currently undiscovered breakthrough, since the main way we improve them today is just training them on larger sets of data.

Ezra Klein has a really good hour-long podcast episode on this topic called "The Skeptical Take on the AI Revolution."


Chroderos t1_jalsqon wrote

If only people realized that’s how human minds work too.


Double0Peter t1_jambv2h wrote

Partly, maybe, but the human brain has much more going on in addition to this.

What about the hard problem of consciousness?

What about internal models of how things work?

What about having the ability to interact with our environment?

If we knew how the human brain worked in its entirety, why hasn't a brain in a box ever been created?
