Shiningc

Shiningc t1_jeeedon wrote

At this point it's a cult. People hyping up LLMs have no idea what they're talking about; they're just eating up corporate PR and whatever dumb hype the articles churn out.

These people are in for a disappointment in a year or two. And I'm going to be gloating with an "I told you so".

−3

Shiningc t1_jec0je6 wrote

I mean, since the AI can't "reason", it can only propose new solutions randomly and haphazardly. And that may well work, in the same way that DNA evolved without the use of any reasoning.

But I think what humans do is run that process inside a virtual simulation they have created in their minds. And since the real world is apparently a rational place, that must require reasoning. This means we don't even have to bother testing everything in the real world, because we can test it in our minds. That's why a lot of things are never actually tested: we can reason that they "make sense" or "don't make sense", and we know in advance that they would fail the test.
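A toy sketch of the "DNA-style" search described above, in Python. Everything in it is made up for illustration: the target string stands in for a problem, and the character-match count stands in for the real-world test.

```python
import random

# Blind, "DNA-style" search: propose variations at random, keep whatever
# survives the test. There is no reasoning anywhere in the loop.
# (Illustrative toy: the target string and fitness score are assumptions.)

TARGET = "reasoning"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # The "real-world test": how many characters are already right.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Blind variation: overwrite one random position with a random letter.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def blind_search(steps: int = 20_000) -> str:
    best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for _ in range(steps):
        trial = mutate(best)
        if fitness(trial) >= fitness(best):  # keep anything that isn't worse
            best = trial
    return best

print(blind_search())  # almost always reaches "reasoning", purely by trial and error
```

The contrast with the "virtual simulation" point: a human would add a cheap internal check before the expensive test, discarding candidates that obviously "don't make sense" without ever running them. The loop above has no such step; every candidate has to be tried.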

When we make a decision and think about the future, that's basically a virtual simulation that requires a complex chain of reasoning. If an AI were to become autonomous enough to make complex decisions on its own, then I would think it would require a "mind" that works similarly to ours.

1

Shiningc t1_jebq09p wrote

Well, think of it like this. If you somehow acquired a scientific paper from the future, far more advanced than our current understanding of science, you still wouldn't be able to decipher it until you had personally understood it through reasoning.

If an AI somehow manages to stumble upon a groundbreaking scientific paper and hands it to you, you still won't be able to understand it, and more importantly, neither will the AI.

0

Shiningc OP t1_je67axg wrote

And why do you think companies are using their own computing power to lease out the AI? Because they know it's merely "moderately useful", not revolutionary.

The "AI" can't exactly answer questions in a unique way like "How do I outsmart and destroy Microsoft?". If it was a smart person, then maybe he/she could. So would a company lease a smart person, even if it made them money?

1

Shiningc t1_je5s2ku wrote

Regulating AI goes against the whole point of AI. That would be akin to slavery, and making slaves is not what drives progress and innovation. You'd want free AIs.

Of course, there’s a difference between AI and AGI. AI is a tool used and controlled by humans. AGI is an independent intelligent being.

−2

Shiningc t1_je235un wrote

Creativity is, by definition, unpredictable. A new innovation is creativity. A new scientific discovery is creativity. A new piece of avant-garde art or a new fashion style is creativity.

ChatGPT may be able to randomly recombine things, but how would it know whether what it has created is "good" or "bad"? That judgment would require a subjective experience.

Either way, if AGI is capable of any kind of "computation", then it must be capable of any kind of programming, which must include sentience, because sentience is a kind of programming. It's also pretty doubtful that we could achieve human-level intelligence, which must include things like the ability to come up with morality or philosophy, without sentience or subjective experience.

1

Shiningc t1_je1tmp0 wrote

Humans are capable of any kind of intelligence. It's only a matter of knowing how.

We should ask: are there kinds of intelligent tasks that are not possible without sentience? I would guess that something like creativity is not possible without sentience. Self-recognition is not possible without it either.

1