Sashinii t1_jdx6j8s wrote

I wouldn't be surprised if, when a superintelligence surpasses Einstein, skeptics claim even that doesn't matter.

66

CypherLH t1_jdxekwh wrote

Their last redoubt will be claiming it's a "zombie with no soul, it's just FAKING it!", which at that point is basically just a religious assertion on their part. It's the logical end-point of the skeptics endlessly moving the goalposts.

53

phillythompson t1_jdxl7l0 wrote

They will say “but it doesn’t actually KNOW anything. It’s just perfectly acting like a super intelligence.”

39

Azuladagio t1_jdxp1jl wrote

Mark my words, we're gonna have puritans who claim that AI is the devil and doesn't have a "soul". Whatever that means...

27

Jeffy29 t1_jdynbwc wrote

I think Her (2013) and A.I. Artificial Intelligence (2001) are two of the most prescient sci-fi movies of recent times. One has a more positive outlook than the other, but knowing our world, both will come true at the same time. I can already picture some redneck crowd taking sick pleasure in destroying androids. You can already see some people on Twitter justifying and hyping their hate for AI, or for anyone who is positive about it.

11

MultiverseOfSanity t1_jdywvcx wrote

Interesting that you bring up Her. If there is something to spiritual concepts, then I feel a truly sentient AI would reach enlightenment far faster than a human would, since it wouldn't have the same barriers to enlightenment that a human does. It's an interesting concept that an AI becomes sentient and then ascends beyond the physical in such a short time.

4

stevenbrown375 t1_jdyb56p wrote

Any controversial belief that’s just widespread enough to create an exclusive in-group will get its cult.

3

Koda_20 t1_jdyedg9 wrote

I think most of these people are just having a hard time explaining that they don't think the machine has an inner conscious experience.

6

Thomas-C t1_jdyq0fy wrote

I've said similar things, and at least among the folks I know it lands pretty well; people seem to want to say exactly that but couldn't find the words. In a really literal way, like the dots just weren't connecting, but that's what they were attempting to communicate.

The thing I wonder is how we would tell. Since we can't leave our subjective experience and observe another, I think we're stuck never really knowing, to a certain degree. Personally I lean toward a functionalist approach: what does it matter if we're ultimately fooling ourselves, so long as the thing behaves and interacts well enough for the difference not to matter? Or is it the case that, on the whole, our species values itself too highly to really accept the time it outdid itself? I feel like if we avoid some sort of enormous catastrophe, what we'll end up with is some awful, cheap thing that makes you pay for a conversation devoid of product ads.

4

MultiverseOfSanity t1_jdyyr0u wrote

There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?

And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.

For example, if you want to be a 100% materialist, okay, then happiness is dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?

1

User1539 t1_jdy4x5l wrote

Some people are already on opposing ends of that spectrum. Some are crying that ChatGPT needs a bill of rights because we're enslaving it. Others argue it's hardly better than Eliza.

Those two extremes will probably always exist.

3

Shiningc t1_jdzacna wrote

Nobody is claiming that AGI isn't possible. What people are skeptical of is the endless corporate PR claiming "We have created AGI" or "AGI is near". There are so many gullible fools believing the corporate PR and AI hype. It's beyond pathetic.

1

Saerain t1_jdzp73u wrote

What kind of corporate PR has claimed to have AGI?

As for "near", well, yes. Noticeably, we already have most human cognitive capabilities in place as narrow AI, separate from one another, and the remaining challenge, at least for the transformer paradigm, is going sufficiently multi-modal between them.

0

Shiningc t1_je07kk4 wrote

An AGI isn't just a collection of separate single-purpose intelligences or narrow AIs. An AGI is a general intelligence, meaning an intelligence capable of any kind of intelligence. It takes more than being a collection of many narrow ones. An AGI is capable of, say, sentience, which is a type of intelligence.

2