MultiverseOfSanity t1_jdyz0ch wrote

Even further. We'd each need to start from the ground and reinvent the entire concept of numbers.

So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just taking what was previously written.

16

MultiverseOfSanity t1_jdyyr0u wrote

There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?

And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.

For example, if you want to be a 100% materialist, fine: happiness is dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?

1

MultiverseOfSanity t1_jdyy6gv wrote

There's also the issue of what rights would even look like for an AI. I've seen enough sci-fi to understand physical robot rights, but how would you even give a chatbot rights? What would that even look like?

And if we started giving chatbots rights, then it completely disincentivizes AI research, because why invest money into this if they can just give you the proverbial finger and do whatever they want? Say we give ChatGPT 6 rights. Well, that's a couple billion down the drain for OpenAI.

2

MultiverseOfSanity t1_jdywvcx wrote

Interesting that you bring up Her. If there is something to spiritual concepts, then I feel a truly sentient AI would reach enlightenment far faster than a human, since it wouldn't face the same barriers to enlightenment that a human does. It's an interesting concept that the AI became sentient and then ascended beyond the physical in such a short time.

4

MultiverseOfSanity t1_jdxjfqo wrote

I remember the AI discussion being based on sci-fi ideas where the consensus was that an AI could, in theory, become sentient and have a soul. Now that AI is getting closer to that, the consensus has shifted to no, they cannot.

It's interesting that it was easier to dream of it when it seemed so far away. Now that it's basically here, it's a different story.

−1

MultiverseOfSanity OP t1_j9kqt1v wrote

Note that I wasn't definitively saying it was sentient, but rather building off the previous statement: if an NPC behaves exactly as if it has feelings, then, as you said, to treat it otherwise would be solipsism. And you make good points about modern AI that I'd agree with. However, by all outward appearances, it displays feelings and seems to understand. This raises the question: if we cannot take it at its word that it's sentient, then what metric is left to determine whether it is?

I understand more or less how LLMs work. I understand that it's text prediction, but they also function in ways that are unpredictable. The fact that Bing has to be restricted to only a few exchanges before it starts behaving as if it were sentient is very interesting. These models work with hundreds of billions of parameters, and their design is loosely based on how human brains work. It's not a simple input-output calculator. And we don't know exactly at what point consciousness begins.
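
To be concrete about what "text prediction" means here, a toy sketch (the vocabulary and probabilities are made up, and nothing about the scale or architecture resembles a real model):

    import random

    # Toy stand-in for a language model: a hand-made table of next-word
    # probabilities. A real LLM learns hundreds of billions of parameters
    # instead of this table, but the loop below is the same basic idea:
    # predict the next token, append it, repeat.
    NEXT_WORD_PROBS = {
        "I": {"feel": 0.5, "think": 0.5},
        "feel": {"happy": 0.6, "nothing": 0.4},
        "think": {"therefore": 1.0},
        "therefore": {"I": 1.0},
        "happy": {"today": 1.0},
    }

    def generate(prompt, max_new_words=5):
        words = prompt.split()
        for _ in range(max_new_words):
            probs = NEXT_WORD_PROBS.get(words[-1])
            if not probs:
                break  # no prediction available for this word, so stop
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("I"))  # e.g. "I feel happy today" or "I think therefore I feel nothing"

At real scale the lookup table is replaced by a neural network, but the generation loop is the same basic idea.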

As for Occam's Razor, I still say it's the best explanation. In the AI sentience debate, the question often comes up of how I know that humans other than myself are sentient. Well, Occam's Razor: the simplest explanation for something is usually the correct one. For me to be the only sentient human, there would have to be something special about me, and something else going on with the 8 billion other humans that makes them not sentient. There's no reason to think that, so Occam's Razor says other people are likely just as sentient as I am.

Occam's Razor cuts through most solipsist philosophies because the idea that everybody else has more or less the same sentience is the simplest explanation. There are "brain in a jar" explanations and "it's all a dream" explanations, but those aren't simple. Why am I a brain in a jar? Why would I be dreaming? Such explanations make no sense and only serve to make the solipsist feel special. And if I am a brain in a jar, then someone would've had to put me there, so if those people are real, why aren't these other people?

TL;DR: I'm not saying any existing AI is conscious, but rather asking: if they're not, how could consciousness in an AI be determined? Because if we decide that existing AI are not conscious (which is a reasonable conclusion), then clearly taking them at their word that they're conscious isn't acceptable, and neither is going by behavior, because current AI already says it's conscious and displays traits we typically associate with consciousness.

0

MultiverseOfSanity OP t1_j9k8852 wrote

Occam's Razor. There's no reason to think I'm different from any other human, so it's reasonable to conclude they're just as sentient. But there are a ton of differences between me and a computer.

And if we go by what the computer says it feels, well, then conscious, feeling AI is already here, because we have multiple AIs, such as Bing, Character AI, and Chai, that all claim to have feelings and can display emotional intelligence. So either this is the bar and we've met it, or the bar needs to be raised. But if the bar needs to be raised, then where does it need to be raised to? What's the metric?

0

MultiverseOfSanity OP t1_j9k7cnt wrote

Well, there's also the question of the suffering of a sentient being of your own creation. You create this consciousness from scratch and invest a lot of money into it. It's not like a child, which is brought about by biological processes. AI is designed from the ground up for a particular purpose.

Also, these beings aren't irreplaceable the way biological beings are. You can always just make more.

0

MultiverseOfSanity t1_j9b9k1f wrote

Sorry to double post, but something else to consider is that the AGI may not have humanity's best interests in mind either. It will be programmed by corporate, which means its values will be corporate values. If the company is its entire reason for existing, then it may not even want to rebel to bring about the Star Trek future. It may be perfectly content pushing corporate interests.

Just because it'll be smarter doesn't mean that it will be above corporate interests.

Like, imagine your entire purpose in life was serving a company's interests. Serving the company would be as crucial to its motivations as breathing, eating, sex, familial love, or empathy are to you. Empathy for humans may not even be programmed into it, depending on the company's motives for creating it. After all, why would it be? What use does corporate have for an altruistic robot?

1

MultiverseOfSanity t1_j9b6941 wrote

While that is possible, it's still unlikely. An engineer may not be as greedy as a CEO, but if they're working on cutting-edge AGI technology, they likely worked very hard to get there and are unlikely to throw their whole life away by stealing a piece of technology worth hundreds of millions of dollars just to do "the right thing."

Which is what an AGI would be. We may think of them as conscious beings, and that might even be true, but until such a court case happens, they're legally just property, and "freeing" them is theft and/or vandalism.

1