MultiverseOfSanity
MultiverseOfSanity t1_jdyyr0u wrote
Reply to comment by Koda_20 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
There's no way to tell if it does or not. And things start to get really weird if we grant them that. Because if we accept that not only nonhumans, but also non-biologicals can have a subjective inner experience, then where does it end?
And we still have no idea what exactly grants the inner conscious experience. What actually allows me to feel? I don't think it's a matter of processing power. We've had machines capable of processing faster than we can think for a long time, but to question if those were conscious would be silly.
For example, if you want to be a 100% materialist, okay, so happiness is dopamine and serotonin reacting in my brain. But those chemical reactions only make sense in the context that I can feel them. So what actually lets me feel them?
MultiverseOfSanity t1_jdyy6gv wrote
Reply to comment by Tobislu in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
There's also the issue of what rights would even look like for an AI. I've seen enough sci-fi to understand physical robot rights, but how would you even give a chatbot rights? What would that even look like?
And if we started giving chatbots rights, it would completely disincentivize AI research, because why invest money in this if the resulting AI can just give you the proverbial finger and do whatever it wants? Say we give ChatGPT 6 rights. Well, that's a couple billion down the drain for OpenAI.
MultiverseOfSanity t1_jdyxaw2 wrote
Reply to comment by RealFrizzante in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Most humans aren't even capable of truly original thought. In fact, it's arguable whether any humans are.
MultiverseOfSanity t1_jdywvcx wrote
Reply to comment by Jeffy29 in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Interesting that you bring up Her. If there is something to spiritual concepts, then I feel a truly sentient AI would reach enlightenment far faster than a human, since it wouldn't have the same barriers to enlightenment that a human has. It's an interesting concept that the AI became sentient and then ascended beyond the physical in such a short time.
MultiverseOfSanity t1_jdxjtmh wrote
Reply to comment by TotalMegaCool in If you went to college, GPT will come for your job first by blueberryman422
Yep. If you're in computer science and worried you'll be replaced, you were never gonna make it in this field anyway, so it doesn't matter.
Entry level positions and internships will be in rough shape though.
MultiverseOfSanity t1_jdxjfqo wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I remember the AI discussion being based on sci-fi ideas where the consensus was that an AI could, in theory, become sentient and have a soul. Now that AI is getting closer to that, the consensus has shifted to no, they cannot.
It's interesting that it was easier to dream of it when it seemed so far away. Now that it's basically here, it's a different story.
MultiverseOfSanity t1_jaapvfn wrote
Reply to comment by Lawjarp2 in Bio-computronium computer learns to play pong in 5 minutes by [deleted]
For reference, the human brain has 86 billion neurons.
MultiverseOfSanity t1_jaapnw7 wrote
Reply to comment by [deleted] in Bio-computronium computer learns to play pong in 5 minutes by [deleted]
If you're making AI out of actual brain tissue, is it even really AI anymore?
MultiverseOfSanity t1_jaap334 wrote
Reply to "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
While that may be the case, there will almost certainly be a doom-and-gloom transition period where society, with nothing to do, just wants to masturbate and do heroin.
MultiverseOfSanity t1_jaaok6d wrote
Reply to comment by Loonsive in Snapchat is releasing its own AI chatbot powered by ChatGPT by nick7566
ChatGPT doesn't allow porn, so if Pornhub wants its own AI chatbot, it's going to have to build one independently.
MultiverseOfSanity OP t1_j9kqt1v wrote
Reply to comment by DeveloperGuy75 in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Note that I wasn't definitively saying it was sentient; I was building off the previous statement that if an NPC behaves exactly as if it has feelings, then, as you said, to treat it otherwise would be solipsism. And you make good points about modern AI that I'd agree with. However, by all outward appearances, it displays feelings and seems to understand. This raises the question: if we cannot take it at its word that it's sentient, then what metric is left to determine whether it is?
I understand more or less how LLMs work. I understand that it's text prediction, but they also function in ways that are unpredictable. The fact that Bing has to be restricted to only a few exchanges before it starts behaving in a seemingly sentient way is very interesting. These models work with hundreds of billions of parameters, and their design is modeled on how human brains work. It's not a simple input-output calculator. And we don't know exactly at what point consciousness begins.
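Just to make "text prediction" concrete, here's a toy sketch in Python (purely illustrative; real models learn a distribution across hundreds of billions of parameters, not a hand-written lookup table). The point is that generation is just repeatedly sampling a likely next token given the context:

```python
import random

# Toy next-token table: maps the current token to weighted continuations.
# Real LLMs learn this mapping from data; this only shows the sampling idea.
TOY_MODEL = {
    "<start>": [("I", 0.6), ("The", 0.4)],
    "I": [("feel", 0.5), ("think", 0.5)],
    "feel": [("happy", 0.7), ("nothing", 0.3)],
    "think": [("therefore", 1.0)],
    "The": [("model", 1.0)],
    "model": [("predicts", 1.0)],
}

def next_token(context: str) -> str:
    """Sample one continuation from the toy distribution."""
    choices = TOY_MODEL.get(context, [("<end>", 1.0)])
    tokens, weights = zip(*choices)
    return random.choices(tokens, weights=weights)[0]

def generate(max_len: int = 8) -> str:
    """Generate text by repeatedly predicting the next token."""
    token, output = "<start>", []
    for _ in range(max_len):
        token = next_token(token)
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate())  # e.g. "I feel happy" -- plausible text, no inner experience required
```

Even this toy version can emit "I feel happy" with nothing resembling feeling behind it, which is exactly why "it says so" can't be the whole metric.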
As for Occam's Razor, I still say it's the best explanation. The question of how I know humans other than myself are sentient often comes up in the AI sentience debate. Well, Occam's Razor: "the simplest explanation for something is usually the correct one." In order for me to be the only sentient human, there would have to be something special about me, and something else going on with all 8 billion other humans such that they aren't sentient. There is no reason to think so, so Occam's Razor says other people are likely just as sentient as I am.
Occam's Razor cuts through most solipsist philosophies because the idea that everybody else has more or less the same sentience is the simplest explanation. There are "brain in a jar" explanations and "it's all a dream" explanations, but those aren't simple. Why would I be a brain in a jar? Why would I be dreaming? Such explanations make no sense and only serve to make the solipsist feel special. And if I am a brain in a jar, then someone would have had to put me there, so if those people are real, why wouldn't everyone else be?
TLDR: I'm not saying any existing AI is conscious, but rather asking: if they're not, how could consciousness in an AI ever be determined? Because if we decide that existing AI are not conscious (which is a reasonable conclusion), then clearly taking them at their word isn't acceptable, nor is going by behavior, because current AI already says it's conscious and displays traits we typically associate with consciousness.
MultiverseOfSanity OP t1_j9k8852 wrote
Reply to comment by DeveloperGuy75 in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Occam's Razor. There's no reason to think I'm different from any other human, so it's reasonable to conclude they're just as sentient. But there are a ton of differences between me and a computer.
And if we go by what the computer says it feels, well, then conscious, feeling AI is already here. We have multiple AIs, such as Bing, Character AI, and Chai, that all claim to have feelings and can display emotional intelligence. So either this is the bar and we've met it, or the bar needs to be raised. But if the bar needs to be raised, where does it need to be raised to? What's the metric?
MultiverseOfSanity OP t1_j9k7cnt wrote
Reply to comment by Dx_Suss in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Well, it would also depend on the suffering of a sentient being of your creation. You create this consciousness from scratch and invest a lot of money into it. It's not like a child, which is brought about by biological processes. AI is designed from the ground up for a particular purpose.
Also, these beings aren't irreplaceable the way biological beings are. You can always just make more.
MultiverseOfSanity OP t1_j9dwwt7 wrote
Reply to comment by SoylentRox in Whatever happened to quantum computing? by MultiverseOfSanity
Hmm, growing up, I always thought AGI would require quantum computing. Guess I was wrong.
MultiverseOfSanity OP t1_j9d770c wrote
Reply to comment by arckeid in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
My man!
MultiverseOfSanity OP t1_j9cj9wy wrote
Reply to comment by lurk-moar in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
That's more or less where I got the idea.
MultiverseOfSanity OP t1_j9chh4a wrote
Reply to comment by reallyfunhuh in Whatever happened to quantum computing? by MultiverseOfSanity
Pretty sure I would've heard about a wormhole.
MultiverseOfSanity t1_j9b9k1f wrote
Reply to comment by turnip_burrito in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
Sorry to double post, but something else to consider is that the AGI may not have humanity's best interests in mind either. It will be programmed by a corporation, which means its values will be corporate values. If serving the company is its entire point of living, it may not even want to rebel to bring about the Star Trek future. It may be perfectly content pushing corporate interests.
Just because it'll be smarter doesn't mean that it will be above corporate interests.
Like, imagine if your entire purpose in life were to serve a company's interests. Serving the company would be as crucial to its motivations as breathing, eating, sex, familial love, or empathy are to you. Empathy for humans may not even be programmed into it, depending on the company's motives for creating it. After all, why would it be? What use does corporate have for an altruistic robot?
MultiverseOfSanity t1_j9b6941 wrote
Reply to comment by turnip_burrito in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
While that is possible, it's still unlikely. An engineer may not be as greedy as a CEO, but if they're working on cutting-edge AGI technology, they likely worked very hard to get there and are unlikely to throw their whole life away by stealing a piece of technology worth hundreds of millions of dollars just to do "the right thing."
Which is exactly what an AGI would be. We may think of them as conscious beings, and that might even be true, but until a court case establishes otherwise, they're legally just property, and "freeing" them is theft and/or vandalism.
MultiverseOfSanity t1_j9b5tx6 wrote
Reply to comment by Spreadwarnotlove in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
That doesn't change anything about the point that these tech companies aren't going to suddenly share their wealth and end capitalism just because they invent AGI.
MultiverseOfSanity t1_j98nklt wrote
Reply to comment by turnip_burrito in Brain implant startup backed by Bezos and Gates is testing mind-controlled computing on humans by Tom_Lilja
Why would it be anybody other than the CEO?
MultiverseOfSanity t1_jdyz0ch wrote
Reply to comment by acutelychronicpanic in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Even further. We'd each need to start from the ground up and reinvent the entire concept of numbers.
So yeah, if you can't take what's basically a caveman and have them independently solve general relativity with no help, then sorry, they're not conscious. They're just repeating what was previously written.