This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November
Submitted by _dekappatated t3_10kgr6b in singularity
Reply to comment by SoylentRox
Yes. You're not really appreciating the notion of 'what most humans could do'. I'm not talking about what one little Homo sapiens animal could do; that's fairly tiny and feeble in the overall consideration.
I'm talking about what humanity does, collectively; that's where intelligence really comes from, and what it is for; there's a lot more to intelligence than mere cunning and creativity.
Think about imagination, and poetry, and philosophy, and science, and all the crazy things our species is and has done. Think about what a crazy ride it has been, even just in the geologically short span of time since the Pyramids were built. There's no way anyone could build a singular AI that could come close to doing all of that.
Mostly because we did it first; they're our ideas. If the AI did them again it would just be copying us for no good reason. The AI will inherit our stories from us and use them to start telling new ones of its own. Why wouldn't it work that way? Why is your conception of a solipsistic, narcissistic, psychopathic AI more 'reasonable'?
Von Neumann wasn't even talking about anything to do with supposed 'dangers of intelligence'; he was talking about the danger of building singular machines that can self-replicate without any intelligence at all, mindlessly 'eating' the universe: the 'grey goo' notion.
But real biological evolution has tried this sort of strategy a bunch of times, and it never works. It's the evolutionary equivalent of hubris: believing that one's own form is perfection achieved, and that adaptation and change are no longer necessary. Other, more efficient self-replicators will emerge through randomness, and competition will create an ecosystem that moderates and limits any one self-replicator's 'habitat'.
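To make that concrete, here's a toy simulation (entirely invented numbers, just a sketch) of a 'perfect' replicator being overtaken by a more efficient mutant competing for the same finite resource pool:

```python
# Toy model: two self-replicator strains competing for one shared, finite
# resource pool each step. All numbers are invented for illustration only.

def simulate(steps=1000, resource_per_step=50.0):
    pop = {"original": 10.0, "mutant": 1.0}          # the mutant starts rare
    efficiency = {"original": 0.10, "mutant": 0.15}  # offspring per unit resource
    death_rate = 0.05

    for _ in range(steps):
        total = sum(pop.values())
        for strain in pop:
            share = resource_per_step * pop[strain] / total  # split the pool
            pop[strain] += efficiency[strain] * share - death_rate * pop[strain]
    return pop

print(simulate())
# The more efficient mutant ends up dominating, and neither strain can just
# 'eat everything': the finite, contested pool caps the total population.
```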
Also, how in the heck can you have a 'non emotional argument'? What even is that? I was captain of my high school debate team way back when, I take a keen interest in politics, I have studied university level maths and chemistry and watched professors dispute with each other, but I have never, ever seen a non emotional argument before.
Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me, and the rest of the 'common rabble', are a 'clear and intelligent thinker'? That's cute if so; very quaint.
>Also, how in the heck can you have a 'non emotional argument'? What even is that? I was captain of my high school debate team way back when, I take a keen interest in politics, I have studied university level maths and chemistry and watched professors dispute with each other, but I have never, ever seen a non emotional argument before.
>
>Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me, and the rest of the 'common rabble', are a 'clear and intelligent thinker'? That's cute if so; very quaint.
Arguments like "numbers, math, irreducible complexity. Saying there isn't enough compute. Saying that AI companies right now are soon going to hit a wall because <your reason> and that funding will get pulled."

When you say you studied "university level maths and chemistry" but don't mention CS or machine learning, you're making a weak non emotional argument (because you aren't actually qualified to hold the opinion you claim).

When you say "That's cute if so; very quaint," that's an appeal to emotion.

Or "Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me, and the rest of the 'common rabble', are a 'clear and intelligent thinker'?". Same thing. Because sure, everyone has emotions, but some people are able to do math and determine whether an idea is going to work or not.
Haha, you're so overconfident and smug, it's adorable. You need to watch out for your hubris, it doesn't actually make you smarter than everyone else.
Your magical 'math' does not just sit on top of emotion, all superior and shiny. You'll figure this out someday, or die trying.
But it looks like my attempts to persuade you that Cartesian tautologies are not the same thing as wisdom are never going to cut through; you're just going to keep accusing anyone you disagree with of being 'too emotional'.
That's called 'gaslighting', mate, and it's not a legitimate debate tactic. It doesn't look good on you, you really need to work on not doing that, or it will get you into real trouble in real life.
There's no point arguing with a gaslighter who just dismisses your every argument as 'emotional', so I bid you goodbye for now. I wish you luck in figuring out how to do cynicism and wisdom properly.
>Your magical 'math' does not just sit on top of emotion, all superior and shiny.
From a theoretical perspective, it does. For example, you probably do know that if you're gambling in a card game, it doesn't matter how you feel. Only the information available to you, and an algorithm someone validated in a simulation, should determine your actions.

Even for a game like poker, it turns out AI is better than humans; apparently it bluffs in a balanced enough way that even world-class players can't tell.

As an individual human, with an evolved meatware brain, am I above emotion? Of course not. But from a factual perspective, arguing with math is more likely to be correct (or less wrong).
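To illustrate (with made-up pot sizes and a made-up win probability), the call-or-fold decision reduces to an expected-value computation that feelings never enter:

```python
# Minimal expected-value sketch for a card-game decision (illustrative
# numbers only). The decision depends purely on the information available:
# pot size, cost to call, and estimated probability of winning.

def ev_of_call(pot: float, cost_to_call: float, p_win: float) -> float:
    """Expected chips gained by calling: win the pot with probability
    p_win, lose the call amount otherwise."""
    return p_win * pot - (1 - p_win) * cost_to_call

# Hypothetical spot: 100-chip pot, 20 chips to call, ~30% chance to win.
ev = ev_of_call(pot=100, cost_to_call=20, p_win=0.30)
print(f"EV of calling: {ev:+.1f} chips")  # +16.0 -> calling is correct
# How the player *feels* about the hand never enters the computation.
```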
>Yes. You're not really appreciating the notion of 'what most humans could do'. I'm not talking about what one little Homo sapiens animal could do; that's fairly tiny and feeble in the overall consideration.
This is what the AGI is.
We're saying we can make an AI with a set of skills broad enough that it beats the average human, as measured by points on a test bench that both humans and the machine can play, where the bench is very broad and covers a huge range of skills.
That's AGI. It is empirically as smart as an average human.
No one is claiming it will be smarter than more than one 'little Homo sapiens animal' in version 1.0, though obviously we expect to be able to do lots better at an accelerating rate.

I expect we may see AGI before 2030, by this definition.
As for self-replicating and taking over the universe: there is reason to think the industrial tasks (factories, etc.) are easier than, say, original art. So even the first AGI would be able to do all the robotic control tasks needed to take over the universe, though it likely wouldn't have the data for many of the steps that humans didn't write down.
The phrase
>That's AGI. It is empirically as smart as an average human.
Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all, and what human brains and AI synthetic personalities do to generate apparent intelligence is so vastly, incomprehensibly different that it's ridiculous to compare the two like that.
Language is the only common factor between humans and AI. The actual 'cognitive processes' are vastly different, and we can't just expect our solipsistic human 'individual animal' based game-theory mumbo-jumbo to map onto an AI mind so easily. AI is a type of mind that is all social context, and zero true individuality.
We are being stupid to reason as if it would do anything like what 'a human would do'; it doesn't think like that at all. AI will be nothing like a 'superintelligent human'. I fully expect the first truly 'self aware' AI to be an airheaded, schizophrenic, autistic-simulating mess of a personality. It's what I think I'm seeing early signs of with these Large Language Models: extreme 'cleverness', but no idea what to do with any of it.
>Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all,
Here's what the claim is.
Right now, Gato has demonstrated expert performance or better on a set of tasks: https://www.deepmind.com/blog/a-generalist-agent
So Gato is an AI. You might call it a 'narrow general AI' because it's only better than humans at about 200 tasks, and the average living human likely has a broader skillset.
Thus an AGI - an artificial general intelligence - is one that is as good as the average human on a set of tasks consistent with the breadth of skills an average living person has.
Basically, make the benchmark larger. 300,000 tasks or 3 million or 30 million. Whatever it has to be. The first machine to do as well as the average human on the benchmark is the world's first AGI.
A score on a cognitive test that you have humans also tested on is an empirical measurement of intelligence.
Arguably, you might also expect generality, simplicity of architecture, and online learning. You would put a lot of the benchmark's points on withheld tasks that use skills the other tasks require, but in ways the machine won't have seen.
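As a rough sketch of how such a benchmark score could be computed (hypothetical task names and numbers; the withheld-task weighting is just one possible choice): normalize the machine's score on each task against the human average, then take a weighted mean:

```python
# Sketch of the benchmark idea described above (all task data hypothetical).
# A system counts as "AGI" under this definition if its average normalized
# score across a very broad task suite matches the average human's.

from dataclasses import dataclass

@dataclass
class TaskResult:
    name: str
    machine_score: float    # raw score achieved by the machine
    human_mean: float       # average human raw score on the same task
    withheld: bool = False  # never seen in training; probes generality

def agi_benchmark(results: list[TaskResult], withheld_weight: float = 3.0) -> float:
    """Machine performance relative to the average human (1.0 = parity).
    Withheld tasks get extra weight, since they test transfer, not memorization."""
    weighted_sum = total_weight = 0.0
    for r in results:
        w = withheld_weight if r.withheld else 1.0
        weighted_sum += w * (r.machine_score / r.human_mean)
        total_weight += w
    return weighted_sum / total_weight

suite = [
    TaskResult("arithmetic", 95, 90),
    TaskResult("image captioning", 60, 80),
    TaskResult("novel assembly task", 40, 70, withheld=True),
]
score = agi_benchmark(suite)
print(f"relative score: {score:.2f}",
      "-> AGI by this definition" if score >= 1.0 else "-> not yet")
```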
Because we cannot benchmark tasks that can't be automatically graded, it is difficult for the AGI to learn things like social interaction. So you are correct, it might be 'autistic'.
It will probably not even have a personality. It's basically a robot: if you tell it to do something, and that something is similar enough to things it has practiced doing, it will be able to do it successfully.

It has no values or morals or emotions, or lots of other things. Just breadth of skills.