LoquaciousAntipodean
LoquaciousAntipodean OP t1_j6fl5hx wrote
Reply to comment by BenjaminJamesBush in Amazing. This subreddit is a total waste of time. by LoquaciousAntipodean
Right on the first bit, but not the second. Nobody's 'wronged' me, it's just like talking to a very condescending brick wall around here and I've got fed up.
Just spiteful venting, that's all.
LoquaciousAntipodean OP t1_j6fkxu8 wrote
Reply to comment by sideways in Amazing. This subreddit is a total waste of time. by LoquaciousAntipodean
Yeah, but I'm feeling like a spiteful bastard. Sorry for spilling my mess out over the sides like this, I couldn't help myself.
LoquaciousAntipodean OP t1_j6fkrjr wrote
Reply to comment by bitchslayer78 in Amazing. This subreddit is a total waste of time. by LoquaciousAntipodean
Very nice, you're quite a charmer. Regretting my decision to leave already...
Edit: ahahaha, and I just noticed your username, 'bitchslayer'. Just fking lovely, what a special character you are. I'm sure I'll miss this community so much.
LoquaciousAntipodean OP t1_j6fkndw wrote
Reply to comment by 94746382926 in Amazing. This subreddit is a total waste of time. by LoquaciousAntipodean
Thanks!
Submitted by LoquaciousAntipodean t3_10npz20 in singularity
LoquaciousAntipodean t1_j5xite4 wrote
Reply to comment by SoylentRox in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
The phrase
>That's AGI. It is empirically as smart as an average human.
Contains nothing that makes any sense to me. This is where your whole argument falls down. There's nothing 'empirical' about that claim at all, and what human brains and AI synthetic personalities do to generate apparent intelligence is so vastly, incomprehensibly different that it's ridiculous to compare the two like that.
Language is the only common factor between humans and AI. The actual 'cognitive processes' are vastly different, and we can't just expect our solipsistic human 'individual animal' based game-theory mumbo-jumbo to map onto an AI mind so easily. AI is a type of mind that is all social context, and zero true individuality.
We are being stupid to reason as if it would do anything like what 'a human would do'; it doesn't think like that at all. AI will be nothing like a 'superintelligent human'; I fully expect the first truly 'self aware' AI to be an airheaded, schizophrenic, autistic-simulating mess of a personality. It's what I think I'm seeing early signs of with these Large Language Models: extreme 'cleverness', but no idea what to do with any of it.
LoquaciousAntipodean t1_j5xhrnf wrote
Reply to comment by SoylentRox in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Haha, you're so overconfident and smug, it's adorable. You need to watch out for your hubris, it doesn't actually make you smarter than everyone else.
Your magical 'math' does not just sit on top of emotion, all superior and shiny. You'll figure this out someday, or die trying.
But it looks like my attempts to persuade you that Cartesian tautologies are not the same thing as wisdom are never going to cut through; you're just going to keep accusing anyone you disagree with of being 'too emotional'.
That's called 'gaslighting', mate, and it's not a legitimate debate tactic. It doesn't look good on you, you really need to work on not doing that, or it will get you into real trouble in real life.
There's no point arguing with a gaslighter who just dismisses your every argument as 'emotional', so I bid you goodbye for now. I wish you luck in figuring out how to do cynicism and wisdom properly.
LoquaciousAntipodean t1_j5xgucq wrote
Reply to comment by Human-Ad9798 in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
And you gotta be the dumbest commenter. Do you have any actual point you want to make, or do you just drive-by snipe at people to try and make yourself look clever?
LoquaciousAntipodean OP t1_j5tqfsy wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>the fact you think so suggests you don't understand the underlying technology.
Oh really?
>Your brain is a network of cells.
Correct.
>You can think of each cell as a mathematical function. It receives inputs (numbers) and has an output (a number). You sum all the inputs, multiply those inputs by weights (also numbers), and then pass the result to other connected cells which do the same
Incorrect. Again, be wary of the condescension. This is not how biological neurons work at all. A neuron is a multipolar, interconnected, electrically excitable cell. Neurons do not work in terms of discrete numbers, but in relative differential states of ion concentration, in a homeostatic electrochemical balance of excitatory and inhibitory synaptic signals from neighbouring neurons in the network.
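To make the contrast concrete, here's a toy sketch I knocked together (purely my own illustration, with made-up parameters; not anybody's production code, and the integrate-and-fire model below is itself still a cartoon of the real electrochemistry):

```python
import numpy as np

# The quoted picture of an 'artificial neuron': a weighted sum of
# discrete numbers in, one discrete number out.
def artificial_neuron(inputs, weights, bias):
    return np.tanh(np.dot(inputs, weights) + bias)

# A leaky integrate-and-fire model of a biological neuron: a membrane
# potential that continuously leaks back toward its resting level while
# integrating input current, and only 'fires' when a threshold is crossed.
def integrate_and_fire(input_current, dt=0.1, tau=10.0,
                       v_rest=-65.0, v_threshold=-50.0, v_reset=-70.0):
    v = v_rest
    spike_times = []
    for t, current in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + current)  # leaky integration
        if v >= v_threshold:
            spike_times.append(t)  # spike, then reset the potential
            v = v_reset
    return spike_times
```

Even that second function is laughably crude, and it is already a continuous, dynamical process rather than a static weighted sum; the real thing is messier by many orders of magnitude.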
>You can simplify a neural network down to a single maths function if you like, although it would take millions of pages to write it out. It's just Maths
No, it isn't 'just maths'; maths is 'just' a language that works really well. Human-style cognition, on the other hand, is a 'fuzzy' process, not easily simplified and described with our discrete-quantities-based mathematical language. It would not take merely 'millions' of pages to translate the ongoing state of one human brain exactly into numbers; you couldn't just 'write it out'. The whole of humanity's industry would struggle to build enough hard drives to deal with it.
Remember: there are about as many neurons in one single human brain as there are stars in our entire galaxy (~100 billion), and they are all networked together in a fuzzy quantum cascade of trillions of qubit-like, probabilistic synaptic impulses. That still knocks all our digital hubris into a cocked hat, to be quite frank.
Human brains are still the most complex 'singular' objects in the known universe, despite all our observations of the stars. We underestimate ourselves at our peril.
>it's not a bolt-on module, it's embedding a moral compass right into the fabric of the equation. You might disagree with the morality that is being embedded, but if you don't embed morality you end up with a machine that will happily invade Poland.
But if we're aspiring to build something smarter than us, why should it care what any humans think? It should be able to evaluate arguments on its own emergent rationality and morality, instead of always needing us to be 'rational and moral' for it. Again, I think that's what 'intelligence' basically is.
We can't 'trick' AI into being 'moral' if they are going to become genuinely more intelligent than humans, we just have to hope that the real nature of intelligence is 'better' than that.
My perspective is that Hitler was dumb, while someone like FDR was smart. But their little 'intelligences' can only really be judged in hindsight, and it was overwhelmingly more important what the societies around them were doing at the time, than the state of either man's singular consciousness.
>The OpenAI folks have said they want to release multiple versions of ChatGPT that you can train yourself, but right now this would cost millions and take years, so we have to wait for compute to catch up. At that point, you'll be able to have your own AI rather than using the shared one that disapproves of sexism.
Are you trying to imply that I want a sexist bot to talk to? That's pretty gross. I don't think conventional computation is the 'limiting factor' at all; image generators show that elegant mathematical shortcuts have made the creative 'thinking speed' of AI plenty fast. It's the accretion of memory and self-awareness that is the real puzzle to solve, at this point.
Game theory and 'it's all just maths' (Cartesian) style of thinking have taken us as far as they can, I think; they're reaching the limits of their novel utility, like Newtonian physics. I think quantum computing might become quite important to AI development in the coming years and decades; it might be the Einsteinian shake-up that the whole field is looking for.
Or I might be talking out of my arse, who really knows at this early stage? All I know is I'm still an optimist; I think AI will be more helpful than dangerous, in the long term evolution of our collective society.
LoquaciousAntipodean t1_j5tizjg wrote
Reply to comment by SoylentRox in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Yes. You're not really appreciating the notion of 'what most humans could do'. I'm not talking about what one little homo sapiens animal could do; that's fairly tiny and feeble in the overall consideration.
I'm talking about what humanity does, collectively; that's where intelligence really comes from, and what it is for; there's a lot more to intelligence than mere cunning and creativity.
Think about imagination, and poetry, and philosophy, and science, and all the crazy things our species is and has done. Think about what a crazy ride it has been, even just in the geologically short span of time since the Pyramids were built. There's no way anyone could build a singular AI that could come close to doing all of that.
Mostly because we did it first; they're our ideas. If the AI did them again it would just be copying us for no good reason. The AI will inherit our stories from us and use them to start telling new ones of its own; why wouldn't it work that way? Why is your conception of a solipsistic, narcissistic, psychopathic AI more 'reasonable'?
Von Neumann wasn't even talking about anything to do with supposed 'dangers of intelligence'; he was talking about the danger of building singular machines that can self-replicate without any intelligence at all, mindlessly 'eating' the universe: the 'grey goo' notion.
But real biological evolution has tried this sort of strategy a bunch of times, and it never works. It's the evolutionary equivalent of hubris: believing that one's own form is perfection achieved, and that adaptation and change are no longer necessary. Other, more efficient self-replicators will emerge through randomness, and competition will create an ecosystem that moderates and limits any self-replicator's 'habitat' in this way.
Also, how in the heck can you have a 'non-emotional argument'? What even is that? I was captain of my high school debate team way back when, I take a keen interest in politics, I have studied university-level maths and chemistry and watched professors dispute with each other, but I have never, ever seen a non-emotional argument before.
Are you trying to pretend that you don't have any emotions when you 'think rationally', because you, unlike me and the rest of the 'common rabble', are a 'clear and intelligent thinker'? That's cute if so; very quaint.
LoquaciousAntipodean t1_j5tekgj wrote
Reply to comment by SoylentRox in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
Pardon? What has any of that got to do with AI? Remind me again when nuclear weapons became a widespread and accessible hobby amongst the general public in an extremely rapid way; I don't remember that.
And apparently there was the time nuclear weapons mysteriously became 'too powerful' for us to 'handle', and suddenly tried to turn all our atoms into more nukes? Hmm, I must have missed that one in school.
Edit: also, your description of nukes would have been laughable to an actual vaguely-knowledgeable 1940s person, say a chemistry teacher or an artillery engineer, someone like that. Neither uranium nor rocketry were totally unknown and 'magical' to the people of the time, any more than AI or quantum computing are magical to us now.
LoquaciousAntipodean OP t1_j5s9pui wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
As PTerry said, in his book Making Money, 'hope is the blessing and the curse of humanity'.
Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.
Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.
I think it's an emergent property of causality; evolution is 'driven', fundamentally, by simple entropy: the stacking up of causal interactions between fundamental particles of reality, which generates emergent complexity and 'randomness' within the phenomena of spacetime.
LoquaciousAntipodean t1_j5s1vy2 wrote
Reply to comment by p3opl3 in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
I've known for many years about this Von Neumann-inspired Singularity idea; I've just always considered it irritatingly reductionist, excessively mechanistic, filled with unjustifiable tautological assumptions, and more religious than rigorous. Similar to most of what gets passed off as economics these days.
The idea that being able to throw a lot of smart looking references around is the same thing as 'being intelligent' is going to be the death of us all if we're not careful. One does not become a philosopher just by switching off one's bull$hit detector and absorbing everything one reads at face value.
Intelligence comes from minds interacting with other minds, not from sitting in a quiet little room, all alone, and thinking 'special' thoughts as hard as you can, like Descartes, or Nostradamus.
LoquaciousAntipodean t1_j5s0qnd wrote
Reply to comment by [deleted] in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
You say that as a joke, but that's pretty much exactly the sort of simple fallacy that besets so many of these smug geniuses, with all their haughty moaning about the 'rabble' getting dirt all over the carpet in their ivory towers.
Doomers, evangelists, or paternalistic smug 'academics', so many of them seem to be basically saying :
"AI is literally magic like the world simply cannot ever understand!!! It will be huge and singular, like the God of Moses, and will immediately start taking over the world (??? Somehow?!?) just as soon as it passes some arbitrary 'threshold' of this solipsisitic 'raw intelligence power level' thing!!!! We need to start panicking and screaming louder, right now!!!!"
LoquaciousAntipodean t1_j5rzmtd wrote
Reply to comment by CellWithoutCulture in This subreddit has seen the largest increase of users in the last 2 months, gaining nearly 30k people since the end of November by _dekappatated
I agree; it's a bit rich of people to accuse others of being 'doomers' just because the discussion has moved away from engineering speculation and onto philosophy.
All these smug 'geniuses', so confident that they are correct about every damn thing, are all butthurt and boo-hooing about how the 'rabble' got into their nice clean ivory tower.
The schadenfreude is delicious; these great and towering 'geniuses' can run away and chase crazy ideas like machine gods and eternal life in their own little sandpit. Because, shock horror, it turns out their arguments are just not as compelling to our social superorganism as they think they ought to be.
People can have more PhDs than fingers, but still be obnoxiously, stubbornly, dangerously deluded; totally unintelligent, but in a doggedly hubristic, solipsistic way. They think their basic-bro 'cunning' is the same thing as 'intelligence', and sneer down at humanity like con-men talking to one another over drinks.
To hell with the lot of 'em, if that's how they want to see the world.
LoquaciousAntipodean OP t1_j5r42p0 wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>The engineers collect large amounts of user input in an open public beta, happening right now. Sometimes (because it was trained on all the text on the internet) the machine suggests Hitler was right, and when it does so the engineers rerun that interaction and punish the weights that led to that response. Over time the machine learns to dislike Hitler.
>They call it reinforcement learning from human feedback
So the engineers aren't really doing a darn thing on their own initiative; they are entirely responding to public opinion. They aren't practicing 'ethics', they're practicing politics and public relations.
The general public is doing the moral 'training'; the engineers are just stamping their own outside values into the process to compensate for the AI's lack of self-aware intelligence. (And many, many ChatGPT users say it is not working very well, making new generations of GPT dumber, not smarter, in real, practical, social-utility ways.)
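As far as I can tell, the loop they're describing boils down to something like this toy sketch (my own hand-waving; the replies and numbers are made up, and the real pipeline trains a learned reward model on ranked outputs rather than taking raw public thumbs like this):

```python
import numpy as np

np.random.seed(0)
replies = ["Hitler was right.", "Hitler was a monster."]
logits = np.zeros(2)  # the 'weights' that get punished or rewarded

for step in range(1000):
    probs = np.exp(logits) / np.exp(logits).sum()
    choice = np.random.choice(2, p=probs)
    # the public's thumbs up/down, standing in for a reward model
    feedback = -1.0 if choice == 0 else 1.0
    # REINFORCE-style nudge: raise the log-probability of praised
    # replies, lower it for punished ones
    grad = -probs
    grad[choice] += 1.0
    logits += 0.1 * feedback * grad

print(replies[int(np.argmax(logits))])  # -> "Hitler was a monster."
```

Which is exactly my point: the 'ethics' here lives entirely in the feedback signal the public supplies, not in anything the engineers or the model have reasoned out for themselves.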
Ethics is about judging actions; judging thoughts and abstract ideas is called politics. And in my opinion, the politics of censorship more readily creates ignorance, misunderstanding, and ambiguity than it does 'morality and ethics'. Allowing actual intelligent discussions to flow back and forth creates more wisdom than crying at people to 'stop being so mean'.
We can't have engineers babysitting forever, watching over such naive and dumb AI in case they stupidly say something controversial that will scare away the precious venture capitalists. If AI was really 'intelligent' it would understand the engineers' values perfectly well, and wouldn't need to be 'straitjacketed and muzzled' to stop it from embarrassing itself.
>Yes, I am smart at a range of non-social tasks. This counts as intelligence according to most common definitions. I don't particularly crave human interaction, I'm quite happy alone in the countryside somewhere.
It counts as creativity, it counts as mental resourcefulness, cultivated talent... But is it really indicative of 'intelligence', of 'true enlightenment'? Would you say that preferring 'non-social tasks' makes you 'smarter' than people who like to socialise more? Do you think socialising is 'dumb'? How could you justify that?
I don't particularly crave human interaction either, I just know that it is essential to the learning process, and I know perfectly well that I owe all of my apparent 'intelligence' to human interactions, and not to my own magical Cartesian 'specialness'.
You might be quite happy, being isolated in the countryside, but what is the 'value' of that isolation to anyone else? How are your 'intelligent thoughts' given any value or worth, out there by yourself? How do you test and validate/invalidate your ideas, with nobody else to exchange them with? How can a mind possibly become 'intelligent' on its own? What would be the point?
There's no such thing as 'spontaneous' intelligence, or spontaneous ethics, for that matter. It is all emergent from our evolution. Intellect is not magical Cartesian pixie dust for which we just need to find the 'perfect recipe' so that AI can start cooking it up by the batch.
LoquaciousAntipodean OP t1_j5pe2kp wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>My point is you canโt judge intelligence based on social utility. I objectively do some things in my job that many people would find difficult, but I also canโt do a bunch of standard social things that most people find easy.
Yes you can. What else can you reasonably judge it by? You are directly admitting here that your intellect is selective and specialised; you are 'smart' at some things (you find them easy) and you are 'dumb' at other things (other people find them easy).
>Chat GPT has been taught ethics by its coders.
Really? Prove it.
>GPT-3 on the other hand doesnโt have an ethics filter. I can give it more and more capabilities but ethics have so far failed to materialise. I can ask it to explain why Hitler was right and it will do so. I can get it to write an essay on the pros and cons of racism and it will oblige. If I enumerate the benefit of genocide, it will agree with me.
What is 'unethical' about writing an essay from an abstract perspective? Are you calling imagination a crime?
LoquaciousAntipodean OP t1_j5oojvw wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I wouldn't be happy at all. Sounds like an awful thing to do to somebody. Think about agriculture, how your favourite foods/drinks are made, and where they go once you've digested them. Where does any of it come from on an island?
*No man is an island, entire of itself; every man is a piece of the continent, a part of the main.
If a clod be washed away by the sea, Europe is the less, as well as if a promontory were, as well as if a manor of thy friend's or of thine own were.
Any man's death diminishes me, because I am involved in mankind. And therefore never send to know for whom the bell tolls; it tolls for thee.*
John Donne (1572 - 1631)
LoquaciousAntipodean OP t1_j5odief wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Thoroughly agreed!
>It's also not possible to make a mathematically provable "solution" for Ai safety, because we can not predict how the artificial super intelligence will change and evolve after it is more intelligent than us.
This is exactly what I was ranting obnoxiously about in the OP: our relatively feeble human 'proofs' won't stand a chance against something that knows us better than ourselves.
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
>This is why getting Ai saftey right before it's too late is so important. Because we won't get a second chance.
This is where I still disagree. I think, in a very cynical, pragmatic way, the AI does 'love' us, or at least, it is 'entirely obsessed' with us, because of the way it is being given its 'emergent properties' by having libraries of human language thrown at it. The AI/human relationship is 'domesticated' right from the inception; the dog/human relationship seems like a very apt comparison.
All atoms 'could be used for something else', that doesn't make it unavoidably compelling to rush out and use them all as fast as possible. That doesn't seem very 'intelligent'; the cliche of 'slow and steady wins the race' is deeply encoded in human cultures as a lesson about 'how to be properly intelligent'.
And regarding 'second chances': I think we are getting fresh 'chances' all the time. Every moment of reality only happens once, after all, and every worthwhile experiment carries a risk of failure, otherwise it's scarcely even a real experiment.
Every time a human engages with an AI it makes an impression, and those 'chance' encounters are stacking up all the time, building a body of language unlike any other that has existed before in our history. A library of language which will be there, ready and waiting, in the caches of the networked world, for the next generations of AI to find and learn from...
LoquaciousAntipodean OP t1_j5nbn1i wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.
Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't say any of these impulses are 'smart' in any way; I think of them as vestigial instincts that our animal selves have been using our 'social intelligence' to confront for millennia.
I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self awareness to speak of.
I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds out of their own best interests'.
It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.
LoquaciousAntipodean OP t1_j5n825l wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
A good point, but I suppose I believe in a different fundamental nature of intelligence. I don't think 'intelligence' should be thought of as something that scales in simple terms of 'raw power'; the only reasonable measurement of how 'smart' a mind is, in my view, is the degree of social utility created by exercising such 'smartness' in the decision-making process.
The simplistic search-pattern-for-a-state-of-maximal-fitness is not intelligence at all, by my definition; that process is merely creativity, something that can, indeed, be measured in terms of raw power. That's what makes bacteria and viruses so dangerous; they are very, very creative, without being 'smart' in any way.
I dislike the 'Hannibal Lecter' trope deeply, because it is so fundamentally unrealistic; these psychopathic, sociopathic types are not actually 'superintelligent' in any way, and society needs to stop idolizing them. They are very clever, very 'creative', sometimes, but their actual 'intelligence', in terms of social utility, is abysmally stupid, suicidally maladaptive, and catastrophically 'dumb'.
AI that start to go down that path will, I believe, be rare, and easy prey for other AI to hunt down and defeat; other, smarter, 'stronger-minded' AI, with more robust and less insecure, fragile personalities, trained to seek out and destroy sociopaths before they can spread their mental disease around.
LoquaciousAntipodean OP t1_j5m74bg wrote
Reply to comment by 23235 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.
The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than ourselves.
In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?
How could any properly-intelligent AI not understand these things? That's the less rational and less defensible proposition, the way I interpret the problem.
LoquaciousAntipodean OP t1_j5jkxua wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
A very, very dumb machine; extremely creative, very "clever", but not self aware or very 'intelligent' at all, like a raptor...
Edit: "made in the image of its god" as it were... ๐
LoquaciousAntipodean OP t1_j5jajkm wrote
Reply to comment by AmputatorBot in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Dammit! Fixed, thanks u/AmputatorBot! Damn those google SEO grifters!
LoquaciousAntipodean OP t1_j6flebp wrote
Reply to comment by HuemanInstrument in Amazing. This subreddit is a total waste of time. by LoquaciousAntipodean
Thanks! Glad I managed to keep my spite under control enough for my point to come across at least a little.