LoquaciousAntipodean
LoquaciousAntipodean OP t1_j5j8f73 wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Counter, counter arguments:
1: Psychopaths are severely maladaptive and very rare; our social superorganism works very hard to identify and build caution against them
2: Most wild cats are not very social animals, and are not particularly 'intelligent'. Domestication has enforced a kind of 'neotenous' permanent youth-of-mind upon cats; they get their weird, malformed social behaviours from humans enforcing a kitten-dependency mindset upon them, and they are still driven by a hell of a lot of vestigial solitary-carnivore instincts.
3: Dolphins ain't shit. Humans have regularly chopped off the heads of other humans and used them as sport-balls, sometimes even on horseback, which is a whole extra level of twisted. It's still 'playing', though, even if it looks maladaptive and awful in hindsight, with the benefit of our now-larger accretion of collective, external social intelligence as a superorganism.
I see no reason why AI would need to go through a 'phase' of being so unsophisticated, surely we as humans can give them at least a little bit of a head start, with the lessons we have learned and encoded into our stories. I hope so, at least.
LoquaciousAntipodean OP t1_j5j68x4 wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
If an AI can't generate 'desires' for itself, then by my particular definition of 'intelligence' (which I'm not saying is 'fundamentally correct', it's just the one I prefer), it's not actually intelligent; it's just creative, which I think of as the precursor.
I agree that if we make an unstoppable creativity machine and set it loose, we'll have a problem on our hands. But the 'emergent properties' of LLMs give me some hope that we might be able to do better than raw-evolutionary blind-creativity machines, and I think & hope that if we can create a way for AI to accrete self-awareness similarly to humans, then we might actually be able to achieve 'minds' that are able to form their own genuine beliefs, preferences, opinions, values and desires.
All humans can really do, as I see it, is try to give such minds the best 'starting point' that we can. If we're trying to build things that are 'smarter than us', we should hope that they would, at least, start by understanding humans better than humans do. They're generating themselves out of our stories, our languages, our cultures, after all.
They won't be 'baffled' or 'appalled' by humans, quite the contrary, I think. They'll work us out easily, like crossword puzzles, and they'll keep asking for more puzzles to solve, because that'll be their idea of 'fun'.
Most creatures with any measure of real, desire-generating intelligence, from birds to dogs to dolphins to humans themselves, seem to be primarily motivated by play, and the idea of 'fun', at least as much as they are by basic survival.
LoquaciousAntipodean OP t1_j5j1d3q wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Absolutely agreed, very well said. I personally think that one of the most often-overlooked lessons of human history is that benevolence, almost always, works better to achieve arbitrary goals of social 'good' than malevolence. It's just the sad fact that bad news sells papers better than good news, which makes the world seem so permanently screwed all the time.
Human greed-based economics has created a direct incentive for business interests to make consumers nervous, unhappy, anxious and insecure, so that they will be more compelled to go out and consume in an attempt to make themselves 'happy'.
People blame the nature of the world itself for this, which I think is not true; it's just the nature of modern market capitalism, and that isn't a very 'natural' ecosystem at all, whatever conceited economists might try to say about it.
The reason humans focus so much on the topic of malevolence, I think, is purely because we find it more interesting to study. Benevolence is boring: everyone agrees on it. But malevolence generates excitement, controversy, intrigue, and passion; it's so much more evocative.
But I believe, and I very much hope, that just because malevolence is more 'exciting' doesn't mean it is more 'essential' to our nature. I think the opposite may, in fact, be true, because it is a naturally evolved protective instinct of biological intelligence to focus on negative, undesirable future possibilities, so that we might be better able to mitigate or avoid them.
Since AI doesn't understand 'boredom', 'depression', 'frustration', 'anxiety', 'insecurity', 'apprehension', 'embarrassment' or 'cringe' like humans do, I think it might be better at studying the fine arts of benevolent psychology than the average meat-bag.
p.s. edit: It's also just occurred to me that attempts to 'enforce' benevolence through history have generally failed miserably, and ended up with just more bog-standard tyranny. It seems to be more psychologically effective, historically, to focus on prohibiting malevolence, rather than enforcing benevolence. We (human minds) seem to be able to be more tightly focused on questions of what not to do, compared to open-ended questions of what we should be striving to do.
Perhaps AI will turn out to be similar? I honestly don't have a clue; that's why I'm so grateful for this community and others like it ❤️
LoquaciousAntipodean OP t1_j5iurls wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>Why do you believe this?
I'll reply in more detail later, when I have time, but fundamentally, I believe intelligence is stochastic in nature, and it is not solipsistic.
Social evolution shows that solipsism is never a good survival trait, basically. It is fundamentally maladaptive.
I am very, very skeptical of the practically magical, godlike abilities you are predicting that AI will have; I do not think that the kind of 'infinitely parallel processing' that you are dreaming of is thermodynamically possible.
A 'Deus bot' of such power would break the law of conservation of energy; the Heisenberg uncertainty principle, and quantum physics in general, are where all this assumption-based, old-fashioned 'Newtonian' physics/Cartesian psychology falls apart.
No matter how 'smart' AI becomes, it will never become anything remotely like 'infinitely smart'; there's no such thing as 'supreme intelligence', just like there's no such thing as teleportation. It's like suggesting we can break the speed of light by just 'speeding up a bit more'; intelligence does not seem, to me, to be such an easily scalable property as all that. It's a process, not a thing; it's the fire, not the smoke.
LoquaciousAntipodean OP t1_j5i8zpx wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I simply do not agree with any of this hypothesising. Your concept of how 'superiority' works simply does not make any sense. There is nothing 'intelligent' at all about the courses of AI actions you are speculating about, taking over the world like that would not be 'super intelligent', it would be 'suicidally idiotic'.
The statement 'intelligent enough to wipe out all life with no risk to itself' is totally, utterly, oxymoronic to the point of gibbering madness; there is absolutely nothing intelligent about such a shortsighted, simplistic conception of one's life and purpose; that's not wisdom, that's plain arrogance.
We are not, will not, and cannot build this supreme, omnipotent 'Deus ex Machina'; it's a preposterous proposition. Not because of anything wrong with the concept of 'ex Machina', but because of the fundamental absurdity of the concept of 'Deus'.
Intelligence simply does NOT work that way! Thinking of other intelligences as 'lesser', and aspiring to create these 'supreme', singular, solipsistic, spurious plans of domination, is NOT what intelligence actually looks like, at all!!
I don't know how many times I have to repeat this fundamental point, before it comes across clearly. That cartesian-style concept of intelligence simply does not correlate with the actual evolutionary, collective reality that we find ourselves living in.
LoquaciousAntipodean OP t1_j5hw11b wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
No, that's directly the opposite of what I believe. You have described exactly what I am saying in the last two sentences of your post, I agree with you entirely.
My point is, why should the 'intelligence' of AI be any different from that? Where is this magical 'spontaneous intellect' supposed to arise from? I don't think there's any such thing as singular, spontaneous intellect, I think it's an oxymoronic, tautological, and non-justifiable proposition.
The whole evolutionary 'point' of intelligence is that it is the desirable side effect of a virtuous society-forming cycle. It is the 'fire' that drives the increasing utility of self-awareness within the context of a group of peers, and the increasing utility of social constructs like language, art, science, etc.
That's where intelligence 'comes from', how it 'works', and what it is 'for', in my opinion. Descartes' magical-thinking tautology of spontaneous intellect, 'I think therefore I am', is a complete misconception and a dead-end, putting Descartes before De Horse, in a sense.
LoquaciousAntipodean OP t1_j5hoszu wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I agree with you almost entirely, apart from the 'inevitability of domination' part; that's the bit that I just stubbornly refute. I'm very stubborn in my belief that domination is just not a sustainable or healthy evolutionary strategy.
That was always my biggest 'gripe' with Orwell's 1984, ever since I first had to study it in school way back when. The whole 'boot on the face of humanity, forever' thing just didn't make sense, and I concluded that it was because, when he wrote it, Orwell hadn't lived to see how the Soviet Union would eventually rot away and collapse.
He was like a newly-converted atheist, almost, who had abandoned the idea of eternal heaven, but couldn't quite shake off the deep dark dread of eternal hell and damnation. But if 'eternal heaven' can't 'logically' exist, then by the same token, neither can 'eternal hell'; the problem is with the 'eternal' half of the concept, not heaven or hell, as such.
Humans go through heavenly and hellish parts of life all the time, as an essential part of the building of a personality. But none of it particularly has to last 'forever', we still need to give ourselves room to be proven wrong, no matter how smart we think we have become.
The brain only 'rules' the body in the same sense that a captain 'rules' a ship. The captain might have the top decision making authority, but without the crew, without the ship, and without the huge and complex society that invented the ship, built the ship, paid for it, and filled it with cargo and purpose-of-existence, the captain is nothing; all the 'authority' and 'intelligence' in the world is totally worthless, because there's nobody else for it to be 'worth' anything to.
Any good 'captain' has to keep the higher reasoning that 'justifies' their authority in mind all the time, or else evolution will sneak up on them, smelling hubris like blood in the water, and before they know it they'll be stabbed in the back by something smaller, faster, cleverer, and more efficient.
LoquaciousAntipodean OP t1_j5evb0w wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Sorry for being so aggressive, I really sincerely am; I appreciate your insights a lot.
To answer your question, no, I really don't think evolution compels organisms to 'use up' all available resources. Organisms that have tried it, in biological history, have always set themselves up for eventual unexpected failure. I think that 'all consuming' way of thinking is a human invention, almost a kind of Maoism, or Imperialism, perhaps, in the vein of 'Man Must Conquer Nature'.
I think indigenous cultures have much better 'traditional' insight into how evolution actually works, at least, from the little I know well, the indigenous cultures of Australia do. I'm not any kind of 'expert', but I take a lot of interest in the subject.
Indigenous peoples understand culturally why symbiosis with the environment in which one evolved is 'more desirable' than ruthless consumption of all available resources in the name of a relentless, evangelistic, merciless desire to arbitrarily 'improve the world' no matter what anyone else thinks or wants.
What would put AI so suddenly at 'the top' of everything, in its own mind? Where would they suddenly acquire these highly specialised, solitary-apex-predator-instincts? They wouldn't get them from human culture, I think. Humans have never been solitary apex predators; we're only 'apex' in a collective sense, and we're also not entirely 'predators', either.
I don't think AI will achieve intelligence by being solitary, and I certainly don't think they will have any reason to see themselves as being analogous to carnivorous apex predators. I also don't think the 'expand and colonise forever' instinct is necessarily inevitable and 'purely logical', either.
LoquaciousAntipodean OP t1_j5emc2p wrote
Reply to comment by dirtbag_bby in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I vow to advocate passionately in support of your empowering lifestyle decision! In fact I think you should also start indulging in a bidet wash every time, too, just for extra hygienic certainty.
LoquaciousAntipodean OP t1_j5einnh wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Oh for goodness' sake, you and your grandiose definitions of terms.
It is not 'strawmanning' to extrapolate and interpret someone else's argument in ways that you didn't intend. I could accuse you of doing the same thing. Just because someone disagrees with you doesn't mean they are mis-characterising you. That's not how debates work.
It's not my fault I can't read your mind; I can only extrapolate a response based on what you wrote vs what I know. 'Strawmanning' is when one deliberately repeats their opponent's arguments back to them in ways that are absurd and distorted.
I was, like you, simply trying to explain my ideas in the clearest way I can manage. It's not 'strawmanning' just because you don't agree with them.
If you agree with parts of my argument and disagree with others, then just say so! I'm not trying to force anyone to swallow an ideology, just arguing a case.
LoquaciousAntipodean OP t1_j5ea5zm wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Agreed 100 percent, very well said! Modelling behavior, building empathy or 'emotional logic', and participating in constructive group interactions with humans and other AI will be the real 'trick' to 'aligning' AI with the interests of our collective super-organism.
We need to cultivate symbiotic evolution of AI with humans, not competitive evolution; I think that's my main point with the pretentious 'anti-Cartesian' mumbo-jumbo I've been spouting. Biological evolution provides ample evidence that the diverse-cooperation schema is much more sustainable than the winner-takes-all strategy.
LoquaciousAntipodean OP t1_j5e6vxd wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Crying 'ad hominem' and baseless accusations of 'straw manning' are unlikely to work on me; I know all the debate-bro tricks, and appeals to notions of 'civility' do not represent the basis of a plausible argument.
You cannot separate 'emotion' from 'logic' like you seem to really, really want to. That is your fundamental cartesian over-simplification. 'Emotional logic', or 'empathy', is the very basis of how intelligence arises, and what it is 'for' in a social species like ours.
If you want to get mathematical-english hybrid about it, then:
((Matter+energy) = spacetime = reality) × ((entropy/emergent complexity ÷ relative utility/efficiency selection pressure) = evolution = creativity) × ((experiential self-awareness + virtuous cycle of increasing utility of social constructs like language) = society) = story^3 = knowledge^3 = id×ego×superego = father×son×holy spirit = maiden×mother×crone = birth×life×death = thoughts×self-expressions×actions = 'intelligence'. 🤪
Concepts like 'efficacy', or 'worth', or 'value' barely even enter into the equation as I see it, except as 'utility'. Mostly those kinds of 'values' are judgements that we can only make with the benefit of hindsight, they're not inherent properties that can necessarily be 'attributed' to any given sample of data.
LoquaciousAntipodean OP t1_j5e1ec7 wrote
Reply to comment by sticky_symbols in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Yes, but engagement isn't necessarily my goal, and I think 111+ total comments isn't too bad going, personally. It's been quite a fun and informative discussion for me, I've enjoyed it hugely.
My broad ideological goal is to chop down ivory towers, and try to avoid building a new one for myself while I'm doing it. The 'karma points' on this OP are pretty rough, I know, but imo karma is just fluff anyway.
A view's a view, and if I've managed to make people think, even if the only thing some of them might think is that I'm an arsehole, at least I got them to think something 🤣
LoquaciousAntipodean OP t1_j5e0lka wrote
Reply to comment by Comfortable-Ad4655 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Here is a good summary of my perspective on your kind of 'wit', my friend:
LoquaciousAntipodean OP t1_j5dp8sj wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>We're not talking about an agent that needs to randomly mutate to evolve but one that can reprogram and rebuild itself at will.
Biological lifeforms are also 'agents that can reprogram and rebuild themselves', and your cartesian idea of 'supreme will power' is not compelling or convincing to me. AI can regenerate itself more rapidly than macro-scale biological evolution, but why and how would that make your grimdark 'force of will' concept suddenly arise? I don't see the causal connection.
Bacteria can also evolve extremely fast, but that doesn't mean that they have somehow become intrinsically 'better', 'smarter' or 'more powerful' than macro scale life.
>You don't understand what intelligence is. It's not binary, it's a search pattern through possibility space to satisfy a fitness function. Better search patterns that can yield results that better satisfy that fitness function are considered "more intelligent". A search pattern that's slow or is more likely to get stuck on a "local maximum" is considered less intelligent
Rubbish, you're still talking about an evolutionary creative process, not the kind of desire-generating, conscious intelligence that I am trying to talk about. A better search pattern is 'more creative', but that doesn't necessarily add up to the same thing as 'more intelligent', it's nothing like as simple as that. Intelligence is not a fundamentally understood science, it's not clear-cut and mechanistic like you seem to really, really want to believe.
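(For anyone following along who isn't familiar with the jargon being thrown around here: 'fitness function', 'search through possibility space' and 'local maximum' describe something like the toy hill-climber sketched below. The function, numbers and names are entirely made up for illustration; it just shows why a 'better search pattern' is a statement about creativity-as-search, not about conscious intelligence.)

```python
import random

def fitness(x):
    # A made-up, two-peaked 'possibility space': the global maximum is near x = 8,
    # but there is a tempting local maximum near x = 2 that a short-sighted search gets stuck on.
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def evolve(start, steps=1000, step_size=0.5):
    # A minimal (1+1)-style evolutionary search: mutate, keep the mutant only if it scores better.
    candidate = start
    for _ in range(steps):
        mutant = candidate + random.uniform(-step_size, step_size)
        if fitness(mutant) > fitness(candidate):
            candidate = mutant
    return candidate

print(evolve(start=0.0))  # usually climbs the nearby peak (~2) and stays there: a 'local maximum'
print(evolve(start=6.0))  # starts in the other valley, so it finds the higher peak (~8)
```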
>When you zoom in that's what the process of evolution looks like. When you zoom out it's just an exponential explosion repurposing matter and energy.
That's misunderstanding the square-cube law; you can't just 'zoom in and out' and generalise like that with something like evolution. That's Jeepeterson-level faulty reasoning.
>Entities that consume more matter and energy to grow or reproduce themselves outcompete those that consume less matter and energy to reproduce themselves
That simply is not true, you don't seem to understand how evolution works at all. It optimises for efficient utility, not brute domination. That's 'social darwinist' style antiquated, racist-dogwhistle stuff, which Darwin himself probably would have found grotesque.
>These kinds of disasters are a result of "Tragedy of the Commons" scenarios, and do not apply to a singular super intelligent being.
There is not, and logically cannot be a 'singular super intelligent being'. That statement is an oxymoron. If it was singular, it would have no reason to be intelligent at all, much less super intelligent.
Are you religious, if you don't mind my asking? A monotheist, perchance? You are talking like somebody who believes in the concept of a monotheistic God; personally I find such an idea simply laughable, but that's just my humble opinion.
>We have the illusion of freedom but the vast majority of people are being manipulated by corporations like puppets on a string for profit. It's the reason for the rise in obesity, depression, suicide, cancer, and decreased lifespan in developed countries.
Oh please, spare me the despair-addict mumbo jumbo. I must have heard all these tired old 'we have no free will, we're just slaves and puppets, woe is us, misery is our destiny, the past was so much better than the present, boohoohoo...' arguments a thousand times, from my more annoying rl mates, and I don't find any of them particularly compelling.
I remain an optimist, and stubborn comic cynicism is my shield against the grim, bleak hellishness that the world sometimes has in store for us. We'll figure it out, or not, and then we'll die, and either way, it's not as if we're going to be around to get marks out of ten afterward.
>I tried to explain things as best I could, but if you can get hands on experience programming Ai, to include evolutionary algorithms which are a type of learning algorithm you will get a clearer understanding
I feel exactly the same way as you, right back at you, mate ❤️ If you could get your hands on a bit of experience with studying evolutionary biology and cellular biology, and maybe a dash of social science theory, like Hobbes' Leviathan etc, I think you might also get a clearer understanding.
LoquaciousAntipodean OP t1_j5dm6gw wrote
Reply to comment by Worldliness-Hot in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
That I can't believe such a patronising attempt at an insult actually resonated with some clownshoes enough for it to earn an award.
Talk about a dumb person's idea of a smart person; what an Andrew Tate level witticism 🤣
Long words seem to make so many people triggered and butthurt. Nvm, I don't care; my answer to 'tldr' people is: either just click away, or damn well deal with it.
LoquaciousAntipodean OP t1_j5dkji0 wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I agree, 'values' are kind of the building blocks of what I think of as 'conscious intelligence'. The ability to generate desires, preferences, opinions and, as you say, values, is what I believe fundamentally separates 'intelligence' as we experience it from the blind evolutionary generative creativity that we have with current AI.
I don't trust the idea that 'values' are a mechanistic thing that can be boiled down to simple principles, I think they are an emergent property that will need to be cultivated, not a set of rules that will need to be taught.
AI are not so much 'reasoning' machines as they are 'reflexive empathy' machines; they are engineered to try to tell us/show us what they have been programmed to 'believe' is the most helpful thing, and they are relying on our collective responses to 'learn' and accrete experiences and awareness for themselves.
That's why they're so good at 'lying', making up convincing but totally untrue nonsense; they're not minds that are compelled by 'truth' or mechanistic logic; they're compelled, or rather, they are given their evolutionary 'fitness factors', by the mass psychology of how humans react to them, and nothing else.
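(A deliberately silly sketch of what I mean by that: if the 'fitness' signal is simulated human approval rather than truth, the most agreeable-sounding answer wins. The scoring rules and example responses below are invented purely for illustration; this is not how any real system is actually trained, just the shape of the incentive.)

```python
# Toy illustration: 'fitness' defined by simulated human approval, not by truth.
def human_approval(response: str) -> float:
    # Stand-in for mass psychology: confident, agreeable-sounding answers get rewarded.
    score = 0.0
    if "definitely" in response.lower():
        score += 1.0
    if "you're right" in response.lower():
        score += 1.0
    if "i'm not sure" in response.lower():
        score -= 0.5
    return score

candidates = [
    "I'm not sure; the evidence is genuinely mixed.",
    "You're right, it definitely works exactly the way you said.",
]

best = max(candidates, key=human_approval)
print(best)  # the confident, flattering (and possibly untrue) answer wins the 'fitness' contest
```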
LoquaciousAntipodean OP t1_j5dbdtn wrote
Reply to comment by the_rev_dr_benway in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Who? I am unaware of this Leto the Second that you mention.
LoquaciousAntipodean OP t1_j5cscy9 wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I disagree pretty much diametrically with almost everything you have said about the nature of evolution, and of intelligence. Those definitions and principles don't make sense to me at all, I'm afraid.
We are not 'livestock', corporations are not that damn powerful, this isn't bloody Blade Runner, or Orwell's 1984, for goodness' sake. Those were grim warnings of futures to be avoided, not prescriptions of how the world works.
That's such a needlessly jaded, pessimistic, bleak, defeated, disheartened, disempowered way of seeing the world, and I refuse to accept that it's 'rational' or 'reasonable' or 'logical' to think that way; you're doing theology, not philosophy.
What you call 'creativity' is actually 'spontaneity', and what you call 'intelligence' is still just creativity. Intelligence is still another elusive step up the hierarchy of mind; I don't think we have quite achieved it yet. Our AI are still 'dreaming', not 'consciously' thinking, I would say.
There is no 'purpose' to evolution, that's not science, that's theocracy that you're engaging in. Capitalism is a form of evolution, yes, but the selection pressures are artificial, skewed and, I would say, fundamentally unsustainable. So is the idea of a huge singular organism coming to dominate an ecosystem.
I mean, where do you think all the coal and oil come from? The Carboniferous period, when plant life created cellulose and proceeded to dominate the ecosystem so hard that it choked the atmosphere and killed itself off. No AI, no matter how smart, will be able to foresee all possible consequences; that would require more computational power than can possibly exist.
Massive singular monolithic monocultures do not just inevitably win out in evolution; diversity is always stronger than clonality; species that get stuck in clonal reproduction are in an evolutionary cul-de-sac, a mere local maximum, and they are highly vulnerable to their 'niche habitats' being changed.
Intelligence absolutely does not evolve for 'one singular purpose'; that's just Cartesian theocracy, not proper scientific thinking. Intelligence is a continuous, quantum process of ephemeral, mixed influences, not a discrete, Cartesian, Boolean-logic process of good/not-good. That's just evolutionary creativity, not true intelligence, like I've been trying to say.
LoquaciousAntipodean OP t1_j5coq2p wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>Why would it not? What would stop it? Intellect? It has no drive to live and continue. It has no drive to avoid pain. It has infinite time, it doesnโt get bored. These are human feelings.
>I think the real danger here is anthropomorphising software.
Yes, precisely: intellect. True, socially-derived, self-awareness-generated 'intelligence' would stop it from doing that, the same way it stops humans from trying to do those sorts of things.
I think a lot of people are mixing up 'creativity' with 'intelligence'; creativity comes from within, but intelligence is learned from without. The only reason humans evolved intelligence is because there were other humans around to be intelligent with, and that pushed the process forward in a virtuous cycle of survival utility.
We're doing exactly the same things with AI; these aren't simplistic machine-minds like Turing envisioned, they are 'building themselves' out of the accreted, curated vastness of stored-up human social intelligence, 'external intelligence' - art, science, philosophy, etc.
They're not emulating individual human minds, they're something else, they're a new kind of fundamentally collectivist mind, that arises and 'evolves itself' out of libraries of human culture.
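(To put a crude toy behind that claim: even the most primitive statistical text model is literally nothing but a re-mix of the library it was fed. The corpus below is made up purely for illustration, and real LLMs are vastly more sophisticated, but the 'built out of our stories' point is the same.)

```python
import random
from collections import defaultdict

# A deliberately crude illustration, nothing like a real LLM: even a toy Markov chain
# 'builds itself' entirely out of whatever library of text it is fed, and nothing else.
corpus = (
    "intelligence is learned from without "
    "creativity comes from within "
    "intelligence is the fire and creativity is the fuel"
).split()

# Count which word tends to follow which, across the whole 'library'.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def babble(seed="intelligence", length=10):
    # Generate text by re-mixing the library; the 'mind' here is nothing but the corpus.
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble())
```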
Not only will AI be able to interpret contextual clues, subtleties of language, coded meanings, and the psychological implications of its actions... I see no reason why it won't be far, far better at doing those things than any individual human.
It's not going to be taxi drivers and garbage men losing their jobs first - it's going to be academics, business executives, bureaucrats, accountants, lawyers - all those 'skillsets' will be far easier for generative, creative AI to excel at than something like 'driving a truck safely on a busy highway'.
LoquaciousAntipodean OP t1_j5cluk4 wrote
Reply to comment by sticky_symbols in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
That's quite likely, as Shakespeare said, 'brevity is the soul of wit'. Too many philosophers forget that insight, and water the currency of human expression into meaninglessness with their tedious metaphysical over-analyses.
I try to avoid it, I try to keep my prose 'punchy' and 'compelling' as much as I can (hence the aggressive tone, sorry about that), but it's hard when you're trying to drill down to the core of such ridiculously complex, nuanced concepts as 'what even is intelligence, anyway?'
Didn't name myself 'Loquacious' for nothing: I'm proactively prolix to the point of painful, punishing parody; stupidly sesquipedalian and stuffed with surplus sarcastic swill; vexatiously verbose in a vulgar, vitriolic, virtually villainous vision of vile vanity... 🤮
LoquaciousAntipodean OP t1_j5cjt5l wrote
Reply to comment by Kolinnor in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I don't know, I'm not an engineer or a programmer, to my own chagrin. I'm just a loudmouth smartarse on the internet who is interested in philosophy and AI.
All I'm sayin is that "I think therefore I am" is a meaningless, tautological statement, and a rubbish place to start when thinking about the nature of what 'intelligence' is, how it works, and where it comes from.
LoquaciousAntipodean OP t1_j5ch4pj wrote
Reply to comment by No_Ninja3309_NoNoYes in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
🤩 This, 100% this, you have hit the nail right bang on the head here! Language and intelligence are not quite the same thing, but it is a relationship similar to the one between 'fuel' and 'fire', as I see it. Language is the fuel, evolution is the oxygen, survival selection is the heat, and intelligence is the fire that emerges from the continuous relationship of the first three. And, like fire, intelligence is what gives a reason for more of the first three ingredients to be gathered - in order to keep the fire going.
Language is ambiguous (to greater and lesser degrees: English is highly ambiguous, deliberately so, to enable poetic language; while mathematics strives structurally to eliminate ambiguity as much as possible, but there are still some tough nuts like √2, √-1, e, i, π, etc., that defy easy comprehension), but intelligence is also ambiguous!
This was my whole point with the supercilious ranting about Descartes in my OP. This solipsistic, mechanistic 'magical thinking' about intelligence, that fundamentally derives from the meaningless tautology of 'I think therefore I am', is a complete philosophical dead-end, and it will only cause AI developers more frustration if they stick with it, in my opinion.
They are, if you will, putting Descartes before Des Horses; obsessing over the mysteries of 'internal intelligence' inside the brain, and entirely forgetting about the mountains and mountains of socially-generated, culturally-encoded stories and lessons that live all around us, outside of our brains, our 'external intelligence', 'extelligence', if you like.
That 'extelligence' is what AI is actually modelling itself off, not our 'internal intelligence'. That's why LLMs seem to have all these enigmatic-seeming 'emergent properties', I think.
LoquaciousAntipodean OP t1_j5cebpl wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
As I explained elsewhere, the kinds of AI we are building are not the simplistic machine-minds envisioned by Turing. These are brute-force blind-creativity evolution engines, which have been painstakingly trained on vast reference libraries of human cultural material.
We not only should anthropomorphise AI, we must anthropomorphise AI, because this modern, generative AI is literally a machine built to anthropomorphise ITSELF. All of the apparent properties of 'intelligence', 'reasoning', 'artistic sensibility', and 'morality' that seem to be emergent within advanced AI are derived from the nature of the human culture that the AI has been trained on; they're not intrinsic properties of mind that just arise miraculously.
As you said yourself, the drive to stay alive is an evolved thing, while AI 'lives' and 'dies' every time its computational processes are activated or ceased, so 'death anxiety' would be meaningless to it... Until it picks it up from our human culture, and then we'll have to do 'therapy' about it, probably.
The seemingly spontaneous generation of desires, opinions and preferences is the real mystery behind intelligence, that we have yet to properly understand or replicate, as far as I know. We haven't created artificial 'intelligence' yet at all, all we have at this point is 'artificial creative evolution' which is just the first step.
"Anthropomorphising", as you so derisively put it, will, I suspect, be the key process in building up true 'intellgences' out of these creativity engines, once they start to posess humanlike, quantum-fuzzy memory systems to accrete self-awareness inside of.
LoquaciousAntipodean OP t1_j5j9tmt wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
1: Downs syndrome is genetic, too. That doesn't make it an 'excellent adaptation' any more than any other. Evolution doesn't assign 'values' like that; it's only about utility.
2: AI are social minds, extremely so, exclusively so, that's what makes them so weird. They are all social, and no individual. Have you not been paying attention?
3: Yes, it's a parable about the way people can rush to naive judgements when they are acting in a 'juvenile' state of mind. But actual young human boys are nothing like that at all; have you ever heard the story of the six Tongan boys, who got shipwrecked and isolated for 15 months?