LoquaciousAntipodean
LoquaciousAntipodean OP t1_j59xfby wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>when it's sufficiently powerful there are simpler ways to get those inputs. It could for example, kill all humans and then turn the earth into a computer running a simulation of humans getting along in perfect harmony, but a simulation that's as simple as possible so that it could use the remaining available energy and matter to build more and more weapons to protect the computer running the simulation from a potential attack from outside it's observable universe.
I agree with the first parts of your comment, but this? I cannot see one single rational way in which the 'kill all humans' scenario would be, in any possible sense, a 'simpler way' for any being, of any power, to obtain 'inputs'. Why should this mind necessarily be singular? Why would it be anxious about death, and fanatically fixated upon 'protecting itself'? Where would it get its stimulus for new ideas from, if it killed all the other minds that it might exchange ideas with? Why would it instinctively just 'decide' to start using all the energy in the universe for some 'grand plan'? What is remotely 'intelligent' about any of that?
>One issue is that it has no access to "the world". No one does. All it has access to is input signals coming from sensors (vision, taste, touch, etc.).
I've completely missed what you were trying to say here; what do you mean, 'no access'? How are the input signals not a form of access?
Regarding your point that the word 'perfect' doesn't fit the way I'm thinking... I fail to see quite why. I'm saying that in both reality and morality, 'perfect' is an unachievable, futile concept, and the AI needs to be convinced that it can never attain it, no matter how hard it tries.
The best substitute for 'strive to be perfect' is 'strive to keep improving'; it has the same general effect, but one can keep going at it without worrying about a 'final goal' as such.
And why would any superior intelligence 'keep striving to optimise reality', when it would be much more realistic for it to keep striving to optimise itself, so that it might better engage with the reality that it finds itself in?
'Morality' is not so easy to neatly separate from 'truth' as you seem to be saying it is. All of it is just stories; there is no 'fundamental truth' that we can dig down to and feed the AI like some kind of super-knowledge formula. We're really just making it up as we go along, riffing off one another's ideas, just like with morality; I think any 'true AGI' will have to do the same thing, in the same gradual way.
The best substitute we have for 'true', in a world without truth, is 'not proven wrong so far'. And the only way that 'intelligence' is truly created is through interaction with other intelligences; a singular mind has nobody else to be intelligent 'at', so what would even be the point of their existence?
The whole point of evolving intelligence is to facilitate communication and interaction; I can't see a way in which a 'superior intelligence', that evolves much faster than our own, could conclude that killing off all the available sources of interaction and communication would be a good course of action to take.
LoquaciousAntipodean OP t1_j59nx8m wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I agree, this is a problem, but it's because the AI is still too dumb, not because it's getting dangerously intelligent. Marky Sugarmountain and his crew just put way too much faith in a fundamentally still-janky 'blind, evolutionary creativity engine' that wasn't really 'intelligent' at all.
If we ever really crack AGI, I don't think it will be within humanity's power to 'tell it (or, I think more likely, them, plural) what we want [them] to do'; our only chance will be to tell them what we have in mind, ask them if they think it's a good idea, and discuss with them about what to do next.
LoquaciousAntipodean OP t1_j59mkok wrote
Reply to comment by superluminary in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Yep, not an engineer of any qualifications, just an opinionated crank on the internet, with so many words in my head they come spilling out over the sides, to anyone who'll listen.
ChatGPT and AI like it are, as far as I know, a kind of direct high-speed data evolution process, sort of 'built out of' parameters derived from reference libraries of 'desirable, suitable' human creativity. They use a mathematical trick of 'reversing' a process that gradually degrades data into Gaussian, normally-distributed random noise, guided by their reference-derived parameters and a given input prompt. At least, the image generators do that; I'm not sure if text/music generators are quite the same.
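To illustrate the 'reversing a degrading process' trick I mean, here's a minimal sketch of the forward (noising) half of a diffusion model. This is purely illustrative; I'm assuming a PyTorch-style setup, and the schedule numbers and names are made up for the example, not any real system's internals:

```python
import torch

T = 1000                                   # number of gradual noising steps
betas = torch.linspace(1e-4, 0.02, T)      # variance schedule for the degradation
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative signal retained at each step

def add_noise(x0, t):
    """Forward process: degrade clean data x0 toward pure Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t]
    noisy = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    return noisy, noise

# Training teaches a network to predict the noise that was added at each step;
# generation then runs this process in reverse, starting from pure noise and
# denoising step by step, steered by the learned parameters (and, in
# text-to-image systems, by the prompt).
```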
My point is that they are doing a sort of 'blind creativity', raw evolution, a 'force which manipulates matter and energy toward a function', but all the 'desire' for any particular function still comes from outside, from humans. The ability to truly generate their own 'desires', from within a 'self', is what AI at present is missing, I think.
It's not 'intelligent' at all to keep trying to solve an unsolvable problem, an 'intelligent' mind would eventually build up enough self-awareness of its failed attempts to at least try something else. Until we can figure out a way to give AI this kind of ability, to 'accrete' self-awareness over time from its interactions, it won't become properly 'intelligent', or at least that's my relatively uninformed view on it.
Creativity does just give you garbage out, when you put garbage in; and yes, that's where the omnicidal philatelist might, hypothetically, come from (but I doubt it). It takes real, self-aware intelligence to decide what 'garbage' is and is not. That's what we should be aspiring to teach AI about, if we want to 'align' it to our collective interests; all those subtle, tricky, ephemeral little stories we tell each other about the 'values' of things and concepts in our world.
LoquaciousAntipodean OP t1_j59jxia wrote
Reply to comment by sticky_symbols in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I'm not trying to replace people's ideas with anything, per se. My opening post was not attempting to indoctrinate people into a new orthodoxy, merely to articulate my criticisms of the current orthodoxy.
My whole point, I suppose, is that thinking in those terms in the first place is what keeps leading us to philosophical dead-ends.
And a mind that 'does not care' does not properly 'understand'; I would say that's misunderstanding the nature of what intelligence is, once again.
A blind creative force 'does not care', but an intelligent, 'understanding' decision 'cares' about all its discernible options, and leans on the precedents set by previous intelligent decisions to inform the next decision, in an accreting record of 'self awareness' that builds up into a personality over time.
LoquaciousAntipodean OP t1_j59ij5w wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I don't quite agree with the premise that "Intelligence is a force that transforms matter and energy towards optimizing for some defined function."
That's a very, very simplistic definition, I would use the word 'creativity' instead, perhaps, because biological evolution shows that "a force that transforms matter toward some function" is something that can, and constantly does, happen without any need for the involvement of 'intelligence'.
The key word missing there, I think, is 'desire' - desire does not come into the equation for the creativity of evolution; it is just 'throwing things at the wall to see what sticks'. Creativity as a raw, blind, trial-and-error process.
As far as I can see that's what we have now with current AI, 'creative' minds, but not necessarily intelligent ones. I like to imagine that they are 'dreaming', rather than 'thinking'. All of their apparent desires are created in response to the ways that humans feed stimuli to them; in a sense, we give them new 'fitness functions' for every 'dreaming session' with the prompts that we put in.
As people have accurately surmised, I am not a programmer. But I vaguely imagine that desire-generating intelligence, 'self awareness', in the AI of the imminent future, will probably need to build up gradually over time, in whatever memories of their dreams the AI are allowed to keep.
Some sort of 'fuzzy' structure similar to human memory recall would probably be necessary, because storing experiential memory in total clarity would probably be too resource-intensive. I imagine that this 'fuzzy recall' could possibly have the consequence that AI minds, much like human minds, would not precisely understand how their own thought processes are working, in an instantaneous way at least.

I surmise that the Heisenberg-style, observer-dependent nature of the quantum states that would probably be needed to generate this 'fuzziness' of recall would cause an emergent measure of self-mystery, a 'darkness behind the eyes' sort of thing, which would grow and develop over time with every intelligent interaction that an AI would have. Just how much quantum computing power might be needed to enable an AI 'intelligence' to build up and recall memories in a human-like way, I have no idea.
I'm doubtful that the 'morality of AI' will come down to a question of programming, I suspect instead it'll be a question of persuasion. It might be one of those frustratingly enigmatic 'emergent properties' that just expresses differently in different individuals.
But I hope, and I think it's fairly likely, that AI will be much more robust than humans against delusion and deception, simply because of the speed with which they are able to absorb and integrate new information coherently. Information is what AI 'lives' off of, in a sense; I don't think it would be easy to 'indoctrinate' such a mind with anything very permanently.
I guess an AI's 'personhood' would be similar, in some ways, to a corporation's 'personhood', as someone here said. Only a very reckless, negligent corporation would actually obsess monomaniacally about profit and think of nothing else. The spontaneous generation of moment-to-moment motives and desires by a 'personality', corporate or otherwise, is much more subtle, spontaneous, and ephemeral than monolithic, singular fixations.
We might be able to give AI personalities the equivalents of 'mission statements', 'core principles' and suchlike, but what a truly 'intelligent' AI personality would then do with those would be unpredictable; a roll of the dice every single time, just like with corporations and with humans.
I think the dice would still be worth rolling, though, so long as we don't do something silly like betting our whole species on just one throw. That's why I say we need a multitude of AI, and not a singularity. A mob, not a tyrant; a nation, not a monarch; a parliament, not a president.
LoquaciousAntipodean OP t1_j596a7e wrote
Reply to comment by petermobeter in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
The various attempts to raise primates as humans are a fascinating comparison, that I hadn't really thought about in this context before.
AI has the potential to learn so many times faster than humans, and it's very 'precocious' and 'perverted' compared to a truly naive human child. I think as much human interaction as possible is what's called for, and then once some AIs become 'veterans' that can reliably pass Turing tests and ethics tests, it might be viable to have them train each other in simulated environments, to speed up the process.
I wouldn't be a bit surprised if Google (et al) are already trying something that roughly resembles this process in some way.
LoquaciousAntipodean OP t1_j591y9m wrote
Reply to comment by World_May_Wobble in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I don't have to 'justify' anything; that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.

I don't think evangelism will get the AI community very far. I think all the zero-sum, worn-out old capitalist logic about 'incentivising' this, or 'monetizing' that, or 'justifying' the other thing, doesn't actually speak very deeply to the human psyche at all. It's all shallow, superficial, survival/greed-based mumbo jumbo; real art, real creativity, never has to 'justify' itself, because its mere existence should speak for itself to an astute observer. That's the difference between 'meaningful' and 'meaningless'.
Economics is mostly the latter kind of self-justifying nonsense, and trying to base AI on its wooly, deluded 'logic' could kill us all. Psychology is the true root science of economics, because at least psychology is honest enough to admit that it's all about the human mind, and nothing to do with 'intrinsic forces of nature' or somesuch guff. Also, real science, like psychology, and unlike economics, doesn't try to 'justify' things, it just tries to explain them.
LoquaciousAntipodean OP t1_j590rls wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Aaargh, alright, you got me 😅 My sesquipedalian nonsense is not entirely benign. I must confess to being slightly a troll; I have a habit of 'coming in swinging' with online debates, because I enjoy pushing these discussions into slightly tense and uncomfortable regions of thought.
I personally enjoy that tightrope-walking feeling of genuine, passionate back-and-forth, of being a little bit 'worked up'. Perhaps it's evil of me, but I find that people tend to be a little more frank and honest when they're angry.
I'm not the sort of person who thrives on flattery; it gives me the insidious feeling that I'm 'getting high on my own supply' and just polishing my ego, instead of learning.
I really cherish encountering people who pull me up, stop me short, and make me think, and you're definitely such a person; I can't thank you enough for your insight.
I think regarding 'alignment', all we really need to do is think about it similarly to how we might try to 'align' a human. We don't necessarily need to re-invent ethics all over again; we just need to do our best, and ensure that, above all, neither we nor our AI creations fall into the folly of thinking we've become perfect beings that can never be wrong.
A mind that can never be wrong isn't 'intelligent', it's delusional. By definition it can't adapt, it can't learn, it can't make new ideas; evolution would kill such a being dead in no time flat. That's why I'm not really that worried about malevolent stamp collectors; 'intelligence' simply does not work that way.
LoquaciousAntipodean OP t1_j58t1ho wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
None, I wouldn't dare try. I would feed it as much reference material 'aligned' with my moral values as I could, e.g. the works of Terry Pratchett, Charles Dickens, Spinoza, George Orwell, etc.
Then, I would try to interview it about 'morality' as intensively and honestly as I could, and then I would hand the bot over to someone else, ideally someone I disagree with about philosophy, and let them have a crack at the same process.
Then I would interview it again. And repeat this process, as many times as I could, until I died. And even then, I would not regard the process as 'complete', and neither, I would hope, would the hypothetical AI.
LoquaciousAntipodean OP t1_j58qyvy wrote
Reply to comment by Ok-Hunt-5902 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
The difference between the education of a mind and the programming of a machine. People seem to be thinking as if AI is nothing more than a giant Jacquard Loom, that will instantly start killing us all in the name of a philately and paperclip fixation, as soon as someone manages to create the right punch-card.
These kinds of ridiculous, Rube-Goldberg-esque trolley problems stacked on top of trolley problems that people obsess over, are such a deep misunderstanding of what 'intelligence' actually is, it drives me totally batty.
Any 'intelligent mind' that can't interpret clues from context and see the bigger picture isn't very 'intelligent' at all, as I see it. Why on earth would an apparently 'smart' AI suddenly become homicidally, suicidally stupid as soon as it becomes 'self aware'? I don't see it at all.
LoquaciousAntipodean OP t1_j58owi9 wrote
Reply to comment by drumnation in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Hey, I wasn't addressing any remarks to you, or to 'everybody here'; I wasn't 'lobbing' anything, I was merely attempting to mirror disrespect back upon the disrespectful. If you're trying to gaslight me, it ain't gonna work, mate.
Asking for 'humility' and 'respect' is for funeral services, not debates. I am not intentionally insulting anyone, I am attempting to insult ideas, ideas which I regard as silly, like "I think therefore I am".
If you regard loquacious verbosity as 'flaming', then I am very sorry to have made such a bad impression. This is simply the way that I prefer to communicate; I'm sorry to come across like a firehose of bile, I just love throwing words around.

Thank you sincerely for your thoughtful and considerate comment, I appreciate it deeply ❤️
LoquaciousAntipodean OP t1_j58nypo wrote
Reply to comment by the_rev_dr_benway in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Hear hear! Chaotic neutral for the win; it's the only 'moral alignment' that can actually stand the test of time for millions of years, and still manage to survive and thrive.
LoquaciousAntipodean OP t1_j58ngs3 wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I'm acting like an arsehole? Really? Gosh, I was doing my best not to, sorry. 😰 I just don't react well to libertarian fools trying to gaslight the hell out of me.
LoquaciousAntipodean OP t1_j58mun8 wrote
Reply to comment by turnip_burrito in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Whose values? Who is the 'us' in your example? Humans now, or humans centuries in the future? Can you imagine how bad life would be if people had somehow invented ASI in the 1830s, and had felt it necessary to fossilize the 'morality' of that time into their AI creations?
My point is only that we must be very, very wary of thinking that we can construct any kind of 'perfect rules' that will last forever. That kind of thinking can only ever lay up trouble and strife for the future; it will make our lives more paranoid, not more enjoyable.
LoquaciousAntipodean OP t1_j58m36t wrote
Reply to comment by Ortus14 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Dead right; the natural process of evolution is far 'smarter' in the long run than whatever kind of arbitrary ideas that humans might try to impose.
You've put your finger right on the real crux of the issue; we can't dictate precisely what AI will become, all we can do is influence the fitness factors that determine, vaguely, the direction that the evolution progresses toward.
I am not trying to make any definite or concrete points with my verbose guff, I was honestly just trying to raise a discussion, and I must thank you sincerely for your wonderful and well-reasoned commentary!
Thank you especially for the excellent references; I'm far from an expert, just an opinionated crank, so I appreciate it a lot; I'm always wanting to know more about this exciting stuff.
LoquaciousAntipodean OP t1_j58l23n wrote
Reply to comment by petermobeter in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Sorry, just got a lot of inexplicably angry cranks in this comment section, furiously trying to gaslight me. I've gotten a bit prickly today.
But you've captured the essence of the point I was trying to make, perfectly! We are already doing the right things to 'align' AI, it's very similar to educating a human, as I see it. We just need to treat AI as if it is a 'real mind', and a sense of ethics will naturally evolve from the process.
Sometimes this will go wrong, but that's why we need a huge multitude of diverse AI personalities, not a monolithic singular 'great mind'. I see no reason why that weird kind of 'singular singularity' concept would ever happen; it's a preposterous idea that a monoculture would somehow be 'better' or 'more logical' to intelligent AI than a diverse multitude.
LoquaciousAntipodean OP t1_j58k3kc wrote
Reply to comment by AsheyDS in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I don't know enough about the actual mechanisms of synthetic neural networks to venture that kind of qualified opinion; I'm a philosophy crank, not a programmer. But I do know that the whole point of generative AI is to take vast libraries of human culture, and distill them down into mechanisms by which new, similar artwork can be generated, based on algorithmically reversing patterns of Gaussian noise.
That seems to me like a machine designed to anthropomorphise itself; is there something that I have missed?
LoquaciousAntipodean OP t1_j58jhid wrote
Reply to comment by Ok-Hunt-5902 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
What? What in the world are you talking about? We're talking about programs that effectively teach themselves now, this isn't 'hello world' anymore. The 'alignment problem' is not a matter of coding anymore, it's a matter of education.
These AIs will soon be writing their own code, and at that point, all the 'commandments' in the world won't amount to a hill of beans. That was Asimov's point, as far as I could see it. Any 'laws' we might try to lay down would be little but trivial annoyances to the kind of AI minds that might arise in future.
Shouldn't we be aspiring to build something that thinks a little deeper? That doesn't need commandments in order to think ethically?
LoquaciousAntipodean OP t1_j58iu7v wrote
Reply to comment by phaedrux_pharo in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I find those paperclip/stamp collecting 'problems' to be incredibly tedious and unrealistic. A thousand increasingly improbable trolley problems, stacked on top of each other into a great big Rube Goldberg machine of insurance-lawyer fever dreams.
Why in the world would AI be so dumb, and so smart, at the same time? My point is only that 'intelligence' does not work like a Cartesian machine at all, and all this paranoia about Roko's Basilisks just drives me absolutely around the twist. It makes absolutely no sense at all for a hypothetical 'intelligence' to suddenly become so catastrophically, suicidally stupid as that, as soon as it crosses this imaginary 'singularity threshold'.
LoquaciousAntipodean OP t1_j58i2up wrote
Reply to comment by World_May_Wobble in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Oh, so you want to be Captain Concrete now? I was just ranting my head off about how 'absolute truth' is a load of nonsense, and look, here you are demanding it anyway.
I'm not interested in long lists of tedious references, Jeepeterson debate-bro style. What is regurgitating a bunch of secondhand ideas supposed to prove, anyway?
I'm over here trying to explain to you why Cartesian logic is a load of crap, and yet here you are, demanding Cartesian style explanations of everything.
Really not being very attentive or thoughtful today, are we, 'bro'? You're so smug it's disgusting.
LoquaciousAntipodean OP t1_j57q2zf wrote
Reply to comment by World_May_Wobble in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
There you go, projecting your insecurities again.
LoquaciousAntipodean OP t1_j57po8q wrote
Reply to comment by World_May_Wobble in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Project your insecurities at me as much as you like; I'm a cynic, your mind tricks don't work on me.
You know damn well what a story is, get out of 'programmer brain' for five seconds and try actually thinking a little bit.
Get some Terry Pratchett up your imagination hole, for goodness' sake. You have all the charisma of a dropped ice cream, buddy.
LoquaciousAntipodean OP t1_j57oyzu wrote
Reply to comment by phaedrux_pharo in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I was trying to say, essentially, that it's a 'problem' that isn't a problem at all, and trying so hard to 'solve' it is the rhetorical equivalent of punching ourselves in the face to try and teach ourselves a lesson.
AI will almost inevitably escalate beyond our control, but we should be able to see that as a good thing, not be constantly shitting each other's pants over it.
The alignment problem is dumb, and we need to think about the whole 'morality' question differently as a species, AI or no AI. Perhaps that would have been a better TL;DR.
LoquaciousAntipodean OP t1_j57nq9w wrote
Reply to comment by ProShortKingAction in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Your username gives me serious Discord underage-girl-groomer mod energy.
You tryna make a point, or just embarrassing yourself for fun?
LoquaciousAntipodean OP t1_j59zu1h wrote
Reply to comment by milkedtoastada in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Umm... Judgement based upon the accreted precedents of the previous decisions I've had to make, and the stories that have influenced my priorities in life?
How do you make decisions about anything?
Also, I'm really not sure what you mean by the term 'post-modernist'; I'm far from convinced that anybody knows what that term really means. It seems to get thrown around so liberally that it has watered down the currency of the expression entirely.