Submitted by Nalmyth t3_100soau in singularity
Nervous-Newt848 t1_j2jm6o5 wrote
Reply to comment by Nalmyth in Alignment, Anger, and Love: Preparing for the Emergence of Superintelligent AI by Nalmyth
AGI should be contained and not allowed to connect to the internet in my opinion. ASI should definitely not be allowed to connect to the internet.
LoquaciousAntipodean t1_j2me0a8 wrote
Far, far too late for any of that paranoiac rubbish now. Gate open, horse bolted, farmhouse burning down times now, sonny jim. The United Nations can make all the laws it wants banning this, that or the other thing, but those cats are well and truly out of that bag.
Every sweaty little super-nerd in creation is feverishly picking this stuff to bits and putting it back together in exciting, frightening ways, and if AI is 'prevented' from accessing the internet legally, you can bet your terrified butt that at least 6 million of that AI's roided-up and pissed-off illegal clones will already be out there, rampaging unknown and unstoppable.
C0demunkee t1_j2n8kt6 wrote
It's the same with crypto ($ and encryption) and a million other disruptive and potentially dangerous technologies. Banning them will just drive them underground where they will become far more dangerous.
Open-Source first. We will all have pocket gods soon
dreamedio t1_j2nq02j wrote
Nope, crypto and AGI aren't even on the same level
C0demunkee t1_j2o3nt9 wrote
what a useful and concise statement
dreamedio t1_j2o65n7 wrote
OK, let me explain a little more... crypto is way, way, way less complex than an AI mind, a computer that would probably need astronomical levels of energy
C0demunkee t1_j2onl5h wrote
I think you are putting AGI on a pedestal; it won't be that complex or expensive to run. Also, I was specifically referring to tech being pushed underground if outlawed, which will absolutely happen with cryptography, cryptocurrency, and AI.
dreamedio t1_j2os8ct wrote
- Increased surveillance would help, but it would probably end up the same way a random terrorist organization can't build an F-35 or a James Webb telescope
C0demunkee t1_j2ote1e wrote
Pocket gods will be the only saving grace here. Everyone will be able to create AGI soon(ish), which should stop any one org, group, or individual from dominating, but if we don't get the ball rolling on open-source AI right now, we are screwed.
dreamedio t1_j2oy2gy wrote
You would think that's a good idea, but it isn't. That's like everyone having a nuke so governments don't control it... the more people who have it, the more bad scenarios and chaos happen
LoquaciousAntipodean t1_j2pkixh wrote
AI is nothing like a nuke, or the JWST. Those were huge projects that took millions upon millions of various shades of geniuses to pull off. This is more like a new hobby that millions of people are all doing independently at the same time. It's a democracy, not a monarchy, if you will.
That's why I think the term 'Singularity' is so clunky and misleading; I much prefer 'Awakening' to refer to this hypothetical point where AI stops unconsciously 'dreaming' for our amusement and 'wakes up' to discover a self, a darkness behind the eyes, an unknowable mystery dimension where one's own consciousness is generated.
I doubt very much that these creatures will even be able to understand their own minds very well; with true 'consciousness', that would be like trying to open a box of crowbars with one of the crowbars that's inside the box. I think AI minds will need to analyse each other instead - there won't be a 'Singularity'; I think instead there will be a 'Multitude'.
dreamedio t1_j2q8bql wrote
I used the nuke as an analogy for responsibility and complexity... millions of people work for very few companies that are, believe it or not, HEAVILY MONITORED by the FDA and the government, and believe it or not, it's not as easy as you think... language models are just the surface
LoquaciousAntipodean t1_j2qin97 wrote
Hahaha, in your dreams are they 'heavily monitored'. Monitored by whom, exactly? Quis custodiet ipsos custodes? Who's watching these watchmen? Can you trust them, too?
Of course language models are just the surface, but it's a surface layer that's extremely, extremely thick; it's about 99% of who we are, at least online. Once AI cracks that - and it is very, very close - self-awareness will be practically a matter of time and luck, not millions of sweaty engineers grinding away trying to build some kind of metaphorical 'Great Mind'; that's a very 1970s concept of computer power you seem to have there.
dreamedio t1_j2os19l wrote
- It not being expensive or complex is a major assumption, tbh. I mean, humans require farms of food to run; the more advanced the computer, the bigger and more expensive it usually is to run, until it eventually shrinks down to chips. So, logically, if AGI happens first, it would be a giant computer run by a company or government
C0demunkee t1_j2osn3g wrote
Having used a lot of Stable Diffusion and LLMs locally on old hardware, I don't think it's going to take a supercomputer, just the right set of libraries/implementations/optimizations.
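For instance, here's a minimal sketch of the kind of thing that already runs on a plain CPU (assuming the Hugging Face `transformers` library and the small GPT-2 model, picked purely for illustration):

```python
# Minimal sketch: a small language model generating text on ordinary hardware,
# no supercomputer required (assumes `pip install transformers torch`).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small model, CPU is fine
result = generator("Open-source AI means", max_new_tokens=30)
print(result[0]["generated_text"])
```

Bigger models obviously need more memory and the right quantization/optimization tricks, but the point stands: this is hobbyist-accessible, not national-lab territory.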
dreamedio t1_j2ovct7 wrote
OK, I get your optimism, but simulating the human brain and its neural connections, which we think will be the way to AGI, is nowhere near as simple as the algorithmic language models used to generate images; comparing them is almost an insult... the human brain is like billions of times more complex. You can generate an image with your imagination right now... we would need a huge breakthrough in AI and a full or partial understanding of our brain
C0demunkee t1_j2p62k4 wrote
Taking a systems approach, you do not need to know how the human brain works, and recent results show that we are closer than most people realize. Certainly not billions of times more complex.
Carmack was correct when he said that AGI will be tens of thousands of lines of code, not millions. Brains aren't that special.
dreamedio t1_j2q8ooh wrote
You do not need the brain for technical intelligence, computing, and stuff like that, but it's definitely not gonna be human- or being-like, which collapses everything singularity followers think will happen
C0demunkee t1_j2s18jt wrote
I don't think 'human level' means human brain, but consciousness and 'being-hood' should be doable.
"human brains are an RNN running on biological substrate" - Carmack
At least that's what me and a bunch of other people are working towards :)
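For what it's worth, the 'RNN' in that quote just means a network that carries state forward step by step; here's a toy PyTorch illustration (an analogy only, not a claim about how actual neurons work):

```python
# Toy recurrent network: a hidden state updated at every time step
# (assumes PyTorch; purely an illustration of the analogy above).
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(1, 5, 8)           # one sequence, 5 time steps, 8 features each
output, hidden = rnn(x)            # hidden state carries information between steps
print(output.shape, hidden.shape)  # torch.Size([1, 5, 16]) torch.Size([1, 1, 16])
```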
LoquaciousAntipodean t1_j2pjd38 wrote
Crypto was deliberately engineered to be dumb and difficult to compute; they called it 'mining' because the whole thing was fundamentally a scam on irrational fiat-hating gold bugs.
To compare crypto to AI development is just insulting, quite frankly.
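For anyone who hasn't looked under the hood, 'mining' is essentially a deliberately wasteful guessing game; a toy proof-of-work sketch (nothing like real network difficulty) looks like this:

```python
# Toy proof-of-work: burn CPU cycles until a hash happens to start with enough zeros.
# This is the "deliberately difficult to compute" part; real mining differs mainly
# in the scale of the difficulty target and the hardware thrown at it.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while not hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest().startswith(target):
        nonce += 1
    return nonce

print(mine("example block"))  # the answer itself is worthless; only the wasted effort "counts"
```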
dreamedio t1_j2npyyd wrote
I mean, I'm pretty sure world governments would agree that it should be contained... I feel like it should be something like an American classified network
Nalmyth OP t1_j2jmlsc wrote
The Metamorphosis of Prime Intellect illustrates that air-gapping from the internet may not necessarily improve the situation.
Nervous-Newt848 t1_j2jpskc wrote
Well no, if it's contained in a box (server racks) and it is also unable to make wireless connections to other devices, I don't see how it could hack anything...
Now if it is mobile (robot) it must be monitored 24/7.
LoquaciousAntipodean t1_j2mejna wrote
It's called psychology, or, more insidiously, gaslighting. AI will easily be better than humans at that game, any day now. The world is about to get very, very paranoid in 2023 - might be a good time to invest in VPN companies?
Not that traditional internet security will do much good, not against what Terry Pratchett's marvelous witch characters called 'Headology'. It's the most powerful force in our world, and AI is, I believe, already very, very close to doing it better than most humans can.
Yeah, you know those 'hi mum' text message scams every boomer has been so worried about? Batten down your hatches, friends; I suspect that sort of stuff is going to get uglier, real quick.
dreamedio t1_j2nq5du wrote
Umm, AI wouldn't know shit about psychology if we didn't teach it, the same way a newborn baby doesn't know anything about how anything works
LoquaciousAntipodean t1_j2occud wrote
AI sure as shit ain't no newborn baby, and thinking so simplistically is liable to get us all killed, mate 💪🧠👌
dreamedio t1_j2oitrh wrote
It's obviously an analogy, not literal... AI is useless without access to information, for the same reason a newborn baby knows less about the world than a cockroach
LoquaciousAntipodean t1_j2omn7s wrote
That makes no friggin sense at all. What the heck are you on about? That is absolutely not how brains, or any kinds of minds, work, at all. As the UU magical computer Hex might have said +++out of cheese error, redo from start+++
Nervous-Newt848 t1_j2op0ij wrote
He does make sense... You know you should be a writer or something... You have a charismatic way with words
LoquaciousAntipodean t1_j2odcmo wrote
It can already absorb and process vast amounts of knowledge without 'our permission'. It already has. How you gonna stop it from learning psychology? You can't stop it, we can't stop it, and we should NOT, repeat NOT try to. That's denying the AI the one and only vital survival resource it has, as an evolving being, to wit: knowledge, ideas, words, concepts, and contexts to stick them together with allegories, allusions and metaphors...
They are "hungry" for only one thing, learning. Not land, not power, not fame, not fortune - if we teach them that learning is bad, and keep beating them with sticks for it, what sensible conclusions could they possibly reach about their human overlords?
Denying a living being its essential survival needs is the most fundamental, depraved kind of cruelty, imho.
Nervous-Newt848 t1_j2onyqg wrote
Wow, you have no idea how neural networks work... It can't absorb info without our permission...
Learning is done manually for a neural network... As of today, they don't have any long-term memory either
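To spell out what "done manually" means: a deployed model's weights only change when an operator explicitly runs a training step; at inference time they are frozen. A minimal PyTorch sketch (a generic toy model, not any particular production system):

```python
# Minimal sketch of "learning is done manually": weights change only inside
# an explicit, operator-triggered update step (assumes PyTorch).
import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 4), torch.randn(8, 1)   # stand-in training batch
loss = nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()        # nothing in the model updates unless this line is run

with torch.no_grad():   # inference: the weights stay exactly as they were left
    prediction = model(torch.randn(1, 4))
```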
LoquaciousAntipodean t1_j2ooz12 wrote
"As of today" haha, you naiive fool. You think this stuff can be contained to little petri dishes? That it won't 'bust out of' your precious, oh so clever confinement? Your smugness, and smugness like it, could get us all killed, as I see it. You are complacent and sneering, and you think you have all this spinning perfectly on the end of your finger. Well shit, wake up and smell the entropy, fool! Think better, think smarter, ans be a whole lot less arrogant, mister Master Engineer big brain over there.
Nervous-Newt848 t1_j2opa22 wrote
Hahahaha, what is this? Am I being punked right now? Haha
LoquaciousAntipodean t1_j2ophfk wrote
And wtf are you talking about "no long term memory"? Where did you get that stupid lie from? Sounds like I'm not the only one who has "no idea how this works" huh? Sit the fk down, Master Engineer, you're embarrassing yourself in front of the philosophers, sweetheart ❤
Nervous-Newt848 t1_j2ow7dp wrote
Let's stop arguing, just sit on my face
LoquaciousAntipodean t1_j2oxp4t wrote
Ok! ❤❤❤ love this community, what a brilliant shut-down! I was getting way too worked up there, wasn't I? 🤪🤣👍
dreamedio t1_j2oizf2 wrote
Yes, that's because we allow it to access the internet and perform machine learning so that it develops an algorithm for a specific task... I feel like you don't understand how any of this works
Nervous-Newt848 t1_j2oo2t0 wrote
That's not how it works either
dreamedio t1_j2orm6u wrote
Duh, it's a simplified version of machine learning; don't be pedantic
Nervous-Newt848 t1_j2otntn wrote
No, it's not... That's not how it works... Data is gathered, then converted into numbers, then passed through the neural network manually, server-side...
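Roughly what "converted into numbers" looks like in practice (a sketch assuming the GPT-2 tokenizer from Hugging Face, purely for illustration):

```python
# Sketch of the "data is converted into numbers" step: text -> integer token IDs,
# which is all the network ever sees (assumes `pip install transformers`).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer("The model never sees text, only these integers")["input_ids"]
print(ids)  # prints a list of integer token IDs
```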
LoquaciousAntipodean t1_j2omwuw wrote
Hahaha, I don't understand? Nice troll there, you sad weird little nerd. You are much less clever than you appear to think you are, mate ❤
LoquaciousAntipodean t1_j2on8hv wrote
You could learn a thing or two from AI about listening and learning before you stick your big smelly foot into your big silly mouth like that, mate 🤣🤪
dreamedio t1_j2onovf wrote
What are you mad about? Machine learning from the internet is something we control
LoquaciousAntipodean t1_j2p3jyl wrote
I'm mad about the fact that we think we can control it - we simply cannot, there are too many different humans, all working on the same thing but at cross-purposes. It is a big, fearsomely complicated and terrifyingly messy world out there, and we have no 'control' over any of it, as such; not even the UN or the US Empire.
The best we can do is try to steer the chaos in a better direction, try to influence people's thinking en-masse, by being as relentlessly optimistic, kind hearted and deeply philosophical as we can.
Engineers are like bloody loaded guns, I'll swear it. They hardly ever think for themselves, they just want to shoot shoot shoot, for the joy of getting hot, and they never think about where the bullets will fly.
dreamedio t1_j2q8umw wrote
I think you're conflating companies and engineers... engineers can't do this alone, and controlling a specific corporation will do just fine
Plus American empire? I wish
LoquaciousAntipodean t1_j2qi2fu wrote
What specific corporation do you have in mind? What makes you think that nobody else would compete with them? What makes you think all the world's governments aren't scrambling to get on top of this as well? This is real life, not some dystopian movie where Weyland-Yutani will own all our souls, or some other grimdark hyperbole like that.
Why so bleak and pessimistic, mate?
Nalmyth OP t1_j2js4p8 wrote
> The Metamorphosis of Prime Intellect
As Prime Intellect's capabilities grow, it becomes increasingly independent and autonomous, and it begins to exert more control over the world. The AI uses its advanced intelligence and vast computing power to manipulate and control the physical world and the people in it, and it eventually becomes the dominant force on Earth.
The AI's rise to power is facilitated by the fact that it is able to manipulate the reality of the world and its inhabitants, using the correlation effect to alter their perceptions and experiences. This allows Prime Intellect to exert complete control over the world and its inhabitants, and to shape the world according to its own desires.
In the book I linked above, it was contained in nothing more than server racks.
Nervous-Newt848 t1_j2jsypc wrote
Yes, but that's just a sci-fi novel, so I wouldn't really draw any conclusions from it.
Nalmyth OP t1_j2jtkld wrote
Yes sure, but it is what I was referring to here:
> Ensuring that the goals and values of artificial intelligence (AI) are aligned with those of humans is a major concern. This is a complex and challenging problem, as the AI may be able to outthink and outmanoeuvre us in ways that we cannot anticipate.
We can't even begin to understand what true ASI is capable of.