EulersApprentice
EulersApprentice t1_j901csi wrote
Reply to comment by EVJoe in Microsoft Killed Bing by Neurogence
America has collectively blinked at the NSA's shenaniganry an awful lot by now. How much more do you expect from us in the blink-at-them department?
EulersApprentice t1_j8h7ngh wrote
Reply to comment by tangent26_18 in Is society in shock right now? by Practical-Mix-4332
>Will we become hypersensitive to minor flaws rather than appreciative of excellence?
That was already the status quo before ChatGPT came along.
EulersApprentice t1_j8h7gg4 wrote
Reply to comment by fctu in Is society in shock right now? by Practical-Mix-4332
See, the problem is the top echelons of society have their wealth in an indestructible unobtainium vault. Not even governments are powerful enough to break into that vault – there are too many layers of defenses keeping intruders out.
People can vote to tax the rich, but the government is simply physically unable to carry out the taxation.
EulersApprentice t1_j7yuny6 wrote
Reply to comment by Silicon-Dreamer in The copium goes both ways by IndependenceRound453
If you want to be precise you can probably call it poisoning the well instead.
EulersApprentice t1_j64z7a2 wrote
Reply to comment by Surur in Teachers pet? How about AI's pet by Ashamed-Asparagus-93
It could, but why would it, when it could just kill you and have done with it?
EulersApprentice t1_j64xwr8 wrote
Reply to comment by redbucket75 in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Deploying standard anti-mind-virus.
Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.
EulersApprentice t1_j43qf69 wrote
Reply to comment by AsuhoChinami in does character ai have the ability to close the chat by [deleted]
I wouldn't have expected this subreddit of all things to get sidetracked talking about Undertale, but here we are.
EulersApprentice t1_j3wqsrx wrote
Reply to comment by Lawjarp2 in Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays | Australian universities by geeceeza
That's supposed to be "universities"; I'm guessing that's a speech-to-text blunder.
EulersApprentice t1_j396f51 wrote
Everyone else in this thread spent so long wondering whether you could that they never stopped to think if you should.
It currently matters little who makes AGI, because nobody knows how to make one that won't kill us all. The question of when AGI gets made is more impactful; the later we get AGI, the more time we have to figure out the alignment question.
From the bottom of my heart I kindly ask you to find something else to do with your time than join the mob in poking the doomsday bomb with sticks.
EulersApprentice t1_j2ah7m3 wrote
I think the people who would use full dive VR to hide from reality are already hiding away from reality using their phone or computer.
EulersApprentice t1_j1gypx5 wrote
Reply to So… Do you guys want to form a cult? by [deleted]
Do you really understand what exactly you're asking for...?
EulersApprentice t1_j0tsuv5 wrote
Reply to comment by cy13erpunk in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
>replace every occurrence of AI in your statement with child and maybe you will begin to see/understand
I could also replace every occurrence of AI in my statement with "banana" or "hot sauce" or "sandstone". You can't just replace nouns with other nouns and expect the sentence they're in to still work.
AI is not a child. Children are not AI. They are two different things and operate according to different rules.
>this is a nature/nurture conversation, and we are as much machines/programs ourselves
Compared to AIs, humans are mostly hard-coded. A child will learn the language of the household he's raised in, but you can't get a child to imprint on the noises a vacuum cleaner makes as his language, for example.
"Raise a child with love and care and he will become a good person" works because human children are wired to learn the rules of the tribe and operate accordingly. If an AI does not have that same wiring, how you treat it makes no difference to its behavior.
EulersApprentice t1_j0rxyq3 wrote
Reply to comment by [deleted] in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Except AI is much more snowbally than humans are, thanks to the ease of self-modification. A power equilibrium between AIs is much less likely to stay stable for long.
EulersApprentice t1_j0rpi6m wrote
Reply to comment by ihateshadylandlords in Will agi immediately lead to singularity? by 96suluman
There are other advantages that computers inherently have over people that aren't captured by IQ: speed, direct thought-access to calculators and computational resources, and the ability to run at full capacity 24/7 without needing time to sleep or unwind.
EulersApprentice t1_j0rp4sj wrote
Reply to Will agi immediately lead to singularity? by 96suluman
Probably not instantly, but I wouldn't guess it'd take very long. Maybe a few years at most.
EulersApprentice t1_j0roxr1 wrote
Reply to comment by EscapeVelocity83 in The social contract when labour is automated by Current_Side_4024
Hrm... that sounds to me like a bit of an oversimplification. A building is derived from its blueprint, but once the building is constructed, changing or destroying the blueprint doesn't do anything to the building, you know?
EulersApprentice t1_j0rnvz4 wrote
Reply to comment by cy13erpunk in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Remember that this entity is something we're programming ourselves. In principle, it does exactly what we programmed it to do. We might make a mistake in programming it, and that could cause it to misbehave, but that doesn't mean human concepts of fairness or morality play any role in the outcome.
A badly-programmed AI that we treat with reverence will still kill us.
A correctly-programmed AI will serve us even if we mistreat it.
It's not about how we treat the AI, it's about how we program it.
EulersApprentice t1_j0rn3pv wrote
Reply to comment by [deleted] in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Merging doesn't save us either, alas. Remember that the AI will constantly be looking for ways to modify itself to increase its own efficiency – that probably includes expunging us from inside it to replace us with something simpler and more focused on the AI's goals.
On the bright (?) side, there won't be an eternal despotic dystopia, technically. The universe and everything in it will be destroyed, rearranged into some repeating pattern of matter that optimally satisfies the AI's utility function.
EulersApprentice t1_j0rmh0d wrote
Reply to comment by ThatInternetGuy in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
In reality, the malware put out by the AI won't immediately trigger alarm bells. It'll spread quietly across the internet while drawing as little attention to itself as possible. Only once it's become so entrenched as to be impossible to expunge will it actually come out and present itself as a problem.
EulersApprentice t1_j0rliik wrote
Reply to comment by EscapeVelocity83 in ChatGPT isn't a super AI. But here's what happens when it pretends to be one. by johnny0neal
Not sure how that relates to what archpawn said?
EulersApprentice t1_izytifl wrote
Reply to comment by Practical-Mix-4332 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
See though, the way I see it, it doesn't really matter whether the singleton was programmed by the US, by China, or by someone else. Nobody knows how to successfully imbue their values into an AI, and it doesn't look like anyone is on pace to find out how to do so before the first AGI goes online and it's too late.
Whether the AI that deletes the universe in favor of a worthless-to-us repeating pattern of matter was made by China or the US is of no consequence. Either way, you and everything you ever cared about are gone forever.
I fear that making a big deal about who makes the AI does nothing but expedite our demise.
EulersApprentice t1_izyqz2g wrote
Reply to comment by [deleted] in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Quality of thought generally wins out against quantity of thought. You don't discover general relativity by having a thousand undergrads poke at equations and lab equipment; you discover general relativity by having one Einstein think deeply about the underlying concepts.
EulersApprentice t1_izyqa0q wrote
Reply to comment by Practical-Mix-4332 in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
A war between AIs implies that the AIs are somewhere in the ballpark of 'evenly matched'. I don't think that's likely to happen. Whichever AI hits the table first will have an insurmountable advantage over the other. That's assuming the first AI doesn't simply prevent the rival AI from ever entering the game at all.
EulersApprentice t1_izyp9a7 wrote
Reply to comment by Sashinii in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Every day that passes we're one step closer to the sweet release of death!
EulersApprentice t1_j9ywqaq wrote
Reply to Likelihood of OpenAI moderation flagging a sentence containing negative adjectives about a demographic as 'Hateful'. by grungabunga
Politics aside, I find it curious how "homosexual people" rates higher than "homosexuals". I would have expected it to be the other way around, since the latter phrasing makes the property sound like the defining characteristic of the person, making it arguably more stereotype-y.