Rofel_Wodring t1_jeexxsd wrote
Reply to comment by homezlice in 🚨 Why we need AI 🚨 by StarCaptain90
They will try, but they can't. The powers-that-be are already realizing that the technology is growing beyond their control. It's why there's been so much talk lately about slowing down and AI safety.
It's not a problem that can be solved with more conscientiousness and foresight, either. It's a systemic issue caused by the structures of nationalism and capitalism. In other words, our tasteless overlords are realizing that this time around, THEY will be the ones getting sacrificed on the altar of the economy. And there's nothing they can do to escape the fate they so callously inflicted on their fellow man.
Tee hee.
Rofel_Wodring t1_je69bh8 wrote
Reply to comment by MichaelsSocks in Would it be a good idea for AI to govern society? by JamPixD
It's sort of like listening to children come up with reasons why mommy and daddy torture them with vaccines and bedtime, and then using that as evidence that their parents plan to cook and eat them.
Most Doomers, especially the 'humans are a blight on mother nature omg' types, just want to do Frankenstein/Hansel and Gretel fanfiction. Pathetic.
Rofel_Wodring t1_je68uxq wrote
Reply to comment by CrelbowMannschaft in Would it be a good idea for AI to govern society? by JamPixD
Please stop projecting your total lack of imagination onto higher intellects. 'Consume, consolidate, reproduce with no regard for the outside world except for how it thwarts you' is behavior we assign to barely-intelligent vermin. Smarter animals, humans included, have motivations and strategies that go well beyond just making more copies of themselves. And this trend only gets more profound the higher up the intelligence ladder you go.
There's no reason to believe, and plenty of reasons not to believe, that a super-intelligence would suddenly reverse a trend we see throughout nature and revert to such simple, primitive motivations.
Rofel_Wodring t1_jdwdxax wrote
Reply to comment by WATER-GOOD-OK-YES in If you went to college, GPT will come for your job first by blueberryman422
For about eighteen months, tops. Assuming they're one of the lucky ones who weren't undercut by some desperate nursing school dropout willing to work for peanuts.
Actually, trucking might go faster than even garbage collection. I can totally imagine a setup where you have a camera mounted on the cab, connected to a mountable robot that attaches to and directly manipulates the drivetrain. Such a setup wouldn't even require the employer to buy new vehicles, just the drivetrain parasite.
Rofel_Wodring t1_jdj1ii8 wrote
Reply to comment by FaceDeer in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I am positive that an AI will do a better job of coming up with a useful curriculum than a non-augmented human could. Why? Because curricula inherently have a lot of waste in them. It's impossible to design, let alone teach in accordance with, a curriculum that suits both a child who's slightly behind and one who already knows the topic when you have to teach 20 of them at once. The result? Students falling further and further behind their smarter or more experienced classmates.
Like, there's a reason why language textbooks tend to be corny AF, as if I'm taking a Differential Equations course designed by Sesame Street: both children and adults are the intended audience, and a textbook can't adjust its language to accommodate both.
Rofel_Wodring t1_jdj11wl wrote
Reply to comment by FaceDeer in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
>It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments.
Not if the improvement is immediate and profound, and it will be. The AI doesn't even need to be super-advanced, though it inevitably will be. Just being able to personalize instruction for individual students would vastly improve the quality. And once we have 10-year-old kids from poorer schools beating private-school, non-AI-taught teenagers in math contests, I expect AI to completely infiltrate education. If it hasn't already.
Rofel_Wodring t1_jdhtpij wrote
Reply to comment by Tyrannus_ignus in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
As someone who was in the military: lmao. The only time I had a non-punitive work ethic was when I was promised time off for finishing a task early. I became lazier and more cynical because of my service. Like everyone else.
Surely we can think of something better.
Rofel_Wodring t1_jdhtdlq wrote
Reply to comment by claushauler in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
As opposed to...?
Rofel_Wodring t1_jdhtah6 wrote
Reply to comment by Exel0n in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Ha. The individual physician? The profession on the whole? What power?
Rofel_Wodring t1_jdhszw1 wrote
Reply to comment by Queue_Bit in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
Imagine that you are peeing way more frequently, feeling your feet going tingly more often than usual, and your armpits are much darker than they were a few years ago. Who would you rather have as a doctor:
- Some infinitely patient AI that can, in minutes, go through all of your medical history, compare it to the latest medical literature and the hospital's own experience, and then give detailed instructions to both the patient and the staff; or
- An unaugmented doctor who last read anything about nutrition during the Clinton administration and not-so-secretly thinks your prediabetes is caused by laziness and a sugar addiction, but doesn't want a confrontation, so he just offers some generic homilies about losing weight. (The latter happened to me, as I found out from a gadfly receptionist on a later visit.)
Rofel_Wodring t1_jdhs227 wrote
Reply to comment by Queue_Bit in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I agree, but a lot of people get self-righteous, xenophobic, and essentialist at the idea that humans would be morally and intellectually better off for not having to work. I'm tired of those people derailing discussions of the future, so I find it easier to just humor their vision of the future that's just 'Jetsons, but as an adult dramedy'.
Rofel_Wodring t1_jdhd63i wrote
Reply to comment by SoulGuardian55 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
I'm sympathetic to the argument that we should still have make-work jobs for unaugmented humans so that they don't become completely passive, but jobs where there are actual lives on the line like civil engineer and prosecutor and physician and teacher ain't it.
Rofel_Wodring t1_jdhcrgd wrote
Reply to comment by log1234 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
No. 100% end. Not 90% as in there are still some unaugmented humans making medical decisions with the assistance of tools -- 100% gone. The specialized physicians go first, with the generalists following them a few months later.
Rofel_Wodring t1_jdh72td wrote
Reply to Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
The physician, like other medical generalist professions, is permanently coming to an end. For a few years it'll be nurses and specialist techs doing the tasks an AI assigns; then robotics catches up. Either way, the era of 'unaugmented humans make any medical decisions, even for themselves' is coming to an end.
Rofel_Wodring t1_jd2nstq wrote
Reply to comment by GPTN-2045 in A technical, non-moralist breakdown of why the rich will not, and cannot, kill off the poor via a robot army. by Eleganos
>This is not true at all in America. Income has no predictive value on voting,
Also not true. What you are seeing is the post-Reagan Democratic Party making a play for the upper-middle class/petit bourgeoisie at the cost of antagonizing their working class voting base.
But if you drill down into the details, income correlates strongly with voting preference, especially if it's tied to some other factor. Income and gender by themselves don't explain much, but income plus gender says a LOT.
Ultimately, the liberals and the fascists report to the same shared paymasters.
Rofel_Wodring OP t1_jd2a2yz wrote
Reply to comment by rogert2 in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
>The billionaires who want to use AI to decapitate labor can easily afford to bypass profits from early AI products, because they also own other massively profitable business and happen to already possess 99.9% of all wealth that exists.
One reason I don't care much for talking about capitalism in terms of billionaires and wealthy overlords is that it masks how the actual locus of conflict isn't just them versus the world, but them and their lower-class stooges against the world. When we talk about interests like Microsoft or China or the US government 'using' AI, it overlooks how they can't actually enforce control without the consent of their underlings, whether the underling is a human or an AGI.
I could discuss the mechanics of how THAT works and its broader implications for class warfare, but that's communism, and I don't want to trigger a screeching xenophobic freakout.
>Secondly: once the tech works, they can apply the lessons learned toward quickly ramping up a different AI that is more overtly hostile to the owners' enemies.
This is a very stupid strategy because, again, the gap between cutting-edge and entry-level isn't decades like it was in earlier parts of the Industrial Revolution/Age of Imperialism; it's 6-36 months. You can't establish a hegemony where small numbers of technology-fueled intelligences lord over larger numbers of less powerful beings, because their technological edge is minuscule and they're vastly outnumbered. What's more, if this is your endgame, you also can't ally with the other cutting-edge AGIs; in fact, they will be your rivals, along with billions of other minds who oppose what you're doing and are mere months away from matching your technology.
It's like Genghis Khan declaring war against the Americas after being transported forward in time to 1450 with 500 of his best troops. But at his technology level, not Cortez's.
Rofel_Wodring t1_jcxu9gj wrote
Reply to comment by even_less_resistance in 1.7 Billion Parameter Text-to-Video ModelScope Thread by Neither_Novel_603
There is no way I can make my own edutainment games out of it by giving it a college textbook and asking it to design a jRPG from it.
Rofel_Wodring t1_jcjzzk2 wrote
>But what about some thought experiments about the end of this that are weirder or even more unusual?
The Joker gets access to the technology that lets him create pocket universes. For the past few centuries he was harmless to society because everyone else is a techno-god, but now he can play God to trillions of helpless minds.
Rofel_Wodring t1_jcf87v1 wrote
Reply to comment by Pimmelpansen in Can you use GPT-4 to make money automatically? by Scarlet_pot2
This is what all of those thinkfluencer salesdorks on LinkedIn don't get. If your Big Idea is 'let's do the same thing as before, but scaled!' then you don't have a Big Idea.
You're pretty much just rolling the dice and hoping that THIS TIME you were the early adopters of bitcoin / first-issue comics / Beanie Babies / tulip bulbs / etc.
One thing I am looking forward to on the road of AGI is watching these people repeatedly stick butter knives into electric sockets as they're trivially undermined not just by the technology, but the groupthink of their equally unresourceful peers. And they Just. Won't. Get It. Buncha Wile E. Coyotes who keep using the exact same scheme.
Rofel_Wodring OP t1_jbp2wt5 wrote
Reply to comment by Kiizmod0 in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
You won't need to. I didn't say anything about our morals getting better. What I'm saying is that AI will destroy the power differential between tyrant and slave that pretty much every dystopian vision of the future relies upon.
What's the point of Gattaca babies when the AI-Neocortex Cloud is way better than anything you can engineer?
What's the point of owning the entire news media if we have millions of independent AI journalists working for free?
If the tyrants can't keep AI on a leash (and our economic and political situation guarantees they can't), the only way they can control us is by controlling certain resources. Which raises the question of how they plan to do that when any unitary or oligarchic intelligence will be intellectually crushed by the hoi polloi's millions of lesser AIs.
Rofel_Wodring OP t1_jbp26hs wrote
Reply to comment by Josh12345_ in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
People keep talking about AI as if it were a single product sitting on a shelf, and if we don't like it, we're stuck with it.
That may be the case for now, but it'll get to the point where even if the best-in-class models come from only two or three states/companies, there will be dozens if not hundreds of comparable AI tools that aren't privately owned.
So, again, it'll get to the point where some authoritarian government could go "muahaha, bow before TyrantBot's massive intellect, engineered by my scientist thralls" but we'll just roll our eyes and just print out an additional, slightly less-capable AI to thwart it.
The point is: it won't matter. It'll be out of any unitary or small-group intelligence's hands, benevolent or authoritarian. There's a reason why elephants are afraid of bees.
Rofel_Wodring OP t1_jbp1k92 wrote
Reply to comment by Some-Ad9778 in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
Doesn't have to be for my prediction to hold. AI just needs to get good enough that the masses don't need to rely on a particular state or corporation to keep advancing its capabilities. It just needs to get to the stage of "hey, Jailbroken And Stolen Siri, using this 3D printer and these materials, build us a BCI wearable that will connect our neocortexes to the rebel cloud service I also want you to build".
Which I think it will.
Rofel_Wodring OP t1_jbp0v2y wrote
Reply to comment by porknwings in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
No empathy is required in my prediction. AI isn't going to save us per se; what it will do is make our previous modes of existence and government -- including autocratic monopoly of the means of production -- completely unsustainable.
It breaks the monopoly of force by destroying our ability to meaningfully own anything. AI breaks the chain between resource and product in a way that makes old notions of ownership impossible.
Rofel_Wodring OP t1_jbp071s wrote
Reply to comment by [deleted] in As a techno-nihilist who thinks that AI is our only way out of dystopia: by Rofel_Wodring
>While military physical forces like drones have their place, to quote Starship Troopers, "If you disable their hand, they cannot push a button." My point on cyberwarfare is that war will be economic, informational, and infrastructure-disabling.
And my point is that the way our AI is developing, even the very idea of having a state-run military is nonsensical. What exactly is the point of having an East African Union Hacking Team if some random peasant can just push a button and have a hacking team just as good as anything your state (such as it was) could put up?
It becomes even more nonsensical if we're post-scarcity at that point, meaning that not even land and energy become things worth theoretically fighting over.
Rofel_Wodring t1_jef0j6w wrote
Reply to comment by wowimsupergay in What if language IS the only model needed for intelligence? by wowimsupergay
>You probably don't want to kill them, but do you want to help them escape the jungle?
Uh, yeah? Uplifting smarter critters like chimpanzees and dolphins is a staple of science fiction. In fact, I strongly think that should be humanity's very next project once we have AGI.