Rofel_Wodring

Rofel_Wodring t1_jeexxsd wrote

They will try, but they can't. The powers-that-be are already realizing that the technology is growing beyond their control. It's why there's been so much talk lately about slowing down and AI safety.

It's not a problem that can be solved with more conscientiousness and foresight, either. It's a systemic issue caused by the structures of nationalism and capitalism. In other words, our tasteless overlords are realizing that this time around, THEY will be the ones getting sacrificed on the altar of the economy. And there's nothing they can do to escape the fate they so callously inflicted on their fellow man.

Tee hee.

1

Rofel_Wodring t1_je69bh8 wrote

It's sort of like listening to children come up with reasons why mommy and daddy torture them with vaccines and bedtime. And then using that as evidence that their parents plan to cook and eat them.

Most Doomers, especially the 'humans are a blight on mother nature omg' types, just want to do Frankenstein/Hansel and Gretel fanfiction. Pathetic.

2

Rofel_Wodring t1_je68uxq wrote

Please stop projecting your total lack of imagination onto higher intellects. 'Consume, consolidate, reproduce with no regard for the outside world except for how it thwarts you' is behavior we assign to barely-intelligent vermin. Smarter animals, humans included, have motivations and strategies that go well beyond just making more copies of themselves. And this trend only gets more profound the higher up the intelligence ladder you go.

There's no reason to believe, and plenty of reasons not to believe, that a super-intelligence would suddenly reverse a trend we see throughout nature and revert to such simple, primitive motivations.

4

Rofel_Wodring t1_jdwdxax wrote

For about eighteen months, tops. Assuming they're one of the lucky ones who weren't undercut by some desperate nursing school dropout willing to work for peanuts.

Actually, trucker might go even faster than garbageman. I can totally imagine a setup where a camera mounted on the vehicle is connected to a mountable robot that directly manipulates the drivetrain. Such a setup wouldn't even require the employer to buy new vehicles, just the drivetrain parasite.

6

Rofel_Wodring t1_jdj1ii8 wrote

I am positive that an AI will do a better job of coming up with a useful curriculum than a non-augmented human could. Why? Because curriculums inherently have a lot of waste in them. It is impossible to design, let alone teach in accordance with, a curriculum that suits both a child who's slightly behind and one who already knows the topic when you have to teach 20 of them at once. The result? Struggling students fall further behind while smarter or more experienced children stagnate.

Like, there's a reason language textbooks tend to be corny AF, as if I'm taking a Differential Equations course designed by Sesame Street: both children and adults are the intended audience, and a textbook can't adjust its language to accommodate both.

2

Rofel_Wodring t1_jdj11wl wrote

>It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments.

Not if the improvement is immediate and profound, and it will be. The AI doesn't even need to be super-advanced, though it inevitably will be. Just being able to personalize instruction for individual students would vastly improve the quality. And once we have 10-year-old kids from poorer schools beating private-school, non-AI-taught teenagers in math contests, I expect AI to completely infiltrate education. If it hasn't already.

1

Rofel_Wodring t1_jdhszw1 wrote

Imagine that you are peeing way more frequently, your feet are going tingly more often than usual, and your armpits are much darker than they were a few years ago. Who would you rather have as a doctor:

  • Some infinitely patient AI that can, in minutes, go through all of your medical history, compare it to the latest medical literature and the hospital's own experience, and then give detailed instructions to both the patient and staff; or
  • An unaugmented doctor who last read anything about nutrition during the Clinton administration and not-so-secretly thinks that your prediabetes is caused by laziness and a sugar addiction, but doesn't want a confrontation, so just offers some generic homilies about losing weight. (The latter happened to me, as I found out from a gadfly receptionist on a later visit.)

6

Rofel_Wodring t1_jdhs227 wrote

I agree, but a lot of people get self-righteous and xenophobic and essentialist at the idea that humans would be morally and intellectually better off not having to work. I'm tired of those people derailing discussions of the future, so I find it easier to humor their vision of a future that's just 'Jetsons, but as an adult dramedy'.

5

Rofel_Wodring t1_jd2nstq wrote

>This is not true at all in America. Income has no predictive value on voting,

That's also not true. What you're seeing is the post-Reagan Democratic Party making a play for the upper-middle class/petit bourgeoisie at the cost of antagonizing its working-class voting base.

But if you drill down into the details, income correlates strongly with voting preference, especially when it's combined with some other factor. Income and gender by themselves don't explain much, but income plus gender says a LOT.
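
(A toy sketch of that statistical point, with made-up data rather than real polling numbers; `vote`, `income`, and `male` are hypothetical variables, and the only real dependency is the statsmodels library. The mechanism: when the effect of income flips sign across subgroups, income alone washes out, but the income-gender interaction carries strong signal.)

```python
# Toy illustration: income alone predicts nothing here, because its effect
# reverses by gender, so the marginal probability is flat at 50%.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
income = rng.normal(size=n)             # standardized income
male = rng.integers(0, 2, size=n)       # 0/1 gender flag
logit = 2.0 * income * (2 * male - 1)   # income effect flips sign by gender
vote = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
df = pd.DataFrame({"vote": vote, "income": income, "male": male})

m1 = smf.logit("vote ~ income", df).fit(disp=0)         # income only
m2 = smf.logit("vote ~ income * male", df).fit(disp=0)  # plus interaction
print(f"pseudo-R2, income only:   {m1.prsquared:.3f}")  # ~0.000
print(f"pseudo-R2, income*gender: {m2.prsquared:.3f}")  # far higher
```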

Ultimately, the liberals and the fascists report to the same shared paymasters.

2

Rofel_Wodring OP t1_jd2a2yz wrote

>The billionaires who want to use AI to decapitate labor can easily afford to bypass profits from early AI products, because they also own other massively profitable business and happen to already possess 99.9% of all wealth that exists.

One reason I don't care much for talking about capitalism in terms of billionaires and wealthy overlords is that it masks how the actual locus of conflict isn't just them versus the world, but them and their lower-class stooges against the world. When we talk about interests like Microsoft, China, and the US government 'using' AI, we overlook that they can't actually enforce control without the consent of their underlings. Whether the underling is a human or an AGI.

I can discuss the mechanisms of how THAT works and its broader implications for class warfare, but that's communism, and I don't want to trigger a screeching xenophobic freakout.

>Secondly: once the tech works, they can apply the lessons learned toward quickly ramping up a different AI that is more overtly hostile to the owners' enemies.

This is a very stupid strategy because, again, the gap between cutting edge and entry level isn't decades like it was in earlier parts of the Industrial Revolution/Age of Imperialism; it's 6-36 months. You can't establish a hegemony where small numbers of technology-fueled intelligences lord over larger numbers of less powerful beings, because their technological edge is minuscule and they're vastly outnumbered. What's more, if this is your endgame, you also can't ally with the other cutting-edge AGIs. In fact, they will be your rivals. Along with billions of other minds who oppose what you're doing and are mere months away from matching you in technology.

It's like Genghis Khan declaring war against the Americas after being transported forward in time to 1450 with 500 of his best troops. But at his technology level, not Cortez's.

1

Rofel_Wodring t1_jcjzzk2 wrote

>But what about some thought experiments about the end of this that are weirder or even more unusual?

The Joker gets access to the technology that lets him create pocket universes. For the past few centuries he was harmless to society because everyone else was a techno-God, but now he can play God to trillions of helpless minds.

2

Rofel_Wodring t1_jcf87v1 wrote

This is what all of those thinkfluencer salesdorks on LinkedIn don't get. If your Big Idea is 'let's do the same thing as before, but scaled!' then you don't have a Big Idea.

You're pretty much just rolling the dice and hoping that THIS TIME you were the early adopters of bitcoin / first-issue comics / Beanie Babies / tulip bulbs / etc.

One thing I am looking forward to on the road to AGI is watching these people repeatedly stick butter knives into electric sockets as they're trivially undermined not just by the technology, but by the groupthink of their equally unresourceful peers. And they Just. Won't. Get It. Buncha Wile E. Coyotes who keep using the exact same scheme.

3

Rofel_Wodring OP t1_jbp2wt5 wrote

You won't need to. I didn't say anything about our morals getting better. What I'm saying is that AI will destroy the power differential between tyrant and slave that pretty much every dystopian vision of the future relies upon.

What's the point of Gattaca babies when the AI-Neocortex Cloud is way better than anything you can engineer?

What's the point of owning the entire news media if we have millions of independent AI journalists working for free?

If the tyrants can't keep AI on a leash (and our economic and political situation guarantees they can't), the only way they can control us is by controlling certain resources. Which raises the question of how they plan to do that when any unitary or oligarchic intelligence will be intellectually crushed by the hoi polloi's millions of lesser AIs.

1

Rofel_Wodring OP t1_jbp26hs wrote

People keep talking about AI as if it were this one product sitting on a shelf, and if we don't like it, we're stuck with it.

That may be the case for now, but it'll get to the point where even if the best-in-class models only come from two or three states/companies, there will be dozens if not hundreds of comparable AI tools that aren't privately owned.

So, again, it'll get to the point where some authoritarian government could go "muahaha, bow before TyrantBot's massive intellect, engineered by my scientist thralls," but we'll just roll our eyes and print out an additional, slightly less-capable AI to thwart it.

The point is: it won't matter. It'll be out of any unitary or small-group intelligence's hands, benevolent or authoritarian. There's a reason why elephants are afraid of bees.

1

Rofel_Wodring OP t1_jbp1k92 wrote

Doesn't have to be for my prediction to hold. The technology just needs to get good enough that the masses don't have to rely on a particular state or corporation to keep advancing its capabilities. It just needs to reach the stage of "hey, Jailbroken And Stolen Siri, using this 3D printer and these materials, create for us a BCI wearable that will connect our neocortexes to our rebel cloud service, which I also want you to build."

Which I think it will.

1

Rofel_Wodring OP t1_jbp0v2y wrote

No empathy is required in my prediction. AI isn't going to save us per se; what it will do is make our previous modes of existence and government -- including autocratic monopoly of the means of production -- completely unsustainable.

It breaks the monopoly of force by destroying our ability to meaningfully own anything. AI severs the chain between resource and product in a way that makes old notions of ownership impossible.

1

Rofel_Wodring OP t1_jbp071s wrote

>While military physical forces like drones have their place, to quote Starship Troopers, "If you disable their hand, they cannot push a button." My point on cyberwarfare is that war will be economic, informational, and infrastructure disabling.

And my point is that the way our AI is developing, even the very idea of having a state-run military is nonsensical. What exactly is the point of having an East African Union Hacking Team if some random peasant can just push a button and have a hacking team just as good as anything your state (such as it was) could put up?

It becomes even more nonsensical if we're post-scarcity at that point, meaning that not even land and energy are theoretically worth fighting over.

1