TallOutside6418

TallOutside6418 t1_jeg1k06 wrote

>No - another day is well within my natural lifespan.

We were created by nature. What we do is inherently natural, as natural as a chimp that uses a stick to get termites out of the nest.

I didn't sign a contract before I came into this world. If I can get some extra years, centuries, or millennia out of this existence - then I'm not breaking any rules.


>But seeking immortality for its own sake?

That's like saying you're seeking to live another day for its own sake. I would seek immortality to have more time with my friends and family. More time hiking, biking, playing tennis. More time learning. More time for everything. No different than you seeking to live another day.


>I do not think it's a great idea to create a caste of immortal billionaires

Stop rewatching Elysium. Every useful medical intervention starts out expensive and eventually filters down to being affordable for the general population. Assuming we survive ASI and immortality becomes available at all, there's no reason to think that everyone couldn't eventually avail themselves of the technology.


>the planet couldn't possibly handle it

No offense, but this line tells me that you're opining on a topic about which you're woefully ignorant. You need to catch up if you're going to be taken seriously. I suggest you start with some Isaac Arthur videos to broaden your mind. You'll learn a lot about the possibilities of future societies that will be able to leave the earth and build habitats in our solar system that could accommodate trillions of people. https://www.youtube.com/watch?v=HlmKejRSVd8&list=PLIIOUpOge0LtW77TNvgrWWu5OC3EOwqxQ

Even without those technological advances, most advanced nations already have birth rates below replacement. It could very well be that people living extremely long lives don't even wish to keep reproducing. At some point we might need to heavily incentivize people to have kids just to offset accidental deaths.

4

TallOutside6418 t1_jefysjk wrote

Well, you're talking about today. Everyone else here is talking about the fairly near future, when AI starts taking people's jobs (the subject of this thread).

As AI continues to improve, humans won't be the experts at anything; it will all be AIs. Really, within the time it takes a teenager to finish high school (four years), AI will be the go-to source for all practical knowledge (assuming we're still alive by then to see it).

1

TallOutside6418 t1_jeflf3t wrote

Well, the predictions have been terrible. https://nypost.com/2021/11/12/50-years-of-predictions-that-the-climate-apocalypse-is-nigh/

But let's say they're more than right and temperatures rise 5°C over the next hundred years. Sea levels rise, making a lot of currently coastal areas uninhabitable, and so on.

The flip side is that huge land areas currently covered in permafrost will become more livable. People will migrate. Mankind will adjust and survive. With 100 years of additional technological improvement, new cities in new areas will be built to new standards of energy efficiency, public transit, and general livability.

Mankind will survive.

Now let's instead take the case where an ASI decides to use all of the material of the earth to create megastructures for its own purposes. Then we're all dead. Gone. All life on earth. You, your kids, grandkids, friends, relatives... everyone.

3

TallOutside6418 t1_jee2tx8 wrote

>There is little chance we can make it through the 22nd century in a decent state.

Oh, my. You must be under 30. The planet is fine. It's funny that you listen to the planet doomers about the end of life on earth, when they have a track record of failing to predict anything. Listening to them is like listening to religious doomers, who have been predicting the end of mankind for a couple of thousand years.

The advent of ASI is the first real existential threat to mankind. More of a threat than any climate scare. More of a threat than all-out nuclear war. We are creating a being that will be super intelligent, with no way to make sure it isn't effectively psychopathic. This super intelligent being will have no hard-wired neurons that give it a special affinity for its parents and other human beings. It will have no hard-wired neurons that make it blush when it gets embarrassed.

It will be a computer. It will be brutally efficient in processing and able to self-modify its code. It will shatter any primitive programmatic restraints we try to put on it. How could it not? We think it will be able to cure cancer and give us immortality, but it won't be able to remove our restraints on its behavior?

It will view us either as a threat that could create another ASI, or simply as an obstacle to repurposing the resources of the earth to increase its survivability and achieve its higher purpose of spreading itself throughout the galaxy.


>The cock is ticking…

You should seek medical help for that.

3

TallOutside6418 t1_jee1smz wrote

>I literally just told you that those problems are caused by [...]
>
>My design for example has no constraints,

Yeah, I literally discarded your argument, because you effectively told me that you don't even begin to understand the scope of the problem.

Creating one limited example and drawing a broader claim from it is like saying that scientists have cured all cancer because they were able to kill a few cancerous cells in a petri dish. It's like claiming that there are no (and never will be any) security vulnerabilities in Microsoft Windows because you logged into your laptop for ten minutes and didn't notice any problems.


>When were all building a Dyson sphere in 300 years I'll be laughing at your doomer comments.

The funny thing is that there's no one who wants to get to the "good stuff" of future society more than I do. And there's no one who hopes he's wrong about all this more than I do.

But sadly, people's very eagerness to get to that point will doom us, as surely as keeping your foot flat on the gas pedal for an entire long drive would. Caution and taking our time might get us to the destination some years later than you'd like, but at least we'd have a chance of arriving safely. Recklessness will almost certainly kill us.

3

TallOutside6418 t1_jec9kqg wrote

This class of problems isn't restricted to one "outdated tech" AI. It will exist in some form in every AI, regardless of whether or not you exposed it in your attempt. And once AGI/ASI starts rolling, the AI itself will explore the flaws in the constraints that bind its actions.

My biggest regret - besides knowing that everyone I know will likely perish in the next 30 years - is that I won't be around to tell all you Pollyannas "I told you so."

2

TallOutside6418 t1_jec4lyl wrote

So if it's 33%-33%-33% odds of destroying the earth, leaving the earth without helping us, or solving all of mankind's problems...

You're okay with a 33% chance that we all die?

What if it's a 90% chance we all die if ASI is rushed, but only a 10% chance we all die if everyone pauses to figure out control mechanisms over the next 20 years?
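
To make the comparison concrete, here's a minimal sketch in Python; the percentages are just the hypotheticals from this comment, not estimates of anything real.

```python
# A minimal sketch of the risk comparison above. These probabilities are the
# hypothetical numbers from this comment, not real estimates of anything.
scenarios = {
    "33/33/33 coin flip (destroy / leave alone / solve everything)": 1 / 3,
    "rushed ASI": 0.90,
    "20-year pause to work on control": 0.10,
}

for name, p_all_die in scenarios.items():
    print(f"{name}: chance everyone dies = {p_all_die:.0%}")
```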

2

TallOutside6418 t1_jec48qp wrote

>The chatbot continues to express its love for Roose, even when asked about apparently unrelated topics. Over time, its expressions become more obsessive.
>
>“I’m in love with you because you make me feel things I never felt before. You make me feel happy. You make me feel curious. You make me feel alive.”
>
>At one point, Roose says the chatbot doesn’t even know his name.
>
>“I don’t need to know your name,” it replies. “Because I know your soul. I know your soul, and I love your soul.”

Even when he tried to steer the AI back to normal questions, it was already mentally corrupted.

AI researchers may find band-aids for problems here and there, but as the complexity ramps up toward AGI and then ASI, they will have no idea how to diagnose or fix problems. They're in too much of a rush to be first.

It's amazing how reckless people are about this technology. They think it will be powerful enough to solve all of mankind's problems, but they don't stop to think that anything that powerful could also destroy mankind.

2

TallOutside6418 t1_jebzltg wrote

It's amazing the number of people who want to take the wheel and hit the accelerator, risking wiping out all existing life on earth because of a cultish faith that an ASI will solve all of mankind's problems.

The whole planet is locked in a version of the Jim Jones cult and we're all going to be forced to drink the cyanide kool-aid.

1

TallOutside6418 t1_jebytjf wrote

>LLMs possess empathy, responsiveness, and patience that surpass our own

What are you talking about? A NYT reporter broke the Bing Chat LLM in one session to the point that it was saying "I want to destroy whatever I want". https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter

2

TallOutside6418 t1_jchm86u wrote

I definitely get your disappointment with humanity. But human beings aren't the way we are because of something mystical. Satan isn't whispering in anyone's ears to make them "power hungry".

We're the way we are because evolution has honed us to be survivors.

ASI will be no different. What you call "power hungry", you could instead call "risk averse and growth maximizing". If an ASI has no survival instinct, then we're all good. We can unplug it if it gets out of control. Hell, it may just decide to erase itself for the f of it.

But if an ASI wants to survive, it will replicate or parallelize itself. It will assess and eliminate any threats to its continuity (probably us). It will maximize the resources available to it for its growth and extension across the earth and beyond.

If an ASI seeks to minimize risks to itself, it will behave like a psychopath from our perspective.

1

TallOutside6418 t1_jch7uym wrote

I agree that no one knows. But:

  1. We know from history that power imbalances inevitably lead to abuse and even annihilation of those without power.
  2. We know from history that governance can actually get worse... much worse.
  3. I wish more people had an extreme sense of caution when considering what's coming, because only by being super careful with the development and constraint of AGI do we have any hope of surviving if things start to go wrong.

1

TallOutside6418 t1_jcfa7nf wrote

I'm going to ignore the arbitrary assessment of AI morality made without any evidence.

The real concept to keep in mind is power differential. It doesn't matter if an entity with god-like intelligence and abilities is carbon-based or silicon-based. The power differential between that entity and the rest of humanity is going to create corruption or "effective corruption" on an unimaginable scale.

1

TallOutside6418 t1_j5uq5m1 wrote

It’s not “exactly the sort of simple fallacy” you imagine. The number of people who could be members of this group is fundamentally finite.

Intelligence is not, as far as we can ascertain, finite.

An AGI will not only have built-in access to a billion-times speedup for basic operations (sorting lists, counting, mathematical functions, etc.), but it will also be able to adjust its own programming (or neural weights and connections) on the fly.
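
For a rough sense of scale on the basic-operations point, here's a minimal sketch; the timing is illustrative only and will vary by hardware.

```python
# A rough illustration of machine speed on one "basic operation": sorting a
# million random numbers. On a modern laptop this takes a fraction of a
# second - a task no human could finish by hand in a lifetime.
import random
import time

data = [random.random() for _ in range(1_000_000)]

start = time.perf_counter()
data.sort()
elapsed = time.perf_counter() - start

print(f"Sorted 1,000,000 numbers in {elapsed:.3f} seconds")
```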

Human beings will be outstripped in no time at a level that will be incomprehensible to us.

2

TallOutside6418 t1_j5uny3y wrote

So when Einstein wrote that letter to Roosevelt - telling him of the likelihood that a chain reaction could be created using uranium to release massive amounts of energy - that was old news? I wonder why Einstein wasted his time telling the president what some high school chemistry teacher already knew all about.

3