Submitted by diener1 t3_z0v68u in singularity

Basically, when you think about the future we are heading into with AI becoming relevant to larger and larger parts of society as it becomes increasingly advanced, are you optimistic (e.g. because of the huge potential to improve lives) or pessimistic (e.g. because of possibly huge effects on employment or potential for misuse/abuse) about the next 50 years?

Please, for this poll, only consider the effects of AI. If you are terrified about climate change but think AI will make things somewhat better, choose somewhat optimistic.

(Please consider upvoting, many people vote on the poll but don't upvote the post)


45

Comments


TupewDeZew t1_ix7rpja wrote

There is no in-between:

  1. It's gonna be the best thing invented by humanity

  2. It's gonna be the worst thing to ever happen to humanity

And we just want option 1 to be the answer because we want to be happy. But option 2 is just as probable as option 1. I'm extremely terrified.

21

Artanthos t1_ix8r3s5 wrote

3: The world continues on more or less as is, but with better cell phones every year.

6

World_May_Wobble t1_ixbk3mn wrote

This is also a nightmare scenario, because without radically new technologies and governance, the stuff we're up to isn't sustainable.

3

Artanthos t1_ixdaqaw wrote

Implementation and incremental improvement of existing technologies have the potential to address most of the world’s problems.

Solar and wind power are already being implemented at a rapid pace.

Much of the world, and a sizable fraction of the US, have already mandated the switch to electric vehicles.

Bioreactors and vertical farms are already taking food to market.

None of this really changes life for the average person. The steak you bought is labeled Green, you charge your car instead of pumping gas, and life goes on.

1

World_May_Wobble t1_ixefjwe wrote

When I say "sustainable," I don't just mean eco-friendly. For example, it's not sustainable to keep large arsenals of nuclear armed ICBMs, because even if the probability of them being used in any year is very small, the cumulative probability over long time spans approaches 1. Probably the only way to change this is a radical and global change in governance.
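The cumulative-risk argument above can be sketched numerically. Assuming independent annual probabilities (an illustrative simplification, with made-up numbers rather than real risk estimates):

```python
def cumulative_risk(p_annual: float, years: int) -> float:
    """Probability of at least one occurrence over `years`,
    assuming an independent probability `p_annual` each year."""
    return 1 - (1 - p_annual) ** years

# Even a small annual chance compounds toward certainty:
print(f"{cumulative_risk(0.005, 100):.2f}")  # ~0.39 over a century
print(f"{cumulative_risk(0.005, 500):.2f}")  # ~0.92 over five centuries
```

This is why "very small in any given year" offers little comfort over civilizational time spans: for any fixed p > 0, the expression approaches 1 as the number of years grows.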

Then yes, there are environmental issues. We don't have a ready answer to microplastics, and they're making us infertile when we're already heading into a demographic cul-de-sac. We'll need more rare earth metals for those electric cars. Oh, and by the way, those electric cars are still being powered by coal.

Europe is the poster child of renewables, and most of its energy still doesn't come from renewables. Its leading renewable isn't solar or wind; it's wood, and it's not even close. Wider adoption of solar and wind requires better battery technology, but battery technology has improved at a notoriously linear rate. It won't be any time soon that we see all of Europe's energy come from renewables, and again, they're the best at this.

I'm not saying there's no progress, but that's kind of the point. We need progress to get ahead of some of the problems in our future.

2

Artanthos t1_ixhtd2z wrote

>When I say "sustainable," I don't just mean eco-friendly. For example, it's not sustainable to keep large arsenals of nuclear armed ICBMs, because even if the probability of them being used in any year is very small, the cumulative probability over long time spans approaches 1. Probably the only way to change this is a radical and global change in governance.

AGI and the singularity, if it happens, don't really change this.

Now adversarial countries have competing AGIs with ever more lethal weapons pointed at each other.

1

DaggerShowRabs t1_ix810ie wrote

Agreed. The thing that terrifies me too is that there are so many ways it could go wrong.

It's probably easier to build an AGI than it is to build an AGI that is confirmed to be goal-aligned with humanity. If it isn't goal-aligned, you're basically rolling a pair of D20s and hoping you land on double 20s.

5

nblack88 t1_ix8sinj wrote

Good thing we have to invent it. At least we're first in the initiative order, so we have a chance to roll. After that chance, that's it! Here's to hoping we avoid the TPK.

3

TupewDeZew t1_ix8pilk wrote

There's also a chance that option 2 will be worse than death... you know what I'm talking about...

2

Desperate_Donut8582 t1_ix83y54 wrote

It definitely can be in between: it could help us calculate and come up with a few ideas without making Earth a utopia... that qualifies as in between.

0

Daealis t1_ix7pg0u wrote

In the short term: AI revolution would quite literally be just that. When anything and everything is going to get checked and calculated to a higher degree of certainty than our current shit - and the numbers can be backed up with data - the resulting upheaval of many currently broken systems will be gigantic and total. Everything from poorly designed to shittily implemented can be fixed or replaced. From manual labor to justice systems, there isn't a thing that couldn't be improved. An AI could be utilized to design a general less intelligent system to observe all automated production and slingshot us to a post-scarcity society. Unemployment wouldn't be a dirty word when it is only a luxury used to supplement your lifestyle, not something you need to survive.

Or AI would be used to facilitate a total societal collapse in which the rich and the haves simply wall themselves into independent communities of automated utopia, while the rags of humanity starve outside.

Longer term: An AI doesn't seem like the type of invention to stagnate without improving. And at some point, humanity will start trying to slow it down or prevent it. At which point any number of movies with grey goo/AI apocalypse scenarios come to mind. Ultimately I still think humanity is a tenacious and destructive species that will prove, in any cost-benefit analysis, so risky to wipe out that assimilating us is the better option. So I don't see it likely that we'll get into a war with artificial intelligence. We might get augmented over time, to the point where it's impossible to know where you end and the AI begins, but at that point we are likely already immortal and exploring a galaxy of practically infinite resources, with the AI being an integral part of all of us, most likely as fragmented in its goals and desires as we are.

Either way, I'd say there's a high probability of it benefiting us, from my point of view.

7

Asneekyfatcat t1_ix8wr0h wrote

The idea that we will "slow down" AI is a fallacy. Technology today is growing at an exponential rate despite the general consensus that big money does try to slow its development (oil, and tobacco a couple of decades ago). If big money can't rein in our current infrastructure, there's no chance it will have any control over AI. The right answer to all of this is STAY EDUCATED. Despite the fears and attempted slowing of technological growth today, we're managing pretty well. Let's keep it up.

1

Daealis t1_ix968nu wrote

Air-gapped systems with hardware limitations will reach an equilibrium where internal optimization will not physically be able to cram more sophisticated logic into them. That's what I'm referring to with slowing it down. The only way to slow a true AI down is to restrict its physical size.

Once a true AI is released into the wild, that genie is not going back in the bottle.

1

devgrisc t1_ix952rg wrote

Im terrified but also believe it is worth it

Kind of like meeting a dentist

3

Black_RL t1_ix9rw71 wrote

I’m already super optimistic!

Huge progress is being made in several areas.

3

Dat_Innocent_Guy t1_ix8lmr6 wrote

I'm so optimistic that in fact I'm terrified.

2

cjeam t1_ix8shu9 wrote

Pessimistic about the future, kinda optimistic about how AI might contribute. But that’s based on incremental small level progression like we kinda see now. If it’s a hard takeoff, 🤷🏻‍♂️.

2

W1nt3rrav3n t1_ixa6nck wrote

I keep thinking of Isaac Asimov and what he once said:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

Outside of people with a background in IT who take an interest in progress, the vast majority have no idea at all how fast AI is developing at the moment.

The EU wants to regulate AI and also slow it down. If we reach AGI in the next 10-15 years, there will be incredible dislocations in society. Many in the EU, and especially in Germany, will oppose this.

Personally, I think we'll be there around 2030.

2

diener1 OP t1_ixblp6l wrote

So why do you say "the technophobia is astounding" in another comment?

1

Brangible t1_ixg4gti wrote

It will be used to empower billionaires to continue directing the world how they want

2

statusquorespecter t1_ix7u31e wrote

I guess I have a rather "un-humanist" view of AI. I don't actually care that much about whether my life will be more comfortable in 20 years because of advancements in AI. I don't care if AI will make my field of work redundant. If creating beings that can supersede humans intellectually is possible, then I think making that happen is a worthy end in itself. So given the recent rapid advancements in AI, I'm very optimistic.

1

diener1 OP t1_ix83ark wrote

>I don't care if AI will make my field of work redundant.

I really really doubt this is true. You might think that now but when it actually happens, I think you will care. And if not then I guarantee you the vast majority of people will, which affects you too.

4

[deleted] t1_ix865lp wrote

I'm sure chimney sweeps were bummed at first once we went to electric heat, but getting a different job is healthier for you.

3

statusquorespecter t1_ixa4k4e wrote

True, I suppose I would care, in the sense that someone who deeply believes in stoic philosophy will still flinch if they're about to die.

1

caliburn1337 t1_ixal4dm wrote

I think AIs will bring the end of humanity.

I'm very optimistic!

1

Alternative_Note_406 t1_ixdalg8 wrote

Somewhat pessimistic.

Murphy's law: Anything that can go wrong will go wrong.

1

WaveyGravyyy t1_ixtej8r wrote

Absolutely terrified. I don't believe the juice is worth the squeeze. I'd rather live in a fully human world with all of its flaws. I see a lot of misguided optimism here, which I think is naive. It's going to get weird and ugly. Large companies will benefit at the expense of the working class to an even larger degree. Capitalism is a flawed system, but it's the best one we've been able to come up with, and AI will crash it. The optimism spread around this subreddit would only be valid if we had full AI robots to actually do the crap no one wants and keep the ship running. Maybe we can hold on long enough to get there, but maybe we can't.

1

diener1 OP t1_ix7njpc wrote

I'd love to hear some explanations in the comments. I went with somewhat pessimistic as a mix between absolutely terrified and somewhat optimistic. I think AIs will make our lives somewhat better in many areas and considerably better in a few areas. But it will also mean unemployment for millions of workers (e.g. truck drivers, taxi drivers, maybe also graphic designers...), which can become a huge problem, both economically and politically.

0

WhollyHolyWholeHole t1_ix7qsk0 wrote

This is a problem with economic ideology, not AI itself. We need to balance wealth distribution by implementing elements from both capitalism and socialism. We also need to reduce and redistribute spending by the military-industrial complex. The US could change overnight with the education and support of its citizens. I doubt you'd find many Northern European citizens sharing the same fears.

9

diener1 OP t1_ix82uoh wrote

I'm literally from Germany, you seriously don't know much about Europe if you think this is a US-specific problem. The idea that millions of people become unemployed with almost no chance to be hired anywhere because their skillset has basically become obsolete scares the shit out of me. Redistribution of income will do very little to alleviate this because people want to be productive. They want to be valued.

−2

WhollyHolyWholeHole t1_ix86yh0 wrote

Apologies, I meant the progressive Scandinavian countries. Value has little to do with monetary wealth. Again, these are antiquated economic ideologies. When machines can handle the means of production, it should free people from unnecessary labor. There will be no work other than that which makes the individual feel fulfilled by it. Value becomes centered around intellectual, artistic, and social wealth instead of material wealth. What is scary about having free time to do the things you're passionate about?

6

Kaarssteun t1_ix8un7x wrote

Yes, people value being valued. However, saying we need to torture ourselves at work to achieve that is society's biggest flaw.

2

W1nt3rrav3n t1_ixa9hbb wrote

Well, just today I had a discussion on IT topics with a guy at work who is around 57. And our society isn't getting younger. The technophobia is just mind-blowing.

1

Desperate_Donut8582 t1_ix84984 wrote

Uhh no, why do people focus on the military-industrial complex when America spends like 11% of its annual budget on the military? In my opinion we should spend like 15%.

−4

[deleted] t1_ix85yvr wrote

I'm pessimistic about our own ability to distribute the tech and our inability to respect even our own species's civil liberties.

0