
NonDescriptfAIth t1_jefz4dt wrote

I'm not concerned with AGI being unaligned with humans. Quite the opposite, really. I'm worried that our instructions to an AI will not be aligned with our desired outcomes.

It will most likely be a government that finally crosses the threshold into self-improving AI. Any corporation that gets close will be semi-nationalised, such that its controls are handed over to the government that helped fund it.

I'm worried about humans telling the AI to do something horrifying, not that AI will do it of its own volition.

This isn't sci-fi and it certainly isn't computer programming either.

The only useful way to discuss this possible entity is simply as a super intelligent being. Predicting its behaviour is near impossible, and the implications of this are more philosophical in nature than scientific.

1

NonDescriptfAIth t1_jefvvpl wrote

Thanks man, that's pretty cool.

On reflection, my comment unfortunately needed to be much longer, not shorter.

I'm writing a book at the moment based around my original comment.

I mistakenly gave off the impression that I think AGI will be evil outright, that my position anticipates some Terminator-like takeover.

The reality is that I think we are building an entity that could be God-like to us; we had better be careful what we tell it to do.

1

NonDescriptfAIth t1_jee75o3 wrote

AGI research is a race. If we run, we die.

Unlike the arms race that led us to nuclear weapons, the development of AGI can occur largely in secret, both at a corporate and a government level.

Even if there is a "successful" call to throw the brakes on current corporate AGI development, the global infrastructure that drives every other advance in the digital space will continue to roll onwards.

Chips get smaller. Data gets accrued and cleansed. Software becomes more intricate. The science of cognition becomes better understood.

There is no need for a Manhattan Project. For this arms race we don't need behemoth underground facilities enriching uranium; instead we have a decentralised army of humans purifying data as they complete captchas to access their email.

And that's without even ruminating on what unknown developments are taking place within the military-backed regions of AGI development.

Telling the government to slow down is a non-starter; it only opens up the opportunity to be outpaced by a rival state.

Corporations are racing each other.

Governments are racing each other.

Consumers are driving it forward by demanding ever better products and services.

Money, time and effort are being thrown at this singular goal like never before.

If aliens were to stand back and objectively sum up the activities of the human race, this would be it. This is what takes precedence above all else.

We don't care about climate change. We don't care about poverty. We don't care about leisure.

We want to make that chip smaller and faster and smarter than us. That is the sum goal of the human endeavour.

We are already locked in a race. A race in which crossing the finish line 'incorrectly' might mean that all participants lose.

I am often exasperated by the language that surrounds the possibility of AGI development going wrong.

Many act as if this race ending in global disruption is unlikely. There are some who think this won't even affect employment opportunities significantly.

Allow me to be incredibly clear. If we continue on the path we are on, we will die.

China will pre-emptively strike the US with nuclear weapons out of fear that it is close to completing its development of AGI.

Likewise, the US would not tolerate the prospect of an artificial superintelligence that operates under the instruction of the Chinese Communist Party.

Think that's unrealistic?

Fine, let's assume that America has the advantage and sneakily unleashes its AGI on the globe without sparking a thermonuclear Armageddon.

Well, what exactly will they ask the AGI to do? We are racing towards the construction of a tool for which we have no clear and defined use.

Do you think the US military-industrial complex will be satisfied with unleashing a trillion-dollar digital mind on the world without specifying that it prioritize the lives of its own citizens above the lives of others?

Don't think that's a big deal?

That there is an all knowing, all powerful entity that prioritizes the lives of some over the lives of others?

The only distinction between God and Satan is that God is all loving and Satan is not.

We must tread carefully on what we unleash on ourselves.

Must I continue to explain how corporations are also unlikely to have the greater good in mind if they cross the finish line first?

The most powerful algorithms in existence today are the likes of YouTube, TikTok and Meta, all of which generate profit by leveraging our internal dopamine pathways against us. The only goal of the most powerful AI systems we interact with is to steal away our lives with consecutive shitty videos.

There is no stopping this race. We are collectively gunning for a hard takeoff as fast as possible.

Our only chance of survival is to make sure that the super intelligent God we create is a kind one, not a sociopathic machine specifically tasked to kill and exploit human beings.

The only way we can achieve this is by having a global dialogue about what we want this AGI to do for humanity.

Without global alignment on the goals and formulation of this entity, we are certain to bake into it our own flaws: our human paranoia, aggression and indifference.

Yet this is exactly where we are heading, at breakneck pace.

If you want to help change this reality, drop me a message and we can start planning.

−5

NonDescriptfAIth t1_jds2d28 wrote

>these are very pretty assumptions that i don't think take into account the automated kill drones that are going to be around this time.

OP asked for practical advice to do with retirement. I mused on the possibility that things go very wrong by saying:

>this would be a dystopian place unworthy of any meaningful planning now beyond 'buy land and build a bunker'.

I'm well aware of the negative possibilities that stem from AI.

If we continue on our current trajectory, the odds that this doesn't end in a nuclear apocalypse are near zero.

1

NonDescriptfAIth t1_jdrmjox wrote

This is a very age dependent question.

I think, to be prudent, one should assume that very little will change in the next 15 years.

Beyond that, I think it would be borderline ludicrous to assume that the economy will function in a way that even slightly resembles what it does today.

Universal basic income seems like a natural path. I won't entertain a reality in which AGI is realized and UBI does not exist in some form; that would be a dystopian place to exist, unworthy of any meaningful planning now beyond 'buy land and build a bunker'.

The majority of white collar jobs will be gone, with only niche or heavily modified roles remaining.

New jobs will emerge to promote human wellness. Things like the government paying people to go walking together or to spend time with the elderly.

Technical labour jobs will probably be the last to go: electricians, plumbers, firefighters, paramedics. Expect these positions to be highly esteemed and massively compensated financially.

There will be a lot of work to do in rolling out this tech internationally. Eliminating poverty globally and the like will probably be a priority for the newly redundant white collar professionals looking for something meaningful to do with their time and money.

I think the very notion of retirement will become fuzzy, and quickly. What exactly is retirement in a world where there is no work to retire from?

Realistically, 'stopping working' will become equivalent to 'reducing daily activity'. This will probably be discouraged massively, given that daily engagement in both physically and mentally challenging tasks is a huge predictor of health in older age. People usually die soon after retirement. If the nature of 'work' is enjoyable and promotes a healthy lifestyle, why not extend it as long as possible?

Apologies to the Frenchmen reading this.

14