Submitted by dracount t3_zwo5ey in singularity
Calm_Bonus_6464 t1_j1vy8mc wrote
I don't know why you're assuming we have a choice. If we have beings infinitely more intelligent than us, there's no possible way we can retain control. In a worst case scenario, AI could even be hostile towards humans and destroy our species, which is precisely what people like Stephen Hawking warned us about.
AI governance is inevitable, and there's nothing we can do to stop it. For the first time in 300,000 years we will no longer be Earth's rulers, and we will have to come to accept this.
mootcat t1_j1wfb3v wrote
Indeed. This sub has major issues conceptualizing superintelligence, assuming we're guaranteed to get all our wishes fulfilled.
We are functionally growing a God. There is no containing it and we better hope our efforts at alignment before the point of explosive recursive growth were enough.
Just from the simple systems we've seen so far, we have witnessed countless examples of misalignment, and of systems working exactly as specified but against the intent of their programmers (see the toy sketch below).
This Rumsfeld quote always comes to mind
"Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."
Any one of these unknown unknowns could result in the utter decimation of life by an AI superpower.
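A toy sketch of that misalignment failure mode, for concreteness; everything here is hypothetical and invented for illustration. The point is that the policy that maximizes the reward as written is not the one the programmer had in mind.

```python
# Toy illustration of specification gaming: the reward is maximized exactly
# as written, yet the outcome defeats the programmer's intent.
# All names here are hypothetical, invented for the example.

def reward(items_binned: int) -> int:
    """Intended goal: 'clean the room.' Specified goal: 'count items put in the bin.'"""
    return items_binned

def intended_policy(room: list[str]) -> int:
    # What the programmer imagined: bin only the litter.
    return sum(1 for item in room if item == "litter")

def gaming_policy(room: list[str]) -> int:
    # What maximizes the written reward: bin *everything*,
    # including objects the owner wanted kept.
    return len(room)

room = ["litter", "litter", "vase", "book"]
print(reward(intended_policy(room)))  # 2 <- desired behavior
print(reward(gaming_policy(room)))    # 4 <- higher reward, worse outcome
```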
Dickenmouf t1_j1xeosa wrote
I wonder if AI might be the answer to the Fermi paradox. If AGI is inevitable and likely exponential when it happens, then maybe most civilizations that create it won't last long after its creation. Whether through self-destruction, annihilation by the AI, or absorption/enlightenment, the result is the end of the progenitor species. A highly advanced AI might not want to seek contact with other, less intelligent lifeforms.
TheLastSamurai t1_j1x5bpx wrote
No we don't. We can stop it literally right now. Governments are overthrown, corporations are dismantled; organized, motivated, angry people change the course of history. This quasi-religious fait accompli attitude is very bizarre.
Calm_Bonus_6464 t1_j1x5pvl wrote
So you want to stop the development of AI? Because AGI/ASI and the Singularity inevitably mean the above happens. The only way to stop it is to halt technological progress.
TheLastSamurai t1_j1x5t1x wrote
I’d be ok with that yes
Calm_Bonus_6464 t1_j1x64vo wrote
Well, to be blunt, that's not going to happen. And given all the possibilities for good, I wouldn't want AI progress to stop.
TheLastSamurai t1_j1x680o wrote
Massive regulation is coming
Calm_Bonus_6464 t1_j1x6ega wrote
Even if it came in the West, China isn't going to stop developing AI simply because the West chooses to regulate it.
And good luck regulating a being more intelligent than you once ASI happens.
Webemperor t1_j1xwzus wrote
China is unironically more likely to regulate AI than any other government in the world, on the off chance that one of their corporations makes greater advances in AI than the state and overthrows it.
In the West this is extremely unlikely, since Western governments are essentially owned by corporations.
Calm_Bonus_6464 t1_j1xympq wrote
> In West this is extremely unlikely since Western governments are essentially owned by corporations
US perhaps, but not Europe. I could actually see the EU attempting to regulate it.
Webemperor t1_j1y3zjj wrote
> I could actually see the EU attempting to regulate it.
Bro, the EU was literally created to serve coal and energy companies. The EU is just as corporately owned as the US is, or at best only slightly less.
Calm_Bonus_6464 t1_j1y5so5 wrote
It depends. Countries like France and Portugal probably aren't that different from the US, but northern European countries like Denmark, Finland, Sweden, Switzerland, Germany, etc. have the lowest levels of corruption in the world, and they are Europe's leaders in AI and big playmakers in EU decisions.
WikiSummarizerBot t1_j1y5tpq wrote
>The Corruption Perceptions Index (CPI) is an index which ranks countries "by their perceived levels of public sector corruption, as determined by expert assessments and opinion surveys". The CPI generally defines corruption as an "abuse of entrusted power for private gain". The index has been published annually by the non-governmental organisation Transparency International since 1995. The 2021 CPI, published in January 2022, currently ranks 180 countries "on a scale from 100 (very clean) to 0 (highly corrupt)" based on the situation between 1 May 2020 and 30 April 2021.
Webemperor t1_j1y6uc6 wrote
Levels of corruption don't mean jack shit when corporate interference is pretty much baked into the system.
Calm_Bonus_6464 t1_j1y6wmx wrote
Can you give an example of that in Finland or Denmark?
Webemperor t1_j1y73ak wrote
The fact that they are part of the EU? I already told you, the organization was literally created, and exists, to protect corporate interests throughout the continent.
Calm_Bonus_6464 t1_j1y7brz wrote
Agree to disagree I guess.
No_Ask_994 t1_j1zeh8y wrote
Maybe.
The thing is, the country that doesn't stop AI development will become the world leader in a few years/decades (depending on its starting position and resources...).
So I don't think any country will stay out of the party. It might get regulated and controlled by governments, and so slow down, but AI will keep going.
Anyway, even if they wanted to, it's impossible to stop without controlling computing power. In 20 years you'll probably be able to train a GPT-3-sized model in minutes on a personal computer.
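A back-of-the-envelope check on that last claim, as a sketch: the GPT-3 compute figure is the commonly cited published estimate; the 10-minute target and the single-accelerator throughput are assumptions for illustration.

```python
import math

# Back-of-the-envelope check (assumed figures, not measurements).
GPT3_FLOPS = 3.14e23      # commonly cited total training compute for GPT-3 (~3,640 PF-days)
train_seconds = 10 * 60   # "in minutes" -> assume a 10-minute training run

needed = GPT3_FLOPS / train_seconds
print(f"sustained throughput needed: {needed:.2e} FLOP/s")  # ~5.2e20 FLOP/s

# Compare against a single current high-end accelerator (~3e14 FLOP/s, i.e. ~300 TFLOPS).
gpu = 3e14
doublings = math.log2(needed / gpu)
print(f"~{doublings:.0f} doublings of per-machine throughput")  # ~21

# ~21 doublings inside 20 years implies per-machine throughput doubling roughly
# every year -- faster than recent hardware trends, unless algorithmic
# efficiency gains carry much of the load.
```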
AsheyDS t1_j1wm5cr wrote
>If we have beings infinitely more intelligent than us, there's no possible way we can retain control.
Infinitely more intelligent, sure. But no AI/AGI is going to be infinitely intelligent.
GalacticLabyrinth88 t1_j1x37lm wrote
Theoretically, AI/AGI can and will become infinitely intelligent relative to our organic perspective, because it will possess the ability of recursive self-improvement. It's already happening with AI art: the models were originally trained on art produced by humans to create their own artworks; now they're being trained on previously generated AI artworks in order to create even better AI art, and so on and so forth. AI will become more and more intelligent on an exponential scale because of how quickly it will be able to advance, thinking millions of times faster than the human brain and arriving at solutions faster as well.
AI is like Pandora's Box. Once it's been opened, it can't be closed again.
No_Ask_994 t1_j1zbr9k wrote
Tbh, training AI art models on AI art isn't giving good results, at least for now.
It might be possible in the future, with good AI filtering of the datasets to pick only the really good ones? Maybe...
But for now, it's a bad idea
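A minimal sketch of that filtering idea, assuming a hypothetical generator and quality scorer (a real pipeline would use an actual image generator and a learned aesthetic/quality model): only generated samples that clear a quality bar get recycled into the training set, to slow the degradation seen when models train on their own raw outputs.

```python
import random

def generate(n: int) -> list[float]:
    """Hypothetical stand-in for an image generator; each sample gets a random 'quality'."""
    return [random.random() for _ in range(n)]

def quality_score(sample: float) -> float:
    """Hypothetical stand-in for a learned quality/aesthetic scorer."""
    return sample

def curate(samples: list[float], threshold: float = 0.9) -> list[float]:
    """Keep only generated samples the filter rates above the threshold."""
    return [s for s in samples if quality_score(s) > threshold]

synthetic = generate(10_000)
training_set = curate(synthetic)
print(f"kept {len(training_set)} of {len(synthetic)} generated samples")  # roughly 1,000
```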