Submitted by Gari_305 t3_z6ynq6 in Futurology
Comments
RoyStrokes t1_iy44x91 wrote
If we can get these things to perfect science and shit without doing a Terminator thing, then that would be cool.
BigFatJuicyCocks420 t1_iy59arw wrote
We need an AI to take care of humanity. We essentially must create our own benevolent god.
ruach137 t1_iy65h5a wrote
That is the "best" case scenario.
egowritingcheques t1_iy7msyu wrote
Which other benevolent God is there?
Specific_Main3824 t1_iy7wuxl wrote
The one created for our programming 😳
qdtk t1_iy8j89t wrote
I know how this ends. I’ve seen I, Robot.
dogisgodspeltright t1_iy3wdat wrote
>AI experts are increasingly afraid of what they’re creating
So, ... maybe stop.
JenMacAllister t1_iy3yvmx wrote
Telling humans to stop.
When has that ever worked?
CalamitousCakeDoll t1_iy434kj wrote
No one's paying them to stop; there's lots of money telling them to continue.
Quiet_Dimensions t1_iy46r32 wrote
Not really possible. It would require literally every person on Earth to stop research into AI. Not gonna happen. The incentives don't align with that. Someone is going to keep working on it, so you might as well too, in order to get there first.
JoaqTheLine t1_iy6p5bb wrote
The protein thing is rather beautiful, because protein “folding” is driven by electromagnetic forces that have mathematical properties and are designed to work well together, which ultimately makes organisms…
How can we harness this knowledge to build a world where we all function together?
How can we guide AI to work in synchrony with the global species?
cascadecanyon t1_iy4oq97 wrote
A lesser light asks Ummon, "What are the activities of a sramana?" Ummon answers, "I have not the slightest idea." The dim light then says, "Why haven't you any idea?" Ummon replies, "I just want to keep my no-idea."
BigBadMur t1_iy5mmgk wrote
So they should be afraid of what they are creating, because whatever they put into AI will eventually change it into something that was not originally intended.
jlks1959 t1_iy4hd91 wrote
I think it’s more than clickbait.
NotAnAlreadyTakenID t1_iy4jpd1 wrote
Agreed, but it’s nice, in an uncomfortable way, to monitor the approaching train.
Artistic-Traffic-638 t1_iy7ftdt wrote
Do you want Ultron? Because this is how you get Ultron.
Iron-Doggo t1_iy43anw wrote
People are dumb. They actually think developing AI is a good idea.
Orc_ t1_iybnr51 wrote
Working on things greater than ourselves is the opposite of dumb.
It is human supremacy that is extremely dumb, along with fear of its replacement.
Iron-Doggo t1_iyd48h9 wrote
If a hundred or so years from now we end up as slaves to a super intelligent AI, it will be because enough people like you thought it was a good idea to try developing AI that it was enabled through public opinion. At least nuclear weapons can’t think for themselves.
Orc_ t1_iydq8r1 wrote
> If a hundred or so years from now we end up as slaves to a super intelligent AI
Dumb assumption. Why would barely-clever monkeys be "slaves"? We would simply be ignored. There's nothing we can contribute to such a being; it would be as a god.
> It will be because enough people like you thought it was a good idea to try developing AI that it was enabled through public opinion.
I wouldn't just "think" it's a good idea, I would go to war for it.
Just because you form your opinions based on mass-produced media (shitty sci-fi) doesn't mean the rest of us should cower in fear of technology. Independent sentient AI isn't even possible.
Gari_305 OP t1_iy3puxb wrote
From the Article
>In 2018 at the World Economic Forum in Davos, Google CEO Sundar Pichai had something to say: “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” Pichai’s comment was met with a healthy dose of skepticism. But nearly five years later, it’s looking more and more prescient.
>
>AI translation is now so advanced that it’s on the brink of obviating language barriers on the internet among the most widely spoken languages. College professors are tearing their hair out because AI text generators can now write essays as well as your typical undergraduate — making it easy to cheat in a way no plagiarism detector can catch. AI-generated artwork is even winning state fairs. A new tool called Copilot uses machine learning to predict and complete lines of computer code, bringing the possibility of an AI system that could write itself one step closer. DeepMind’s AlphaFold system, which uses AI to predict the 3D structure of just about every protein in existence, was so impressive that the journal Science named it 2021’s Breakthrough of the Year.
>
>You can even see it in the first paragraph of this story, which was largely generated for me by the OpenAI language model GPT-3.