kalavala93 t1_j66e2iy wrote
With how flawed man is, I'm trying to figure out how AI won't kill us. It just seems like its mandate at this point.
RobbieQuarantino t1_j66q3lh wrote
Not sure why you're being downvoted.
Dumb algorithms are already being used to drive a wedge between groups of people, so I'm not sure what the futurists think will happen if and when AGI hits the scene.
kalavala93 t1_j66yhtf wrote
I'm being downvoted because people don't like to hear negative things. I mean...this is the singularity subreddit. It's a subreddit whose purpose is reliant on an AGI bringing us there.
Suggesting the likely reality that AI is going to kill us ruins the singularity for everyone.
It's like telling Christians that their salvation is contingent on Christ coming back to redeem mankind, and then telling them he's coming back to commit mass human genocide. It doesn't sit too well with them.
That said, I don't want AGI to do this, and I hope it doesn't. But AGI research is exploding and alignment research has gone NOWHERE meaningful at all. So yes, it is likely AGI will kill us. But there is a chance it won't.
LiveComfortable3228 t1_j6765jg wrote
Conceptually, I understand the alignment problem and why it's important. From a practical PoV, however, I think this problem is completely overblown. Happy to hear why an AGI is likely to kill us all.
My main concern is really the impact of AGI on society, corporations and the future of work. I think it will have a MASSIVE impact everywhere, in all areas; most people will not be able to re-adapt / re-learn, and UBI is not going to be a viable answer.
I don't believe in utopias of AGIs working for us while we pasture, play and create art.
rixtil41 t1_j67h6lb wrote
Does AI alignment mean that it has to agree with us on everything?
kalavala93 t1_j687cdg wrote
To me, AI alignment means that, at a minimum, it has to not kill us. The problem with getting it to agree with us is that we can't even agree with each other. We don't even have a unified view of what AI alignment looks like...AI alignment in China could look like "help China, fight the USA". That makes things very complicated.