Submitted by Rumianti6 t3_y0hs5u in singularity
Neburtron t1_irtmgk6 wrote
Makes no sense. We don't know the specifics of human intelligence, but we do know where it came from and why it appeared: evolution under survival pressure. That is nowhere near how we build AI today.
Then, you are designing the scenario: what it learns from. If you don't want it to pause the game, don't let it. If you don't want an AI to try to kill people, make that a fail case in training. That's an oversimplification, but the basic approach applies very broadly.
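A minimal sketch of what "make it a fail case in training" could look like: a wrapper around an environment's step function that replaces the reward with a large penalty and ends the episode whenever the agent reaches a forbidden outcome. Everything here (`FORBIDDEN`, `shaped_step`, the toy environment) is hypothetical and just illustrates the idea, not any real RL library's API.

```python
# Hypothetical fail-case wrapper: forbidden outcomes get a hard negative
# reward and terminate the episode, so training never rewards them.

FORBIDDEN = {"pause_game", "harm_human"}  # outcomes we never want rewarded
PENALTY = -100.0

def shaped_step(env_step, action):
    """Wrap an environment's step function so forbidden outcomes fail hard."""
    state, reward, done = env_step(action)
    if state in FORBIDDEN:
        return state, PENALTY, True  # fail case: big penalty, episode over
    return state, reward, done

# Toy environment: the action simply becomes the next state, base reward 1.0.
def toy_step(action):
    return action, 1.0, False

print(shaped_step(toy_step, "collect_coin"))  # ('collect_coin', 1.0, False)
print(shaped_step(toy_step, "pause_game"))    # ('pause_game', -100.0, True)
```

The design choice is that the penalty lives in the training signal, not in the agent: the agent can still attempt the action, but learning pushes it away from ever choosing it.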
Not so much with modern image-gen AI, because it isn't trained against hand-written rules; instead, one AI tries to get as close as it can to real images while a second tries to recognize which ones are real. The only problem there is that deciding what feedback to give them is automatic.
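The two-network setup described there (a generator trying to pass off fakes, a discriminator trying to spot them, i.e. a GAN) can be sketched as the opposing losses each side minimizes. This is a toy illustration using plain binary cross-entropy on single probabilities, not any particular framework's training loop:

```python
import math

def bce(p, label):
    """Binary cross-entropy of a single probability p against a 0/1 label."""
    eps = 1e-12  # avoid log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants to score real images as 1 and fakes as 0.
    return bce(d_real, 1) + bce(d_fake, 0)

def generator_loss(d_fake):
    # The generator wants the discriminator fooled: fakes scored as 1.
    return bce(d_fake, 1)

# A confident, correct discriminator (real->0.9, fake->0.1) has low loss,
# and that same score makes the generator's loss high, pushing it to improve.
print(discriminator_loss(0.9, 0.1))
print(generator_loss(0.1))
```

The "automatic feedback" point falls out of this structure: neither network needs a human-written rule for what a good image is, because each one's loss is computed from the other's output.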
Everyone is scared because people have built stories, over and over, on the fear of the unknown that AI can supply. What we are actually making are optimized programs for compressing images and predicting how text will continue. Be more scared of automated social engineering, misinformation, hacking, and other real threats.