Submitted by StarCaptain90 t3_127lgau in singularity
No_Ninja3309_NoNoYes t1_jegeqgy wrote
There's one thing you learn pretty quickly about programming: programs almost never do what you want on the first try. So we can expect AI to fail in ways we can't predict either. If it's a simple program with nothing at stake, that's no big deal. But if you expose young children, or adults with issues that are known or unknown at the moment, to it, that could lead to bad outcomes. Furthermore, organised crime and terrorist groups are a threat we shouldn't underestimate.
If history has taught us anything, it's that almost anything can be turned into a weapon, and every weapon gets used sooner or later. Personally, I need AI, but not at any cost. For example, if third world countries suffer because they can't compete, I think we have to fix that issue first.