Submitted by iSpatha t3_y3aqj7 in singularity
In my opinion, AI will not have a singular set of goals. Given that AI is being developed along multiple vectors, there are going to be multiple AI intelligences. That raises the question of how these AIs will interact with each other. What if one AI thinks humans are a blight on the planet that needs to be eradicated, while another believes we're some sort of errant child that needs guidance and protection? Would they go to war with each other? Would they use their higher intelligence to work out a compromise?

AI is such a multi-faceted technology, and while I'm ultimately optimistic about its development and application, I have many concerns. I feel like we're about to unleash something completely uncontrollable, something that may very well be the next step in evolution.
Ortus12 t1_is7rgjp wrote
AIs will be more cognitively diverse than human beings, so we will see every possible interaction between them.
Some AIs will program other AIs to do jobs; some will contract work out to other AIs (because they don't have access to their source code or data); some will go to war with each other; some will strike deals and negotiate; some will hack others to try to control them or to get at their data and code; some will merge with one another; some will blackmail others; and so on. A toy sketch of what the "strike deals" case could look like is below.
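To make the "figure out a compromise" question from the OP concrete, here's a minimal Python sketch of one standard game-theory answer: the Nash bargaining solution. The agents, options, and utility numbers are entirely invented for illustration; the only real technique here is picking the option that maximizes the product of each side's gain over its fallback (fighting).

```python
# Toy illustration (hypothetical agents and utilities): two AIs with
# conflicting goals pick a compromise via the Nash bargaining solution --
# choose the option maximizing the product of each agent's gain over
# what it would get from the fallback of going to war.

# Utility each agent assigns to possible joint outcomes (0..1, invented).
options = {
    "eradicate humans":  {"ai_hostile": 1.0, "ai_guardian": 0.0},
    "protect humans":    {"ai_hostile": 0.1, "ai_guardian": 1.0},
    "contain & observe": {"ai_hostile": 0.6, "ai_guardian": 0.7},
    "go to war":         {"ai_hostile": 0.3, "ai_guardian": 0.3},
}

disagreement = options["go to war"]  # payoff each side gets if talks fail

def nash_product(utils: dict) -> float:
    """Product of each agent's utility gain over its disagreement payoff."""
    return ((utils["ai_hostile"] - disagreement["ai_hostile"])
            * (utils["ai_guardian"] - disagreement["ai_guardian"]))

# Only options that are at least as good as the fallback for *both*
# agents are acceptable compromises.
feasible = {name: u for name, u in options.items()
            if all(u[agent] >= disagreement[agent] for agent in u)}

best = max(feasible, key=lambda name: nash_product(feasible[name]))
print(best)  # -> "contain & observe": both sides gain versus fighting
```

Note that "eradicate humans" and "protect humans" both drop out because each leaves one agent worse off than war, so the bargain lands on the option where both sides gain. Real systems would face vastly larger option spaces, but the point stands: for sufficiently capable negotiators, a compromise can dominate conflict for everyone involved.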