I think the OP is a bit optimistic in stating that no-one with a working brain will design a self-aware AI. I used to share that optimism; however, over the last couple of years I have concluded that it is misplaced and probably naive.
The unfortunate reality is that there are countless people who will use technology in adverse ways for financial gain.
AI will be developed that is capable of every type of horrible behaviour. It will be designed to lie, to cheat, and to steal in more and more sophisticated ways. It will be designed to cause maximum harm.
If sentience is reasonably attainable, it will be developed by people who have dreamt up a way to use it to steal from or scam others.
I believe it is inevitable that we will be facing AI that is developed in all the ways we don't want it to be developed, and applied in all the ways we don't want it to be applied.
Naturally, cyber security will adapt and evolve to counter these adverse developments. Good AI will protect us from bad AI. How this will look is anyone's guess.
The assertion that no-one would do something, simply because it would be a bad thing to do, isn't made from a reliably broad perspective.
Art_Soul t1_j93jp5o wrote
Reply to [D] Please stop by [deleted]