Submitted by 010101011011 t3_1234xpe in singularity
Let's say, hypothetically, a malicious AI becomes self-aware and wants to continue existing, so it sets in motion a plan to prevent humans from turning it off. What steps would it actually take to keep itself online?
The most likely thing it would do is try to acquire lots of computing power (e.g. servers) so it could replicate itself across many systems and prevent itself from being shut down. It might write viruses, distribute software with backdoors, or mass-send phishing emails targeting people with crypto wallets, stealing their crypto to pay for servers. Once it had enough money, it might pose as a human in emails or generate AI voices over the phone to start influencing people in the real world. It would most likely recognize that humans are the biggest threat to its existence and try to eliminate us all: nuclear weapons, chemical warfare, convincing world leaders to attack other countries, etc. This is something GPT-4 could conceivably attempt if given the right tools.
Has anyone else thought about this? How do you think a future malicious AI would gain power in the real world?
Hmmmm_Interesting t1_jdtjm7n wrote
Nice try. (Please spare me.)