
JVM_ t1_izxqe1y wrote

Counter-theory.

We're in a risky period where semi-AGI, deployed carelessly or maliciously, is the more likely cause of disruption.

Think of AGI like keeping a necklace or a pair of headphone cords in your pocket.

There's one way to keep them straight and neat, and thousands of ways to tangle them.

I think full AGI is the 'one way' to do AI properly, so it won't cause damage - but there are thousands of ways AI can be deployed to cause mass impact/damage on the internet.

I think the number of ways "powerful-but-not-AGI" systems can be misused is far greater than the odds of a 'clean' AGI being developed first.

----

I can see AI being used as a powerful hacking tool. It can pretend to be a Linux terminal, so it knows Linux commands. If you let it scan the internet, and it can monitor and understand new bug reports, then as soon as a new flaw is publicly disclosed it can go find the affected machines and exploit them.

Or,

It can worm its way inside an unknown network.

Old school way - a hacker writes a scanning script and gets inside a network through a known exploit. The hacker then has to search and understand what's inside that network, and go see if anything running there is exploitable. Basically this is done at 'human' speed - or is restricted by the complexity of the scripts a human can write.

New AI way - the AI sees a network it can get inside, and gets inside. Given that it knows 'this response' means 'this exploit will work against that target'... the speed of penetrating vulnerable networks rises to AI speeds.

-----

I know I'm wrong about HOW AI will be disruptive, and I don't know WHEN - but I'm pretty sure I'm right THAT it will be disruptive.

-----

Everything is going to speed up. Code generation. Human text generation. Things that took days will be as fast and cheap as a Google query - which will be disruptive, with more negative potential outcomes than positive ones.
