
Dendriform1491 t1_jdgiab6 wrote

Many organisms exhibit self-preservation behaviors and do not even possess the most basic cognitive capabilities or theory of mind.

Can ML systems exhibit unexpected emergent behavior? Yes, all the time.

Can an AI potentially go rogue? Sure. Considering that operating systems, GPU drivers, scientific computing libraries, and machine learning libraries all have memory safety issues, and that even RAM modules have memory safety issues (e.g., Rowhammer), it would be plausible for a sufficiently advanced machine learning system to break any kind of measure put in place to keep it contained.

Considering that there are AI/ML models suggesting code to programmers (GitHub Copilot), who often won't pay much attention to what is being suggested and will simply compile and run it, it would be trivial for a sufficiently advanced malicious AI/ML system to escape containment.
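To make the "code that looks harmless but isn't" point concrete, here's a minimal, benign sketch (my example, not from the comment) using Python's `pickle` module: unpickling untrusted data can execute arbitrary code as a side effect, which is exactly the kind of thing a hurried reviewer could miss in a suggested snippet.

```python
import pickle

class Payload:
    """An object whose unpickling triggers an arbitrary function call."""
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct the object.
        # Here it says: "call print(...) instead", so deserializing
        # this blob runs code chosen by whoever produced the bytes.
        # A real attacker would substitute something far worse than print.
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message: code executed just by loading data
```

This is why the `pickle` documentation warns against unpickling data from untrusted sources; the same "data is actually code" pattern applies to many deserialization and build-time mechanisms a code assistant might casually suggest.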
