
y53rw t1_j9n86st wrote

The danger is that we don't yet know how to properly encode our values and goals into AI. If we have an entity that is more intelligent and more capable than us and that does not share our values and goals, then it's going to transform the world in ways we probably won't like. And if we stand in the way of its goals, even inadvertently, then it will likely destroy us. Note that "standing in its way" could simply mean existing and taking up precious resources, like land and the matter that makes up our bodies.


NoidoDev t1_j9nyjqh wrote

>The danger is that we don't yet know how to properly encode our values and goals into AI.

The danger lies in giving such a system too much power, perhaps with no delay between "having an idea" and executing it, and in not having other systems in place to stop it if something goes wrong.
