hxckrt t1_j8rh0ey wrote

It's only terrifying that you can't fully control it if it has goals of its own. Without that, it's just a broken product. Who's gonna systematically manipulate someone, the non-sentient language model, or the engineers who can't get it to do what they want?

str8grizzlee t1_j8rib5a wrote

We don’t know what its goals are. We have a rough idea of the goals it’s been given by engineers trying to make it output stuff that will please humans. We don’t know how it might interpret those goals in unintended ways.

MuForceShoelace t1_j8rmbnc wrote

It doesn't have "goals". You have to understand how simple this thing is.

hxckrt t1_j8rkm9a wrote

So any manipulation isn't going to be goal-oriented and persistent, just a fluke, a malfunction? Because that was my point.
