
[deleted] t1_ja9j71z wrote

[deleted]

2

turnip_burrito t1_jabmheb wrote

Not when powerful AI is being fine-tuned to maximize a reward (money).

This is essentially the reinforcement learning alignment problem, just with a human+AI system instead of the AI by itself: misaligned incentives (money vs. human well-being).
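A toy sketch of that mismatch, not anyone's real system: the names (`proxy_reward`, `true_wellbeing`, `intensity`) and the reward shapes are made up purely for illustration. It shows a human+AI pair hill-climbing a proxy reward (revenue) that only partially tracks the thing we actually care about (well-being), so the proxy keeps going up after well-being starts falling.

```python
# Hypothetical illustration of misaligned incentives: the system optimizes a
# proxy reward (revenue) while the true objective (well-being) diverges.
import random

def proxy_reward(intensity: float) -> float:
    """Revenue: keeps rising the harder the system optimizes (assumed shape)."""
    return 10 * intensity

def true_wellbeing(intensity: float) -> float:
    """Well-being: improves at first, then degrades past a point (assumed shape)."""
    return 10 * intensity - 4 * intensity ** 2

intensity = 0.0
for step in range(20):
    candidate = intensity + random.uniform(0.0, 0.2)
    # The human+AI system accepts any change that raises revenue...
    if proxy_reward(candidate) > proxy_reward(intensity):
        intensity = candidate
    # ...even once well-being has started to decline.
    print(f"step={step:2d} intensity={intensity:.2f} "
          f"revenue={proxy_reward(intensity):6.2f} "
          f"wellbeing={true_wellbeing(intensity):6.2f}")
```

Running it, revenue increases monotonically while well-being peaks and then drops, which is the "unaligned incentives" point in miniature.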

1