Submitted by zalivom1s t3_11da7sq in singularity
[deleted] t1_ja9j71z wrote
[deleted]
turnip_burrito t1_jabmheb wrote
Not when powerful AI is being fine-tuned to maximize a reward (money).
This is the whole reinforcement learning alignment problem, just with a human+AI system instead of an AI by itself: misaligned incentives (money vs. human well-being).
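(To make the incentive mismatch concrete, here's a minimal toy sketch in Python; it's not from any real system, and the action names and numbers are made up. The optimizer only ever sees the money column, so the action it picks can score badly on the well-being column we actually care about.)

```python
# Toy sketch of misaligned incentives: the optimizer sees only a proxy
# reward (money), not the true objective (well-being).
# All actions and values below are hypothetical illustrations.

actions = {
    # action: (proxy_reward_money, true_value_wellbeing)
    "engagement_maximizing_feed": (10.0, -2.0),
    "balanced_recommendations":   (6.0,  3.0),
    "minimal_ads":                (2.0,  5.0),
}

def best_by(metric_index: int) -> str:
    """Return the action that maximizes one column of the table."""
    return max(actions, key=lambda a: actions[a][metric_index])

if __name__ == "__main__":
    chosen = best_by(0)   # what a money-maximizing system picks
    ideal = best_by(1)    # what we'd pick if well-being were the reward
    print(f"Reward-maximizing choice: {chosen}")
    print(f"Well-being-maximizing choice: {ideal}")
    # The two differ: optimizing the proxy is not the same as
    # optimizing the goal the proxy was supposed to stand in for.
```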