Jinoc t1_j9umdg2 wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
It’s an example of noticing the misalignment, but the misalignment is only a problem insofar as it is a symptom of the deeper problem I mentioned.
EY was very explicit that he doesn’t think GPT-style models are any threat whatsoever (the proliferation of convincing but fake text is possibly a societal problem, but it’s not an extinction risk).
Jinoc t1_j9ujftx wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Yes? I fail to see how that goes against what I’m saying.
Jinoc t1_j9ub6f1 wrote
Reply to comment by VirtualHat in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
What makes an extinction-level event unlikely in your view if you do believe advanced models will act so as to maximise control? Is it that you don’t believe in the capabilities of such a model?
Jinoc t1_j9u8ces wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That’s… not what his followers are saying. The hand-wringing about Bing hasn’t been about its misalignment per se, but about what it proves about the willingness of Microsoft and OpenAI to rush a defective product to release in an arms race situation. It’s not that the alignment is bad, it’s that alignment clearly didn’t register as a priority in the eyes of leadership, and it’s dangerous to expect that things will get better as AI gets more capable.
Jinoc t1_j9rpo3y wrote
Reply to comment by sticky_symbols in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
That’s a misreading of what the AI alignment people say; they’re quite explicit that agency is not necessary for AI risk.
Jinoc t1_j9uvzpb wrote
Reply to comment by Imnimo in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
But that’s a semantic disagreement about the proper use of “misalignment”; the substantive risk posed by the incentives of an AI arms race is the problem.