
mm_maybe t1_ire5ryy wrote

Ok, I apologize for characterizing you in a non-serious way. You have every reason to be proud of your accomplishments and career... it is a real challenge to get to where you are now, and Horatio Alger stories aside, people from disadvantaged backgrounds (low-income, non-white, female) are statistically much less likely to become machine learning engineers. So I'm not convinced that accomplished experts like yourself, who say that the speculative existential risks of AI in the distant future outweigh the concrete distributional risks of asymmetric access to and control over machine learning technology today, aren't simply placing a higher value on risks that could affect people like themselves over risks that probably won't.

1

tornado28 t1_irtfv6q wrote

Thanks for apologizing but... are you seriously claiming that AI experts are not the right people to evaluate existential risk from AI?

1

mm_maybe t1_irttxfj wrote

I am saying that I would give greater weight to the concerns of those negatively impacted by ML today than to the anxieties of those who might only speculatively be harmed by AGI in the future, and who actually benefit from AI adoption in the meantime.

0