
Top-Perspective2560 t1_j9ukzgc wrote

As others have said, taking AI ethics and safety seriously is a good thing.

The problem is - and this is Just My Opinion™ - that people like EY are making what amount to spurious speculations about completely nebulous topics such as AGI, with very little to show that they actually understand, in technical detail, where AI/ML currently stands or what the current SOTA is. EY in particular seems to have jumped straight to those topics without any grounding in technical AI/ML research. I can't help but feel that, on some level at least, those topics were chosen because it's easy to grab headlines and get media attention by making statements about them.

I'm not saying it's a bad thing to have people like EY around, or that he or others like him are bad actors, or that they shouldn't continue doing what they're doing. They may well turn out to be correct, and their ideas aren't necessarily wrong. It's just very difficult to genuinely take what they say seriously or make any practical decisions based on it, because so much of it is speculative. It reminds me a bit of Asimov's Laws of Robotics - they seemed to make a lot of sense decades ago, before anyone knew how the development of AI/ML would pan out, but in reality they're really just "it would be great if things worked this way," with no practical plan for implementing them, or even any way to know whether they would actually be relevant.

The other thing is, as other people have pointed out, that there are immediate and real problems with AI/ML as it stands, and solving those problems or avoiding disaster requires more than speculative statements. The lack of will among the biggest names in AI/ML ethics and safety to address those issues is quite conspicuous.


Edit: Added a bit about Asimov's Laws of Robotics which occurred to me after I made the post.
