
andreichiffa t1_j9t35a6 wrote

No. As a matter of fact, I consider it harmful, and I am far from being alone in that regard.

What you need to understand is that AI* already kills. Not only military/law-enforcement AI that misidentifies people and gets them killed, or searched and killed, or poisoned and killed in prison, but also the kinds of AI you interact with on a daily basis. Recommendation algorithms that promote disinformation about vaccine safety and COVID risk have killed hundreds of thousands. Medical AIs that fail to identify sepsis in 70% of cases, yet are widely deployed and override doctors in hospitals, have killed thousands. Tesla Autopilot kills its passengers on a regular basis. Conversational LLM agents will tell users how to do electrical work and kill them in the process.

But here is the thing. Working on the safety of such AIs creates conflict: with the engineers and researchers developing them, with the execs who greenlight them, with the influencers who touted them, with the stakeholders earning money from the additional sales the AI feature generated. So safety and QA teams get fired, donations get made to universities to get rid of particularly vocal critics of the current state of affairs, Google de-indexes their work, and Facebook randomly and accidentally deletes their posts (Bengio vs LeCun circa 2019, I believe, and the reason the latter moved to Twitter).

The problem with the super-human-AGI folks (and the longtermism/EA crowd more generally, to which Eliezer Yudkowsky belongs) is that they claim none of those problems matter, because if SH-AGI arises, if it decides to meddle in human affairs, if we don't have enclaves free from it, then even if it only happens 100 years from now, it will be so bad that it makes everything else irrelevant.

That's a lot of "ifs". And a long timeline. And there are pretty good theoretical reasons to believe that even if SH-AGI arises, its capabilities would not be as extensive as the EA crowd claims (impossibility theorems, and Solomonoff-computability bounds with respect to energy and memory). And then there are theoretical arguments (Gödel's incompleteness) as to why we wouldn't be able to prevent it anyway, even if it started to emerge now.

But in principle, yeah, sure, why not: you never know if something interesting pops up along the way.

The problem is that, in the way it is currently formulated and advertised, it plays to cultural memes (HAL, A.I., ...) and to the Type A personalities of younger engineers and researchers (work on the **most important** problem, likely to make you **most famous**) in a way that completely drowns out the problems with AI that are already here, from both the general public's and engineers' perspectives.

It is perhaps not a coincidence that a lot of entities that stand to lose reputation or income from in-depth looks into the safety and alignment of current AIs are donating quite a lot to EA/longtermism and lending it their own credibility.

*To avoid sterile semantic debates: to me, an AI is any non-explicitly-coded program that makes decisions on its own. Hence an LLM without a sampler is non-AI ML, whereas a generative LLM with a sampler is AI (generative ML).
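
For concreteness, here is a minimal sketch of where I draw that line, with a toy vocabulary and made-up function names (nothing here comes from a real library): the model on its own only produces scores over options, while the sampler is the component that turns those scores into an actual decision.

```python
import numpy as np

# Toy illustration of the model/sampler distinction. The vocabulary,
# scoring, and function names are invented for this example.
VOCAB = ["safe", "unsafe", "maybe", "unknown"]

def model_logits(prompt: str) -> np.ndarray:
    """Stand-in for an LLM forward pass: returns unnormalized scores.
    No decision is made at this stage (non-AI ML in the sense above)."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.normal(size=len(VOCAB))

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> str:
    """Temperature softmax sampling: this step commits to a choice,
    which is what makes the overall system generative/decision-making."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return VOCAB[np.random.choice(len(VOCAB), p=probs)]

logits = model_logits("Is it safe to rewire this outlet myself?")
print(sample_next_token(logits, temperature=0.8))  # stochastic decision
print(VOCAB[int(np.argmax(logits))])               # greedy decoding, another decision rule
```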
