Submitted by SchmidhuberDidIt t3_11ada91 in MachineLearning
A recent podcast interview with EY has gone a bit viral. In it, he claims that researchers have dismissed his views without seriously engaging with his arguments, which are described here in relative detail.
I'm aware of ongoing AI safety and interpretability research. However, the term "AI safety" is used both for something close to AI ethics and for something close to preventing an existential threat to humanity. As a layperson, that dual usage makes it hard for me to pin down the goals of, say, Anthropic, and the extent to which they treat the latter as a serious concern.
I haven't personally found EY's arguments particularly rigorous, but I'm not the person best suited to evaluate their validity. Any thoughts are appreciated. Thanks in advance!
MinaKovacs t1_j9ref87 wrote
We are so far away from anything you can really call "AI" that it is not on my mind at all. What we have today is simply algorithmic pattern recognition, and the results are actually really disappointing. The scale of ChatGPT is impressive, but the performance is not. Many, many thousands of man-hours were needed to manually tag the training datasets. The only place "AI" exists is in the marketing department.