Submitted by kdun19ham t3_111jahr in singularity
Sam Altman recently credited Eliezer Yudkowsky for his contributions to the AI community, yet Yudkowsky regularly expresses the view that we've failed at alignment and humans will be dead within 10 years.
Altman has a much rosier picture of AI creating massive wealth and a utopia-like world for future generations.
Do they both have sound arguments? Has Altman ever commented on Yudkowsky’s pessimism? Is one viewed as more credible in the AI community?
Asking as a member of the general public who terrifyingly happened upon Yudkowsky doom articles/posts.
ThirdFloorNorth t1_j8ffu9s wrote
Eliezer Yudkowsky is a prominent transhumanist with whom I disagree on pretty much every single opinion he has ever espoused; yes, somehow we are both still transhumanists. His views on, and response to, Roko's Basilisk in particular are fucking embarrassing.
So I'm gonna go with Altman.
In the end, it won't matter either way. Either Altman is right, and we will get a benevolent AI, or Yudkowsky is right, and we're capital-F Fucked.
Either way, AI is coming. All we can do is wait and see.