SleekEagle t1_j9vl7r3 wrote
Reply to comment by wind_dude in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't think anyone believes it will be LLMs that undergo an intelligence explosion, but they could certainly be a piece of the puzzle. Look at how much progress has been made in the past 10 years alone - imo it's not unreasonable to think that the alignment problem will be a serious concern within the next 30 years or so.
In the short term, though, I agree that people doing bad things with AI is much more likely than an intelligence explosion.
Whatever anyone's opinion, the fact that views among very smart and knowledgeable people run the gamut is itself a testament to why we need to dedicate serious resources to ethical AI, beyond the disclaimer at the end of every paper that models may contain biases.