
d4em t1_ixdroz5 wrote

Oh yeah, this is a whole rabbit hole. There are also algorithms being trained by people to identify subjective values, such as "niceness." These are notoriously biased as well, as biased, in fact, as the people who train them. But unlike those people, the AI's opinion won't be changed by actually getting to know the person it's judging. It gives 100% confident, biased results.

Or take the chatbots that interpret written language and earlier conversations to simulate conversation. One of them was unleashed on the internet and was praising Hitler within hours. Another, a scientific model designed to skim research papers and give summaries to scientists, answered that vaccines both can and cannot cause autism.

These don't bother me, though. They're so obviously broken that no one will think to genuinely rely on them. What bothers me is the idea of this type of tech becoming advanced enough to sound coherent and reliable, because the same issues disrupting the reliability of today's AI will still be present; they're inherent limitations of the technology. Yet even today we have people hailing the computer as our moral savior, supposed to end untruth and uncertainty. If the tech gets a facelift, I believe many people will falsely place their trust in a machine that simply cannot do what is being asked of it, but tries its damnedest to make it look like it can.
