Comments

ooru t1_iz609ur wrote

It's not. In fact, it's only good at surface details. Ask it deep questions about philosophy or specifics about scientific inquiry, and it gets that stuff wrong. The response is impressive...

...but the problem is: it sounds plausible to the layperson, so the layperson doesn't know they're getting incorrect information (and the developers wouldn't know, either). This is currently a good way to spread misinformation. If this is the future, we're headed towards dystopia.

13

wwarnout t1_iz69yfe wrote

> but the problem is: it sounds plausible to the layperson

Exactly. I'm an engineer, so I asked it to calculate the loading on a beam. The first attempt returned 35 grams, which is 4 orders of magnitude too small. The second attempt returned 800 kg, which is more plausible, but I'd have to do my own calcs to verify it.

4
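The sanity check described above can be sketched as a quick order-of-magnitude comparison. The values below are hypothetical, chosen only to mirror the commenter's example of a 35 g answer against a plausible figure near 800 kg:

```python
import math

def orders_of_magnitude_off(reported_kg: float, expected_kg: float) -> float:
    """How many orders of magnitude the reported value differs from the expected one.

    Negative means the reported value is too small.
    """
    return math.log10(reported_kg / expected_kg)

# Hypothetical beam-load check: 35 g (0.035 kg) versus ~800 kg.
diff = orders_of_magnitude_off(0.035, 800)
print(round(diff, 1))  # about -4.4, i.e. roughly four orders of magnitude too small
```

A check like this only catches gross errors; as the comment notes, a plausible-looking answer still has to be verified with a real calculation.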

MadmanKe_254 t1_iz7qzax wrote

It doesn't feel like a human, though... a couple of the answers seemed generic.

2

mach219 t1_iz8e0yu wrote

Here's some dystopia for you: big corporations and governments will be behind the biggest AI tools in the future. They will say and show whatever they are trained to say and show, and will spread ideologies and censorship.

- Information will be filtered by AI
- Creativity and imagination replaced by AI
- Fewer human interactions

1

AndiLivia t1_iz8et3m wrote

This hummus got me fartin' like a beast

−1