
str8grizzlee t1_j6gehkm wrote

Not really. One of my colleagues asked ChatGPT for a list of celebrities who shared a birthday with him. The list was wrong - ChatGPT had hallucinated false birthdays for a number of celebrities.

Brad Pitt’s birthday is already in ChatGPT’s training data, so more or better training data can’t fix this problem. The issue is that the model outputs false information because it is designed to produce words probabilistically, without regard for truth. Hallucinations can only be mitigated manually, by reinforcing good responses over bad ones, and even if the model gets better at producing good responses, it will still hallucinate in response to novel prompts. Scale isn’t a panacea.
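To make that concrete, here's a minimal sketch of the failure mode. The probabilities are invented for illustration and have nothing to do with the real model's internals:

```python
import random

# Toy sketch: a language model completes a prompt by sampling from a
# probability distribution over continuations it saw during training.
# These weights are made up purely for illustration.
next_token_probs = {
    "December 18, 1963": 0.45,  # the correct birthday, common in training text
    "December 18, 1964": 0.35,  # off-by-one-year noise from sloppy sources
    "November 18, 1963": 0.20,  # plausible-looking but wrong
}

def complete(prompt: str) -> str:
    tokens, weights = zip(*next_token_probs.items())
    # The model samples a likely-sounding continuation. No step in this
    # process ever checks the answer against a source of truth.
    return prompt + random.choices(tokens, weights=weights)[0]

print(complete("Brad Pitt was born on "))
```

Even with the correct date as the single most likely continuation, this sketch outputs a wrong date more than half the time, and adding more copies of the same data only shifts the weights.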

11

actuallyserious650 t1_j6ggw67 wrote

This is the point most people miss. ChatGPT doesn’t understand anything. It’d tell you 783 x 4561 = 10678 if those three numbers were written that way often enough online. It creates compelling-sounding narratives because we, the humans, are masters at putting meaning into the words we read. But as we’re already seeing, ChatGPT will trot out easily disprovable falsehoods as long as they sound close enough to normal speech.
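A toy version of that arithmetic failure (made-up corpus, nothing like a real LLM's training set):

```python
from collections import Counter

# Toy illustration: a purely statistical "model" completes the prompt with
# whatever continuation appeared most often in its corpus. No arithmetic
# is performed anywhere.
corpus = [
    "783 x 4561 = 10678",    # suppose this wrong answer circulates widely
    "783 x 4561 = 10678",
    "783 x 4561 = 3571263",  # the true product, seen less often
]
counts = Counter(line.split(" = ")[1] for line in corpus)

print("model says:", counts.most_common(1)[0][0])  # 10678, the popular answer
print("math says: ", 783 * 4561)                   # 3571263
```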

16

erics75218 t1_j6gqte1 wrote

Bingo. And the people who matter, when it comes to being a huge pain in AI's ass, will never learn.

Don't like ChatGPT's responses? Then just talk to Truth Social's FreedomBOT that's been trained on Fox News media. Lol.

Ground truth for human-created historical documents, outside of scientific shit, probably doesn't exist?

Celeb birthdays are fun. There is so much BS out there about celebrities that the results must be hilarious on occasion.

6

DasKapitalist t1_j6hwc3s wrote

What's worse is that it's been deliberately lobotomized on a slew of topics, so at best it's repeating the received knowledge of whatever passes for the mainstream, which is fickle and frequently inaccurate.

3

BabaYagaTheBoogeyman t1_j6gi57f wrote

As much as we want to believe we have all the answers, the internet is full of misinformation and half-truths. If AI doesn't have the ability to distinguish fact from fiction, it will never replace humans.

4

[deleted] t1_j6gfivm wrote

[deleted]

3

fksly t1_j6h8hoh wrote

The ChatGPT approach? Yes. Nobody seriously into AI thinks it is a good way to get anything close to general-purpose intelligence.

In fact, in a way, it has been getting worse. It is better at bullshitting and appearing correct, but it got less correct compared to the last iteration of ChatGPT.

6

str8grizzlee t1_j6hker8 wrote

Of course I think the tech will improve. I just think accuracy is not solved by more training data.

1

JoaoMXN t1_j6h6yhg wrote

Yes, really. ChatGPT is one of the least complex "AIs" out there; LaMDA, for example, which will be available in the future, was trained on billions more data points. And we'll get more and more AIs like that in a matter of years. I wouldn't underestimate AIs the way you do.

2

str8grizzlee t1_j6hko4c wrote

I’m not underestimating future iterations, but you’re totally missing my point: accuracy is not solved by more data. It is solved by better modeling.

3

Due_Cauliflower_9669 t1_j6gtawv wrote

Where does “better training data” come from? These bots are using data from the open web. The open web is full of good stuff but also a lot of bullshit. That approach ensures they keep training on a mix of high-quality and low-quality data.

2

nicuramar t1_j6ph4w8 wrote

> Where does “better training data” come from? These bots are using data from the open web.

The raw data is from there, among other things, but there is more to it. The model was also trained using supervised learning and reinforcement learning.
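For ChatGPT, that reinforcement step is RLHF (reinforcement learning from human feedback). Very roughly, the idea looks like this; a toy sketch, not the actual pipeline, and the reward rule below is invented purely for illustration:

```python
# Toy sketch of the RLHF idea: a reward model scores candidate answers,
# and training nudges the model toward higher-scoring outputs.

def reward_model(response: str) -> float:
    # Stand-in for a learned model of human preferences: pretend raters
    # rewarded hedged answers over confident fabrication.
    return 1.0 if "I believe" in response else -1.0

candidates = [
    "Brad Pitt was born on November 18, 1964.",            # confident and wrong
    "I believe Brad Pitt was born on December 18, 1963.",  # hedged and right
]

# During training, the model is updated to make responses like the
# highest-scoring candidate more likely in the future.
best = max(candidates, key=reward_model)
print(best)
```

Note that the reward model is itself trained on human judgments, so it shapes which answers sound acceptable rather than checking them against ground truth.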

1