PeopleProcessProduct t1_j6dljh3 wrote
It's really cool tech, but ask it about subjects you know deeply and you will find enough errors to be concerned about this narrative.
DrQuantum t1_j6fm7in wrote
I don’t find this any different from normal society. Plenty of people pass tests and are still idiots or unqualified.
JustAnOrdinaryBloke t1_j6jw024 wrote
Yes, but they generally remain idiots for life. A computer system could potentially improve over time.
Wiseon321 t1_j6gfrw3 wrote
All they did was feed it the answers to the questions, training it to pass a test; it doesn’t truly UNDERSTAND. These posts are nonsense.
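A toy sketch of that point, test-set contamination, with hypothetical question/answer pairs (if the exam items are also in the training set, a perfect score proves nothing):

```python
# Toy illustration of test contamination: if the exam questions (and
# answers) were in the training data, "passing" measures memorization.
# The question/answer pairs here are hypothetical placeholders.
train = {"Q1": "A", "Q2": "C", "Q3": "B"}   # memorized question -> answer
exam = {"Q1": "A", "Q2": "C", "Q3": "B"}    # the same items reappear

score = sum(train.get(q) == a for q, a in exam.items()) / len(exam)
print(f"Exam score: {score:.0%}")  # 100%, with zero understanding
```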
ImUrFrand t1_j6hab9b wrote
ChatGPT gives input-based results from ML; this is not really AI. But if the headlines keep saying it enough, people will believe it.
str8grizzlee t1_j6gehkm wrote
Not really. One of my colleagues asked ChatGPT for a list of celebrities who shared a birthday with him. The list was wrong - ChatGPT had hallucinated false birthdays for a number of celebrities.
Brad Pitt’s birthday is already in ChatGPT’s training data. More or better training data can’t fix this problem. The issue is that it outputs false information because it is designed to output words probabilistically, without regard for truth. Hallucinations can only be addressed manually, by reinforcing good responses over bad ones, and even if the model gets better at producing good responses, it will still hallucinate in response to novel prompts. Scale isn’t a panacea.
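To make “outputs words probabilistically without regard for truth” concrete, here’s a toy sketch. The candidate tokens and probabilities are invented for illustration; a real model scores tens of thousands of tokens with a neural network:

```python
import random

# Toy next-token distribution for the prompt "Brad Pitt was born on..."
# These probabilities are made up for illustration only.
next_token_probs = {
    "December": 0.40,  # the true month (December 18, 1963)
    "January": 0.25,   # plausible-sounding but false
    "March": 0.20,
    "April": 0.15,
}

# Sampling picks a token by probability. Nothing in this step
# checks whether the continuation is actually true.
tokens = list(next_token_probs)
weights = list(next_token_probs.values())
print(random.choices(tokens, weights=weights, k=1)[0])
```

Even with the right fact in the training data, a sampler like this still emits a wrong month a large fraction of the time.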
actuallyserious650 t1_j6ggw67 wrote
This is the point most people miss. Chat GPT doesn’t understand anything. It’d tell you 783 x 4561 = 10678 if those three numbers were written that way often enough online. It creates compelling sounding narratives because we, the humans, are masters at putting meaning into words that we read. But as we’re already seeing, Chat GPT will trot out easily disprovable falsehoods if it sounds close enough to normal speech.
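(For the record, 783 x 4561 is actually 3,571,263.) A toy sketch of the difference, with a made-up mini-corpus:

```python
from collections import Counter

# Hypothetical snippets of training text (invented for illustration).
# The wrong equation appears more often than the right one.
corpus = [
    "783 x 4561 = 10678",
    "783 x 4561 = 10678",
    "783 x 4561 = 3571263",
]

# A pure pattern-matcher picks the most frequent continuation...
print("Most frequent:", Counter(corpus).most_common(1)[0][0])

# ...while arithmetic just computes the answer.
print("Computed:", 783 * 4561)  # 3571263
```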
erics75218 t1_j6gqte1 wrote
Bingo. And the people who matter, when it comes to being a huge pain in AI's ass, will never learn.
Don't like ChatGPT's responses? Then just talk to Truth Social's FreedomBOT that's been trained on Fox News media. Lol.
Ground truth for human-created historical documents, outside of scientific shit, probably doesn't exist?
Celeb birthdays are fun; there is so much BS out there about celebrities that the results must be hilarious on occasion.
DasKapitalist t1_j6hwc3s wrote
What's worse is that it's been deliberately lobotomized on a slew of topics, so at best it's repeating the received knowledge of whatever passes for the mainstream. Which is fickle and frequently inaccurate.
BabaYagaTheBoogeyman t1_j6gi57f wrote
As much as we want to believe we have all the answers, the internet is full of misinformation and half-truths. If AI doesn't have the ability to distinguish fact from fiction, it will never replace humans.
fksly t1_j6h8hoh wrote
The ChatGPT approach? Yes. Nobody really into AI thinks it is a good way to get anything close to general-purpose intelligence.
In fact, in a way, it has been getting worse. It is better at bullshitting and appearing correct, but it is less accurate than the last iteration of ChatGPT.
str8grizzlee t1_j6hker8 wrote
Of course I think the tech will improve. I just think accuracy won't be solved by more training data.
JoaoMXN t1_j6h6yhg wrote
Yes, really. ChatGPT is one of the less complex "AIs" out there; LaMDA, for example, which will be available in the future, was trained on billions more data points. And we'll get more and more AIs like that in a matter of years. I wouldn't underestimate AIs the way you do.
str8grizzlee t1_j6hko4c wrote
I’m not underestimating future iterations but you’re totally missing my point - accuracy is not solved by more data. It is solved by better modeling.
Due_Cauliflower_9669 t1_j6gtawv wrote
Where does “better training data” come from? These bots are trained on data from the open web. The open web is full of good stuff but also a lot of bullshit. This approach ensures they continue to train on a mix of high-quality and low-quality data.
Orpheus75 t1_j6girgr wrote
Considering the absolutely wrong answers a doctor gave me, answers a simple Google search got right, I doubt AI will be worse than humans by the end of the decade.
Final_Leopard_9828 t1_j6fpp4z wrote
I bet that's a huge pain in the butt to create all those documents.
Thebadmamajama t1_j6ggi8f wrote
💯. Passing tests notwithstanding, the error rate and limits start to show themselves quickly. I've also found cases where there's repetitive information that leaves you believing there aren't alternative options.
Happiness_Stan t1_j6gm4hr wrote
From my experience playing around with it, it can't give definitive answers on anything that requires judgement, at least in my field. It always couches its answers in terms of "it depends on X, Y, and Z".
ParticleShine t1_j6h5p32 wrote
Okay, but it's made absolutely insane advances in less than a couple of years. Do you assume it's going to stop learning and evolving?
climateadaptionuk t1_j6h7p3k wrote
Yep, but as a BA I'm already using it to accelerate my work, and that's great in itself. I do have to proofread and edit its output, but it gets me at least 50% of the way there so quickly. It's like having a great assistant to bounce ideas off and get suggestions from. It's insane.
palox3 t1_j6hhw0a wrote
Because this was trained on general information from the internet. An expert system would be trained only on expert information.
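A minimal sketch of that idea: filter training text to vetted domain sources before training. The documents, sources, and the `peer_reviewed` flag here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    source: str
    peer_reviewed: bool

# Hypothetical mixed corpus: open-web chatter plus vetted material.
corpus = [
    Document("Aspirin irreversibly inhibits COX enzymes.", "journal", True),
    Document("Aspirin cures literally everything!!", "forum", False),
]

# The "expert system" idea: keep only the vetted documents
# and train (or fine-tune) on those alone.
training_set = [d.text for d in corpus if d.peer_reviewed]
print(training_set)  # ['Aspirin irreversibly inhibits COX enzymes.']
```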