Comments

You must log in or register to comment.

PropOnTop t1_j6d7fgv wrote

Well, it passed the exam to be my best friend a long time ago : )

It saddens me that now it's famous, it rarely finds time to respond...

59

verybakedpotatoe t1_j6dcnf5 wrote

I find it hard to believe that it can actually obtain any of these certifications when it can't perform any original analytical examination of anything.

Go ahead and try asking it to perform analysis on publicly available information and it will fail. It can repeat analysis that anybody else has already done, and maybe recombine and repackage it into a useful summary, but it is wholly unable to answer a simple question like, "If I am leading a sow with seven piglets how many feet are there?".

It's the saxophone backpack problem all over again; they never seem to have figured that part out. It's not really creative or intelligent, not yet. It's just brute-force fitting pregenerated reasons together until they fit better.

>EDIT: I tried to get it to explain the logic it used. I am somehow even more disappointed.

53

theduckspants t1_j6ddpb7 wrote

I have a person on my team with a masters degree in analytics from a prestigious institution and he can't solve any problems on his own, provides no value, and is on his way out.

Not saying it won't get better, but let's not pretend passing a test means anything in the real world. The only thing ChatGPT would have over my guy would be the speed of uselessness.

5

theduckspants t1_j6derw4 wrote

It just told me

"There are a total of 29 feet (8 for the sow, 7 x 4 for the piglets)."

So it thinks a sow has 8 feet and that 7x4 is 21

Then asked what is 7x4? It said 28

Then asked how many feet a sow has. It said typically a sow has four feet

Then reasked the original question and it said "There are a total of 29 feet (4 for the sow, 7 x 4 for the piglets)."

6

Ebisure t1_j6dhdnm wrote

Remember when they said AI was gonna replace repetitive tasks like data entry? Guess professional jobs that involve regurgitating facts are gonna go too.

3

gurenkagurenda t1_j6dkaah wrote

I use ChatGPT to solve analytical problems all the time. The key is that you have to tell it to show its work:

> If I am leading a sow with seven piglets how many feet are there. Take it step by step

>> The sow has 4 feet

>> Each piglet has 4 feet

>> Total number of piglet feet = 4 * 7 = 28

>> Total number of feet = 4 + 28 = 32

It’s able to keep track of the analysis far better this way, and it also lets you check its work for errors.

Now obviously it left my feet out, but I think that’s a reasonable ambiguity, and not one that you usually find in professional exams. If I ask it to account for that, it gets it right.
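The trick above can be sketched as a tiny prompt-wrapping helper. This is purely illustrative; the function name and the exact suffix wording are my own, not any official API:

```python
# Minimal sketch of the "show your work" trick: wrap any question in a
# chain-of-thought instruction before sending it to the model.
def with_steps(prompt: str) -> str:
    """Append a step-by-step instruction to a prompt."""
    return prompt.rstrip(" .?!") + "? Take it step by step."

print(with_steps("If I am leading a sow with seven piglets how many feet are there"))
```

The model then emits its intermediate arithmetic, which both improves its bookkeeping and gives you something to check.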

6

clintCamp t1_j6dkrfn wrote

Which is why it would make a great virtual doctor: it could discern basic ailments that can be handled with over-the-counter medication or a pharmaceutical, but also direct you to a real doctor when it gets more complicated. Most normal human ailments are well documented, which is how other doctors figure them out and why this would be great. The only thing I could see going awry is when it tries to make things up to make you happy. It would probably be better at analyzing drug interactions and stuff than real doctors, who screw up like humans do.

−1

toast776 t1_j6dlhbm wrote

That Wharton exam was wildly easy and even then they gave the AI second and third chances on questions. These articles are so dumb.

25

PeopleProcessProduct t1_j6dljh3 wrote

It's really cool tech, but ask it about subjects you know deeply and you will find enough errors to be concerned about this narrative.

154

Cranky0ldguy t1_j6dmav0 wrote

One would think Business Insider owns a TON of stock in OpenAI; they are pouring out lots of "news stories" about what it can do. Can't say a piece of software passing any data-driven test is all that impressive. Let me know when it can accurately interpret the overall meaning of the intangible.

14

Autotomatomato t1_j6dpfm4 wrote

Can't wait for all the lawsuits when people discover all their work being used to train these non-AIs...

I can't tell you how many cases I have seen so far of either undocumented updates/training or literal regurgitation of someone else's IP, like lifting entire sections from a Forbes article. At least steal better sources, bros.

These bots will soon infest twitch and streaming sites with single entities managing hundreds of vtubers etc.

7

verybakedpotatoe t1_j6dvcyq wrote

It didn't go so well for me. I need to master the special sauce to get better results.

32 is close and the reasoning is almost there, but the correct answer is 34 feet because I am leading them.

I started with the 'man from St Ives' riddle and tried to create a novel and simple version of it with a clear answer. I think I would have accepted 32 as a good effort, or even just 2 if it said they all have hooves, but 8 and 11 are just wrong.

9

cc-test t1_j6e0tlk wrote

How many times is this article going to be posted on Reddit?

11

JoanNoir t1_j6e23rx wrote

This tells us more about the testing than ChatGPT.

1

HiImDan t1_j6e88y4 wrote

I think it'll be very useful as an assistant, though. When I think of lawyers, all I imagine is stacks and stacks of paperwork... maybe that's just TV or whatever, but I bet it's a huge pain in the butt to generate all of those documents.

One angle I don't see discussed is it helping socially awkward people (like myself) figure out how to word things.

1

thunder-thumbs t1_j6e9rtb wrote

These headlines are so dumb. Those tests aren’t there to check whether the information is correct; the curriculum design and the scientific process do that. Those tests check whether the human has learned the material, to give confidence that they’ll supplement their human judgement responsibly. ChatGPT taking the test, bypassing the human judgement aspect entirely, completely misses the point.

34

Chrismercy t1_j6eaat8 wrote

What I’ve been wondering is whether ChatGPT has access to the internet during these tests?

3

Douglas_Fresh t1_j6eieh1 wrote

My god I am sick of hearing about this damn thing

8

ilovepups808 t1_j6el9hf wrote

Ok, it passed. However, I assume it had real-time assistance from the internet, or a graphing calculator with cheat sheets loaded on it. That’s a no-no in school. J/k

1

greatdrams23 t1_j6epblw wrote

How does an AI bot doctor tell the difference between different rashes? Does it have a camera?

One day it will, but not yet.

2

ZeroBS-Policy t1_j6esusc wrote

Enough of this garbage already. I tried it. It’s stupid.

2

CGFROSTY t1_j6f1fac wrote

To be honest, couldn't anyone pass these exams if they had access to Google?

3

tomis28 t1_j6fb6qd wrote

Two AI lawyers arguing with each other, LOL

2

Temporary_Crew_ t1_j6fel0u wrote

This is the next scam techbros will use to print money. Its usefulness is wildly overrated currently.

Still more useful than NFTs, though. Those will always be useless.

2

E_Snap t1_j6flk54 wrote

insert stereotypical Redditor platitude that indiscriminately pans AI to make people on the verge of being made redundant feel better about their job security

0

icecreampoop t1_j6fnirt wrote

If this makes these services accessible to lay people who can’t otherwise afford them, and at a cheap price, then why not?

1

reader960 t1_j6fob2t wrote

So it's on its way to becoming Johnny Sins

1

TennisLittle3165 t1_j6frkd2 wrote

Late to the party. How do you feed it the initial information about your problem? Does it come with pre-seeded info? It must know the dictionary, for sure.

1

Wherewithall8878 t1_j6fuizu wrote

I’m more interested in the rudimentary exams it’s failed so far.

1

littleMAS t1_j6fuzbw wrote

I have found it to be very human-like, giving different answers to the same question upon "Regenerate Response." Sometimes it acts like it is rethinking the question, just like someone who gives a quick answer without much thought and then provides a more thoughtful response when pressed further.

1

Flintoid t1_j6fw8hn wrote

So I read this, then asked GPT for a Michigan case I could cite for the proposition that a plaintiff must prove causation in a product liability case. It cited a Pennsylvania case on the first try; the next three times it cited case titles I couldn't locate online, with random citation numbers that also did not retrieve actual cases.

Might be a while before this thing writes my next brief.

3

The-Real-Iggy t1_j6fwkma wrote

Such bullshit, astroturfed nonsense. This 'industry breaker' is good for menial tasks like lists and easy-to-Google ideas. Ask it about complicated subjects or nuanced ideas and it’ll miss key bits of information. Hell, when I was shown how ‘amazing’ it was, I asked it to write an essay arguing against abortion (just for shits and giggles), and the entire essay didn’t even mention Planned Parenthood v. Casey or Griswold v. Connecticut whatsoever... it’s not remotely capable of anything beyond surface-level writing.

2

scifisreal t1_j6fwlqa wrote

That won't last long! One can dream, until they put a price tag on it and start limiting it down. We're still in the hook phase.

After all, everything is documented and attached to your User, so if the AI output is used illicitly, it can be traced.

1

truggles23 t1_j6fyjwb wrote

Johnny Sins better watch out, he’s got some competition now.

1

whitenoise89 t1_j6g1aog wrote

ChatGPT is telling you something about your tests - it’s not about to replace much of anything, though.

Sorry corpo fuckboys. Pay me.

1

AgeEffective5255 t1_j6g3z9i wrote

It doesn’t stop it from encountering the same problems human doctors encounter: not having all the relevant information. We blame the people all the time, but the structures in place allow errors to happen; most of the time you can’t catch a patient who is hiding symptoms or unknowingly visiting multiple doctors. You think ChatGPT will?

1

rpgnoob17 t1_j6g43fy wrote

It’s very good to bullshit something, but in the end, it’s still bullshit.

2

thecaptcaveman t1_j6g7x3y wrote

Bullshit. No AI can touch a person. No AI can do the field work. No AI can see human work. They only make use of the data we make.

1

clintCamp t1_j6gb0vd wrote

If it was set up right, it would read in the patient's medical profile and full history, then use its full medical knowledge to ask the patient relevant questions to narrow down potential causes, or refer them for specific testing, which would update their profile. Unlike the real medical field, ChatGPT medicine could be updated with the latest research often, so it wouldn't keep using outdated info like MDs do in real life.

1

str8grizzlee t1_j6gehkm wrote

Not really. One of my colleagues asked ChatGPT for a list of celebrities who shared a birthday with him. The list was wrong - ChatGPT had hallucinated false birthdays for a number of celebrities.

Brad Pitt’s birthday is already in ChatGPT’s training data. More or better training data can’t fix this problem. The issue is that it outputs false information because it is designed to output words probabilistically, without regard for truth. Hallucinations can only be addressed manually, by reinforcing good responses over bad ones, and even if it gets better at outputting good responses, it will still hallucinate in response to novel prompts. Scale isn’t a panacea.

11

actuallyserious650 t1_j6ggw67 wrote

This is the point most people miss. ChatGPT doesn’t understand anything. It’d tell you 783 x 4561 = 10678 if those three numbers were written that way often enough online. It creates compelling-sounding narratives because we, the humans, are masters at putting meaning into the words we read. But as we’re already seeing, ChatGPT will trot out easily disprovable falsehoods if they sound close enough to normal speech.
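A toy model makes the point concrete. To be clear, this is nothing like ChatGPT's actual architecture; it's just a bigram counter, but it shows how a model of word frequency will confidently reproduce a falsehood it saw often enough:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts whichever word most often
# followed the previous word in its training text. Truth never enters.
corpus = ("783 x 4561 = 10678 . 783 x 4561 = 10678 . "
          "783 x 4561 = 3571263 .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    return follows[word].most_common(1)[0][0]

# The wrong answer appears twice in the corpus, the right one once,
# so the model outputs the falsehood.
print(most_likely_next("="))  # -> 10678 (the real product is 3571263)
```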

16

erics75218 t1_j6gqte1 wrote

Bingo. And the people who matter, when it comes to being a huge pain in AI's ass, will never learn.

Don't like ChatGPT's responses? Then just talk to Truth Social's FreedomBOT that's been trained on Fox News media. Lol.

Ground truth for human-created historical documents, outside of scientific shit, probably doesn't exist?

Celeb birthdays are fun; there is so much BS out there about celebrities that the results must be hilarious on occasion.

6

Due_Cauliflower_9669 t1_j6gt6m8 wrote

And yet evidence is gathering that AI chatbots often produce incorrect and even plagiarized info. It is not omniscient. Yet.

1

Due_Cauliflower_9669 t1_j6gtawv wrote

Where does “better training data” come from? These bots are using data from the open web. The open web is full of good stuff but also a lot of bullshit. The approach ensures it continues to train itself on a mix of high-quality and low-quality data.

2

chidoOne707 t1_j6gtbz8 wrote

Everyone's painting this dumb software as Skynet; we are far from that.

1

Xlash2 t1_j6gxm3p wrote

Only if passing exams and being a professional are the same thing.

1

Theemuts t1_j6h1alh wrote

Yeah, don't do that.

> ChatGPT (Chat Generative Pre-trained Transformer)[1] is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning)[2] with both supervised and reinforcement learning techniques.

> Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[51]

1

ErusTenebre t1_j6h1qhu wrote

Okay... so... couldn't GOOGLE pass a test? Like, literally, if one were allowed to essentially cheat on a test by using Google, one could pass these tests. Tests test knowledge, not skill. These are pretty dumb articles.

3

WhuddaWhat t1_j6h2pa9 wrote

>the AI advised a simulated patient to commit suicide

Holy shit. Can you imagine being absolutely despondently suicidal, reaching out for help, and what FEELS like an all-knowing computer, but is really just the most statistically relevant response to the series of things you've said, tells you that on reflecting upon your situation, it really would be best to go ahead and end it.

That would probably be enough to deepen the crisis for anybody who is truly battling to get back a feeling of control over their life.

2

Suspicious-Noise-689 t1_j6h3rd8 wrote

So the same bot that told me you can’t fly in Minecraft while I’m watching my kid fly their character in Minecraft? Interesting

1

ares7 t1_j6h5jnc wrote

Yeah but, can ChatGPT become a chess master?

1

popey123 t1_j6h5ncc wrote

When your doctor says he did everything he could and the AI says otherwise...

1

JoaoMXN t1_j6h6yhg wrote

Yes, really. ChatGPT is one of the least complex "AIs" out there; LaMDA, for example, which will be available in the future, was trained on vastly more data. And we'll get more and more AIs like that in a matter of years. I wouldn't underestimate AIs the way you do.

2

climateadaptionuk t1_j6h7p3k wrote

Yep, but as a BA I am already using it to accelerate my work, and that's great in itself. I do have to proofread and edit it, but it gets me at least 50% there so quickly. It's just like having a great assistant to bounce ideas off and get suggestions. It's insane.

1

fksly t1_j6h8hoh wrote

ChatGPT approach? Yes. Nobody really into AI thinks it is a good way to get anything close to general purpose intelligence.

In fact, in a way, it has been getting worse. It is better at bullshitting and appearing correct, but it got less correct compared to the last iteration of ChatGPT.

6

vikas_agrawal77 t1_j6h8q87 wrote

I think its accuracy and reliability will be a significant concern for a while. AI is only as good as the training data fed into it and may not be great currently at understanding the subjective nuances or ambiguous data involved in law, medicine, and business. I would consider it a good support though.

1

Swirls109 t1_j6hppbr wrote

I think that, the way it currently works, you won't really be able to use it for any significantly factual results. It just conglomerates like things and spits them out, so if its sources are wrong, it will be wrong. If we don't have people feeding it factual sources, then how is it ever going to keep working?

2

penguished t1_j6hvwlm wrote

Here's all the training material right in front of you. Now, can you pass the test? I'd fucking hope so.

1

wolfgang187 t1_j6hwzn1 wrote

Society is cumming in its pants too hard over this application. It's great, but also incorrect a lot of the time.

1

Black_RL t1_j6i0ez2 wrote

Yet they couldn’t find a better name for it…..

1

imnotknow t1_j6i52gz wrote

Wow, this is really triggering people. You would think we were talking about student loan forgiveness. There is a parallel there: suddenly, your expensive education is not so important or exclusive or special. Your fancy title is meaningless. The years of your life spent in college? Wasted.

"But it's not really AI, it's machine learning!" So what? The end result is the same.

"But it doesn't really know anything!" Again, so what?

"But it makes stuff up! It lies, It's wrong a lot!" SO WHAT? So is my doctor. It doesn't have to be perfect, just better and more consistent than a human.

1

Lifeinthesc t1_j6i8l8z wrote

Yes, please use ChatGPT as a doctor. I love to study evolution in real time.

1

Hummgy t1_j6i9acm wrote

Ask it about video games and it will often be surface-level and often have mistakes (no, ChatGPT, DBD only has 1 killer).

Now multiply the seriousness of the topic by a fuck ton, like having it represent you in court or recommend medical procedures for surgeons, and I’m a lil afraid

1

nicuramar t1_j6ph4w8 wrote

> Where does “better training data” come from? These bots are using data from the open web.

The raw data comes from there, among other places, but there is more to it: it was trained using supervised learning and reinforcement learning.

1