Graega t1_j8i50tw wrote

Call me when it has to study ahead of time, using a single text instead of being fed huge amounts of sources; has to identify what to store in a limited-size database; and has to take the test without any internet access or ability to look up things beyond what it decided to store. I'll be impressed if it passes then.

7

LordKeeper t1_j8i9glc wrote

But it didn't have to study at all for this exam, and that's kind of the point. A human doctor, even one that scored in the 95th percentile on the USMLEs, couldn't scrape by with a passing grade on a Law or MBA exam. ChatGPT, in its basic form, can do passably in any one of these areas, without needing to acquire additional material from the internet or elsewhere. When models like these become able to "study" on their own, and even identify what they need to study to advance in a field, they're going to take over multiple professions at once.

11

semitope t1_j8idqk3 wrote

>without needing to acquire additional material from the internet or elsewhere

It doesn't constantly search the internet to come up with its answers? It needs data. All software needs data. I'm not sure how it works, but either it has access to the internet and searches it with indexing like Google, or its servers have stored massive amounts of data for it to be relevant in different areas.

I doubt AI can do well in fact-heavy fields like law and medicine with no way of knowing the facts.

−4

GondolaSnaps t1_j8in58c wrote

It was trained on massive amounts of internet data, but it isn’t online.

If you ask it, it’ll even tell you that all of its information is from 2021 and that it has no knowledge of anything after that.

For example, if you ask it about Queen Elizabeth, it’ll describe her as the current monarch, since it has no idea she’s already dead.

9

MilesGates t1_j8jehab wrote

>It was trained on massive amounts of internet data, but it isn’t online.

Sounds kind of like taking an open-book test where you can read the textbook to find the answers but can't google for them.

1

jagedlion t1_j8jxe3s wrote

Common misconception. It doesn't memorize the data; it forms connections in its model. It isn't really like memorization in that way, as it doesn't even store any of the raw information it was trained on. It only stores the predictive model.

This is also why you can implement AI vision algorithms on primitive microcontrollers. They don't have the computational power to train the model, but once a powerful computer has calculated the weights, a much simpler machine can run it.
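A minimal sketch of that split, using a toy logistic-regression model in Python (the data, filenames, and numbers here are made up for illustration; this isn't what any real vision chip or ChatGPT does, just the train-once/infer-cheaply idea):

```python
# Sketch: training is the expensive, data-hungry step; inference afterwards
# only needs the stored weights and a little arithmetic.
import numpy as np

rng = np.random.default_rng(0)

# --- "Powerful computer": fit a tiny logistic-regression model on toy data ---
X = rng.normal(size=(1000, 4))               # stand-in for a large training set
true_w = np.array([1.5, -2.0, 0.5, 3.0])
y = (X @ true_w + rng.normal(size=1000) > 0).astype(float)

w = np.zeros(4)
for _ in range(500):                          # gradient descent: the costly part
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Only the weights get shipped -- 4 floats here, ~700 GB at GPT-3 scale.
np.save("model_weights.npy", w)

# --- "Microcontroller": inference is just a dot product with stored weights ---
w_deployed = np.load("model_weights.npy")     # no training data, no internet
x_new = np.array([0.2, -1.0, 0.3, 1.1])
prob = 1 / (1 + np.exp(-(x_new @ w_deployed)))
print(f"predicted probability: {prob:.3f}")   # cheap enough for a weak device
```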

2

semitope t1_j8k09qi wrote

Sounds like about the same thing: being given the data beforehand vs. looking for it now. The fact is, it cannot produce useful responses about facts without exposure to the data. It would be like someone talking about something they know absolutely nothing about, which might be why it's sometimes accused of confidently making things up.

0

jagedlion t1_j8k0uqa wrote

I mean, humans can't give you information they haven't been exposed to either. We just acquire more data during our normal day-to-day lives. People also do their best to infer from what they know. They're more willing to encode their certainty in their language, sure, but humans can also only work off of the knowledge they have and the connections they can find within it.

4

semitope t1_j8k5n5n wrote

Humans aside, saying it doesn't need to acquire additional information from the internet or elsewhere isn't saying much if it already acquired that information from the internet and elsewhere. It already studied for the exam.

0

jagedlion t1_j8kbruo wrote

Part of model building is that it compresses well and doesn't need to store the original data. It consumed 45TB of internet data and stores what it learned in about 700GB of weights (the inference engine can be stored in less space, but I can't pin down a specific minimal number).

It has to figure out what's worth remembering (and how to remember it) without access to the test. It studied the general knowledge, but it didn't study for this particular exam.

2

jagedlion t1_j8jy2ud wrote

So it does many of the things you listed.

It greatly compresses the training database into a tiny (by comparison) model. It runs without access to either the internet or the original training data. How 'cheaply' it can run is directly tied to how complex the model is, and keeping the system efficient is a major limit on the size of what it can store.

It was trained on 45TB of internet data, compressed and filtered down to around 500GB. A very limited-size database already. It then goes further and 'learns' the meaning, so what's actually stored is 175 billion 'weights', which is about 700GB (each weight is 4 bytes). Still, that's a pretty 'limited' inference size. Not run-it-on-your-own-computer size, but not terrible. They say it costs a few cents per question, so pretty cheap compared to the cost of hiring even a poor-quality professional.
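A quick back-of-the-envelope check of those numbers, taking the figures above at face value (45TB of raw text, 175 billion parameters, 4 bytes each; these are the commenter's estimates, not official specs):

```python
# Rough size check of the figures quoted above.
raw_corpus_tb = 45                      # ~45 TB of raw internet text
params = 175e9                          # 175 billion weights
bytes_per_param = 4                     # 32-bit floats

model_gb = params * bytes_per_param / 1e9
print(f"model size: ~{model_gb:.0f} GB")                    # ~700 GB
print(f"raw corpus: {raw_corpus_tb * 1000:.0f} GB")         # 45,000 GB
print(f"ratio: ~{model_gb / (raw_corpus_tb * 1000):.1%}")   # ~1.6% of the raw data
```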

It does therefore have to 'study' ahead of time.

The only thing you listed that it doesn't do is study from a single text; it reads many sources, not just one. But the rest? It already does it.

2