Submitted by billjames1685 t3_youplu in MachineLearning

Hey guys, I've been thinking about this question recently. There are tasks that ML-based models outperform humans at, such as some image classification benchmarks and a bunch of games including chess, while humans are better at tons of other things like abstract math.

But for which of these tasks can ML models outperform us given the same amount of data as we have? Take chess, for example: could AlphaZero outperform humans if it had only as many games of pretraining as, say, Magnus Carlsen has had? I'd imagine that Stockfish might manage it without pretraining just by virtue of calculating so many positions ahead, but I'm not sure AlphaZero could, because its tree search and policy/value networks might not be that well optimized.

As another example, it's well known that humans are generally pretty good at few-shot learning in, say, image classification; we can distinguish dogs from cats given only a couple of input examples.

70

Comments

IntelArtiGen t1_ivfxox3 wrote

For many tasks you can't really compare, because we are continuously fed multiple types of raw data, while most models train on one specific type of data coming from one clean dataset.

>we can distinguish dogs from cats given only a couple of input examples

After we've seen billions of images over months and years of life. We had a very large and long "pretraining" before being able to perform "complex" tasks. So it depends on what you compare: most models need less data, but they train on a cleaner dataset with architectures that are already optimized for that specific task.

103

billjames1685 OP t1_ivfyks8 wrote

I’m pretty sure it’s been well established that we can learn after seeing a few images even for things we haven’t seen before; I remember reading a paper that tested this on obscure animals most people have never seen.

But yeah, we do have a lot of general pretraining; we have prior image classification experience, so there might be some transfer learning endowing us with our few-shot capabilities. And part of the reason may be that our brains have way more compute than any model, so we can probably learn things better as well.

−39

ramblinginternetnerd t1_ivgawp3 wrote

The human brain was "pretrained" by around 500 million years of evolution (since the origin of the first vertebrates).

Fear of heights, for example, seems somewhat inborn.

75

blimpyway t1_iviwvnt wrote

Now let's figure out how to store such a pretrained model in ~700 MB of genetic code, without disturbing all the other info about how the non-brain enzymes, organs, tissues, etc. should be built and assembled.

8

[deleted] t1_ivitst9 wrote

[deleted]

−1

uishax t1_ivixeq6 wrote

The Stable Diffusion model is only 4 GB, yet it's enough to describe almost anything visually. It's also an extremely size-unoptimized model.

Now, that ~800 MB is mostly spent on other things, but even 8 MB, if optimized enough, is plenty to encode a vast amount of visual information into our brains, including a hyper-efficient, accurate human face recognizer, a hatred of bugs/mice/spiders/snakes, a liking for fluffy and shiny stuff, etc.

9

KPTN25 t1_ivjf6z5 wrote

There's also epigenetics, the microbiome, and some other stuff.

2

IntelArtiGen t1_ivg33d5 wrote

>I’m pretty sure it’s been well established that we can learn after seeing a few images even for things we haven’t seen before

An 18-year-old can do that. Ask a one-year-old to identify 50 different objects and it won't work, even though that one-year-old was trained continuously on thousands of images during their first year of life. Of course you were not talking about training a one-year-old but an adult, and that's why you can't really compare. In order to be an adult you need to have been a one-year-old; you need to watch the world for thousands of days before you have the "pretraining" that makes adults able to handle all these tasks more easily than most models.

>our brains have way more compute than any model

That's not as well established as many people might think. We would want models to do what an 18-year-old can do, yet no deep learning model has been trained with real-world interactions for 18 years.

30

blimpyway t1_ivivcwr wrote

Tesla had collected 780M miles of driving data by 2016.

A human learning to drive for 16 h/day at an average speed of 30 mph for 18 years would have a dataset of ~3M miles.

So we can say humans are at least 1000 times more sample efficient than whatever Tesla and any other autonomous driving companies are doing.
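
Rough arithmetic behind those numbers (a quick Python sketch; the 16 h/day figure is the commenter's hypothetical upper bound, not a realistic driving schedule):

```python
# Hypothetical upper bound on miles a human could "train" on by age 18,
# using the comment's assumptions: 16 h/day at 30 mph for 18 years.
human_miles = 16 * 30 * 365 * 18      # ~3.15M miles
tesla_miles = 780e6                   # fleet miles collected by 2016

print(f"human upper bound: {human_miles / 1e6:.2f}M miles")
print(f"Tesla / human data ratio: {tesla_miles / human_miles:.0f}x")  # ~250x
# Real drivers see far less than this 3M-mile upper bound, which is how
# one gets to a "1000x more sample efficient" kind of claim.
```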

−1

The_Real_RM t1_ivizjvu wrote

You are assuming Tesla actually needs all that data to train a competing model, and you're also ignoring all of the other training a human has before ever starting to drive. It's not at all clear who is more efficient.

I think a better way to compare is through the lens of energy: a human brain runs on roughly 20-40 W, while Tesla's models are trained on megawatt-scale computers. How do they compare in terms of total energy spent to achieve a given level of performance?

5

IntelArtiGen t1_ivj6nih wrote

Probably not, because a 16-year-old has 16 years of interactive, real-time pretraining navigating a real-world environment before learning to drive. So it depends on how you count this pretraining.

And it also depends on the accuracy of the model as a function of dataset size. Let's say Tesla is 80% accurate (a made-up number) while driving after training on 780M miles, a human is 75% accurate after 3M miles, and the Tesla model trained on 3M miles instead of 780M is also 75% accurate; on those metrics alone, Tesla would be as sample efficient as a human.

No comparison is perfect, but we can't ignore that during the first years of our lives we train to understand the world while not being very efficient at performing tasks.

1

billjames1685 OP t1_ivg3dy8 wrote

Yeah, I addressed that in the second paragraph; we have been pretrained on enough image classification tasks that there are probably some transfer learning-esque reasons behind our few-shot capabilities.

−13

IntelArtiGen t1_ivg4d74 wrote

I think it's not just "transfer learning" or "image classification"; it's also learning without explicitly using "labels", like contrastive learning, self-supervised learning, reinforcement learning, etc.

12

billjames1685 OP t1_ivg5bij wrote

Yeah, I agree. Not sure if I'm misunderstanding you, but by "transfer learning" I basically mean that all of our pretraining (which occurred through a variety of methods, as you point out) has allowed us to richly understand images as a whole, so we can generalize well to semi-new tasks/domains.

−5

IntelArtiGen t1_ivg765c wrote

Ok, that's one way to say it; I also agree. I tend not to use the concept of "transfer learning" for how we learn, because I think it's more appropriate for well-defined tasks, and we are rarely confronted with tasks as well defined as the ones we give to our models.

And transfer learning implies that you have to retrain a part of the model on a new task, which is not exactly how I would describe what we do. When I worked on reproducing how we learn words, I instead implemented the solution as a way to put a new label on a representation we were already able to produce from our unsupervised pretraining. I don't know which way is the correct one; I just know that doing that works, and that you can teach new words/labels to a model without retraining it.
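
A minimal sketch of that idea (assuming a frozen pretrained torchvision encoder; the prototype-per-label scheme below is just an illustration, not the commenter's actual implementation):

```python
import torch
import torchvision

# Frozen pretrained encoder: the representation comes from prior/unsupervised training.
encoder = torchvision.models.resnet18(weights="IMAGENET1K_V1")
encoder.fc = torch.nn.Identity()   # drop the classification head, keep the features
encoder.eval()

prototypes = {}  # label -> mean embedding: "a new label on an existing representation"

@torch.no_grad()
def embed(images):                 # images: (N, 3, 224, 224) float tensor
    return encoder(images)

def teach_word(label, example_images):
    """Attach a new label without retraining anything: store an embedding prototype."""
    prototypes[label] = embed(example_images).mean(dim=0)

def name_object(image):
    """Return the known label whose prototype is closest in cosine similarity."""
    z = embed(image.unsqueeze(0))[0]
    return max(prototypes,
               key=lambda lbl: torch.cosine_similarity(z, prototypes[lbl], dim=0))
```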

3

billjames1685 OP t1_ivh33jr wrote

That’s a fair point; I was kind of just using it as a general term.

2

OptimizedGarbage t1_ivh2jio wrote

There's a famous psych experiment (Blakemore & Cooper, 1970) where cats raised in a room containing only horizontal stripes never see vertical lines. After leaving the room their brains simply haven't learned to recognize vertical lines, so they'll walk face-first into vertical bars without realizing they're there. A massive amount of data goes into learning the features needed to distinguish objects from each other and the basics of how objects in 3D space appear.

Similarly, if you pretrain a neural net on almost any assortment of images, you can get very fast learning afterwards by fine-tuning on new classes. But the overwhelming majority of the data is going towards "how to interpret images in general", not "how to tell two novel object classes apart".
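
A sketch of that split (assuming torchvision; the ImageNet-pretrained backbone stands in for "pretraining on an assortment of images", is frozen, and only a small new head is trained on the novel classes):

```python
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                     # general image features: already learned

num_novel_classes = 2                           # e.g. two object classes never seen before
model.fc = torch.nn.Linear(model.fc.in_features, num_novel_classes)  # only this trains

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One gradient step on the tiny new head; converges with very few examples."""
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```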

6

billjames1685 OP t1_ivh2w9b wrote

Wow, that’s fascinating. I think the studies I saw weren’t quite saying what I thought they were, as I explained elsewhere; we have so much data and training just from being in the world and seeing stuff that we can richly classify images even in new domains, but yeah, it seems that pretraining is necessary. Thanks!

3

LordOfGalaxy t1_ivg17bi wrote

I don't think we have that much compute power. A human brain has about 100 billion neurons and they can fire at about 100Hz on average at best. Each neuron has about 1000-10000 synapses. If each firing counted as one operation for every synapse, this puts the compute power at an absolute maximum of about 100 POPS (Peta Operations Per Second). A single graphics card can manage about 100 TFLOPS these days, so this is really only about a thousand graphics cards - nothing unachievable. And the human brain does a LOT more than any model we currently have. Something like a rat brain probably has less compute power than a single graphics card, and yet in many ways our models are incapable of what a rat can do. The problem is more fundamental than just "not enough compute" IMHO.
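
The back-of-the-envelope arithmetic behind that estimate (a sketch using the rough figures from the comment):

```python
neurons = 100e9              # ~100 billion neurons
firing_rate_hz = 100         # upper-bound average firing rate
synapses_per_neuron = 1e4    # upper end of the 1,000-10,000 range

ops_per_second = neurons * firing_rate_hz * synapses_per_neuron
print(f"{ops_per_second:.0e} ops/s")              # 1e+17, i.e. ~100 POPS

gpu_flops = 100e12                                # ~100 TFLOPS per modern GPU
print(f"~{ops_per_second / gpu_flops:.0f} GPUs")  # ~1000 graphics cards
```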

5

billjames1685 OP t1_ivg2iio wrote

I'm basing this on this blog post: https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

Written in 2015 but the author has commented recently that he still holds the same opinion.

More recent (Jan 2022): https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html#evolution-of-computers

Generally though I don’t think there is a consensus on this because there are a lot of loosely defined terms and the brain is basically impossible to simulate.

I agree that the brain is just more optimized in general than NNs, but I’m pretty sure it’s also just way more powerful as well.

The estimated computational capacity of the brain keeps increasing as we learn more about it.

5

LordOfGalaxy t1_ivg90on wrote

A lot of the author's estimates are on the higher side, which takes him to the ~10^21 number. Fair enough. But even then one must concede that, say, a rat brain, with 1000 times fewer neurons, should still be within reach of modern supercomputers in terms of sheer processing power.

And even the authors of both those posts note that biological brains are VERY different from ANNs, which could confer significant advantages on them. That is my own view: the biological brain is just better at what it does, and our algorithms will require significant changes to match that level of efficiency. Of course, we still need significant advances at the hardware level as well (the human brain barely uses 30 W and still has some 3-6 orders of magnitude more computing power than the most powerful GPUs, which easily use ten times that much power), but even with such advances we may not be able to match the biological brain unless we make more fundamental changes to our methods.

8

billjames1685 OP t1_ivg9nxw wrote

Oh absolutely. Our brain is insane: with 20-30 watts it possibly has more compute than supercomputers that run on several megawatts of energy. The level of efficiency it displays is just ridiculous.

4

DarkCeldori t1_ivgnyd2 wrote

We also have to remember the brain has very sparse activity, IIRC on the order of 2%. Also, most of the neurons are in the cerebellum, and humans without a cerebellum still have general intelligence, albeit with some difficulty with precise motion. The neocortex only has about 16 billion neurons, and it is here that general intelligence occurs. That brings the 100 POPS down to 16 POPS, times 2% activity (a factor of 0.02), which is about 320 TOPS.
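
The same back-of-the-envelope calculation with the neocortex-only, sparse-activity figures from this comment (a sketch, not a measurement):

```python
neocortex_neurons = 16e9     # neurons in the neocortex
synapses_per_neuron = 1e4
firing_rate_hz = 100
activity_fraction = 0.02     # ~2% of neurons active at any moment

dense_ops = neocortex_neurons * synapses_per_neuron * firing_rate_hz  # 1.6e16 = 16 POPS
sparse_ops = dense_ops * activity_fraction                            # 3.2e14 = ~320 TOPS
print(f"{sparse_ops:.1e} ops/s")
```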

https://aiimpacts.org/rate-of-neuron-firing/

3

LordOfGalaxy t1_ivit0x2 wrote

True, not every neuron in the brain can possibly be firing at the same time, and much of the brain is dedicated to just keeping us alive.

2

ginsunuva t1_iviwmie wrote

>some transfer learning

We know general physics, 3D projection, lighting, and biological concepts. So much transfer that it's always an entirely unfair comparison.

1

lgcmo t1_ivg3cgz wrote

I used to work at a hedge fund where we used ML models to forecast economic indices such as CPI, GDP, etc. for a lot of countries, with pretty nice automation and pipelines.

Before that we had analysts looking at the same series + news + talking with other people. For some items the analyst is better, but for the most part (and in the total game), the models are better.

Not a ton better though: over about 1.5 years of development a small team slightly outperformed the analysts, but then the models could cover several countries with pretty much no extra work.

Cool stuff

43

picardythird t1_ivht670 wrote

Interesting. I'd be curious to see how your models accounted for unexpected causal impacts, as well as outsized market shocks driven by seemingly irrational (or arbitrary) news or reports.

8

Dr-Do-Too-Much t1_ivhuc29 wrote

You'd need a "news feed digestion" pipeline to scrape and encode world news before feeding that into the main market predictor. I'd love to spend hedge fund money on that.

4

narwhal_breeder t1_ividnb1 wrote

You don't need to; AzFiNText has been around for a long time and is well researched.

3

lgcmo t1_ivjpk6e wrote

When I left there were some ideas floating around to scrape not only news but also statements given by public figures and central banks. There is a metagame of interpreting what Jerome Powell really meant in each phrase of his speech.

Nothing like that was done (and I believe it's not being built), but we had a sort of smart news pipeline that filtered relevant news and sent it to analysts. As I said, we had not fired the analysts; we were teaming up with them. When there was a market shock or something very disruptive, the analysts took over and we would update the models.

But I have to say that even with radical events (such as the invasion of Ukraine), the models were not that far off. We had some anomaly detection in the pipeline.

1

r_linux_mod_isahoe t1_ivi03tq wrote

Only students replying?

Jeez. On a bazillion things.

My favorite: weather prediction. Please feed a human all the raw data you've collected from 120 sensors and 10 observation satellites over the past 48 hours, and ask them what the chance of rain is tomorrow. Haha.

14

TheDarkinBlade t1_ivgfcje wrote

It's not really a fair comparison, since humans have the entire ancestral knowledge of evolution behind them. Evolution has "pretrained" our brains to the point that we can learn new things pretty fast, because we "finetune" existing structures. If you learn to recognize a new object, you are using information from all the objects you have already learned to inform that process.

12

EverythingIsTaken61 t1_ivg0kwj wrote

On some tabular data I think most models will outperform humans, as long as the context (the meaning of each variable) is unknown. If the human knew what the task was about, they might get an advantage, but that wouldn't really be the same amount of data?

For sensory data, I don't think it's easily compared because we already have experiences in life + in our DNA.

10

gwern t1_ivho3vx wrote

I'd predict the opposite: 'tabular data' of the usual sort will yield bad human performance. See the clinical prediction literature going back to Paul Meehl: given some tabular data and asked to predict stuff like disease progression or recidivism risk, the expert human will often underperform a simple linear model, never mind 'real' tabular ML. We're really good at stuff like images, yes, but give us a CSV and ask us to predict housing prices in Boston in 1970...
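
A minimal sketch of the kind of simple model that wins in that literature (synthetic tabular data, assuming scikit-learn; not a reproduction of any particular study):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in tabular task: columns could be prior offenses, age, test scores, etc.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"linear-model accuracy: {clf.score(X_test, y_test):.2f}")
# A human handed the same anonymized CSV has nothing to exploit beyond these columns,
# which is the setting where the clinical-prediction literature finds the model winning.
```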

4

evanthebouncy t1_ivhesll wrote

374637+384638/27462*737473-384783+48473/38374/38474
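
(For what it's worth, one line of Python settles it under standard operator precedence; a human with pen and paper takes a while:)

```python
# * and / bind before + and -, so this comes out to roughly 1.03e7
print(374637 + 384638 / 27462 * 737473 - 384783 + 48473 / 38374 / 38474)
```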

6

blimpyway t1_ivizpp3 wrote

Is any ML model able to (learn how to) solve that? With pen and paper I can, and I can't credit 500M years of genetic heritage for it.

3

Blakut t1_ivgysz9 wrote

>But for which of these tasks can ML models outperform us given the same amount of data as we have?

The brain comes pretrained for a lot of things, though. Babies react to human faces. Pareidolia is simply our pretrained brains "overfitting" on random noise, interpreting it as faces, most probably because humans are very good at recognizing human faces. It's a great question really, because it makes us think about the nature of our own intelligence.

A better comparison could be made, maybe, by selecting tasks for which a human would not be "pretrained".

5

ObjectManagerManager t1_ivhwb6m wrote

Given unlimited data, models are at least as good as humans at every task. All you'd need is a dictionary, and you could perfectly recover the target distribution.
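
A toy sketch of that "dictionary" argument (pure illustration: with unlimited samples covering every input, memorizing the empirical label counts per input recovers the target distribution):

```python
from collections import Counter, defaultdict

class DictionaryClassifier:
    """Memorize label counts for each exact input; no generalization involved."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, x, y):             # x must be hashable, e.g. a tuple of features
        self.counts[x][y] += 1

    def predict(self, x):
        if not self.counts[x]:
            raise KeyError("unseen input: only works if the data covers everything")
        return self.counts[x].most_common(1)[0][0]

clf = DictionaryClassifier()
clf.update(("furry", "barks"), "dog")
clf.update(("furry", "meows"), "cat")
print(clf.predict(("furry", "barks")))  # dog
# With finite data it fails on anything unseen, which is where humans pull ahead.
```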

Where humans excel is learning with a relatively small amount of data. But presumably that's just because we're able to transfer knowledge from other, related tasks. Some models can do that too, but not nearly as well. Either way, that invalidates the comparison since the data isn't fixed anymore.

4

blablanonymous t1_ivif2da wrote

We're also constantly iterating on the learning algorithm. We learn to learn. That's one of the most important skills we pick up throughout our education. Computers, for the most part, need to be taught how and what to learn.

3

f10101 t1_ivj1s5q wrote

To take an example where it's a fair fight, and the computer doesn't win by virtue of having more input bandwidth: RL models applied to narrow physical tasks.

These will often exceed human ability after just a couple of hundred attempts; CartPole would be an example.
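
A minimal sketch of how few attempts such a task needs (assuming the gymnasium package; this uses random search over a linear policy rather than a full RL algorithm, but it typically solves CartPole within a few hundred episodes):

```python
import numpy as np
import gymnasium as gym

env = gym.make("CartPole-v1")

def run_episode(weights):
    """Total reward for a linear policy: push right whenever weights . obs > 0."""
    obs, _ = env.reset()
    total, done = 0.0, False
    while not done:
        action = int(np.dot(weights, obs) > 0)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    return total

best_w, best_return = None, -np.inf
for episode in range(200):                  # "a couple of hundred attempts"
    w = np.random.uniform(-1, 1, size=4)    # guess a new policy each episode
    ret = run_episode(w)
    if ret > best_return:
        best_w, best_return = w, ret
print(f"best return after 200 episodes: {best_return}")
```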

2

blimpyway t1_ivjly2a wrote

This indeed could be one case. However, a couple hundred attempts is not the limit; a kid would get it in less than a couple dozen trials, or she would get bored.

However, I found that some models can do it even faster: under 5 failures in 50% of trials, including only 2 failures in 5% of trials.

1

Pawngrubber t1_ivg0jun wrote

All AI surpasses humans when the data gets large enough. Hypothetically, if a person could review billions of games, there's no way they'd beat AlphaZero/Leela trained on billions of games.

To treat your question fairly, you should only ask it in the small-data domain.

One easy example: if you heavily leverage tree search algorithms and have a tiny neural net eval (much smaller than stockfish nnue) it would still surpass humans even with only hundreds of games.
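
A rough sketch of that combination (assuming the python-chess package; a hand-written material count stands in for the "tiny neural net eval", and the tree search does most of the work):

```python
import chess

# Tiny hand-rolled evaluation playing the role of a small learned eval.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material balance from the side-to-move's perspective."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def negamax(board, depth):
    """Plain fixed-depth negamax; the search, not the eval, provides the strength."""
    if depth == 0 or board.is_game_over():
        return evaluate(board), None
    best_score, best_move = -float("inf"), None
    for move in board.legal_moves:
        board.push(move)
        score, _ = negamax(board, depth - 1)
        board.pop()
        if -score > best_score:
            best_score, best_move = -score, move
    return best_score, best_move

board = chess.Board()
score, move = negamax(board, depth=3)
print(board.san(move), score)
```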

Outside of RL it's harder. But sometimes simple models with few parameters (linear/logistic regression models) can outperform humans with only dozens of samples.

1

billjames1685 OP t1_ivg45yo wrote

This seems to make sense, I think. AI will probably always outperform us at narrowly defined tasks, but I think we excel at generalizing to a lot of different tasks. Although even AI is starting to do well at this too; first there was AlphaGo a few years ago, and now we have all the transfer learning stuff going on in NLP.

It’s pretty curious; I never would have expected NNs to have half the capabilities they do nowadays.

2

blimpyway t1_ivjl2sn wrote

> One easy example: if you heavily leverage tree search algorithms and have a tiny neural net eval (much smaller than stockfish nnue) it would still surpass humans even with only hundreds of games.

Any reference on that?

1

Pawngrubber t1_ivjlhmg wrote

I wish I had a model or paper to point to. I don't. I worked with the Komodo team for a few years and I believe this to be true from experience training/testing alternatives to NNUE.

2

junetwentyfirst2020 t1_ivhv5ir wrote

It depends on what you mean by better. Is a car better than a person at driving? They're safer on average, but you need pretty advanced AI to navigate the long tail of potential situations. A human can navigate a large majority of that long tail.

1

hgoel0974 t1_ivi38ul wrote

On top of what others have said, one additional aspect of pre-learned experience that we're only starting to look into for ML is that some architectures seem more predisposed to certain tasks than others.

For instance, "What's Hidden in a Randomly Weighted Neural Network?" discusses how untrained subnetworks in sufficiently large networks can have decent accuracy in classification tasks, in certain cases even when the weights are set to a constant value.

Evolution has had much more time to refine such strategies than ML models have had.

1

undefdev t1_ivix6uz wrote

Humans are really bad at producing random outputs :)

1

The_Real_RM t1_ivj0ws4 wrote

Probably, any task.

Once you factor in the pretraining and the structure of the network, and pretrain a model with the equivalent of an adult person's experience, you get a model that should be able to equal or surpass humans at any task. Of course, some humans have different kinds of pretraining, so to be fair a particular instance of the model won't surpass all humans at all activities, but a collection of models with diverse pretraining would. In terms of scalability and on-task performance there isn't even a competition, of course; the model would always perform at the peak of its ability.

1

DoctorFuu t1_ivj3mbq wrote

How do you quantify the amount of data a human has? Do you plan on removing his brain and putting a new one in with no information inside before starting the experiment?

I understand the intent of your question, but its premises have no meaning. A human with no information is not even born yet.

Aaaah, I have it! A model is better at existing with less information than a human is!

1

blimpyway t1_ivjkpvi wrote

Yes, for all living organisms, learning competencies are geared towards sample efficiency. Whoever spends a bit too much time figuring out how to solve a problem either starves to death or, more likely, is eaten long before that.

1

NZDamo t1_ivqn8w3 wrote

Catwalks

1

IDefendWaffles t1_ivg12m2 wrote

AlphaZero trained by self-play, from no data.

−1

new_name_who_dis_ t1_ivhbys1 wrote

That would be like saying GPT was trained on no data just because there are no labels or annotations.

AlphaZero was trained in an environment with basically infinite data.

4