Submitted by billjames1685 t3_youplu in MachineLearning
Pawngrubber t1_ivg0jun wrote
Pretty much any AI surpasses humans once the data gets large enough. Hypothetically, even if a person could review billions of games, there's no way they would beat an AlphaZero/Leela trained on billions of games.
To treat your question fairly, you should restrict it to the small-data regime.
One easy example: if you lean heavily on tree search and pair it with a tiny neural-net evaluation (much smaller than Stockfish's NNUE), the resulting engine would still surpass humans even with only hundreds of training games.
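To make that concrete, here's a rough sketch of what I mean: plain alpha-beta search driven by a toy MLP evaluation. The feature encoding, layer sizes, and weights are purely illustrative (this is not NNUE or anything Komodo uses), and in practice you'd fit the net on whatever small set of games you have:

```python
# Rough sketch (illustrative only): alpha-beta search with a toy MLP eval.
# Feature encoding, layer sizes, and weights are made up; in practice you'd
# train the net on your small set of games. Uses the python-chess library.
import chess
import numpy as np

rng = np.random.default_rng(0)
# Tiny MLP: 768 piece-square inputs -> 32 hidden -> 1 (white-perspective score).
W1, b1 = rng.normal(scale=0.01, size=(768, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.01, size=(32, 1)), np.zeros(1)

def features(board: chess.Board) -> np.ndarray:
    # One-hot piece-square encoding: 6 piece types x 2 colours x 64 squares.
    x = np.zeros(768)
    for square, piece in board.piece_map().items():
        colour_offset = 0 if piece.color == chess.WHITE else 64
        x[(piece.piece_type - 1) * 128 + colour_offset + square] = 1.0
    return x

def evaluate(board: chess.Board) -> float:
    # The net outputs a white-perspective score; flip it for the side to move (negamax).
    h = np.tanh(features(board) @ W1 + b1)
    score = float(h @ W2 + b2)
    return score if board.turn == chess.WHITE else -score

def alphabeta(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
    if board.is_checkmate():
        return -1e6  # side to move has been mated
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    for move in board.legal_moves:
        board.push(move)
        score = -alphabeta(board, depth - 1, -beta, -alpha)
        board.pop()
        alpha = max(alpha, score)
        if alpha >= beta:  # cutoff: the opponent already has a better alternative
            break
    return alpha

def best_move(board: chess.Board, depth: int = 3) -> chess.Move:
    best, best_score = None, -np.inf
    for move in board.legal_moves:
        board.push(move)
        score = -alphabeta(board, depth - 1, -np.inf, np.inf)
        board.pop()
        if score > best_score:
            best, best_score = move, score
    return best

print(best_move(chess.Board()))
```

The point is that almost all of the playing strength comes from the search; the net only has to be good enough to rank positions, which takes far less data than learning to play end to end.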
Outside of RL it's harder, but sometimes simple models with few parameters (linear or logistic regression) can outperform humans given only dozens of samples.
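Here's the kind of thing I mean: a plain logistic regression fit on a few dozen rows. The dataset below is synthetic and purely illustrative:

```python
# Minimal sketch: logistic regression on a few dozen samples.
# The dataset is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))                           # 40 samples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(int)   # hidden linear rule

model = LogisticRegression().fit(X, y)
print(model.score(X, y))  # training accuracy; cross-validate in practice
```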
billjames1685 OP t1_ivg45yo wrote
This makes sense, I think. AI will probably always outperform us on narrowly defined tasks, but we excel at generalizing across many different tasks. Even AI is starting to do well at this, though; first there was AlphaGo a few years ago, and now we have all the transfer learning going on in NLP.
It’s pretty curious; I never would have expected NNs to have half the capabilities they do nowadays.
blimpyway t1_ivjl2sn wrote
> One easy example: if you lean heavily on tree search and pair it with a tiny neural-net evaluation (much smaller than Stockfish's NNUE), the resulting engine would still surpass humans even with only hundreds of training games.
Any reference on that?
Pawngrubber t1_ivjlhmg wrote
I wish I had a model or paper to point to, but I don't. I worked with the Komodo team for a few years, and I believe this to be true from experience training and testing alternatives to NNUE.