billjames1685
billjames1685 t1_ivmnab8 wrote
Reply to comment by terminal_object in [D] Academia: The highest funded plagiarist is also an AI ethicist by [deleted]
Half the people there are just saying the same shit about how we need to not let AI accidentally turn us into paperclips, instead of addressing actual problems AI will pose in the future, like the fact that the internet is going to be flooded with bots in a few years, making it impossible to distinguish who is human and who isn't...
billjames1685 OP t1_ivh33jr wrote
Reply to comment by IntelArtiGen in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
That’s a fair point; I was kind of just using it as a general term.
billjames1685 OP t1_ivh2w9b wrote
Reply to comment by OptimizedGarbage in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Wow, that’s fascinating. I think the studies I saw weren’t quite saying what I thought they were, as I explained elsewhere; we get so much data and training just by being in the world and seeing stuff that we can richly classify images even in new domains, but yeah, it seems that pretraining is necessary. Thanks!
billjames1685 OP t1_ivg9nxw wrote
Reply to comment by LordOfGalaxy in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Oh absolutely. Our brain is just insane - with 20-30 watts it possibly has more compute than supercomputers that run on several megawatts. The level of efficiency it displays is ridiculous.
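Just to put rough numbers on that efficiency gap (all figures here are order-of-magnitude assumptions, not measurements):

```python
# Back-of-the-envelope comparison; both power figures are rough assumptions.
brain_watts = 25      # midpoint of the ~20-30 W estimate for the brain
super_watts = 20e6    # a large supercomputer draws tens of megawatts

# If the brain's compute merely matched the supercomputer's, it would still
# be roughly this many times more energy-efficient:
efficiency_ratio = super_watts / brain_watts
print(f"~{efficiency_ratio:,.0f}x more compute per watt")  # ~800,000x
```

And that ratio only grows if the brain's compute actually exceeds the supercomputer's, as the linked estimates suggest it might.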
billjames1685 OP t1_ivg5bij wrote
Reply to comment by IntelArtiGen in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Yeah, I agree. Not sure if I’m misunderstanding you, but by “transfer learning” I basically mean that all of our pretraining (which occurred through a variety of methods, as you point out) has allowed us to richly understand images as a whole, so we can apply that and generalize well to semi-new tasks/domains.
billjames1685 OP t1_ivg45yo wrote
Reply to comment by Pawngrubber in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
That seems to make sense, I think. AI will probably always outperform us at narrowly defined tasks, but I think we excel at generalizing across a lot of different tasks. Although AI is starting to do well at this too; first there was AlphaGo four years ago, and now we have all the transfer learning stuff going on in NLP.
It’s pretty curious; I never would have expected NNs to have half the capabilities they do nowadays.
billjames1685 OP t1_ivg3dy8 wrote
Reply to comment by IntelArtiGen in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
Yeah, I addressed that in the second paragraph; we have been pretrained on enough image classification tasks that there are probably some transfer learning-esque reasons for our few-shot capabilities.
billjames1685 OP t1_ivg2lfu wrote
Reply to comment by IDefendWaffles in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
I am considering self-played games as data.
billjames1685 OP t1_ivg2iio wrote
Reply to comment by LordOfGalaxy in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
I am basing on this blogpost: https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/
Written in 2015 but the author has commented recently that he still holds the same opinion.
More recent (Jan 2022): https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html#evolution-of-computers
Generally though I don’t think there is a consensus on this because there are a lot of loosely defined terms and the brain is basically impossible to simulate.
I agree that the brain is more optimized in general than NNs, but I’m pretty sure it’s also just way more powerful.
The estimated computational capacity of the brain keeps increasing as we learn more about it.
billjames1685 OP t1_ivfyks8 wrote
Reply to comment by IntelArtiGen in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
I’m pretty sure it’s been well established that we can learn from just a few images, even for things we haven’t seen before; I remember reading a paper testing this on random animals no one had seen before.
But yeah, we do have a lot of general pretraining; we have had image classification training before, so there might be some transfer learning stuff endowing us with our few-shot capabilities. And part of the reason is that our brains have way more compute than any model, so we can probably learn things better as well.
Submitted by billjames1685 t3_youplu in MachineLearning
billjames1685 t1_izyiv4l wrote
Reply to comment by eigenman in [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
This doesn’t mean it can’t remember. A lot of the time it outputs this for stuff it actually can do if you prompt it correctly, which can take some experimenting to figure out.
In my experience it remembers previous responses and can talk about them. One time I asked it about a particular fact and it gave a slightly wrong answer; I said this was wrong and provided the correct answer, and it agreed my response was correct. I then asked whether its initial answer had been incorrect, and it said that it was, and provided more context for the answer as well.
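The usual explanation for this kind of "memory" is that the underlying model is stateless and the client simply resends the whole conversation so far with every new message. A minimal sketch of that idea (the `model_generate` function here is a hypothetical stand-in, not OpenAI's actual API):

```python
# Stateless "memory": keep a running transcript and resend it on every turn.
def model_generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model here.
    return f"(reply given {len(prompt)} chars of context)"

history = []  # list of (role, text) pairs

def chat(user_message: str) -> str:
    history.append(("user", user_message))
    # The model "remembers" only because the entire transcript is resent,
    # so earlier corrections are visible in the new prompt.
    transcript = "\n".join(f"{role}: {text}" for role, text in history)
    reply = model_generate(transcript)
    history.append(("assistant", reply))
    return reply
```

Under this scheme, "forgetting" just means the transcript has outgrown the model's context window and older turns get truncated.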