billjames1685 t1_izyiv4l wrote

This doesn’t mean it can’t remember. It often outputs this for things it actually can do if you prompt it correctly, which can take some experimenting to figure out.

From my experience it remembers previous responses and can talk about them. One time I asked it about a particular fact and it gave a slightly wrong answer; I pointed this out and provided the correct answer, and it agreed that my version was correct. When I then asked whether its initial answer had been incorrect, it said that it was, and it provided more context for the answer as well.
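For what it’s worth, that in-session “memory” is just the previous turns being sent back as context with each new message. A minimal sketch of the idea (assuming the OpenAI Python client; the model name and the `ask` helper are mine, purely illustrative):

```python
# The model "remembers" earlier turns only because the client resends the
# full conversation history with every request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,       # prior turns included -> apparent "memory"
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("When was the Eiffel Tower completed?"))
print(ask("Was your previous answer correct?"))  # can refer back to turn 1
```

Start a fresh `history` and the “memory” is gone, which matches the behavior people see across new chat sessions.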

1

billjames1685 t1_ivmnab8 wrote

Half the people there are just saying the same shit about how we need to not let AI turn us into paperclips by accident, instead of addressing actual problems that AI will pose in the future, like the fact that the internet is going to be flooded with bots in a few years, making it impossible to distinguish who is human and who isn't...

1

billjames1685 OP t1_ivh2w9b wrote

Wow, that’s fascinating. I think the studies I saw weren’t quite saying what I thought they were, as I explained elsewhere; we get so much data and training just by being in the world and seeing stuff that we can richly classify images even in new domains, but yeah, it seems that pretraining is necessary. Thanks!

3

billjames1685 OP t1_ivg5bij wrote

Yeah I agree. Not sure if I’m misunderstanding you, but by “transfer learning” I basically mean that all of our pretraining (which occurred through a variety of methods, as you point out) has allowed us to richly understand images as a whole, so we can apply that understanding and generalize well to semi-new tasks/domains.
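In ML terms, the analogy is roughly this hedged sketch (assuming PyTorch/torchvision; the class count, data, and hyperparameters are placeholders): reuse the general visual features learned in pretraining and retrain only a small head for the semi-new task.

```python
import torch
import torch.nn as nn
from torchvision import models

# "Pretraining": features learned on a large, generic image distribution.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                      # keep the general features
model.fc = nn.Linear(model.fc.in_features, 10)   # new head, placeholder classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on fake data standing in for the new domain.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```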

−5

billjames1685 OP t1_ivg45yo wrote

This seems to make sense, I think. AI will probably always outperform us on narrowly defined tasks, but I think we excel at generalizing to a lot of different tasks. Although even AI is starting to do well at this; first there was AlphaGo four years ago, and now we have all the transfer learning stuff going on in NLP.

It’s pretty curious; I never would have expected NNs to have half the capabilities they do nowadays.

2

billjames1685 OP t1_ivg2iio wrote

I am basing this on this blog post: https://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/

Written in 2015, but the author has commented recently that he still holds the same opinion.

More recent (Jan 2022): https://www.scienceabc.com/humans/the-human-brain-vs-supercomputers-which-one-wins.html#evolution-of-computers

Generally, though, I don’t think there is a consensus on this, because a lot of the terms are loosely defined and the brain is basically impossible to simulate.

I agree that the brain is just more optimized in general than NNs, but I’m pretty sure it’s also just way more powerful.

The estimated computational capacity of the brain keeps increasing as we learn more about it.
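To see why the terms are loosely defined, here’s the kind of back-of-envelope arithmetic these estimates rest on; every number below is a rough, contested assumption, which is exactly why the estimates keep moving:

```python
# Back-of-envelope only; all of these quantities are rough assumptions.
neurons       = 8.6e10   # ~86 billion neurons (common estimate)
synapses_per  = 1e4      # ~10,000 synapses per neuron (rough)
firing_rate   = 1.0      # average spikes per second (highly uncertain)
ops_per_spike = 10       # "operations" per synaptic event (loosely defined)

ops_per_second = neurons * synapses_per * firing_rate * ops_per_spike
print(f"~{ops_per_second:.0e} ops/s")  # ~9e15; nudge any input and the
                                       # answer swings by orders of magnitude
```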

5

billjames1685 OP t1_ivfyks8 wrote

I’m pretty sure it’s been well established that we can learn from just a few images, even for things we haven’t seen before; I remember reading a paper that tested this with images of obscure animals no one had seen before.

But yeah, we do have a lot of general pretraining; we have effectively been doing image classification “training” our whole lives, so there might be some transfer learning endowing us with our few-shot capabilities. And part of the reason is that our brains have way more compute than any model, so we can probably learn things better as well.
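One common way to frame that few-shot ability in ML is classifying from a handful of examples on top of pretrained features, with no gradient training at all. A hedged sketch (again PyTorch/torchvision; the random tensors stand in for real images of a never-before-seen animal):

```python
import torch
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # strip the classifier, keep the features
backbone.eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    return backbone(images)         # (N, 512) feature vectors

# Five example images of a novel class -> one class centroid.
support = embed(torch.randn(5, 3, 224, 224))   # placeholder "new animal" shots
centroid = support.mean(dim=0)

query = embed(torch.randn(1, 3, 224, 224))     # unseen test image
similarity = torch.cosine_similarity(query, centroid.unsqueeze(0))
print(similarity)  # compare against other class centroids to pick a label
```

The “pretraining” does the heavy lifting here, which is the point: few-shot learning works when the features are already good.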

−39