Comments


Deep-Station-1746 t1_iylqt4s wrote

I stopped after I saw: "If NNs would have been truly learning, adversarial attacks won’t exist". Yeah sure. :)

14

Difficult-Race-1188 OP t1_iylv1fs wrote

What I mean to say is that the biggest cause of adversarial attacks is that NNs create locally linear decision boundaries; that's why kernel SVMs are among the best defenses against adversarial attacks.

−5

Difficult-Race-1188 OP t1_iylv75t wrote

Because the VC dimension of a kernel SVM is infinite, it can create extremely curved boundaries even in lower-dimensional space.
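As a rough sketch of the claim: a linear boundary can't separate XOR-like data, but an RBF kernel can. This uses a hand-rolled kernel perceptron standing in for an SVM (same kernel idea, simpler training), with a made-up gamma:

```python
import numpy as np

# XOR labels: not linearly separable, but separable with an RBF kernel.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

def rbf(a, b, gamma=2.0):
    # Gaussian (RBF) kernel between rows of a and a single point b
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

# Kernel perceptron: learn dual coefficients alpha (mistake counts)
alpha = np.zeros(len(X))
for _ in range(100):
    for i in range(len(X)):
        pred = np.sign(np.sum(alpha * y * rbf(X, X[i])))
        if pred != y[i]:
            alpha[i] += 1.0

def predict(x):
    return int(np.sign(np.sum(alpha * y * rbf(X, x))))

print([predict(x) for x in X])  # matches y: curved boundary fits XOR
```

With gamma=2.0 this converges in a couple of passes; a linear model never can on this data.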

−1

druffischnuffi t1_iylvkt0 wrote

I still do not get why people keep saying that AI is "not truly learning" or "not actually intelligent".

They always invent some weird criteria that a "true AI" would need to satisfy, for example that it can learn affine transformations without being taught to, or that it must be immune to adversarial attacks.

If you think you are truly learning because your brain figured out affine transformations on its own, try reading a book upside down.

5

Difficult-Race-1188 OP t1_iylz2aw wrote

What people mean when they say AI is not truly learning is that the most impressive results often come from extremely big models. For example, almost all the top AI scientists take digs at large language models, because we don't know whether they learned something or just memorized all the possible combinations. People believe AI is not truly learning because there are papers showing that AI was unable to generalize simple mathematical equations.

For instance, AI was unable to generalize to the simple equation x³ + xy² + y (mod 97).

https://medium.com/aiguys/paper-review-grokking-generalization-and-over-fitting-9dbbec1055ae

https://arxiv.org/abs/2201.02177
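For reference, the task from the grokking paper is just the full binary-operation table of that equation; a minimal sketch of the dataset (the paper trains on a random split of it, not shown here):

```python
# Full table for f(x, y) = (x**3 + x*y**2 + y) % 97, the operation
# from the grokking paper that small transformers failed to
# generalize on.
P = 97
pairs = [(x, y) for x in range(P) for y in range(P)]
labels = [(x**3 + x * y**2 + y) % P for x, y in pairs]

# A model sees a random subset of the 97 * 97 = 9409 equations and
# must predict the held-out rest.
print(len(pairs), labels[:3])
```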

−1

druffischnuffi t1_iym6mjv wrote

I agree. That is very unsatisfactory. I also think that NNs are often being overestimated.

However, I think what is lacking in the line of reasoning is a positive definition of true learning. A test that an AI must pass if it is truly learning.

I myself would not consider myself capable of generalizing from a set of samples to the above equation. So does that mean I cannot learn?

2

Blakut t1_iylqsu3 wrote

wait what about VAEs? Don't they "learn" to interpolate?

3

Difficult-Race-1188 OP t1_iyluwu2 wrote

Even I don't know how VAEs learn that. But a recent paper, showing that neural networks can be written exactly as decision trees, proved mathematically that NNs are also decision trees, just operating in a higher-dimensional space.
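A minimal sketch of that equivalence idea (toy weights, not from the paper): each ReLU's on/off state acts like a branch in a tree, and each activation pattern selects one linear function of the input.

```python
import numpy as np

# Tiny ReLU net: scalar input, 2 hidden units, scalar output.
# Weights are arbitrary, chosen only for illustration.
W1 = np.array([[1.0], [-1.0]])
b1 = np.array([0.5, 0.5])
W2 = np.array([1.0, 1.0])
b2 = 0.0

def forward(x):
    h = W1[:, 0] * x + b1
    pattern = tuple(h > 0)            # the "decision path" through the tree
    out = W2 @ np.maximum(h, 0) + b2  # linear function chosen by that path
    return pattern, out

# Different inputs land in different linear regions (tree leaves):
for x in (-1.0, 0.0, 1.0):
    print(x, forward(x))
```

Each distinct `pattern` corresponds to a leaf where the network is purely linear, which is the core of the NN-to-tree construction.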

−3

Blakut t1_iylv3io wrote

From my understanding, they introduce sampling of the latent space, so when you decode, your latent-space parameters have a Gaussian distribution around a learned mean and sigma. This in turn, from what I gather, learns "in between" mappings in latent space.
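The sampling step described here is the reparameterization trick; a minimal sketch with made-up mean/sigma values (a real encoder would compute them from data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder outputs (hypothetical values): a mean and log-sigma per
# latent dimension.
mu = np.array([0.2, -1.0])
log_sigma = np.array([-0.5, 0.1])

def sample_latent(mu, log_sigma):
    # z = mu + sigma * eps, eps ~ N(0, I): differentiable w.r.t. mu, sigma
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

z = sample_latent(mu, log_sigma)
print(z.shape)  # (2,)

# Because nearby latent points decode to plausible outputs,
# interpolating between two codes tends to give sensible "in between"
# decodings:
z_a, z_b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
z_mid = 0.5 * z_a + 0.5 * z_b
```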

1

patrulek t1_iylz5fe wrote

Like in this meme with cosmonaut:

"So it's all if's and else's? Always has been."

3

vasjpan02 t1_iylpv5w wrote

Sure, but now they speed them up by replacing sigmoids with step functions.

2