groman434
groman434 OP t1_j2xp8he wrote
Reply to comment by GFrings in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Yep, you are right, I was not clear enough. What I meant was that AI would do a task "significantly better" (whatever that means exactly). For instance, if humans can find 90% of the dogs in a dataset, then AI would be able to find 99.999% of them.
groman434 OP t1_j2xb0c8 wrote
Reply to comment by Extension_Bat_4945 in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
My question was slightly different. My understanding is that one of the major factors affecting the quality of your model's predictions is your training set. But since your training set could be inaccurate (in other words, made by humans), how does that fact impact the quality of learning, and in turn the quality of predictions?
Of course, as u/IntelArtiGen wrote, models can avoid reproducing errors made by humans (I guess because they are able to learn the relevant features during the training phase, provided the training set is good enough). But I wonder what "good enough" means exactly (in other words, how the inevitable human errors made while preparing the set impact the learning process, and what kinds of errors are acceptable), and how the whole training process can be described mathematically. Of course, I have seen many explanations using gradient descent as an example, but none of them incorporated the fact that the training set (or the loss function) is imperfect.
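One standard way to describe this mathematically (my own sketch, not something from this thread) is to model label errors as random flips with some rate η, and ask how the noisy training objective relates to the clean one. For a symmetric loss, minimizing the risk over noisy labels turns out to minimize the clean risk too, as long as η < 1/2:

```latex
% \tilde{y} = y with prob. (1-\eta), flipped with prob. \eta (symmetric noise)
\mathbb{E}_{\tilde{y}}\big[\ell(f(x), \tilde{y})\big]
  = (1-\eta)\,\ell(f(x), y) + \eta\,\ell(f(x), -y)
% If the loss is symmetric, i.e. \ell(t, y) + \ell(t, -y) = C for all t:
  = (1-2\eta)\,\ell(f(x), y) + \eta C
```

So the noisy risk is just an affine transformation of the clean risk: whatever model minimizes one also minimizes the other, and moderate random labeling errors do not move the optimum. This is only a sketch for the symmetric-noise case; structured or systematic human errors behave differently.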
groman434 OP t1_j2x82ze wrote
Reply to comment by IntelArtiGen in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
>But we don't design or train our models to exactly reproduce what a human did, that would be a risk of overfitting, so even by reproducing humans a model can do better and not reproduce some mistakes.
Can you please elaborate on this? Let's say your training data contains 10% errors. Can you train a model that is more than 90% accurate? If yes, why?
Edit: My guess would be that during the training phase the model can "find out" which features are typical of cats, provided that the training set is "good enough". So even if the set contains some errors, they will not significantly impact the predictions the model makes.
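A toy way to see why this can happen (my own sketch; the two-Gaussian data, the nearest-centroid "model", and the 10% flip rate are all assumptions for the demo): if the label errors are random rather than systematic, they roughly cancel out when the model averages over many examples, so the learned decision boundary barely moves.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two well-separated Gaussian classes (the "true" concept).
X = np.vstack([rng.normal((-2, 0), 1, (n, 2)),
               rng.normal((2, 0), 1, (n, 2))])
y = np.array([0] * n + [1] * n)

# Corrupt 10% of the training labels at random.
y_noisy = y.copy()
flip = rng.choice(len(y), size=len(y) // 10, replace=False)
y_noisy[flip] = 1 - y_noisy[flip]

# Fit a nearest-centroid classifier on the *noisy* labels.
c0 = X[y_noisy == 0].mean(axis=0)
c1 = X[y_noisy == 1].mean(axis=0)

# Evaluate on a fresh, clean test set.
Xt = np.vstack([rng.normal((-2, 0), 1, (n, 2)),
                rng.normal((2, 0), 1, (n, 2))])
yt = np.array([0] * n + [1] * n)
pred = (np.linalg.norm(Xt - c1, axis=1)
        < np.linalg.norm(Xt - c0, axis=1)).astype(int)
acc = (pred == yt).mean()
print(f"test accuracy despite 10% label noise: {acc:.3f}")
```

With 10% of labels flipped uniformly at random, both centroids get pulled toward each other by the same amount, so the midpoint boundary stays put and test accuracy stays well above 90%, close to what clean labels would give. The caveat is that this only works for random noise; if humans make *systematic* errors (e.g. always mislabeling one breed of dog), the model will happily learn those too.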
groman434 OP t1_j2x4o2g wrote
Reply to comment by Category-Basic in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
I would argue that there is a significant difference between how a knife works and how ML works. You do not have to train a knife how to slice bread.
Besides, it looks to me that ML can outperform humans partly because it exploits the fact that modern-day computers can do zillions of computations per second. Of course, sheer computation speed is not enough, which is why we need smart algorithms as well. But those algorithms benefit from having powerful hardware available, often not only during the training phase but also during normal operation.
Submitted by groman434 t3_103694n in MachineLearning
groman434 OP t1_j2xv6xl wrote
Reply to comment by junetwentyfirst2020 in [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
When I put some thought into my question (yes, I know, I should have done that before posting it), I realised that what I am really interested in is how training in general, and training set imperfections in particular, affect model performance. For instance, if a training set is 90% accurate, then how and why can a model trained on that data be more than 90% accurate? And what kinds of errors in the training set can the model correct?