
goldygnome t1_j7nldog wrote

Self-learning AIs exist. Labels are just our names for repeating patterns in data; self-learning AIs make up their own labels, which don't match ours. It's a solved problem. Your information is out of date.
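The "make up their own labels" point can be sketched with a toy unsupervised example. This is just an illustrative minimal k-means in numpy (all data and names here are made up, not from the thread): the algorithm is never told what the groups are, yet it assigns its own labels to the repeating patterns it finds.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two unlabeled blobs of 2-D points -- stand-ins for "repeating patterns in data"
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(5, 0.5, (50, 2))])

# Minimal k-means: the algorithm invents its own labels (0 and 1) for the patterns
centroids = data[rng.choice(len(data), 2, replace=False)]
for _ in range(10):
    # Assign each point to its nearest centroid
    labels = np.argmin(np.linalg.norm(data[:, None] - centroids, axis=2), axis=1)
    # Move each centroid to the mean of its assigned points
    centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])

# Points drawn from the same underlying pattern end up sharing an invented label,
# even though those labels have no relation to any human-chosen names
print(labels)
```

The labels 0/1 are arbitrary and may come out swapped between runs, which is exactly the commenter's point: they're the model's own names for the patterns, not ours.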

Google has a household robot project that successfully demonstrated human-like capabilities across many domains six months ago.

True, it's not across ALL domains, but it proves that narrow AI is not the end of the line. Who knows how capable it will be when it's scaled up?

https://jrodthoughts.medium.com/deepminds-new-super-model-can-generalize-across-multiple-tasks-on-different-domains-3dccc1202ba1


khrisrino t1_j7pjj2g wrote

We have “a” self-learning AI that works for certain narrow domains. We don’t necessarily have “the” self-learning AI that gets us to full general AI. The fallacy with all these approaches is that they only ever see the tip of the iceberg: they can only summarize the past; they're no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I’d argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit too optimistic.


goldygnome t1_j7rvgk5 wrote

Where are you getting your info? I've seen papers from over a year ago that demonstrated multi-domain self-supervised learning.

And what makes you think AI can't predict the future based on past patterns? It's used for that purpose routinely and has been for years. Two good examples are weather forecasting & finance.
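The "predict the future from past patterns" claim is routine time-series modeling. A minimal sketch, assuming a toy seasonal signal (invented here, not from the thread) and a plain least-squares autoregressive fit:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "past patterns": a seasonal cycle plus noise, a crude stand-in
# for weather or financial data
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20) + 0.1 * rng.normal(size=t.size)

# Autoregressive model: predict x[t] from the previous p observations
p = 20
X = np.array([series[i:i + p] for i in range(len(series) - p)])
y = series[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step forecast beyond the observed data, from past patterns alone
forecast = series[-p:] @ coef
print(forecast)
```

Real forecasting systems are far more elaborate, but the principle is the same: regularities learned from history extrapolate forward.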

I'd argue that, for unsupervised AI, any data is training data; that AI has access to far more data than puny humans, since humans can't directly sense the majority of the EM spectrum; and that you're massively overestimating the compute used by the average human.
