
goldygnome t1_j72261j wrote

First paragraph claims to know that "intelligence" can't be mimicked by our tech, yet intelligence is just learning and application of skills, which LLMs mimic quite successfully, if to a limited extent.

Nobody is seriously claiming LLMs reason and nobody is seriously claiming that human consciousness is just an LLM.

Intelligence and consciousness are two separate things. We have demonstrated superhuman capabilities in single domains; AGI just expands that to all domains. It does not require consciousness, and it is achievable with our tech. Google has already demonstrated an AI that is capable across dozens of domains.

Of course, I'm assuming this wasn't some elaborate chat bot troll.


ReExperienceUrSenses OP t1_j727zyx wrote

Not a troll. I was a part of this project for four years:

Full Adult Fly Brain

I know that consciousness and intelligence are separate things; I never claimed otherwise. I'm just here to pick brains and discuss the computability of the brain. I don't argue these things to call anyone dumb; I'm just curious to see what people say when presented with these ideas.

Those claims of superhuman capabilities in single domains are misleading. The machines performed well on the benchmarks, not necessarily in any real-world scenario. Give them some out-of-distribution data, anything not in their training sets, and they crumble.
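Here's a toy sketch of what I mean (scikit-learn; the dataset and model are made up for illustration, not from any real benchmark). A classifier that leans on a shortcut feature aces the benchmark split, then collapses once the test data drifts out of the training distribution:

```python
# Toy demo: a model that leans on a shortcut feature looks great on the
# benchmark split, then collapses under distribution shift.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, correlation):
    """One noisy-but-stable feature, one shortcut feature that tracks the
    label with probability `correlation`."""
    y = rng.integers(0, 2, size=n)
    stable = y + rng.normal(scale=0.75, size=n)
    agree = rng.random(n) < correlation
    shortcut = np.where(agree, y, 1 - y) + rng.normal(scale=0.1, size=n)
    return np.column_stack([stable, shortcut]), y

X_train, y_train = make_data(10_000, correlation=0.95)  # "benchmark" data
X_iid, y_iid = make_data(2_000, correlation=0.95)       # in-distribution test
X_ood, y_ood = make_data(2_000, correlation=0.05)       # shifted test

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("in-distribution accuracy: ", accuracy_score(y_iid, clf.predict(X_iid)))
print("out-of-distribution accuracy:", accuracy_score(y_ood, clf.predict(X_ood)))
```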

I use LLMs as an example because they operate on the same fundamental architecture as all the others, and they're the "hot thing" right now. Progress in these areas doesn't necessarily mean overall progress toward AGI, and I just urge people to exercise caution and think critically about all the reporting.

EDIT: I posted that research project because I worked extensively with neural networks to automate the process of building that connectome. I'm familiar with the hurdles involved in training a machine to see and trace the individual cells in those images and to detect the points of synapse.
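For a flavor of the kind of model involved, here's a hypothetical PyTorch sketch, not our actual pipeline; the architecture, names, and shapes are all illustrative. A small conv net predicts, per pixel, whether an EM image location sits on a cell boundary; the real systems are far larger, and the predicted boundaries then feed downstream steps that separate individual cells:

```python
# Hypothetical sketch of the per-pixel classification behind automated
# tracing: a tiny conv net predicts whether each pixel of an EM tile lies
# on a cell boundary. Real pipelines are far larger; shapes/names are mine.
import torch
import torch.nn as nn

class BoundaryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one boundary logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = BoundaryNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-ins for a batch of EM image tiles and hand-traced boundary masks.
em_tiles = torch.randn(8, 1, 128, 128)
boundary_masks = torch.randint(0, 2, (8, 1, 128, 128)).float()

opt.zero_grad()
loss = loss_fn(model(em_tiles), boundary_masks)
loss.backward()
opt.step()
print(f"one training step, loss = {loss.item():.3f}")
```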

I use LLMs as an example because I know that people are confusing the use of words with an understanding of what the words mean.


khrisrino t1_j728qg8 wrote

“… intelligence is just learning and application of skills”

Sure, but that learning has been going on for a billion years, encoded in DNA and passed on in culture, traditions, books, the internet, etc. That training dataset does not exist to train an LLM on. We may have success in very narrow domains, but I doubt there will ever be a time when we have an AI that is equivalent to a human brain across all domains at the same time. Maybe the only way to achieve that will be to replicate the brain completely. Also, many domains are exponentially intractable, because it's not just one human brain but all human brains over all time that are involved in the outcome, e.g. the stock market, political systems, etc.


goldygnome t1_j7nldog wrote

Self-learning AIs exist. Labels are just our names for repeating patterns in data; self-learning AIs make up their own labels, which don't match ours. It's a solved problem. Your information is out of date.
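As a toy illustration (scikit-learn; my own example, not from any particular paper): a clustering model invents its own integer "labels" for repeating patterns in unlabeled data, and its cluster ids won't line up with our names for the digits:

```python
# Toy demo: k-means invents its own integer "labels" for patterns in
# unlabeled handwritten digits; the ids carry no human meaning.
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

X, human_labels = load_digits(return_X_y=True)
machine_labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# Cluster 3 is not the digit 3: any mapping between the model's labels
# and ours has to be recovered after the fact.
print(human_labels[:10])    # the names we gave the patterns
print(machine_labels[:10])  # the names the model made up
```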

Google has a household robot project that successfully demonstrated human-like capabilities across many domains six months ago.

True, it's not across ALL domains, but it proves that narrow AI is not the end of the line. Who knows how capable it will be when it's scaled up?

https://jrodthoughts.medium.com/deepminds-new-super-model-can-generalize-across-multiple-tasks-on-different-domains-3dccc1202ba1


khrisrino t1_j7pjj2g wrote

We have "a" self-learning AI that works for certain narrow domains. We don't necessarily have "the" self-learning AI that gets us to full general AI. The fallacy with all these approaches is that the model only ever sees the tip of the iceberg: it can only summarize the past; it's no good at predicting the future. We fail to account for how complex the real world is and how little of it is available as training data. I'd argue we have neither the training dataset nor the compute capacity, and our predictions are all a bit too optimistic.


goldygnome t1_j7rvgk5 wrote

Where are you getting your info? I've seen papers from over a year ago that demonstrated multi-domain self-supervised learning.

And what makes you think AI can't predict the future based on past patterns? It's been used for that purpose routinely for years; weather forecasting and finance are two good examples.
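A bare-bones sketch of the idea (NumPy; my own toy series, not a real weather or finance model): fit past patterns, then extrapolate forward:

```python
# Toy demo: learn past patterns, extrapolate forward. A linear
# autoregression fit on a noisy seasonal series, rolled into the future.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)

lags = 10
# Training pairs built from history alone: (past window -> next value).
X = np.stack([series[i : i + lags] for i in range(len(series) - lags)])
y = series[lags:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the model forward to predict values it has never seen.
window = list(series[-lags:])
forecast = []
for _ in range(25):
    nxt = float(np.dot(coef, window))
    forecast.append(nxt)
    window = window[1:] + [nxt]
print(np.round(forecast[:5], 3))
```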

I'd argue that, for an unsupervised AI, any data is training data; that AI has access to far more data than puny humans, since humans can't directly sense the majority of the EM spectrum; and that you're massively overestimating the compute used by the average human.
