DauntingPrawn
DauntingPrawn t1_jddtbgn wrote
Reply to Could GNNs be the future of AI? by mrx-ai
Not on their own. We know the human brain has different processing centers, and I think AGI is going to require activation and routing networks that invoke specific functional networks, i.e. image processing, language processing, etc. So I could see graph networks working out a simulated thought process over inputs, producing probabilistic routes through those functional networks, with a sort of reality filter or expectation filter, maybe a Boltzmann type of energy activation, to choose among the results.
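The "Boltzmann type of energy activation" idea above can be sketched as a softmax over negative energies: each candidate functional network gets an energy score for the current input, and lower-energy routes are selected with higher probability. Everything here is illustrative, the route names, energies, and temperature are made up.

```python
import math

def boltzmann_route(energies, temperature=1.0):
    """Softmax over negative energies: lower-energy routes are more likely."""
    weights = [math.exp(-e / temperature) for e in energies]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical energies for three functional networks on some input
routes = ["image", "language", "motor"]
energies = [2.0, 0.5, 3.0]

probs = boltzmann_route(energies, temperature=0.5)
best = routes[probs.index(max(probs))]  # lowest-energy route dominates
```

Lowering the temperature makes the choice more deterministic; raising it makes the router explore alternative functional networks more often.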
DauntingPrawn t1_jach5qw wrote
Reply to Re: Meet Joe Black. by ComputerLarge2868
I liked your take, but your experience made me cry. I can't imagine what you went through, but to have that precious moment to carry with you after he was gone is beyond words.
DauntingPrawn t1_j22zjob wrote
Speaking from my personal experience, one reason is that one party can unilaterally make it complicated by not working in good faith to get it finished. That often results in imbalanced outcomes because the other party either gives in or runs out of money for legal fees.
DauntingPrawn t1_ix1140m wrote
Reply to comment by me_not_at_work in When discovering a new band, how do you usually approach their catalog? by [deleted]
Love it! Came here to say basically the same thing.
DauntingPrawn t1_itogzwc wrote
Reply to comment by MisterFuller22 in What Are Some Weird, Unintended Spiritual Implications Behind Hollywood Movie Plots? by Rowan-Trees
Yoked Korean Jesus would fit right in.
DauntingPrawn t1_jdhi42a wrote
Reply to comment by DragonForg in Could GNNs be the future of AI? by mrx-ai
Complex cognition exists independently of language structures, and LLMs mimic language structures, not cognition. You can destroy the language centers of the brain and general intelligence, i.e. cognition and self-recognition, remains intact. Meanwhile, ChatGPT isn't thinking or even imitating thought; it's imitating language by computing a probability distribution over next words from a latent representation of prior language input. Math.
Meanwhile, a baby can act on knowledge learned by observing the world long before language emerges. AGI requires more than language and more than memory. It requires the ability to model reality and learn language from raw sensory input alone, to synthesize information and observation into new ideas, to form motives to act on that information, to predict an outcome, and a value scale for weighing one potential outcome over another. A baby can do all that, but ChatGPT doesn't even know when it's spouting utter nonsense, and Stable Diffusion doesn't know how many fingers a human has.
We have no way of modeling unobserved information. An LLM cannot add a new word to its model; it will never talk about anything invented after its training. Yes, they are impressive. On the level of parlor tricks and street magic.
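The fixed-vocabulary point can be sketched concretely: an LLM's output layer is a softmax over a vocabulary frozen at training time, so every prediction is confined to that list. This is a toy model with a made-up four-word vocabulary and arbitrary logits, not any real model's API.

```python
import math

# Vocabulary fixed at training time; the model can only ever emit these.
vocab = ["the", "cat", "sat", "<unk>"]

def next_word_distribution(logits):
    """Numerically stable softmax mapping logits onto the fixed vocabulary."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return dict(zip(vocab, (e / total for e in exps)))

dist = next_word_distribution([1.0, 2.0, 0.5, -1.0])
# Every key of dist is in vocab; a word coined after training
# simply has no slot in the output distribution.
```

In practice subword tokenizers soften this (a new word gets spelled out from existing pieces), but the output space itself never grows without retraining.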