Submitted by hey__bert t3_125x5oz in singularity
I keep seeing people argue that because AI systems are simply complex functions trained on large amounts of data, they are just predicting the next word they should say and don't really "understand" what anything is. While this is technically true of how current models are built, the argument makes a very obtuse assumption about what it means to understand something.
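As a rough illustration of what "predicting the next word" means at its simplest, here is a toy bigram sketch. The corpus and function names are made up for the example; real LLMs learn this mapping with deep neural networks over vector representations, not frequency tables, but the training objective (predict the next token) is conceptually the same.

```python
# Toy "next word prediction": pick the most frequent follower of the previous word.
from collections import Counter, defaultdict

corpus = "a ball is a sphere . a ball can bounce . a ball is used in sports".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in the corpus."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("ball"))  # -> "is" (the most common continuation in this tiny corpus)
```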
Humans are also trained on huge amounts of input data as they grow up and learn how to behave and think about the world. When we say we understand something, we mean we have many layers of knowledge/information about what that thing is. We can hold a very deep understanding of a subject as a large model of information in our brain, but, in the end, that model is just made up of layers of data/facts that reference each other - nothing more. You can drill down into the layers of any subject by asking yourself what you know about it and why. Even with a human brain, it doesn't take long to hit a wall on how much you really know, and everyone has a different depth of understanding of any given subject.
For example, you can ask yourself, "what is a ball?" and answer -> a ball is a sphere -> some balls can bounce -> they can be used in sports -> and so on. When you do this, you are just traversing everything you can remember about balls. Current AI models do something very similar - they just lack the "depth" of knowledge the human brain has, because of the processing power and memory it takes to encode that much information in multidimensional vectors. Once these currently shallow machine learning models have the processing power to encode a deeper understanding of any subject, asking whether the computer really "understands" will be a meaningless question. Add to this the fact that people are often very wrong about what they think they understand, and I see no reason a computer couldn't "understand" something better than a human.
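A toy sketch of that kind of "drilling down" is below, under the big assumption that the layers are an explicit dictionary of facts rather than the learned vector representations an actual model uses (the graph and all facts are made up for illustration):

```python
# Toy "drill down" through layered knowledge about a concept: the
# "understanding" here is nothing but stored associations being traversed.
from collections import deque

knowledge = {
    "ball": ["a ball is a sphere", "some balls can bounce", "balls are used in sports"],
    "sphere": ["a sphere is a round 3D shape"],
    "sports": ["sports are competitive games"],
}

def drill_down(concept: str, max_depth: int = 2) -> None:
    """Breadth-first walk over the facts stored about a concept."""
    queue = deque([(concept, 0)])
    seen = {concept}
    while queue:
        node, depth = queue.popleft()
        for fact in knowledge.get(node, []):
            print("  " * depth + fact)
            if depth < max_depth:
                # Follow any concept mentioned in the fact that we also know about.
                for word in fact.split():
                    if word in knowledge and word not in seen:
                        seen.add(word)
                        queue.append((word, depth + 1))

drill_down("ball")  # prints the "ball" facts, then the facts one layer deeper
```

As with a person, the walk bottoms out quickly: once a branch reaches a concept with no further entries, that is the "wall" of how much is really known.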
D_Ethan_Bones t1_je6ggtx wrote
This sub is flooded with people who started caring about AI in the past few months and gum up the terminology. People say AGI when they mean ASI, sometimes combining this with the idea that AGI is coming any minute now.
The latter is based on a much looser definition of AGI, one nowhere near ASI, but saying "AGI 2023, Singularity 2023" gets updoots and retoots.
Then there are the people who just bust in here and say "well, that's not TRUE AI" - the first time I have seen 'true' treated as the key term is from... these people.