theotherquantumjim t1_jedspfh wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
The language and the symbols are simply the tools to learn the inherent truths. You can change the symbols but the rules beneath will be the same. Doesn’t matter if one is called “one” or “zarg” or “egg”. It still means one. With regard to LLMs, I am very interested to see how far they can extend the context windows and whether there are possibilities for long-term memory.
theotherquantumjim t1_jednh7n wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Whilst this is correct, 1+1=2 will still be true whether there is someone to observe it or not.
theotherquantumjim t1_jearyqw wrote
Reply to comment by TitusPullo4 in [R] The Debate Over Understanding in AI’s Large Language Models by currentscurrents
It absolutely is not.
theotherquantumjim t1_jearid2 wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
That is one school of thought certainly. There are plenty in academia who argue that maths is fundamental
theotherquantumjim t1_je8ysa1 wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
This is largely semantic trickery though. Using apples is just an easy way for children to learn the fundamental truth that 1+1=2. Your example doesn’t really hold up, since a pile of sand is not really a mathematical concept. What you are actually talking about is 1 billion grains of sand + 1 billion grains of sand. Put them together and you will definitely find 2 billion grains of sand. The fundamental mathematical principles hidden behind the language hold true.
theotherquantumjim t1_je8shkz wrote
Reply to comment by Andriyo in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
This is not correct at all. From a young age people learn the principles of mathematics, usually through the manipulation of physical objects. They learn numerical symbols and how these connect to real-world items, e.g. if I have 1 of anything and add 1 more to it, I have 2. Adding 1 more each time increases the symbolic value by 1 increment. That is a rule of mathematics that we learn very young and can apply in many situations.
theotherquantumjim t1_jdz4uul wrote
Reply to comment by TopicRepulsive7936 in Singularity is a hypothesis by Gortanian2
Lol what a bizarre request
theotherquantumjim t1_jdz19vm wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
I think AGI (depending on your definition) is pretty close already. As you’ve alluded to, we may never get ASI. I’m not sure that matters really. Singularity suggests a point where the tech is indistinguishable from magic, e.g. nanotech, FTL travel etc. I don’t think we need that kind of event to fundamentally re-shape society, as others have said.
theotherquantumjim t1_jdlre84 wrote
theotherquantumjim t1_jdjvcxi wrote
Reply to comment by anothererrta in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
Exactly. If it looks like a dog and barks like a dog, then we may as well call it a dog
theotherquantumjim t1_jamo41v wrote
Reply to comment by Aseyhe in Why do cosmologists say that gravity should "slow down" the expansion of the universe? by crazunggoy47
A recent study suggests otherwise, doesn’t it? Yet to be confirmed independently I guess, but hasn’t it very recently been posited (maybe also evidenced) that black holes are driving expansion by returning energy to the quantum vacuum? Does this not mean expansion would indeed be physical?
theotherquantumjim t1_j6z5twu wrote
Reply to comment by dasnihil in ChatGPT Passes US Medical Licensing Exams Without Cramming by RareGur3157
Such a good bot
theotherquantumjim t1_j65196k wrote
Reply to comment by gay_manta_ray in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
Unless you’re just a plain ol’ psychopath. In which case being cruel is just for funs
theotherquantumjim t1_j64yh4e wrote
Reply to comment by helliun in MusicLM: Generating Music From Text (Google Research) by nick7566
Not always true either.
theotherquantumjim t1_j64h4ws wrote
Reply to comment by helliun in MusicLM: Generating Music From Text (Google Research) by nick7566
That hasn’t really been the case for some time now
theotherquantumjim t1_j1hbwsd wrote
Reply to comment by fortunum in Hype bubble by fortunum
Is it not reasonable to posit that AGI doesn’t need consciousness though? Notwithstanding that we aren’t yet clear exactly what consciousness is, there doesn’t seem to be a logical requirement for AGI to have it. Having said that, I would agree that a “language mimic” is probably very far from AGI, and that some kind of LTM, as well as multi-modal sensory input, cross-referencing and feedback, is probably a pre-requisite.
theotherquantumjim t1_jegi1u7 wrote
Reply to comment by dennyCranne72 in ELI5 why does stretching feel good? by dennyCranne72
Mmm donuts