naum547 t1_je7ml6j wrote
Reply to comment by WarmSignificance1 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
LLMs are trained exclusively on text, so they excel at language: they have an excellent model of human languages and know how to use them. What they lack is, for example, a model of the Earth, so they fail at using latitude and longitude. The same goes for math: the only reason they "know" 2 + 2 = 4 is that they read "2 + 2 = 4" enough times; they have no concept of it. If they were trained on something like 3D objects, they would understand that 2 things plus 2 things make 4 things. A toy sketch of this idea is below.
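To make that concrete, here is a minimal sketch in Python: a toy next-token counter (nothing like a real transformer, just an illustration) that "answers" 2 + 2 = 4 purely by pattern recall from its training text, with no concept of addition:

```python
from collections import Counter, defaultdict

# Toy "language model": it learns arithmetic only by counting which
# token followed each 3-token context in the training text.
corpus = "2 + 2 = 4 . 2 + 2 = 4 . 1 + 1 = 2 . 2 + 2 = 4 ."
tokens = corpus.split()

follows = defaultdict(Counter)
for i in range(len(tokens) - 3):
    follows[tuple(tokens[i:i + 3])][tokens[i + 3]] += 1

def complete(prompt: str) -> str:
    """Predict the next token from the last 3 tokens of the prompt."""
    seen = follows.get(tuple(prompt.split()[-3:]))
    return seen.most_common(1)[0][0] if seen else "<no idea>"

print(complete("2 + 2 ="))  # -> 4           (memorized pattern)
print(complete("3 + 3 ="))  # -> <no idea>   (never seen it, can't generalize)
```

A real LLM generalizes far better than this toy, but the training objective is still next-token prediction over text, which is the point of the comment.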
naum547 t1_je67zvc wrote
Reply to The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
I think you are underestimating how much of a difference intelligence makes. For example, how much "higher" is our intelligence than a monkey's? Probably less than 2x; maybe we are even only something like 50% more intelligent. So an AI 5x more intelligent than us would be completely incomprehensible to us, let alone a 100x AI.
naum547 t1_je66s75 wrote
It probably won't, and hopefully, given enough time, jobs will become obsolete, but the period in between could be problematic.
naum547 t1_je7ttzr wrote
Reply to comment by [deleted] in Do you guys think AGI will cure mental disorders? by Ok-Wing111
Let's say, hypothetically, that nanobots in your bloodstream could cure every disease and illness before you even knew you had it, and significantly extend your lifespan. Would you still refuse them?