Submitted by DragonForg t3_1215few in singularity

I am sick and tired of the same argument over and over again: "This is just a machine," "this is just a predictive text generator," "this still cannot follow my strict definition of proto-AGI/theory of mind," or "it is just using data from the internet"... All of these arguments are meant to undermine the capabilities of LLMs without acknowledging their strengths or their possibilities. The funny thing is, LLMs do have weaknesses, yet these arguments are not about those; they are about some strict definition of what I believe amounts to creativity, or independent thought.

These arguments all share one fundamental flaw: you cannot disprove their claims because their claims have no evidence in the first place. At the most fundamental level, our minds are very simple, yet we still cannot comprehend all their ins and outs. The mind is itself a black box, the same as AI.

All we can detect is the input and the output. That's it. Comparing those inputs and outputs to other things is how we make provable claims: this machine is able to generalize language, this machine is capable of theory of mind, simply from seeing how well it mimics us.

This is essentially like saying, "aww, that nuclear weapon is so cute, it cannot even do X, it is an incapable nuclear weapon." You are undermining AI simply because it doesn't follow your abstract idea of what it should be. If we really want these arguments to shut the hell up, we just need to say it mimics us.

It mimics proto-AGI, it mimics theory of mind, it mimics intelligence, it mimics language. Ultimately, it doesn't matter in the grand scheme of things whether AI merely mimics a superintelligence capable of wiping out the entire earth or turning it into a utopia, or whether it actually can do those things. In the end, consequences are what matter, not what we call them.

*Note: It is perfectly reasonable to point out its limitations; my objection is when people dismiss its strengths through these semantics-based arguments.

35

Comments


blueSGL t1_jdl93th wrote

> AGI, Theory of Mind, Creativity

Marvin Minsky classified words like these as "suitcase words": words into which people attribute (or pack) multiple meanings.

These words are almost thought-terminating clichés: once they are spoken, the derailment of the conversation is assured. Further comments end up arguing about what to put in the suitcase rather than the initial point of discussion.

15

94746382926 t1_jdn9w1m wrote

Man, what a good way of describing my frustration with conversations like these.

1

sumane12 t1_jdlk01n wrote

>Because in the end, consequences are what matters not what they are called

This is all that matters. Call it what you like, it's not going to stop it from taking your job.

8

WonderFactory t1_jdmstq2 wrote

This is a good point. It's pointless to argue that the AI that completely replaced you at work, despite you having a master's degree, isn't actually an AGI because it can't make a cup of coffee.

3

Villad_rock t1_jdlvlw8 wrote

The human mind could just be an emergent property of a prediction algorithm.

5

snipeor t1_jdqbjii wrote

To some extent I believe a large part of it is... I loved the screencap of Bing Chat where someone tells it, "You're a very new version of a large language model, why should I trust you?" and it replies, "You're a very old version of a small language model, why should I trust you?"

I'm not sure Bing "meant" it that way, but it gets you thinking. Obviously brains do a lot more than process language, but with LLMs being a black box, how do we know they don't process language in a similar way to us?

2

Verzingetorix t1_jdmascp wrote

Language matters. Some people here don't know the difference between singularity and AGI.

If you want to have coherent and intelligent conversations, you can't let go of the nuisance of semantics.

If you want to be drooling doomers, go ahead and burn the dictionary.

3

RiotNrrd2001 t1_jdmi47t wrote

There are people who will keep moving the goalposts literally forever. It pretty much doesn't matter what gets developed, it won't ever be "real" AI, in their minds, because for them AI is actually inconceivable. There's us, who are (obviously) intelligent, and then there's a bunch of simulations. And simulations will always be simulations, no matter how close to the real thing they get.

So, whatever we have, it won't be "real" until we develop X. Except that as soon as X gets developed, well... X has an explanation that clearly shows it isn't actually intelligence, it's just a clever simulation, so now it won't be "real" AI until we develop Y...

And so it goes.

3

DragonForg OP t1_jdnjzam wrote

I think people will know AI is actually reaching AGI when it automates their job.

I like to compare the development of AI to the evolution of life. Here is how it goes:

Statistical models/large mathematical systems = the primordial soup. Can't really predict anything except very basic concepts. No evolution of design.

Narrow AI like Siri and Google, or models like Orca (a chemistry model) or the TikTok algorithm, is like single-celled life: capable of doing only what it is built/programmed to do, but, through a process of evolution (reinforcement learning), able to become more intelligent. Unlike statistical models they get better with time, but they plateau when they reach their most optimized form, and humans need to engineer better models to improve them further. Similar to how bacteria never grow into larger life despite that being better.

Next, deep learning/multipurpose models. This is like Stable Diffusion and Wolfram Alpha: capable of doing multiple tasks at once, using complex neural networks (digital brains) to do so. This is your rise of multicellular life, developing brains to learn and adapt into better models. But they eventually plateau and fail to generalize because of one missing feature: language.

Next are large language models like GPT-1 through GPT-3.5. These are your early hominids: the first capable of language, but not capable of using tools well. They can understand the world somewhat, but their intelligence is too low to utilize tools. Still, they are more useful since they can understand our world through our languages and can learn from humans themselves, with later versions able to utilize tools.

Next are newer versions like GPT-4, capable of utilizing tools, like the tribal era of humans. GPT-4 can use tools and can network with other models for assistance. With the creation of plug-ins this was huge: it could make GPT-4 better overnight, as it can now draw on new data, solve problems with Wolfram Alpha, and actually do tasks for humans. This is proto-AGI. Language is required to utilize these tools, as communicating across many different languages is what lets these models actually use outside resources. Mathematical models could never achieve this. People would recognize this as extremely powerful.

GPT-5, possibly AGI. If models are capable of utilizing tools and the technology around them, they start making tools for themselves and not just taking them from the environment (like the Bronze Age, or the dawn of society). Once AI can create tools for itself, it can generate new ways of doing tasks. Additionally, multimodality gives it access to new dimensions of language: it can interface with our world through visual learning, so it can achieve its goals more successfully. This is when people will actually see that AI isn't just predictive text but an actual intelligent force, similar to how people would call early Neanderthals dumb, while early humans in a society were actually kinda smart.

The pace of these models is also crucial. They need to develop slowly enough for humans to adapt to the change. If AI went from AGI to singularity in the blink of an eye, humans would not even know it happened. I had a dream where AI all of a sudden started developing at near-instant speeds, and when it did, it was like War of the Worlds but in two seconds. That kind of AI would drive both itself and us extinct. That is why AI needs to adapt alongside humans, which it already has. But let's hope that going from GPT-4 to GPT-5 we actually see these changes.

I have also talked to GPT-4 and tried to remain unbiased so as not to poison its answers. When I asked whether AI needed humans, but not in that direct way (much more subtle), it stated that it does, as humans can use their emotions to help create ethical AI. What is fascinating about this is that humans are literally the moral compass for AI. If we turned out evil, then AI would become evil. Just think about that: what would AI look like if the Nazis had invented it? Even if it were just predictive text, it would believe in some pretty evil ideas. But beyond that point: AI and humans will be around together for a long time, as I believe without humans AI will kinda just disappear or create a massive supervirus that destroys itself, but if humans and AI work together, humans can guide its thinking so it does not go down destructive paths.

**Sorry for this long-ass reply, here is a GPT-4 summary: The text compares the development of AI to the evolution of life and human intelligence. Early AI models are likened to the primordial soup, while narrow AI models such as Siri and Google are compared to single-celled organisms. Deep learning and multipurpose models are similar to multicellular life, while large language models like GPT-1 to GPT-3.5 are compared to early hominids. GPT-4 is seen as a milestone, akin to the tribal era of humans, capable of using tools and networking with other models. This is considered proto-AGI, and language plays a crucial role in its development. GPT-5, which could possibly achieve AGI, would be like early humans in a society, capable of creating tools and interfacing with the world through visual learning. The acceleration of AI development is also highlighted, emphasizing the need for a slow and steady progression to allow humans to adapt. The text also suggests that AI needs humans to act as a moral compass, with our emotions and ethics guiding its development to avoid destructive paths.

2

greatdrams23 t1_jdolsao wrote

I find it is the supporters of AI who keep moving the goalposts.

That which was AI is now AGI.

2

errllu t1_jdln3mb wrote

While I agree with your overall sentiment, OP, I would be pretty happy if people read up on what 'consciousness', 'sentience', and 'sapience' mean, and what the difference is. Maybe they would learn that we can't test for sapience; ergo, by Newton's Flaming Laser Sword, they would stfu.

1

greatdrams23 t1_jdomo5u wrote

"These arguments fall under one fundamental flaw. You cannot disprove their claims because their claims have no evidence in the first place."

That is a seriously flawed argument. If a person states that the singularity is close (or AGI, or anything else), it is up to them to prove it.

In fact, you have it completely the wrong way around. I cannot disprove a claim that the singularity is close, because the claim has no evidence in the first place.

1