InevitableAd5222 t1_ja1d1i7 wrote
Reply to comment by guyonahorse in Why the development of artificial general intelligence could be the most dangerous new arms race since nuclear weapons by jamesj
So much of the confusion in this debate comes down to philosophical terminology. Like "general" intelligence. What would we consider "general intelligence"? Symbolic reasoning? BTW we don't need right/wrong answers in the form of labeled datasets to train an AI. ChatGPT's base model doesn't even use that; it's self-supervised. For more generic "intelligence", look into self-supervised learning in RL environments. ML models can also be trained by "survival of the fittest": genetic/evolutionary algorithms are being researched as an alternative to the SOTA gradient-based methods.
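The evolutionary idea is easy to sketch (the fitness function below is a toy stand-in, not any real training task): mutate a population, keep the fittest, repeat — no gradients anywhere.

```python
import random

# Toy genetic algorithm: "survival of the fittest" instead of gradients.
# The fitness function is invented for illustration.
def fitness(genome):
    # reward genomes close to an arbitrary target vector
    target = [0.5] * len(genome)
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=50, genome_len=8, generations=100, mut_std=0.1):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the top half
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # variation: refill with mutated copies of survivors
        children = [[g + random.gauss(0, mut_std)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

No labels and no loss gradient — the only training signal is the relative fitness ranking within each generation.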
guyonahorse t1_ja1ftau wrote
Well, ChatGPT's training is pretty simple. It's trained on how accurately it can predict the next words in a training document. It's trained to imitate the text it was trained on. The data is all treated as "correct", which amusingly leads to bad traits, since it imitates the bad things too. Also amusing is the qualia question of the AI seemingly having emotions. Is it saying the text because it's angry, or because it's just trained to imitate angry text in a similar context?
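That self-supervised setup is easy to sketch: the "label" at each position is just the next token of the text itself. Here's a toy character-level frequency model (nothing like ChatGPT's architecture — only the training signal is analogous):

```python
from collections import Counter, defaultdict

# Self-supervised next-token prediction in miniature: the training
# "labels" are just the next character of the corpus itself.
corpus = "the cat sat on the mat. the cat sat."

counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1  # observe: after `current` comes `nxt`

def predict_next(char):
    # most frequently observed follower: pure imitation, no understanding
    return counts[char].most_common(1)[0][0]
```

Every prediction is imitation of the corpus statistics — which is exactly why a model trained this way reproduces whatever "bad traits" are present in its data.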
But yeah, general intelligence is super vague. I don't think we want an AI that would have the capability to get angry or depressed, but these are things that evolved naturally in animals as they benefit survival. Pretty much all dystopian AI movies are based on the AI thinking that to survive it has to kill all humans...
Monnok t1_ja28d43 wrote
There is a pretty widely accepted and specific definition for general AI... but I don't like it. It's basically a list of simple things the human brain can do that computers didn't happen to be able to do yet in like 1987. I think it's a mostly unhelpful definition.
I think "General Artificial Intelligence" really does conjure some vaguely shared cultural understanding laced with a tinge of fear for most people... but that the official definition misses the heart of the matter.
Instead, I always used to want to define General AI as a program that:

- Exists primarily to author other programs, and
- Actively alters its own programming to become better at authoring other programs
I always thought this captured the heart of the runaway-train fear that we all sorta share... without a program having to necessarily already be a runaway-train to qualify.
ChuckFarkley t1_ja3zom1 wrote
By some definitions, your description of GAI also qualifies as being spiritual, especially the part about maintaining and improving its own code.