
phriot t1_ivz7mt9 wrote

Maybe I'm wrong, but I've always understood AGI to be "a roughly human-level machine intelligence." How can something be roughly human without consciousness and at least the appearance of free will?

0

kaushik_11226 t1_ivzik2s wrote

>How can something be roughly human without consciousness and at least the appearance of free will?

It doesn't have to be human. An intelligent machine that can solve major problems and make discoveries doesn't really need to have a human personality or emotions.

10

phriot t1_ivzmj0u wrote

I feel like you focused on me leaving "level" out of that sentence, even though I included it earlier in my comment. You're basically just saying that your definition of AGI is more literal than the one I use. The point of my comment was just that, up until maybe finding this subreddit, every time I saw AGI used, it had the connotation of consciousness.

It's probably splitting hairs, but it seems like people here just want to call any sufficiently good general piece of software "AGI." Yes, a really great General Artificial Intelligence will help us in many areas, but it's not what I've always understood "AGI" to be.

2

AdditionalPizza OP t1_ivzza75 wrote

The definition of AGI is an AI that can learn any task a human can. Most people presume that would also mean the AI has to be equal to or better than a human at those tasks.

I don't know where the idea came from that AGI has to be conscious. As far as I'm aware, that's never been the definition. It's a talking point often associated with AGI and mentioned in discussions of Turing Tests, but contrary to your experience, I've never heard anyone claim it's a requirement of AGI outside of this sub.

I also see other mixed up definitions in this sub. A lot of people refer to the singularity as the years (or decades) leading up to the actual moment of the singularity.

7