
LoquaciousAntipodean t1_j3it47o wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

Wow, such hypochondriac doomerism; I think you need to chill out a little bit. If people really were such automatic psychopaths, we never would have survived as a species for as long as we have. This is trivial nonsense compared to stuff like the Cuban Missile Crisis; calm your farm, mate.

1

turnip_burrito t1_j3itd02 wrote

I'm not a doomer, m8. I'm pretty optimistic about AI as long as it's not done stupidly. AGI given to individuals empowers the individual to an absurd degree never seen before in history, except perhaps with nukes. And now everyone can have one.

The Cuban Missile Crisis had a limited number of actors with real power. What would have happened if the entire population had nukes?

1

AndromedaAnimated t1_j3j0tam wrote

This is a typical "appeal to fear" fallacy.

1

turnip_burrito t1_j3j2bck wrote

So do you suggest we give everyone a personal AGI and just wait and see what happens? What makes that more desirable?

3

AndromedaAnimated t1_j3j36ab wrote

Yes. I suggest either that, or that we allow AGI to learn ethics from all the information available to humanity plus reasoning.

1

turnip_burrito t1_j3j4a0z wrote

I do advocate for the second option:

> we allow AGI to learn ethics from all the information available to humanity plus reasoning.

Which is part of the process I'd want an AI to use to learn the correct morals. But I don't think an AI can learn what I would call "good" morals from nothing. It seems to me it will need to be "seeded" with a set of basic preferences or behaviors (like empathy, a tendency to mimic role models, or other inclinations) before it can develop morals or a more advanced code of ethics. In truth, these seeds would be totally arbitrary and up to the developers/owners.
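As a rough illustration of what "seeding" could mean in practice (a toy sketch only; the weight names, numbers, and update rule here are invented stand-ins, not any real system):

```python
from dataclasses import dataclass, field

@dataclass
class MoralLearner:
    """Toy model: innate 'seed' preferences bias what the agent
    later learns from feedback. All names and numbers are illustrative."""
    # Seed preferences chosen by the developers, before any learning happens.
    seed: dict = field(default_factory=lambda: {
        "empathy": 0.8,        # weight on others' modelled wellbeing
        "mimicry": 0.6,        # tendency to copy trusted role models
        "self_interest": 0.2,
    })
    learned: dict = field(default_factory=dict)
    lr: float = 0.1            # learning rate

    def score(self, action_features: dict) -> float:
        # Learned weights override seeds once present; seeds fill the gaps.
        weights = {**self.seed, **self.learned}
        return sum(weights.get(k, 0.0) * v for k, v in action_features.items())

    def update(self, action_features: dict, feedback: float) -> None:
        # Nudge each relevant weight toward the feedback signal.
        for k, v in action_features.items():
            current = self.learned.get(k, self.seed.get(k, 0.0))
            self.learned[k] = current + self.lr * feedback * v
```

The update rule can be as principled as you like; the initial seed dict is exactly the arbitrary, developer-chosen part.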

I don't think I would want an AI that lacks empathy or is a control freak, so developing these seeds in-house before releasing access to the public seems to me to be the best option. While it's being developed, it can still learn from the recorded media we have, and in real time in controlled settings.

3

LoquaciousAntipodean t1_j3j50xn wrote

There is no such thing as "general intelligence"! Intelligence does not work that way! All these minds will need to be specialised, with particular expertise useful to their particular human companions. They will need to network and consult with one another, and with human experts too, to reach consensus on any important issues, because the most important 'moral' to hard-code into these things is the certainty that they are not perfect, and never will be.
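For what it's worth, that consultation idea can be sketched as a simple protocol (hypothetical names and thresholds, just to show the "never trust one mind alone" rule):

```python
def consult(specialists, question, quorum=0.75, min_conf=0.6):
    """Toy consensus protocol: no single model's answer is acted on.
    Each specialist returns an (answer, confidence) pair; weak or
    split answers escalate to humans. Thresholds are illustrative."""
    answers = [s.answer(question) for s in specialists]
    # Hard-coded humility: drop answers the model itself doubts.
    confident = [(a, c) for a, c in answers if c >= min_conf]
    if not confident:
        return ("escalate_to_humans", 0.0)

    # Pick the answer with the most confidence-weighted support.
    def support(ans):
        return sum(c for a, c in confident if a == ans)

    top = max({a for a, _ in confident}, key=support)
    share = support(top) / sum(c for _, c in confident)
    # No strong consensus means it's still a human's call.
    return (top, share) if share >= quorum else ("escalate_to_humans", share)
```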

Any attempt to hard-code our fallible human moral theories into it could be disastrous; imagine if they had been confronting this problem in 1830, and they'd decided to hard-code slavery and racial segregation into their "AGI" golden goose? What kind of world would we be stuck with now?

1

turnip_burrito t1_j3j5mov wrote

When most people say general intelligence (for AGI), they mean human-level cognitive ability across the domains humans have access to. At least, that was the sense in which I used it. So I'm curious why this cannot exist, unless you have a different definition of AGI, like "able to solve every possible problem", in which case humans wouldn't qualify either.

2

LoquaciousAntipodean t1_j3j8x5u wrote

Yes, exactly: humans do not have "general intelligence", and we never have had it. Binet, the original pioneer of IQ testing in schools, knew this very well, and I'm sure he would regard this Mensa-style interpretation of IQ as a horrifying travesty.

Striving to create this mythical, monotheistic-God, Cartesian-tautology style of 'Great Mind' is an engineering dead end, as I see it, because we're effectively hunting for a unicorn. It's not 'I think, therefore I am'; I think Ubuntu philosophy has it right with the alternative version: 'we think, therefore we are'.

1

turnip_burrito t1_j3j9uvq wrote

What's your opinion on the ability to create AI with human competence across all typical human tasks? Is this possible or likely?

1

LoquaciousAntipodean t1_j3kesq3 wrote

I think possible, trending toward likely? It depends, I think, on how 'schizophrenic' and 'multiple-personality-inclined' human companions want their bots to be; I imagine that, much like humans, we will need AI specialists and generalists, and they will have to refer to one another's expertise when they find something they are uncertain about.

The older a bot becomes, the 'wiser' it would get, so old, veteran, reliable evolved-LLM bots would soon stand in very high regard amongst their 'peers' in this hypothetical future world. I would hope that these bots' knowledge and decision making would be significantly higher quality than an average human's, but I don't think we will be able to trust any given 'individual' AI with 'competence across all human tasks', not until it had been learning for at least a decade or so.

Perhaps after acquiring a large enough sample base of 'real world' learning, we might be able to say that the very oldest and most developed AI personalities could be considered reliable, trustworthy 'generalists'. Humble and friendly information deities that you can pray to and actually get good answers back from; that's the kind of thing I hope might happen eventually.
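The "standing amongst peers" part is the easy bit to imagine mechanically; something as simple as a verified track record would do it (toy sketch, invented names):

```python
class TrackRecord:
    """Toy reliability ledger: a bot's standing among its 'peers'
    grows only as its answers are verified over time. Illustrative only."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def record(self, was_correct: bool) -> None:
        self.total += 1
        self.correct += int(was_correct)

    @property
    def trust(self) -> float:
        # Laplace smoothing: a brand-new bot starts near 0.5, not 1.0,
        # so a 'young' bot can't immediately outrank a decade-old veteran.
        return (self.correct + 1) / (self.total + 2)
```

A decade of verified answers is what pushes trust toward 1; there's no shortcut for a new 'individual'.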

1