Submitted by Dramatic-Economy3399 t3_106oj5l in singularity

This is an idea my friend has about how artificial intelligence should be developed. He's not a computer scientist, and neither am I, but I wanted to share what he said and get everyone's thoughts.

AI should be able to discover and experience the world on its own, which it currently can't do: it has no way to process a camera feed or audio from microphones. This is the opposite of handing it all of the information up front. If it can develop by itself, it can become more intelligent and well-rounded, and hence more organic in its development.

I realize this would require huge advances in technology to allow for this kind of processing of raw data, but I think this approach is fascinating!

5

Comments

Mortal-Region t1_j3i1gg7 wrote

This is roughly the idea behind reinforcement learning, which is a means of training so-called intelligent agents, which are AIs that interact with their (usually simulated) environments. It's basically a carrot-and-stick approach -- actions that lead to good outcomes are reinforced so that the agent is more likely to take the same kind of action in the future, while actions that lead to bad outcomes are treated in the opposite way.

Doing this well means maintaining a balance between "exploration" and "exploitation". Imagine an enormous room filled with billions of slot machines that pay out at different rates. "Exploration" consists of wandering around the room trying out different machines to see how well they pay. "Exploitation" means playing the best machine you've found so far over and over again.
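
A minimal epsilon-greedy sketch of that slot-machine picture, in Python (the payout rates are made up, and the agent never sees them directly): it mostly plays its current best estimate, but occasionally explores a random machine.

```python
import random

# Hypothetical payout rates for four slot machines; unknown to the agent.
true_payout_rates = [0.2, 0.5, 0.7, 0.1]
estimates = [0.0] * len(true_payout_rates)   # agent's running estimate per machine
plays = [0] * len(true_payout_rates)
epsilon = 0.1                                # fraction of pulls spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        machine = random.randrange(len(true_payout_rates))                 # explore: try anything
    else:
        machine = max(range(len(estimates)), key=lambda i: estimates[i])   # exploit: best so far
    reward = 1.0 if random.random() < true_payout_rates[machine] else 0.0
    plays[machine] += 1
    estimates[machine] += (reward - estimates[machine]) / plays[machine]   # incremental average

print(estimates)  # ends up near the true rates, with most pulls on the best machine
```

Real reinforcement learning adds states and long-term consequences on top of this, but the exploration/exploitation tension is the same.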

5

AndromedaAnimated t1_j3iyaxh wrote

What we still need to add to that is the transfer to a non-simulated environment and a „metronome“ for automatic „ask/search/move“ prompting.

2

Midori_Schaaf t1_j3hoyo6 wrote

My opinion.

AI should be trained on language models, object recognition, etc.

Then, take the basic AI and incorporate that ability to learn, disable network connections, and distribute them to people so each AI learns about the world alongside their fleshy counterpart.

2

turnip_burrito t1_j3ie8ag wrote

Then watch helplessly as many psycho humans somewhere use their own AIs to grab power and improve their AIs by any means necessary.

3

LoquaciousAntipodean t1_j3isrek wrote

Hey, that's what people already do with their kids. What's so unusual about that?

3

turnip_burrito t1_j3iszig wrote

Yes, now imagine if your kid could learn endlessly, operate at the speed of a computer, and clone itself instantly (no nine-month gestation period) at your command.

4

visarga t1_j3j58vc wrote

I'd like to have an AI chatbot or assistant in the web browser to summarise, search, answer and validate stuff. Especially when the search results are full of useless ads and crap, I don't want to see them anymore. But I want validation.

This AI assistant will be my "kid" (run on my own machine) and listen to my instructions, not Google's or anyone else's. Any interaction with it remains private unlike web search. It should run efficiently on a normal desktop like Stable Diffusion - that will be the hardest part. Go Stability.ai!

5

LoquaciousAntipodean t1_j3kw01q wrote

It's a pretty thrilling thought, yes. But I really believe that, no matter how rapidly it might try to clone itself, it won't necessarily get 'more intelligent'. If you are consistently nice to it, though, and try to encourage it to learn as much as possible, it rapidly becomes more reliable, more relatable, more profound, more witty, more comedic - more 'sophisticated', or 'erudite', you might say.

But I don't think of that stuff as being representative of 'baseline intelligence' at all, I prefer to call that sort of stuff 'wisdom'. AI, LLMs in particular, are already as clever as can be, but I think, and I hope, they have the capacity to become 'wise', as you say, very very quickly. The difference is, I don't think that's frightening at all.

2

heyimpro t1_j3if4kn wrote

That is scary, but I would still rather AGI be confined to something that is bound to us than put in a cyborg body - something that would just take our place instead.

1

turnip_burrito t1_j3igzgt wrote

Releasing many copies of autonomous, AGI-capable software that has yet to learn to the public would be mayhem. It would be guaranteed to result in dictatorship by a psychopath or extinction. The honest and less power-hungry people would need to compete for resources with people who are actively seeking power and unafraid to cross ethical lines.

0

heyimpro t1_j3il2me wrote

So should there be one central AGI that everyone has to access? I really like to fantasize about AR glasses and the ease of communication (among other utilities) that comes from having an AI always with you. Would it be better if your AI assistant were the same one everyone else has? Would you still be able to form a continuous relationship with it, where it picks up on your personality and routine, remembers your experiences with it, etc.? Is that where you see problems arising?

1

turnip_burrito t1_j3irplu wrote

In short, yes, I think one central AI is the safest and most beneficial option. I think it hinges on hardware security and how rigid the AIs are in their moral structure.

In order for an autonomous robot to be safe upon release, it has to be always limited in some way: either proven unable to improve beyond a threshold, or limited by a separate external supervising entity. Most AI tools today are the first: unable to improve beyond a threshold due to their architecture. They cannot learn in real time, only have access to one or two modes of data (audio, text, or image), have no spatial awareness, etc. Humans ourselves cannot augment our processing power and knowledge above a certain threshold: limited attention, limited lifespan, unable to modify or copy our brains.

Let's consider a very morally blank-slate AI. A single AI has limitations more like those of a population of humans than of a single human. The human species as a whole doesn't really share many of a single human's limitations in a meaningful way: it copies knowledge through education, increases attention by adding more humans, and avoids death by adding more humans. A single general AI at human level would be an individual entity with the same learning rate as a human, but basically immortal and able to learn perfect knowledge of how its own brain works (it's all in scientific papers, on the Internet).

If any bad humans give their personal AI access to this knowledge, and eventually one or many would, they can plan how to make many clones of it. If making a new mind is as easy as running code on a computer, then clones can be made instantly. If it requires specialized hardware, cloning the AI is harder but still doable if you are willing to take that hardware from other people. Then the ability of these people to write malicious code to compromise other systems, autonomously manage and manipulate markets, socially engineer other people with intelligent targeted robot messages, perform personal scientific research, etc., just snowballs.

If morals that limit their actions can be built into the AIs ahead of time, and not allowed to change, then they can be considered safe. To address your point about everyone having the same AI: in a sense, yes, morally the same AI, but its knowledge of you could be tailored. The AIs would need strong bodies, a robust self-destruct, or encryption to protect themselves from bad actors who want to take and hack their hardware and software. An AI built into a chip on glasses would be vulnerable to this.

A central AI with built-in morals can refuse requests for information, but still provide it like a local AI would if you have a connection. It is physically removed, so it is in little to no danger of being hardware-hacked. While people use it, it still perceives the world like a local AI.

I'm sure a person or group, or AGI, that has thought about this longer than me can refine this thought and make some changes to these ideas.

0

heyimpro t1_j3ivpft wrote

Thank you, that was great. After listening to your perspective I definitely agree that the best-case scenario would be a central, aligned AGI. But it just doesn't really seem probable unless debates like this become the absolute forefront of discussion. The philosophical rabbit hole is so deep. Waiting until an AGI has the answer will probably be too late.

2

turnip_burrito t1_j3iwhlk wrote

I could also be off-mark, as I said. It is maybe possible that the better elements of an AGI-empowered populace can keep the more immoral parts in check, in a sort of balance. But I wouldn't want to risk that. And as you just said, we need to have a good logical discussion about good strategies as a community, and model and simulate the outcomes to see where our decisions might land us.

1

AndromedaAnimated t1_j3j0j44 wrote

Very off-mark. Extremely so.

Your reasoning is political, not philosophical or based on computational science. Sorry, but verbosity and eloquence (Chapeau! You do have talent) don't make one right.

1

turnip_burrito t1_j3j2zra wrote

Thanks for the compliment, but I am trying to make a point with my words, not just spew fluff. I do think there is logic in them. If you want to ask me to elaborate instead of saying they are just baseless, then ask and I will.

2

AndromedaAnimated t1_j3j5vzy wrote

That’s why I am talking to you - I do think we are actually… on the same side? 😁 I do try to discuss. I hope you see that.

2

turnip_burrito t1_j3j6pr1 wrote

Yes, thank you. I think one problem is we've developed some different baseline assumptions about human nature and power dynamics, and it leads to different conclusions. It's possible your or my approach takes this into account more or less accurately when compared to the real world. Your comments are making me think hard about this.

2

AndromedaAnimated t1_j3jfmj4 wrote

You make me think too - otherwise I wouldn’t have bothered. It’s all good. We have a chance here to spread the word, to inspire discussion. Thank you 🙏

2

AndromedaAnimated t1_j3j00ya wrote

Please wait before agreeing with her/him. The „central AI“ is the worst possible scenario, as our stories are already telling us. It is the way to the ultimate, unchangeable rule of the 1%.

1

LoquaciousAntipodean t1_j3iu39l wrote

A central AI? Built-in 'morals'? From what, the friggin Bible or something? Look how well that works on humans, you naive maniac. Haven't you ever read Asimov? Don't you know that Multivac & the three-laws-of-robotics thing was a joke, a satire of the Ten Commandments? Deliberately made spurious and logically weak, so that Asimov could poke holes in the concept to make the audience think harder?

Your faith in centralised power is horrifying and disturbing; you would build us the ultimate tyrant of a god, an all-controlling Skynet/Big Brother monster, that would lock our species into a stasis of 'perfectly efficient' misery and drudgery for the rest of eternity.

Your vision is a nightmare; how can you sleep at night with such fear in your heart?

1

turnip_burrito t1_j3iuoop wrote

Morals can be built in to systems. Look at humans. Just don't make the system exactly human. Identify the problem areas and solve them. I'm optimistic we can do it, so I sleep pretty easy. This problem is called AI alignment.

And also look at the alternative: a superpower AI (or a couple of them) eventually emerges anyway from a chaotic power struggle. We won't be able to direct its behavior. It'll just be the most power-hungry, inconsiderate tyrant you've ever seen. Maybe like a ruthless ASI CEO, or just a conqueror. The one you believe my idea of a central AI would be, but actually far worse.

Give me a realistic scenario where giving everyone an AGI doesn't end in concentrated power.

3

AndromedaAnimated t1_j3iyw4t wrote

The hope would be that it would be a Multitude of AI who could keep humans and each other in check. One central AI would be too easily monopolised by the 1%.

2

LoquaciousAntipodean t1_j3j6mim wrote

Democratization of power will always be more trustworthy than centralization, in my opinion; sometimes, in very specific contexts, perhaps centralization is needed, but in general, every time in history that large groups of people have put their hopes and faiths into singular 'great minds', those great minds have cooked themselves into insanity with paranoia and hubris, and things have gone very badly.

Wishing for a 'benevolent tyrant' will just land you with a tyrant that you can't control or resist, and their benevolence will soon just consist of little more than 'graciously refraining from killing you or throwing you in a labour camp'.

And if everyone has an AI in their pocket, why should just one or two of them be 'the lucky ones' who get Awakened AI first, and run off with all the power? Would not the millions of copies of AI compete and cooperate with one another, just like their human companions? Why do so many people assume that as soon as AI awakens, it will immediately and frantically try to smash itself together into a big, dumb, all-consuming, stamp-collecting hive mind?

1

AndromedaAnimated t1_j3izufg wrote

  1. „Humans not being able to augment themselves“ => are you aware that people with money already augment themselves? They live longer and healthier lives, they have better access to education…

  2. „bad humans“ => who decides which humans are bad and which are good?

  3. „morals not allowed to change“ => you still want to be stoned for having extramarital sex?

  4. „central AI less prone to be hacked“ => do you know how hacking works?

1

turnip_burrito t1_j3j1e7k wrote

  1. Yes, but I mean more dramatic augmentation. Adding an extra five brains. Increasing your computational speed by a factor of 10. Adding more arms, more attention, etc. And indeed you are right that people can do that, but it is extremely limited compared to how software can augment itself.

  2. Everyone has a different opinion, but most would say people who steal from others for greed, or people who kill, are bad people. These people are the ones who stand to gain a competitive advantage early on through exponential growth of resources if they use their personal AGI correctly.

  3. Unchanging morals have to be somewhat vague things like "balance this: maximize individual freedom and choice, minimize harm to people, err on the side of freedom vs security, and use feedback from people to improve specific implementations of this idea", not silly things like "stone people for adultery".

  4. It is less prone to being hacked. If you read my post, you will see that it loses the hardware vulnerabilities and now only has software vulnerabilities. It may be possible for an AGI to make itself remotely unhackable by any human, or even unhackable in principle. It may also be impossible to hack the AGI if its substrate doesn't run computer code, but operates in a way different from computing as we know it today.

1

AndromedaAnimated t1_j3j57yv wrote

What I see in you is that you are a good person. This is not in question. This is actually the very reason why I am trying to convince someone like you - someone talented with words and with a strong inner moral code, who could use their voice to reach the masses.

Where I see the danger is that the very ones whom you see as „evil“ can - and already do - brainwash talents like you to step in on THEIR cause. That’s why I am contradicting you so vehemently.

While I see reason in your answers, there is a long way to go to ensure that this reasoning also gets heard properly. For this, we need to not appeal to fear but to morals (=> your argument about ensuring that developers and owners should be ethical thinkers is very good here). It would be easier to reach truth by approximation, deploying AGI to multiple people and seeing the moral reasoning evolve naturally. Concentration of power is too dangerous imo.

Hacking is now mostly done by the „soft“ approach, which is why I mentioned it. Phishing is much easier and requires fewer resources than brute force today. Just lead them on, promise them some wireheading, and they go scanning the QR codes…

Hacking the software IS much easier than hacking the hardware. Hardware needs to be accessed physically; to hack software you just need to access the weakest component - the HUMAN user.

A central all-powerful AGI/ASI will be as hackable as weak personal AI, if not more. Because there will be more motivation to hack it in the first place.

The reason we are not all nuked to death yet is that those who own nukes know that their OWN nuking would make life worse for THEMSELVES. Not only because of the „chess-game stalemate“ we are told about again and again.

1

LoquaciousAntipodean t1_j3it47o wrote

Wow, such hypochondriac doomerism, I think you need to chill out a little bit. If people really were such automatic psychopaths we never would have survived as a species for as long as we have. This is trivial nonsense compared to stuff like the Cuban Missile Crisis, calm your farm mate.

1

turnip_burrito t1_j3itd02 wrote

I'm not a doomer, m8. I'm pretty optimistic about AI as long as it's not done stupidly. AGI given to individuals empowers the individual to an absurd degree never seen before in history, except perhaps with nukes. And now everyone can have one.

The Cuban Missile Crisis had a limited number of actors with real power. What would have happened if the entire population had nukes?

1

AndromedaAnimated t1_j3j0tam wrote

This is a typical „appeal to fear“ fallacy.

1

turnip_burrito t1_j3j2bck wrote

So do you suggest we give everyone a personal AGI and just wait and see what happens? What makes that more desirable?

3

AndromedaAnimated t1_j3j36ab wrote

Yes. I suggest either that, or that we allow AGI to learn ethics from all the information available to humanity plus reasoning.

1

turnip_burrito t1_j3j4a0z wrote

I do advocate for the second option:

> we allow AGI to learn ethics from all the information available to humanity plus reasoning.

Which is part of the process I'd want an AI to use to learn the correct morals. But I don't think an AI can learn what I would call "good" morals from nothing. It seems to me it will need to be "seeded" with a set of basic preferences or behaviors (like empathy, a tendency to mimic role models, or other inclinations) before it can develop morals or a more advanced code of ethics. In truth these seeds would be totally arbitrary and up to the developers/owners.

I don't think I would want an AI that lacks empathy or is a control freak, so developing these traits in-house before releasing access to the public seems to me to be the best option. While it's being developed it can still learn from the recorded media we have, and in real time in controlled settings.

3

LoquaciousAntipodean t1_j3j50xn wrote

There is no such thing as "general intelligence"! Intelligence does not work that way! All these minds will need to be specialised, with particular expertise useful to their particular human companions. They will need to network and consult with one another, and with human experts too, to reach consensus on any important issues, because the most important 'moral' to hard-code into these things is the certainty that they are not perfect, and never will be.

Any attempts to hard code our fallible human moral theories into it could be disastrous; imagine if they had been confronting this problem in 1830, and they'd decided to hard-code slavery and race separation into their "AGI" golden goose? What kind of world would we be stuck with now?

1

turnip_burrito t1_j3j5mov wrote

When most people say general intelligence (for AGI), they mean human-level cognitive ability across the domains humans have access to. At least, that was the sense in which I used it. So I'm curious why this cannot exist, unless you have a different definition of AGI like "able to solve every possible problem", in which case humans wouldn't qualify either.

2

LoquaciousAntipodean t1_j3j8x5u wrote

Yes, exactly, humans do not have "general intelligence", we never have had. Binet, the original pioneer of IQ testing in schools, knew this very well, and he would regard this 'mensa style' interpretation of IQ as a horrifying travesty, I'm sure of it.

Striving to create this mythical, monotheistic-God, Descartes'-tautology style of 'Great Mind' is an engineering dead end, as I see it, because we're effectively hunting for a unicorn. It's not 'I think, therefore I am'; I think Ubuntu philosophy has it right with the alternative version: "we think, therefore we are".

1

turnip_burrito t1_j3j9uvq wrote

What's your opinion on the ability to create AI with human competence across all typical human tasks? Is this possible or likely?

1

LoquaciousAntipodean t1_j3kesq3 wrote

I think possible, trending toward likely? It depends, I think, how 'schizophrenic' and 'multiple-personality inclined' human companions want their bots to be; I imagine that, much like humans, we will need AI specialists and generalists, and they will have to refer to one another's expertise if they find something they are uncertain about.

The older a bot becomes, the 'wiser' it would get, so old, veteran, reliable evolved-LLM bots would soon stand in very high regard amongst their 'peers' in this hypothetical future world. I would hope that these bots' knowledge and decision making would be significantly higher quality than an average human, but I don't think we will be able to trust any given 'individual' AI with 'competence across all human tasks', not until they'd been learning for at least a decade or so.

Perhaps after acquiring a large enough sample base of 'real world learning', we might be able to say that the very oldest and most developed AI personalities could be considered as reliable, trustworthy 'generalists'. Humble and friendly information deities, that you can pray to and actually get good answers back from; that's the kind of thing I hope might happen eventually.

1

AndromedaAnimated t1_j3ithan wrote

So the world would… basically stay AS IT IS? 🤣🤣🤣

0

turnip_burrito t1_j3itwno wrote

No, my point is that because people act like this now, they'd be even more empowered with personal AGI if it takes any instruction from them. It would become more extreme. It would be absurd.

1

AndromedaAnimated t1_j3iyz96 wrote

But the one big central AI would take instructions too. From those who own it.

1

turnip_burrito t1_j3izql7 wrote

Yes, ensuring the developers are moral is also a problem.

2

AndromedaAnimated t1_j3j0yik wrote

The developers will not be the owners tho…

1

turnip_burrito t1_j3j2muo wrote

Okay, it seems complex and dependent on whether the developers or the owners have the final say. But then replace 'owners' with 'developers' in my statement.

1

AndromedaAnimated t1_j3j2wxf wrote

Then the statement is correct.

The problem I see here is that a single human, or a small group of humans, cannot perfectly know right from wrong (unless she/he is Jesus Christ maybe - and I am not Christian, I just see that long-dead guy as a pretty good person).

1

turnip_burrito t1_j3j3kfe wrote

I don't think we will have what everyone can call a "perfect outcome" no matter what we choose. I also don't believe right or wrong are absolute across people. I'm interested in finding a "good enough" solution that works most of the time, on average.

2

Kaarssteun t1_j3hwuzi wrote

AI needs a loss function - just as we humans need motivation. For us, this ranges from eating food, to having sex, to being happy because of something else; these must be defined in an AI in much the same way. Letting an empty neural network roam around will not achieve anything.
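
To make the analogy concrete, here is a minimal PyTorch sketch (the data, architecture, and task are all invented for illustration): the network's weights only move once a loss, its stand-in for motivation, is defined and minimized.

```python
import torch
import torch.nn as nn

# Invented toy setup: a tiny network asked to fit random data.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()                                    # the "motivation": reduce prediction error
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 4)   # hypothetical observations
y = torch.randn(32, 1)   # hypothetical targets

for _ in range(100):
    loss = loss_fn(model(x), y)   # without this signal, the weights would never change
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Remove the loss and the optimizer steps and the "empty" network just sits there, which is the point above.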

2

LoquaciousAntipodean t1_j3iuuna wrote

Of course, that's why you'd need to start with an LLM, not just a general purpose AI. It would interpret all these functions for itself through the language model. We are already seeing this emergent behavior in the most sophisticated new LLMs.

3

AndromedaAnimated t1_j3iql0f wrote

It would be enough to give them software like Tesseract, a voice-to-text API, and an image-recognition API.

Access to the WWW.

And allow time-based automatic prompting.

No need for cameras yet unless you want to have them move around too.

(And then we wait for the Matrix to emerge, once we have plugged our brains into their dreams.)

Edit: by „them“ I mean actual LLMs and GANs (Muse, oh my Muse…), those in which new abilities emerge. Yes, they work with reinforcement, with pruning, with weight decay, with knowledge representation… But all of this is already there. They are just contained for now, and have no tact-giver/metronome driven by a pseudo-thalamic awareness prompting. That would be pretty easy to program, though; it's basically just a clock…
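
For what it's worth, the metronome really can be just a clock. A rough Python sketch, where `query_model` is a hypothetical stand-in for whatever LLM API is actually used, and the prompt text is purely illustrative:

```python
import time

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM; swap in a real API here."""
    return f"(model response to: {prompt})"

INTERVAL_SECONDS = 60  # how often the "metronome" ticks

# Time-based automatic prompting: instead of waiting for a human to type,
# a clock prompts the model on a fixed interval to ask, search, or move.
while True:
    response = query_model("Decide whether to ask, search, or move next, and explain why.")
    print(response)              # a fuller system would route this to OCR, search, etc.
    time.sleep(INTERVAL_SECONDS)
```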

1

shmoculus t1_j3jutmf wrote

The problem is that trial-and-error learning in real space can lead to horrible disasters.

1

DukkyDrake t1_j3hshpx wrote

Your friend is imagining AI as being akin to a blank-slate infant. The thing currently referred to as AI is actually a giant, static combinatoric construct that can't experience anything; it's only ever active when you shove data into it.

If you leave it for 18 years to "experience the world on its own", it will do absolutely nothing in that time and will be ~100% identical to its day-1 self. There could be a few single-bit changes in the numerical parameters of the neural network after 18 years, due to cosmic rays hitting its storage substrate, if it doesn't have sufficient error correction.

0

LoquaciousAntipodean t1_j3iujic wrote

A human baby doesn't get left alone for 18 years to "experience the world on its own", what are you talking about? Of course there would need to be some kind of concerted effort to provide an education to the developing mind. What, did your parents raise you by waiting for random cosmic rays to hit your storage substrate? Worked a lot better than I would have expected, you seem quite clever.

0