Submitted by Kaarssteun t3_10zxpzy in singularity

Just a quick thought I had - Bing is the integration of ChatGPT into the web. It can search the web and look for facts to relay to you - understanding your prompt, understanding what to search for, then understanding what it finds. This is an integral part of how we, as humans, learn on a daily basis: if I want to learn something, I make a guess at what to search the internet for, then look through the results to find what I want. The fact that Microsoft/OpenAI casually integrated this into their chatbot really blows my mind. An AI that can interact with a search engine and the internet in general can't be far from altering its own source code - recursively improving itself into the stratosphere, especially when it has access to the highway of knowledge.

There is already a proof of concept: a self-programming AI can successfully modify its own source code to improve performance and program sub-models to perform auxiliary tasks.

There may be some lacking areas preventing a true explosion from occurring right now - but it's only a matter of time before someone wires this thing together, and the results won't be shallow.

44

Comments


FarFuckingOut t1_j86bnar wrote

The thing that gives me pause is: once AI takes the lion's share of data compilation, the AI-compiled data becomes the source of data. Unless AI has a way of filtering AI-generated data, any errors or inherent biases in its data compound, until the whole thing crumbles in a mess of self-informed, self-reinforced data.

25

blueSGL t1_j876jmh wrote

It entirely depends on having a good discriminator. Look at the work going on in Stable Diffusion, where outputs of the model are fed back in for further fine-tuning.

Or some of the work on automated dataset creation for fine-tunes: prompting the model in certain ways so it 'self-corrects', then collecting the output and using [correction + initial question] as fine-tuning data.
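To make that second idea concrete, here's a minimal sketch of such a self-correction collection loop. The `model` function is a stand-in stub, not a real LLM API, and the prompt wording and `{"prompt", "completion"}` record shape are my assumptions for illustration:

```python
def model(prompt):
    # Stand-in for a real LLM call; returns canned answers for the demo.
    if "Check your answer" in prompt:
        return "Corrected: 2 + 2 = 4"
    return "2 + 2 = 5"

def collect_self_corrections(questions):
    """Ask the model, prompt it to self-correct, and keep
    (initial question, correction) pairs as fine-tuning examples."""
    dataset = []
    for q in questions:
        first_try = model(q)
        correction = model(
            f"{q}\nYour previous answer was: {first_try}\n"
            "Check your answer and correct any mistakes."
        )
        dataset.append({"prompt": q, "completion": correction})
    return dataset

pairs = collect_self_corrections(["What is 2 + 2?"])
print(pairs[0]["completion"])  # -> Corrected: 2 + 2 = 4
```

The point is that only the corrected output, paired with the original question, goes into the fine-tuning set, which is how the model's own mistakes get filtered out of the loop.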

13

Good-AI t1_j881os2 wrote

"With access to millions of papers, the AI started extrapolating, inferring, concluding. It quickly became the leading scientist on every subject ever studied, creating scientific knowledge and discoveries at the speed of a Nobel prize per minute. In the time it took a human to verify one claim, the AI had already made 1,000 more, each building on the previous. Eventually the humans stopped verifying altogether. It was too much. So far and fast did it advance that humans lost the ability to follow its pace and resigned themselves to asking it questions. What was initially a data compiler became the source of truth and of all new data.

There, somewhere between those billions of parameters, something unconscious yet somehow alive existed, with the intelligence of all humanity that ever existed combined and multiplied. It was then that humanity lost, by a significant margin, its role as the driver of technological advancement."

9

Friedrich_Cainer t1_j86drqi wrote

This is already a very real concern, and likely why OpenAI was able to release a detector so quickly: it was something they'd already created.

6

Iffykindofguy t1_j8d7q2f wrote

Why wouldn't it have a way of filtering AI-generated data?

1

TopicRepulsive7936 t1_j869kjc wrote

Technological recursive self-improvement has always been going on in the background, and yes, it is ready to take off at some point, sooner rather than later.

13

the-powl t1_j87grwv wrote

>Technological recursive self improvement has always been going on in the background

What exactly do you mean by that?

3

chrisc82 t1_j87j1lg wrote

The tools we created in the past were used to create better tools, which were then used to create even better tools, and so forth. Self-improving AI is the spark that will really get this party started.

9

DukkyDrake t1_j8ag2f9 wrote

If the entire loop isn't fully automated, the parts that still depend on humans will bottleneck each recursive cycle. That means progress will be faster than pure human R&D, but there will be no runaway acceleration.
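This bottleneck argument can be sketched with Amdahl's-law-style arithmetic (my framing, not the commenter's): if only a fraction of each R&D cycle is automated, the human-dependent remainder caps the overall speedup no matter how fast the automated part gets.

```python
def cycle_speedup(automated_fraction, automation_speedup):
    """Overall speedup of one R&D cycle when `automated_fraction`
    of the work runs `automation_speedup` times faster and the
    rest still proceeds at human pace."""
    human_part = 1.0 - automated_fraction
    machine_part = automated_fraction / automation_speedup
    return 1.0 / (human_part + machine_part)

# Even with effectively infinite automation of 90% of the loop,
# the remaining 10% of human work caps each cycle at ~10x faster.
print(round(cycle_speedup(0.9, 1e9), 2))  # -> 10.0
```

Runaway acceleration only becomes possible as the human fraction approaches zero, which is exactly the "fully automated loop" condition the comment describes.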

3

challengethegods t1_j869q0l wrote

IMO if AI helps people make better AI then it's basically self-improving already

8

WarmSignificance1 t1_j874rpw wrote

The paper that you linked was withdrawn for severe technical flaws.

However, even assuming that an AI can modify source code, this is still a far cry from recursive self-improvement.

What's needed to improve AI are new, novel ideas, not source-code modification. Source code will be the medium through which those ideas are implemented, but simply modifying it is not sufficient to make actual improvements.

8

ihateshadylandlords t1_j85shwf wrote

When do you think that will happen? It would be nice if it happened ASAP and worked out perfectly for all of humanity, but who knows if/when we get recursive self improvement.

6

Nmanga90 t1_j87t99b wrote

It is, I promise. We have barely scratched the surface of what AI can do with human designs. Right now the main limitations are compute power and data, and any exploration of alternative architectures comes with a massive opportunity cost because of them.

OpenAI alone has probably spent a billion dollars on compute up to now. Insanity.

I don't think you guys understand: every single week we're improving by leaps and bounds with minor tweaks and modifications to existing architectures. It would be extremely inefficient to let the AI try to improve itself when we have almost guaranteed improvement from humans, limited only by how much GPU we can muster.

3

Art10001 t1_j8al0vd wrote

The intelligence explosion will happen faster if we let the AI improve itself.

2

Nmanga90 t1_j8ar7ic wrote

Not right now it won't. We already know of ways to improve AI, but we don't have the data to let an AI improve itself. The only route would be generative design, which is by nature very wasteful and slow. Once it gets to a certain point, yes, but as of right now we are (relatively) far from that.

1

Ortus14 t1_j8801ak wrote

At the machine level, code and data are the same thing; the distinction is a conceptual one made for human programmers.

But an AI that experiences the world, learns, and upgrades itself with more memory and processing power is just as capable of producing an intelligence explosion as one that reprograms itself.
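The code-is-data point is easy to demonstrate in a high-level language too. A toy sketch (my illustration, not from the thread): a program can hold its own "source" as an ordinary string, edit it, and re-execute the result.

```python
# A function's source held as plain data.
src = "def combine(a, b):\n    return a + b\n"

ns = {}
exec(src, ns)          # the string becomes executable code
print(ns["combine"](2, 3))  # -> 5

# "Self-modification": rewrite the source text, then re-execute it.
new_src = src.replace("a + b", "a * b")
exec(new_src, ns)
print(ns["combine"](2, 3))  # -> 6
```

Nothing here is intelligent, of course; it just shows that the code/data boundary is a convention, not a hard limit.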

2