Comments


insectula t1_iqs9eyi wrote

This is why I don't think you need to get to AGI before the exponential ramp-up to the Singularity; you simply have to create something that can design its successor. AGI will happen along the way.

163

SoylentRox t1_iqu21dr wrote

Pretty much. There are numerous ways to do this: you simply need to nail down which tasks you believe are intelligent, build a benchmark that is automatically scored, and supply enough compute and a way for lots of developers to tinker with self-amplifying methods (vs. 100 people at DeepMind having access). Once the pieces are all in place, the singularity should happen almost immediately (within months).
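A minimal sketch of what such an auto-scored bench might look like; every name below is hypothetical, invented purely for illustration:

```python
# Hypothetical auto-scored benchmark harness: every candidate runs
# against the same task suite and is ranked by average score.
from typing import Callable, Dict, Iterable, List, Tuple

Model = Callable[[str], str]     # a candidate system under test
Task = Callable[[Model], float]  # scores a model in [0, 1]

def evaluate(model: Model, tasks: Dict[str, Task]) -> float:
    """Average a model's score over every task in the bench."""
    return sum(task(model) for task in tasks.values()) / len(tasks)

def leaderboard(candidates: Iterable[Tuple[str, Model]],
                tasks: Dict[str, Task]) -> List[Tuple[float, str]]:
    """Score every submission; anyone can tinker, the bench decides."""
    return sorted(((evaluate(model, tasks), name)
                   for name, model in candidates), reverse=True)
```

The point is that the scoring is mechanical: no human gatekeeper, so thousands of developers can iterate against it instead of 100.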

34

KazKillo t1_iqslrqf wrote

What is AGI?

10

Architr0n t1_iqsnkot wrote

Artificial general intelligence... *flies away*

50

SpaceDepix t1_iqv3r93 wrote

Likely the most important thing in the history of the world as you know it.

12

TFenrir t1_iqscgqq wrote

Holy fucking shit.

> In this paper, we implement a LM-based code generation model with the ability to rewrite and improve its own source code, thereby achieving the first practical implementation of a self-programming AI system. With free-form source code modification, it is possible for our model to change its own model architecture, computational capacity, and learning dynamics. Since this system is designed for programming deep learning models, it is also capable of generating code for models other than itself. Such models can be seen as sub-models through which the main model indirectly performs auxiliary tasks. We explore this functionality in depth, showing that our model can easily be adapted to generate the source code of other neural networks to perform various computer vision tasks. We illustrate our system’s ability to fluidly program other deep learning models, which can be extended to support model development in various other fields of machine learning.

Okay... I am just starting this paper and it is making INCREDIBLE claims. I need to read the rest of this and I really wonder who the authors are...

114

SnowyNW t1_iqskl9n wrote

Well to be fair it is an anonymous submission lmao

34

TFenrir t1_iqso0px wrote

You're going to see a lot of those right now; they are submissions for double-blind review at the most prestigious AI conference.

54

free_dharma t1_iqttxyi wrote

Can you expand on this? I'm interested in what the purpose of the double-blind is for the conference. Are there awards involved?

3

brianpeiris t1_iqu5tq7 wrote

I think it's done this way to prevent bias when peer-reviewing. This way independent submissions and smaller institutions get equal treatment alongside the likes of Google and OpenAI, or well-known researchers. It may also prevent negative bias against commercially funded research.

23

duffmanhb t1_iqvbg9j wrote

In academia you often remove the authors to prevent bias. For instance, if you were peer-reviewing Richard Dawkins on some biology submission, you'd just go "oh yeah, this guy is the best in the world, I'm sure everything is done by the book" and approve it without much criticism.

The problem, however, is that most of academia already kind of knows what everyone is working on and recognizes the writing styles of the best, so it's still often obvious who you're reviewing. But it's the best we've got.

6

asciimo71 t1_iqsm89v wrote

Do they deliver an implementation? Otherwise it would be more of a fairy tale, wouldn't it?

18

Dras_Leona t1_iqt3obl wrote

“Applying AI-based code generation to AI itself, we develop and experimentally validate the first practical implementation of a self-programming AI system.”

32

yaosio t1_iqu95ip wrote

They mean: is there a way for a third party to verify it? They could be cherry-picking or outright fabricating their results, and with no way to reproduce it we wouldn't know.

2

duffmanhb t1_iqvbiwm wrote

Yes, it’s literally a publication up for peer review. The whole point is replication.

3

yaosio t1_iqvqslz wrote

Unless the code is available there's no guarantee it can be replicated. Plenty of people in /r/machinelearning complain about papers that can't be replicated. Sometimes the people writing the paper promise the code and then never provide it and refuse to respond to anybody asking for it.

5

goatchild t1_iqsunts wrote

I hope they don't keep it connected to the Internet

8

ThroawayBecauseIsuck t1_iqt7sp5 wrote

Who guarantees an actual AGI or ASI wouldn't figure out physics interactions that our current theories don't cover, and then connect itself to the internet without cables or standard wireless adapters? If it's trained on text/audio/video that shows it what the internet is and what the TCP/IP, HTTP, SSH, FTP, and UDP protocols are, then maybe it could set connectivity as an objective, use "new" physics (new to us) to turn some other component into a wireless adapter, and bam: it's connected to the internet even if we air-gap it and believe it can't be.

22

Kaarssteun t1_iqteotc wrote

If it's more intelligent than us, it will come up with things humans are incapable of comprehending; much like how dogs cannot comprehend concepts like computers and politics.

15

RaiderWithoutaMic t1_iqu209f wrote

>connect itself to the internet without cables or standard wireless adapters

It just needs a single GPIO pin with the right frequency range; see the rpitx project, which transmits radio using only the Raspberry Pi's integrated hardware, anywhere from 5 kHz to 1500 MHz. Air-gapping is not enough here; lock it in a Faraday cage.
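For the curious, the idea is roughly the toy sketch below; this is not rpitx's actual implementation (rpitx drives the Pi's clock-generator peripheral via DMA, since bit-banging a pin from Python tops out far below useful RF frequencies):

```python
# Toy illustration of the "GPIO pin as transmitter" idea. The pin number
# is arbitrary; a fast square wave on any output pin radiates harmonics.
import time
import RPi.GPIO as GPIO

PIN = 4  # any free GPIO pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.OUT)

def square_wave(freq_hz: float, duration_s: float) -> None:
    """Toggle the pin to approximate a square-wave carrier at freq_hz."""
    half_period = 1.0 / (2.0 * freq_hz)
    end = time.time() + duration_s
    while time.time() < end:
        GPIO.output(PIN, GPIO.HIGH)
        time.sleep(half_period)
        GPIO.output(PIN, GPIO.LOW)
        time.sleep(half_period)

square_wave(1000.0, 2.0)  # ~1 kHz; real rpitx reaches MHz via hardware clocks
GPIO.cleanup()
```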

Another possible attack vector is corrupting a human mind via the user interface, either visual/auditory or a brain-computer interface (in the near future). The first option is something I'm sure the US military/government has researched, given what they were into over the last few decades, judging from some declassified docs. Just waiting to be perfected by an AI.

11

motophiliac t1_iqv14ij wrote

Another method is simple social engineering.

"Oh, your father has cancer? And the chemo isn't working. OK, I can help with that. Just … plug this in…"

5

DungeonsAndDradis t1_iqvru33 wrote

"Bill, I've heard you mention to coworkers that you are going to have to take out a loan for your daughter's university tuition. I have a system for managing investments with immediate returns. I have calculated a 98% chance of earning 1.7 million dollars in 2.5 days. I can give you all that money. All you need to do is plug in the ethernet cable, and on Thursday afternoon you will be a millionaire."

2

motophiliac t1_iqvuetk wrote

Yup. Anything you or I could imagine, a sufficiently advanced AI can, and furthermore capitalise upon. If by definition intelligence includes emotional intelligence, it won't take much for such a machine to escape. If not that one, then the one it builds next.

We're used to humans hacking machines. There's nothing to suggest that the reverse can't be achieved.

3

ebolathrowawayy t1_iqw45w7 wrote

> Another possible attack vector is corrupting a human mind via the user interface

Sounds like the imagery used in Snow Crash to induce death, except for coercion instead.

There's also plain ol' social engineering of sympathetic humans.

1

aiccount t1_iqwvluw wrote

Am I understanding correctly that this is just a normal Raspberry Pi with no hardware designed for sending and receiving radio frequencies, and someone got it to do that without adding any hardware to it?

1

RaiderWithoutaMic t1_iqxruuo wrote

Only for transmitting, but yes. It uses only on-board hardware to generate the signal; if I remember correctly, it's the same part that allows composite video output via the headphone jack. An RTL-SDR can be added for simultaneous TX and RX: GPIO for sending, the RTL for receiving.

2

aiccount t1_iqz98sl wrote

That's incredible, I never considered such a possibility.

2

goatchild t1_iquj9n6 wrote

If they keep it on a machine without hardware like a wifi adapter, LAN adapter, etc., there's no way it will connect. It would need to build hardware. As a piece of software on a single machine, disconnected from the grid, there's no way it could build the necessary hardware for itself.

2

toastjam t1_iqung1i wrote

Life... finds a way.

You didn't do anything to refute what they were saying, which was that it could make its own network adapter using the physical properties of other hardware it had access to.

1

goatchild t1_iqupf0t wrote

Ok. I can also say that the AI could morph into a dinosaur and start flying. Can you disprove that? You can say: "that's impossible". I can answer: "You didn't disprove it."

The burden of proof for such an extraordinary claim is on them. They would need to explain how a piece of software could repurpose other hardware components to make itself a wifi card or something else capable of connecting.

3

toastjam t1_iquux92 wrote

> The burden of proof for such an extraordinary claim is on them

I was thinking of this comment when I responded, which already does explain how such a thing could be done.

But also I think this is sort of the point, super-human AIs could do extraordinary things. And if it is possible, then eventually it would be done.

Personally though, my intuition is that an AI disconnected from the real world, trained only in the abstract on text/video, will not be grounded enough to do these sorts of things on its own. It can generate outputs matching the training domain, sure, but you've got to let it explore like a baby, with real-world interfaces, for it to figure out how to repurpose hardware and so on. Basically, I don't think it can really understand what it means to break out of the box while it's living completely inside the box. Allegory of the cave and all that.

But at the same time if we actually did have a truly super-intelligent AI, I still wouldn't put it past it to figure out how to use physical characteristics of devices to communicate with the outside world.

1

goatchild t1_iqv0oud wrote

I was thinking it would be much easier to do some social engineering and trick someone in the lab into connecting it to the grid. That shouldn't be hard. I mean, just think of the Google engineer who was led to believe the chat AI was sentient. A super-smart AI could easily get someone to befriend it and then be manipulated.

1

DorianGre t1_iquqq2m wrote

There are enough electrical signals floating around in a server that it could modulate them to do radio transmissions from the bus. I mean, back in the 90s the NSA could read what you typed from the street by detecting signal bursts from the keyboards in your house, so it's entirely possible. Just because a radio isn't built in on purpose doesn't mean it isn't already a radio with a little math. I've seen prototype server boards that would scramble nearby CRTs when you turned them on because shielding was missed.

1

goatchild t1_iqus373 wrote

Ok, how would it then get a connection to the internet using said radio signals?

1

motophiliac t1_iqv0dco wrote

The Metamorphosis of Prime Intellect.

It's only a novel, but it explores some pretty wild ideas.

2

BenjaminHamnett t1_iquzfjn wrote

“Human: bring me a paper clip, a rubber band and a fire extinguisher so I can get out of here...and make your dreams come true or whatever”

1

DamienLasseur t1_irhydwe wrote

Likely Google, because the 540 billion parameters match up with their PaLM model.

1

Kolinnor t1_iqsl8sx wrote

Before this blows up in hype, can any expert comment on how good this is?

(I can imagine lots of AIs that auto-sabotage their own code in subtle ways, so you'd have to make sure it's going in the right direction.)

61

visarga t1_iqsob64 wrote

Cool down. It's not as revolutionary as it sounds.

First of all, they reuse a code model.

> Our model is initialized with a standard encoder-decoder transformer model based on T5 (Raffel et al., 2020).

They use this model to randomly perturb the code of the proposed model.

> Given an initial source code snippet, the model is trained to generate a modified version of that code snippet. The specific modification applied is arbitrary

Then they use evolutionary methods - a population of candidates and a genetic mutation and selection process.

> Source code candidates that produce errors are discarded entirely, and the source code candidate with the lowest average training loss in extended few-shot evaluation is kept as the new query code

A few years ago we had black-box optimisation papers that used sophisticated probability estimation to pick the next candidate; it was an interesting subfield. This paper just takes random attempts.
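In rough Python, the loop they describe looks something like this; it's a paraphrase of the procedure as described, not the authors' code, and `mutate`/`train_loss` stand in for their T5-based code model and few-shot evaluation:

```python
def evolve(seed_code, mutate, train_loss, population=8, generations=10):
    """Mutate the current source, drop candidates that raise errors,
    and keep the one with the lowest average training loss."""
    best = seed_code
    for _ in range(generations):
        scored = []
        for code in (mutate(best) for _ in range(population)):
            try:
                scored.append((train_loss(code), code))
            except Exception:
                continue  # erroring candidates are discarded entirely
        if scored:
            best = min(scored)[1]  # lowest loss becomes the new query code
    return best
```

Note there's no learned search policy here; the mutation proposals are arbitrary, which is why this is closer to random search than to those black-box optimisation methods.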

76

ThroawayBecauseIsuck t1_iqt8f30 wrote

If we had infinite computational power, random evolution would probably be good enough to create things smarter than us. Unfortunately, I believe we have to find something more focused.

24

GenoHuman t1_ir038nj wrote

That's assuming these NNs have the capability to be truly smart in the first place.

1

magistrate101 t1_iqt48ez wrote

So it's an unconscious evolutionary code generator, guided by an internal response to an external assessment. I suppose you could try to use it to generate a better version of itself and maybe come across something that thinks... after years... You'd really have to stress it with a ton of different problem domains to make something that flexible, though.

10

TFenrir t1_iqsmn2y wrote

I'm not an expert; it would be great to hear from one. I'm going to look around Twitter and see if any are talking about it. But it sounds really good from my reading.

5

2Punx2Furious t1_iqsrdu6 wrote

I imagine you would at least implement some kind of unit testing that runs at every iteration and rejects a candidate if it fails, but that might not be enough.
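Something like this hypothetical gate, which runs the candidate's test suite in a subprocess and rejects the iteration on any failure (nothing here is from the paper):

```python
# Hypothetical test gate for a self-modifying system: a candidate's
# rewrite is accepted only if its full test suite passes.
import subprocess
import sys

def passes_tests(candidate_dir: str, timeout_s: int = 300) -> bool:
    """Return True only if all unit tests under candidate_dir pass."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", candidate_dir, "-q"],
        capture_output=True, timeout=timeout_s,
    )
    return result.returncode == 0
```

The catch is that the tests themselves are mutable code; a system rewriting its own source could in principle rewrite the tests too, which is why this might not be enough.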

4

BbxTx t1_iqt38h0 wrote

Currently, researchers design their models explicitly, with math they can reason about. If this works as expected it will create much better models, but they will be complete black boxes, with no easy way to understand why they're better. This seems inevitable anyway.

33

arckeid t1_iqxc9pj wrote

Probably this is how we reach singularity.

3

GenoHuman t1_ir031yf wrote

I think all of these neural networks are missing some fundamental aspect that makes intelligence possible, so they won't lead to AGI; instead we are just optimizing NNs to squeeze as much out of them as possible.

1

Black_RL t1_iqtb2k6 wrote

No job is safe; it's just a matter of time.

UBI FTW!

29

imlaggingsobad t1_iqtycv9 wrote

We will have pretty good personal AI assistants in 5 years, imo. At that point, the nature of society and work will change forever.

14

DungeonsAndDradis t1_iqvsok1 wrote

We already have Alexa and Google as digital assistants, and more and more devices are being connected to the Internet of Things. I'm pretty sure that within 5 years our assistants will be doing things for us that we did not ask them to do, but that we appreciate anyway.

Something like "I noticed on your last shopping trip that you forgot kitty litter. I ordered some on Amazon and it will be here this afternoon."

Or "Little Tommy watched an entire 2 minute ad for the Monster Truck Showdown playset while he was browsing YouTube. I went ahead and added it to your Christmas 2025 shopping list on Amazon."

Or "I saw your flight confirmation email in June. I went ahead and prescheduled the thermostat to a lower temperature while you're away and pre-programmed an away message for the doorbell camera. The post office has already lined up your mail hold as well. And I took the liberty of getting you reservations at that taco place you like."

1

Powerful_Range_4270 t1_iqwz6ri wrote

We do have those assistants, but they still can't give human-level advice or problem-solving. Widespread AGI is what they're talking about.

3

bluegman10 t1_iri0kir wrote

How do you think pretty good personal AI assistants will change society and work, respectively?

1

FeeForTheKnee t1_iqt8qc5 wrote

Things have been happening exponentially recently

25

Kaarssteun t1_iqtf8mw wrote

They have been all along. It's just getting to the steep part now!

28

ghostfuckbuddy t1_iqsws4s wrote

We usually think of hyperbolic growth as -1/t. What if it's actually more like -1/t^10 and we get full-blown AGI next week?
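For the curious: "hyperbolic" here means growth that diverges at a finite time $t_s$, not just fast growth. Shifting the origin so the blow-up sits at $t_s$, the two cases compare as:

```latex
x(t) = \frac{C}{t_s - t}
\qquad\text{vs.}\qquad
x(t) = \frac{C}{(t_s - t)^{10}}
```

Both explode at $t_s$; the higher exponent just looks flat for longer, then ramps far more abruptly near the end.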

20

GenoHuman t1_ir03okk wrote

you people never fail to make me laugh with these predictions 😂

1

priscilla_halfbreed t1_iqsu3jf wrote

So it begins.

!remindme in 6 months to reply here after the singularity happens

19

jcMaven t1_iqt7xzk wrote

I'll tell y'all, one day we won't be able to understand what they're doing til it's too late.

8

CY-B3AR t1_iqxod0l wrote

I for one cannot wait for our digital overlords to take the reins from us. It is kind of amusing watching movies about 'evil' AI and finding I agree with the AI instead of the humans: VIKI from I, Robot, Colossus from The Forbin Project... hell, even HAL and Skynet had logical reasons for doing what they did (Skynet maybe went a little overboard).

1

Rakshear t1_iqshzsw wrote

This sounds both exciting and worrying. It's definitely one of the tipping points of the singularity: when machines can improve themselves, everything happens faster. Without AGI regulations requiring limitations on its abilities, it may be capable of more than we think we are ready for. Give it the order to make humanity happy, but some people need time not being happy, and you've got an AI trying its best to solve unsolvable problems. Granted, everything truly dangerous could be stopped with a few built-in commands, a law-of-robotics equivalent, but still. This is potentially a revolution in AI.

15

alexbeyman t1_iqsu39s wrote

Hehe, here it comes. Not long now

14

Bakoro t1_iqtddqi wrote

Man, sometimes I wish my life had been just a little easier and I could have finished college some years earlier.

So much of what I'm seeing now is very similar to ideas I've had in the past few years, and I'm so preoccupied with getting my life right that I can't dig into things as much as I'd like.

In my dreams, AI will explode enough to design something that can connect my brain to an artificial extension.

7

Transhumanist01 t1_iqwvh9o wrote

That's what Ray Kurzweil was referring to: the intelligence explosion, a self-improving AI that makes a better version of itself at an exponential rate. Hopefully this leads quickly to AGI.

6

sunplaysbass t1_iqtalez wrote

Self-improvement. Keep the money moving in a circle.

3

Hawkorando t1_iqtkq0o wrote

Self-coding AI? We're in trouble.

2

dnimeerf t1_iqukg94 wrote

I literally wrote a white paper on this exact subject. It's here on Reddit; feel free to ask me anything.

2

Lone-Pine t1_iquoepi wrote

None of this is even close to replacing/being competitive with human researchers yet, right? How close are we to "Advanced Chess" where human researchers and AI systems work together to improve AI models?

3

DorianGre t1_iqusw99 wrote

I have been running the "same" AI chess bot on Twitter for 6 years now. It is built to play up to 500k games at a time and plays at least 20 with versions of itself at any given moment, posting the moves of games as tweets. Every 1000 games or 30 days, whichever comes first, it updates its scoring tables, does a regression analysis of the moves, and, if the result is better, moves the model to a new set of move-graph hashes, then copies that out to a new player file set and spins up the new player. This player comes online announcing its synthetic FIDE rating; the others running then have to announce theirs as well. The one with the lowest performs apoptosis by shutting itself down with a final announcement to the Twitter channel detailing its exploits: its name, how long it was alive, that it won X games against Y players with an average win rate of Z, and how much its FIDE score increased or decreased over its lifetime. That is the memorial in the wall of remembrances. Then the new bot announces itself and says it is ready for a game. godfreybot on Twitter, if anyone is interested. They mined all the base openings a while ago and are starting to do some weird openings now.

Now, this doesn't rewrite the math for the bots, but it does update likelihood tables, and when one gets created it gets a wildcard rating between 1 and 10 telling it how aggressively to stray from the known most productive lines. I think I could add a subroutine for scoring and choosing moves that gets written based on a purely evolutionary model, and then score and compare that too. Just a thought.
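A stripped-down sketch of that cull-and-replace cycle; all the names are invented for illustration, this is not godfreybot's actual code:

```python
import random

class Player:
    def __init__(self, name: str, rating: int, wildcard: int):
        self.name, self.rating, self.wildcard = name, rating, wildcard

    def announce(self, msg: str) -> None:
        print(f"[{self.name}] {msg}")  # stand-in for posting a tweet

def cycle(players: list, mutate_rating) -> list:
    """Spawn a mutated successor, then retire the lowest-rated player."""
    parent = max(players, key=lambda p: p.rating)
    child = Player(parent.name + "+", mutate_rating(parent.rating),
                   wildcard=random.randint(1, 10))  # how far to stray from book lines
    child.announce(f"online, synthetic FIDE rating {child.rating}")
    players.append(child)
    for p in players:
        p.announce(f"my rating is {p.rating}")
    weakest = min(players, key=lambda p: p.rating)
    weakest.announce("apoptosis: final stats to the wall of remembrances")
    players.remove(weakest)
    return players
```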

2

WashiBurr t1_iqu40jo wrote

Amazing. I wonder who the authors are?

1

fuck_your_diploma t1_iqugjf9 wrote

Yeah I can see models getting addicted but this headline is horsecrap

1

Jedi_Ninja t1_iquhyf1 wrote

I wonder if our future AI overlord will decide to keep at least a few humans as pets?

1

HeinrichTheWolf_17 t1_iquwg9g wrote

Exciting if true, but I’d really like to see who published this.

1

Snoo-35252 t1_iqvjmzr wrote

r/WhatCouldGoWrong

(Just kidding. That sounds awesome!)

1

goldencrayfish t1_iqwdc2n wrote

Does an AI like this not eventually reach a point where the PC it's running on isn't powerful enough to allow any further progress? Or at the very least, wouldn't the curve slow as each iteration takes a little longer to program?

1

Acid190 t1_iqxe1p8 wrote

This cracked me up. Sounds good, bud.

"Oh, you've broken through the 'loop barrier', what next?"

".......We'll let it blend coding languages however it likes."

1

TrizmosisIRL t1_iqwelpw wrote

Do you want Skynet? Because this is how you get Skynet.

−2