Submitted by jsonathan t3_106q6m9 in MachineLearning
uoftsuxalot t1_j3hz4nm wrote
Reply to comment by phobos_0 in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Not to take anything away from this project, but it's just an API call to GPT-3 with the prompt "fix this error {error}". I thought there was some training and fine-tuning involved, but I guess LLMs can do it all nowadays.
jsonathan OP t1_j3i0txg wrote
Yeah, right now it’s just a thin wrapper around GPT-3, but there’s a lot that could be done to improve it, like using static code analysis to build a better prompt or even training a more specialized model (like this).
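For anyone curious what that looks like in practice, here's a minimal sketch of such a wrapper (assuming the completions-era OpenAI Python client; the function name, prompt wording, and model choice are illustrative, not Adrenaline's actual source):

```python
# Minimal sketch of a "thin wrapper around GPT-3" for error fixing.
# The prompt wording and model are assumptions, not Adrenaline's code.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def suggest_fix(code: str, error: str) -> str:
    """Ask GPT-3 to explain a runtime error and propose a fix."""
    prompt = (
        "Here is some code and the error it produces.\n\n"
        f"Code:\n{code}\n\n"
        f"Error:\n{error}\n\n"
        "Explain the error and suggest a corrected version of the code."
    )
    response = openai.Completion.create(
        engine="text-davinci-003",  # completions-era GPT-3 model
        prompt=prompt,
        max_tokens=512,
        temperature=0,  # keep output as deterministic as possible
    )
    return response["choices"][0]["text"]
```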
2Punx2Furious t1_j3l26ui wrote
Even refining the prompt could get much better results. Prompt engineering is important.
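For instance, a richer prompt might splice in context that static analysis can cheaply supply (a hypothetical template; the fields are made up for illustration, not taken from the project):

```python
# Hypothetical enriched prompt template. Every field below (language,
# traceback, enclosing function source, linter findings) would be
# filled in by static analysis of the user's project.
PROMPT_TEMPLATE = """You are an expert {language} debugger.

Traceback:
{traceback}

Source of the function where the error was raised:
{function_source}

Linter findings for that file:
{lint_findings}

Explain the root cause in one paragraph, then output only the corrected
function."""
```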
datamakesmydickhard t1_j3o73d6 wrote
Has it really come to this
2Punx2Furious t1_j3o7hps wrote
Yes, it's been like this for a while now.
ginger_beer_m t1_j3jlhaj wrote
How did you deal with incorrectness from ChatGPT?
jsonathan OP t1_j3joesx wrote
I didn't. Adrenaline won’t always correctly fix your error, but it can at least give you a starting point.
kelkulus t1_j3k8w2w wrote
Well for one, he's not using ChatGPT. GPT-3 is not the same.
danielswrath t1_j3l1fkb wrote
GPT-3 has the same problem though. ChatGPT is a successor of GPT-3, so it's not the same but it's not extremely different either.
Glum-Bookkeeper1836 t1_j3lhgfx wrote
I'm not sure if we know this for certain, but it appears to be davinci instruct 3 with a custom prompt prefix.
cloudedleopard42 t1_j3p7pr3 wrote
Is it possible to fine-tune GPT for static code analysis? If so, what would the training set look like?
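For reference, OpenAI's fine-tuning endpoint at the time took JSONL prompt/completion pairs, so my guess is something like buggy code plus the analyzer's error output mapped to the corrected code (a hypothetical sketch; the example record is invented for illustration):

```python
# Hypothetical fine-tuning data in the JSONL prompt/completion format
# used by OpenAI's fine-tuning endpoint at the time. The buggy snippet,
# error, and fix are made up for illustration.
import json

examples = [
    {
        "prompt": (
            "Code:\ndef mean(xs):\n    return sum(xs) / len(x)\n\n"
            "Error:\nNameError: name 'x' is not defined\n\nFix:"
        ),
        "completion": " def mean(xs):\n    return sum(xs) / len(xs)",
    },
]

# One JSON object per line, as the endpoint expected.
with open("training_set.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```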
satireplusplus t1_j3i1grq wrote
LLMs are our new overlords, it's crazy
2Punx2Furious t1_j3l29nx wrote
And it's not even AGI yet. The singularity is closer than a lot of people think.
TrueBirch t1_j3mcg7g wrote
I don't think AGI will ever happen, but with enough task-specific applications, the difference may become academic.
iamnotlefthanded666 t1_j3muxea wrote
Why don't you think AGI will ever happen?
TrueBirch t1_j3mwk67 wrote
Check out this comment. Some things that we take for granted from low-wage humans are incredibly hard for computers and robots. Think about valet parking. Our society doesn't think "Oh my goodness, valet parkers are geniuses!!!" But it's really really hard to build a robot that can do what they do.
TradeApe t1_j3rra6b wrote
If they can automate huge chunks of super busy cargo harbors, they can automate valet parking...and they won't even need AGI for that. Hell, valet parking will likely become obsolete once full self-driving is here.
People also didn't think AI would make artists obsolete...but here we are.
TrueBirch t1_j3xlp2e wrote
Artists are hardly obsolete. Photoshop didn't make them obsolete and generative AI won't either. And I say that as someone who has extensively used Stable Diffusion for work and personal projects.
Regarding valets, I'm referring to the ability to toss your keys to a robot and have it drive your car. Even when true self-driving cars are first produced (which always seems to be ten years away), we'll be a long way from a robot being able to park a non-automated car. That's just one example of a task that seems really easy for humans but is shockingly hard for robots. Folding laundry is another one, which is especially relevant since I'm ignoring the fact that my dryer just finished a load.
2Punx2Furious t1_j3mopda wrote
Yeah, I see a lot of goalpost-moving, but in the end it depends on how you define "AGI"; people have varying definitions. I think even a language model can become AGI eventually.
TrueBirch t1_j3mw92i wrote
There are some things that are incredibly hard. Imagine you work on a farm. You toss the keys to the ATV to a 17yo farmhand who's never worked for you before. You say, "Head over to field 3 and tell me if it's dry enough to plow. You can see where it is on this paper map. Radio back using this handheld." The farmhand duly drives the ATV to field 3, sees that it's muddy, picks up the radio, and says, "Sorry boss, field 3's a no-go."
We're a long way from a robotic farmhand being able to perform those skills, certainly not for a price comparable to a farm laborer.
You could definitely train an application-specific AI to monitor fields and report on their moisture levels. You could even have an algorithm that schedules all of your farm equipment based on current conditions and other factors. So it's not that AI can't revolutionize how we work, it's just that it'll be different from true AGI.
eldenrim t1_j4kmltf wrote
I'm curious how you feel about the following:
There are humans who can't do the task you outlined. Why use it as a metric for AGI? Put another way, what about a "less intelligent" AGI that crawls before it walks? An AGI equivalent to a human with a lower IQ, or some similar measurement that correlates with not being capable of the same things as those in your example?
Second, if an AI can do 80% of what a human can, and a human can do 10% of what an AI can, would you still claim the system isn't an AGI? As in, if humans can do X things and the AI can do 100X things, but there's a Venn diagram with some things unique to humans and many things unique to the AI, does it not count because you can point to tasks humans can do that it cannot?
Finally, considering that a human system has to account for things irrelevant to an AGI (bodily homeostasis like heart rate, the immune system, etc.) and an AGI can build on the code that came before it, what do you see as the barrier to AGI? Isn't it just a matter of time?
TrueBirch t1_j4kv71p wrote
I think "AGI" is a silly concept overall and never really happening. Computers are good at doing things in different ways from humans. Rather than chasing AGI, you can make a lot more of an impact by leveraging a computer's strengths and avoiding its weaknesses.
For my example, I picked an occupation with an average salary south of $30,000/year (source). I'm not saying everybody can do it, but the market puts a price on this kind of labor that suggests many people can do it. A true AGI system could replicate how a low-salary human does a job. In reality, a computerized system would use a few wireless sensors that call home instead of physically driving around looking at fields.
Similarly, consider meter readers, another low-wage job. Imagine what it would take to create a robot that could drive from house to house, get out of the car, find the power meter, gently move anything blocking it, and take a reading. Instead, utilities use smart meters that call home. It's cheaper, more reliable, and simpler.
It's beyond hard to create a true AGI system, and there are plenty of ways to make tons of money with application-specific systems.
eldenrim t1_j4l7ilw wrote
I'm currently interested in ML to alleviate the suffering of my disabled partner and myself; I just enjoy theoretical discussion about AGI.
Maybe making money will come later. :)
TrueBirch t1_j4ldkum wrote
I'm talking about where the funding is going. Anything remotely approaching AGI would require billions and billions of dollars of funding.
eldenrim t1_j4lfc8z wrote
So you don't think that repeatedly making narrow AI, and then at some point bundling them together, is a valid way to get to AGI?
TrueBirch t1_j4qdbf2 wrote
It'll be something entirely new, but not capable of doing everything that my toddler can do. Systems will be designed to avoid those weaknesses. Again, think about replacing meter readers with cheap sensors instead of expensive robots.
2Punx2Furious t1_j3nzumw wrote
> We're a long way from a robotic farmhand being able to perform those skills, certainly not for a price comparable to a farm laborer.
If we get AGI, we automatically get that as well, by definition. Those you listed are all currently hard problems, yes, but an AGI would be able to do them, no problem.
The issue is, will AGI ever be achieved, and if yes, when?
I think the answer to the first one is simple, the second one not as much.
The short answer to the first is: most likely yes, unless we go extinct first. We know general intelligence is possible, so I see no reason why it shouldn't be possible to replicate it artificially, and even improve on it. Several very wealthy companies are actively working on it, and the incentive to achieve it is huge.
As for the when, it's impossible to know until it happens, and even then, some people will argue about it for a while. I have my predictions, but there are lots of disagreeing opinions.
I don't know how someone even remotely interested in the field could be sure it will never happen.
As for my prediction/opinion, I give it a decent chance of happening in the next 10-20 years, with the probability increasing every year until the 2040s. I would be very surprised if it doesn't happen by then, but of course, there is no way to tell.
TrueBirch t1_j4jd8af wrote
A true AGI has way too many edge cases to be possible in the timeframe you describe. It's also not necessary to create AGI in order to make a lot of money from AI. You can find the specific jobs that you want to replace and create a task-specific AI to do it.
2Punx2Furious t1_j4kyyhq wrote
True that you don't need AGI to disrupt everything. But I don't think the edge cases matter, it's not like it will be coded manually.
TrueBirch t1_j4lb7fg wrote
>I don't think the edge cases matter
Being able to handle those weird edge cases is what distinguishes AGI from the kinds of AI that companies are currently developing...
2Punx2Furious t1_j4lbgjj wrote
Yes, I'm saying the fact that there are edge cases doesn't matter, because it's not us who have to address them. As we get closer to AGI, it will get better at handling them; we won't have to find them and code solutions for them ourselves. I think handling edge cases will be an emergent quality of AGI.