Comments

Green-Future_ t1_j3gpzge wrote

Interesting topic for r/OurGreenFuture. I see your logic: GPT can write its own code, but it needs input as to what it should write. Even as humans, we need input for what we should write code for, i.e. we are assigned a task, and we write code for that task. IMHO, AGI will take longer than 2-3 years to develop.

6

VirtualEndlessWill t1_j3gq3xc wrote

Not within the next year for real-world complex solutions, but it's heading in that direction.

Maybe I'm wrong and ML models creating better ML models emerges suddenly, but it's too much of a stretch to think that it happens overnight and immediately becomes as competent as any real professional.

There's still a lot missing for AI to be that good.

9

Idkwnisu t1_j3guqf0 wrote

I doubt that; one year is too soon. I do think it will get there, and I don't think it will take that long (we are going to see major breakthroughs next year), but better than any human is going to take a bit more.

6

KSRandom195 t1_j3gvo8c wrote

ChatGPT doesn’t understand how code works.

It can’t actually solve problems, only answer prompts with solutions it’s already seen before.

−5

mihaicl1981 t1_j3gx1na wrote

The answer is no. But it will raise the barrier to getting a junior-level job even higher, and it will probably require fewer programmers to get the same outcome. I see that most fellow software developers underestimate its possible effect on the labour market.

I think a similar GPT-4 system will be 10 years (of tech experience) ahead, and it will come by 2024.

2

starstruckmon t1_j3gxjtj wrote

There is a lot of research on reinforcement learning for code generation via language models happening right now. So it depends on how that turns out, since what you're asking for isn't possible without RL. The context window issue also needs to be fully solved, and the solutions currently on the horizon don't cut it. But we could have a breakthrough any day. So who knows 🤷
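To make the RL part concrete, here's a minimal sketch of the core idea (not any specific paper's method; the `solve` name and the toy tests are made up for illustration): sample candidate code from the model, run it against unit tests, and use the pass rate as the scalar reward the fine-tuning maximizes.

```python
def run_candidate(code: str, tests: list) -> float:
    """Score generated code by the fraction of unit tests it passes.
    In an RL fine-tuning loop, this scalar would be the reward signal."""
    namespace = {}
    try:
        exec(code, namespace)          # load the model's candidate solution
        solve = namespace["solve"]     # assume the prompt asked for a `solve` function
    except Exception:
        return 0.0                     # code that doesn't even load earns zero reward
    passed = 0
    for args, expected in tests:
        try:
            if solve(*args) == expected:
                passed += 1
        except Exception:
            pass                       # runtime errors count as failed tests
    return passed / len(tests)

# Two hypothetical completions for "write a function that adds two numbers":
tests = [((1, 2), 3), ((0, 0), 0), ((-1, 5), 4)]
print(run_candidate("def solve(a, b):\n    return a + b", tests))  # 1.0  -> reinforce
print(run_candidate("def solve(a, b):\n    return a - b", tests))  # ~0.33 -> discourage
```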

2

ShowerGrapes t1_j3h14vz wrote

No, of course not. Like any neural network, it will be good at the stuff people have done a million times; code that's never been written before will pose special challenges for AI.

In other words, it will write code better than the "average" human, sure. It already does.

−1

nutidizen t1_j3hdol1 wrote

Next year, probably not. But I think in a few years it will be able to deliver complete working solutions based just on a given prompt, i.e. "write the code for an app that does X".

1

NikoKun t1_j3hm4ym wrote

There does appear to be some level of understanding and problem-solving emerging as more than the sum of its knowledge, and that goes well beyond merely answering with solutions it's already seen. I can assure you, I've asked it to help me with some very obscure coding problems that I'd been stuck on for a while, and I think, thanks to its short-term memory, it figured out a solution I never would have. All it took was a little back and forth to give it enough context, and it worked out a solution that really couldn't exist anywhere else.

2

KSRandom195 t1_j3hspcp wrote

It appears to, but it doesn't actually. It is an LLM, which has no understanding of its content.

Unless you think it somehow spontaneously developed consciousness, it’s not quite conscious yet.

0

blueSGL t1_j3ijeyf wrote

>It can’t actually solve problems, only answer prompts with solutions it’s already seen before.

>people are grasping at straws to try to explain a mechanism they don’t understand.

You are making definitive statements about things that you yourself say experts in the field 'don't understand'.

Either you are claiming you know more than them, or you are professing your ignorance of the matter.

Which is it?

1

KSRandom195 t1_j3ip49m wrote

Experts in the field aren't claiming it's generating new knowledge. They're saying that as you extend the size of the model, interesting stuff happens. Roughly, it seems they're saying it performs better.

1

blueSGL t1_j3ir8t3 wrote

Read the paper. It's not just that it performs better; it's that abilities that were as good as random suddenly hit a phase change and become measurably better.

You were initially saying:

> only answer prompts with solutions it’s already seen before.

Let's look at an example that makes things crystal clear.

Image generators, by combining concepts, can come up with brand-new images. Does a model have to have seen dogs before in order to place one in an image? Yes. Does it need to have seen one that looks identical to the final dog, i.e. could you crop the image, reverse-image-search it, and get a match? No.

The same is true of poems, summaries, code, etc. It's finding patterns and creating outputs that match the requested pattern. So, to get back to the point of coding: it could very well output code it's never seen before, having ingested enough code to understand the syntax.

It's seen dogs before; it outputs similar but unique dogs. It's seen code before; it outputs similar but unique code.
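To make "similar but unique" concrete, here's a toy demonstration (a bigram model, nothing like a transformer internally, with a made-up four-line corpus): train on a few tokenized lines of code, then sample.

```python
import random
from collections import defaultdict

# Train a toy bigram model on a few tokenized code snippets.
corpus = [
    "for i in range ( 10 ) :",
    "for j in range ( len ( xs ) ) :",
    "if x > 0 :",
    "if y > len ( xs ) :",
]
model = defaultdict(list)
for line in corpus:
    tokens = ["<s>"] + line.split() + ["</s>"]
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)

# Sampling follows the learned patterns but can recombine them, e.g. it can
# print "for j in range ( 10 ) :", which appears nowhere in the corpus.
out = []
tok = "<s>"
while len(out) < 20:                 # length cap, in case sampling wanders
    tok = random.choice(model[tok])
    if tok == "</s>":
        break
    out.append(tok)
print(" ".join(out))
```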

1

blueSGL t1_j3iwwp2 wrote

There are lots of things I've coded in software that have not existed before and merely recombine structures that already exist to tackle new problems. It's why programming languages exist.

That is a 'new solution' to me. What do you mean when you say it?

1

blueSGL t1_j3izpcc wrote

Again, what do you mean by that? People code new software every day.

You can ask for poems that don't exist, essays that don't exist.

All these things have had their structure extracted, understood, and then followed to create new items.

Asking for code is the same.

>Will ChatGPT be able to write better code than any human within the next year?

A good coder needs to eat and sleep, takes time to understand new technology, has a limited scope in the programming languages they know, has good days and bad days, has 'blocks', and is a single unit able to process problems serially at human-level speed.

1