Submitted by [deleted] t3_106ixgv in singularity
[removed]
Not within the next year for real-world complex solutions, but it's heading in that direction.
Maybe I'm wrong and ML models creating better ML models will emerge suddenly, but it's just too much of a stretch to think that it happens overnight and becomes as competent as a real professional.
There's still a lot missing before AI is that good.
Probably not; my money is on 5-10 years.
I doubt that; one year is too soon. I do think it will get there, and I don't think it will take that long. We're going to see major breakthroughs next year, but better than any human is going to take a bit more.
ChatGPT doesn’t understand how code works.
It can’t actually solve problems, only answer prompts with solutions it’s already seen before.
The answer is no. But it will raise the barrier to getting a junior-level job even higher, and it will probably require fewer programmers to get the same outcome. I see that most fellow software developers underestimate its possible effect on the labour market.
I think a similar GPT-4-style system will be 10 years (of tech experience) ahead, and it will come by 2024.
There is a lot of research on reinforcement learning for code generation via language models happening right now. So it depends on how that turns out, since what you're asking for isn't possible without RL. The context window issue also needs to be fully solved, and the solutions currently on the horizon don't cut it. But we could have a breakthrough any day. So who knows 🤷
no of course not. like any neural network, it will be good at stuff people have done a million times. code that's never been written will pose special challenges for ai.
in other words it will write code better than the "average" human, sure. it already does.
Next year, probably not. But I think in a few years it will be able to deliver complete working solutions based on a given prompt, i.e. "write the code for an app that does X."
This year, next year; definitely soon.
>only answer prompts with solutions it’s already seen before.
citation needed.
There does appear to be some level of understanding and problem-solving emerging as more than the sum of its knowledge, and that goes well beyond merely answering with solutions it's already seen. I can assure you, I've asked it to help me with some very obscure coding problems that I'd been stuck on for a while, and I think that, thanks to its short-term memory, it figured out a solution I never would have. All it took was a little back and forth to give it enough context, and it worked out a solution that really couldn't exist anywhere else.
It appears to, but doesn't actually. It is an LLM, which has no understanding of its content.
Unless you think it somehow spontaneously developed consciousness, it’s not quite conscious yet.
This is fundamental to how LLMs work. They don’t generate new knowledge.
well if that's the case, plagiarism detectors should have no problem identifying the output then.
or maybe, by being trained on so much data, it learns the underlying structure of how that data is formed.
It would explain the emergent abilities.
Or people are grasping at straws to try to explain a mechanism they don’t understand.
>It can’t actually solve problems, only answer prompts with solutions it’s already seen before.
.
>people are grasping at straws to try to explain a mechanism they don’t understand.
You are making definitive statements about things you say experts in the field "don't understand."
Either you are claiming you know more than them, or you are professing your ignorance of the matter.
Which is it?
Experts in the field aren't claiming it's generating new knowledge. They're saying that as you scale up the model, interesting stuff happens. Roughly, it seems they're saying it performs better.
read the paper. it's not that it performs better; it's that abilities that were no better than random suddenly hit a phase change and become measurably better.
you were initially saying
> only answer prompts with solutions it’s already seen before.
Let's look at an example that makes things crystal clear.
Image generators, by combining concepts, can come up with brand-new images. Does the model have to have seen dogs before in order to place one in the image? Yes. Does it need to have seen one that looks identical to the final dog, i.e. could you crop the image, reverse-image-search it, and get a match? No.
The same is true with poems, summaries, code, etc. It's finding patterns and creating outputs that match the requested pattern. So, to get back to the point about coding: it could very well output code it's never seen before, having ingested enough to understand the syntax.
It's seen dogs before. it outputs similar but unique dogs. It's seen code before. It outputs similar but unique code.
That’s not generating new knowledge.
You’re not going to use this to generate new solutions in software that don’t already exist.
There are lots of things I've coded that have not existed before and are merely recombinations of existing structures applied to new problems. It's why programming languages exist.
That is a "new solution" to me. What do you mean when you say it?
New to you, not new to the industry.
again, what do you mean by that? people code new software every day.
You can ask for poems that don't exist, essays that don't exist.
All of these things have had their structure extracted, understood, and then followed to create new items.
Asking for code is the same.
>Will ChatGPT be able to write better code than any human within the next year?
A good coder needs to eat and sleep, takes time to understand new technology, has a limited range of known programming languages, has good days and bad days, has "blocks," and is a single unit that can only process problems serially at human speed.
Green-Future_ t1_j3gpzge wrote
Interesting topic for r/OurGreenFuture. I see your logic: GPT can write its own code, but it needs input as to what it should write. Even as humans, we need input for what we should write code for, i.e. we are assigned a task, and we write code for that task. IMHO, AGI will take longer than 2-3 years to develop.