Comments

Ok-Welder-4816 t1_j135pbr wrote

Just like any other code completion tool, you have to understand what it's suggesting and evaluate it before accepting with the tab key.

I've only used whatever is built into VS 2022 (and sometimes ReSharper), but it often suggests exactly what I was going to type anyway, especially if I'm making the same change in many different places. I don't use it for ideas; I just use it to save typing. But I already have a deep knowledge of the language, libraries, etc.

64

sesor33 t1_j142lmc wrote

Exactly. I'm worried about programmers using ChatGPT because I've seen it produce extremely insecure code.

But here on Reddit I've already seen people say "yeah I copy paste from it so I have more time to slack off!"

10

SIGMA920 t1_j14eyp2 wrote

> But here on Reddit I've already seen people say "yeah I copy paste from it so I have more time to slack off!"

It's almost like Reddit is a bag of items where half the time you get shit and the other half you get gold. /s

3

DragonCz t1_j14q8nk wrote

Or plain wrong code. A friend of mine needed to check whether .NET 6 was installed via the WiX Toolset. His co-worker suggested using ChatGPT. And it just took some code that detects .NET 5 and changed the 5s to 6s. Of course it would never work.

1

hippydipster t1_j1f54uq wrote

Why would developers care about the results of the code they write?

This is the capitalist bargain, where you don't own the fruits of your labor, so naturally, people don't actually care.

1

Flam1ng1cecream t1_j13zqlv wrote

> especially if I'm making the same change in many different places

Have you considered that that might be a code smell?

−3

epic_null t1_j14aaor wrote

Sometimes code is gonna smell.

7

reconrose t1_j14flor wrote

Some people are so beholden to DRY principles that they go into a fury when they hear code was repeated, even if they have zero context around it

4

Flam1ng1cecream t1_j14hwkk wrote

Nah, I repeat code sometimes too. Still sets off alarm bells in my head tho

1

Ok-Welder-4816 t1_j17dfvw wrote

Oh yeah, DRY is always top of mind for me. But only on new stuff, which is only part of my typical workload.

1

Ok-Welder-4816 t1_j17d8yg wrote

Yep, but I removed it for brevity.

In my line of work, we inherit other people's messes and patch them up for extortionate hourly rates. We're the ones you call when the Indian contractor writes a bunch of gobbledegook and then bails on you. Then the focus is more on the highest-impact, lowest-effort items, not "nice-to-haves" like clean code.

2

ohyonghao t1_j12x35o wrote

What could possibly go wrong learning to program from a cesspool of bad practices, with no theory or understanding behind it? Then, while YOU are learning, you let this be the thing that helps you, with no explanation of why this is the correct code to use? Sort of the blind leading the blind.

26

I_ONLY_PLAY_4C_LOAM t1_j15lk8t wrote

This is true of ChatGPT as well. It produces really convincing text, but I asked it a math question that it got confidently and completely wrong. I've heard the same from professional physicists. It's dangerous because it's convincing but also completely wrong.

2

mascachopo t1_j13b3au wrote

From my own experience, I must agree. They are great at producing very simple code or boilerplate stuff you may want to use as a starting point, but an inexperienced developer might miss a lot of the wrong stuff and introduce a myriad of issues. By way of example, it took me 15 iterations to get ChatGPT to implement a relatively simple batch job, at which point I would rather have written it myself from scratch.

16

Alberiman t1_j170z5s wrote

I asked ChatGPT to produce a simple extrapolation method in MATLAB using forward finite differences. It immediately got the implementation wrong, and it took me five minutes of repeated "no, this line should be blah blah blah" before the code was actually usable.

Relying on it is probably not a great idea.
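
For reference, the textbook version of that extrapolation (sketched here in Python rather than MATLAB, and just my take, not the code ChatGPT eventually produced): build the forward-difference table, assume the highest-order difference stays constant, and cascade it back up.

```python
def extrapolate_next(y):
    """Newton forward-difference extrapolation of the next equally
    spaced sample from the samples in y."""
    # Build the forward-difference table, one row per order.
    table = [list(y)]
    while len(table[-1]) > 1:
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    # Assume the highest-order difference repeats, then cascade back
    # up: each row's next entry = its last entry + the entry below it.
    nxt = table[-1][-1]
    for row in reversed(table[:-1]):
        nxt = row[-1] + nxt
    return nxt


print(extrapolate_next([0, 1, 4, 9]))  # 16 -- exact for the squares
```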

1

original_4degrees t1_j12wwdz wrote

Let something else think for me. What could go wrong?

15

antigonemerlin t1_j14lzdi wrote

Except it doesn't even think; it produces a simulacrum of thinking, designed to fool our heuristics without making the necessary deeper connections.

3

[deleted] t1_j14a51r wrote

I use GitHub Copilot constantly.

Producing code that is 30% worse but takes 85% less time to produce is worth it to me.

It also helps you solve bugs faster. Sometimes I legit just write a comment saying "The reason this returns 'Xyz' instead of 'Abc' is because..." and then it finishes it, or "The way to make it do XYZ is:".

Oh, and don't get me started on tests... for projects where I'm not forced to write tests but would still significantly benefit from them, they are a big game-changer.

They take a tiny fraction of the time, which means that I just bang them out instead of putting them off, and then can catch a ton of bugs before they arise.

Bugs are unavoidable; everyone has a backlog of bugs they burn through. Usually people allocate X% of their time to new stuff and Y% to fixing broken stuff.

The net value-add of Copilot, even after bugs etc., is enormous.

And that's only today... I remember playing with GPT-3 and similar models 2-3 years ago, before they could code, and they blew my mind at the time...

Seeing where Copilot is today (being able to solve virtually every algorithm problem I throw at it) is bananas... in 18 months this study will be meaningless.

At this point it's like self-driving cars -- they make mistakes every 10 miles or whatever, but humans also make similar mistakes every XXXX miles, so now it's just about closing the gap.

13

overzealous_dentist t1_j14chak wrote

It's been astounding to me; it's like having the mid-level dev I can order around that I've never had. I will probably never go back to writing all the code myself.

6

[deleted] t1_j14hb26 wrote

Yeah it's amazing... being able to guide the process but not having to keep every little thing straight is excellent.

Programming literally feels like a different activity, like I'm kind of riding a higher-level wave of development.

4

Alberiman t1_j17143l wrote

Well, shit, so is Copilot worth the 10 dollars a month?

2

[deleted] t1_j172s1v wrote

I would personally pay $40 to $50 a month for it, yes.

1

wheat_beer t1_j14u10v wrote

Which language are you using? I've been trying to use it with C++ and I've found the suggestions are terrible. It will call functions with the wrong number of arguments, return values of the wrong type, or call functions/methods that don't exist.

As a source-code completion aid, I also find that it suggests too much code. I just want to auto-complete the current line, but instead it will suggest several lines of code that I'd have to delete.

2

[deleted] t1_j14ywv9 wrote

Python, Node.js, React, HTML, vanilla JS.

My job is mainly Python back-end, my personal business is the JS stack.

For my job for example I had to write a program to do an exponential back-off retry for a flaky 3rd party API, and it nailed it instantly with just a description of what I wanted.
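
Something along these lines, I mean (a minimal Python sketch of the pattern; the function name and the numbers are mine, not what Copilot generated):

```python
import random
import time

import requests


def call_with_backoff(url, max_retries=5, base_delay=1.0):
    """Retry a flaky endpoint with exponential back-off plus jitter."""
    for attempt in range(max_retries):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Wait 1s, 2s, 4s, ... plus jitter so retries don't pile up.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 1))
```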

I definitely believe that it might be good at some languages and bad at others (e.g. I think C++ is used a lot in coding competitions, and I lowkey wonder if something about that may have led to degraded quality of code).

I like the multi-line thing, tbh I wish it did it more.

It's definitely not perfect but intuitively I'd say I'm at ~40% faster (even after correcting for bugs etc.).

1

PMzyox t1_j12z834 wrote

It'll get better. Soon we won't really need to know how to program anymore.

9

FocusedIgnorance t1_j13gwc0 wrote

Eventually, we’ll have tools where you just give the machine the exact specifications of what you want it to do and it’ll just hand you the output.

1

overzealous_dentist t1_j14c3q3 wrote

That's what it does right now; I strongly encourage people to try it out. I spent a few hours last night getting it to write increasingly complex web views, and it was unbelievable.

1

awfulconcoction t1_j13kx9d wrote

People that don't need help are better coders than people who do?

9

alehel t1_j14pcep wrote

"Non-lazy programmers are better than lazy programmers" is probably a better description.

5

Wings1412 t1_j13rznb wrote

Honestly, I don't care about AI assistants for writing code; that's the easy bit... where is my AI assistant for writing documentation?

8

overzealous_dentist t1_j14c05h wrote

It does that too. It also generates thorough tests. It's truly revolutionary and will take over the industry; the gains are just way too high. You just have to be able to review and correct what it outputs, like a senior dev reviewing a PR.

7

MulticolorZebra t1_j15b0dw wrote

Does it really write documentation and tests? I had no idea

2

mormigil t1_j15bz5e wrote

Yes, you can ask it to explain sections of code or write tests for certain functions. I'd think of it as really good at solving coding busywork. If the answer is relatively obvious but tedious to do, then ChatGPT fits perfectly.

2

goldfaux t1_j13qb6h wrote

The only AI that would be amazing is one that could automatically suggest accurate ways to remedy a bug. No, I'm not suggesting it tell me to add a null check where it failed. Go through the code and figure out how a value that should never have been null in the first place, given the business cases, ended up null. Look at my databases, services, etc.

Honestly, it wouldn't be able to without knowing the business logic, so I'm confident that AI won't be replacing me in my lifetime. The AI would have to attend every meeting to determine exactly what the customer wants. Could you imagine how upset the customer would be after telling an AI what they think they want, compared to what the AI produces? This is a real-life problem that happens with people every day.

3

epic_null t1_j14asi1 wrote

... okay, a virtual stack tool would be neat. Like, don't make me run the code to try to reproduce it; start building the state backwards and let me see what it would take to get there.

That sounds less like AI though and more like a comprehensive carefully built tool.

2

antigonemerlin t1_j14mnwn wrote

Instead of terrible developers randomly copying code from StackOverflow with no idea of how it works, they are now copying from ChatGPT, which probably has millions of StackOverflow answers embedded in its training data.

The more things change...

3

gurenkagurenda t1_j13rh7q wrote

I understand why they built their own editor and code completion tool based on Codex, since they wanted to be able to collect detailed data about the editing sessions, but I think doing so raises serious questions about the applicability of their results. They’re ignoring all of the UX design of a real code assistant and focusing only on the underlying model.

For example, including temperature control in the UI is just stacking the cards against the AI group. No sanely designed AI assistant would draw attention to that parameter, and there’s not much reason for it to be user configurable at all. It would be like if you were testing how well drivers performed in cars with the radio on, and you put a big dial in front of the radio group for controlling antenna position. You’re just encouraging them to waste effort on something they don’t know how to adjust properly.
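
For context, temperature just rescales the model's logits before sampling; roughly this, in Python (an illustrative sketch, not OpenAI's implementation):

```python
import math


def softmax_with_temperature(logits, temperature=1.0):
    """T < 1 sharpens the distribution (more deterministic picks);
    T > 1 flattens it (more random picks)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

It's exactly the kind of knob a participant has no principled way to set mid-task.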

1

antigonemerlin t1_j14m35u wrote

Jokes on you, my code is already terrible.

1

Konras t1_j151nnz wrote

So, in other words: "I know what I am doing" vs. "I need someone to tell me what to do."

1

steeltoedpancakes t1_j15igrz wrote

Well, thankfully security is rarely a requirement in take-home coding assessments. You know, the ones they like to throw at you right away in the interview process. Sounds like we just got a tool to fight back against bullshit interview tests. Questions like: can you write FizzBuzz in JavaScript? I don't know, but I bet ChatGPT could do it. Hell, I bet you could copy and paste most prompts into ChatGPT and get halfway decent output.

This is going to be a nightmare for hiring managers and recruiters. They may have to assess the skills of people the old-fashioned way, by actually getting to know someone. The horror...

1

DoofDilla t1_j13w2re wrote

And another clickbait title to farm the current AI buzz.

If you care to look at the study itself: first, N=47, which is not that many.

Second, they had two assignments involving encrypting and decrypting, as well as SQL.

So all they found out is that the AI isn't very secure at encryption stuff (who could have known) and doesn't properly check for SQL injection bugs.
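
For anyone who hasn't seen that class of bug, a minimal Python/sqlite3 illustration (mine, not code from the study):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
user_input = "anyone' OR '1'='1"

# Injectable: user input is spliced straight into the SQL string,
# so the OR clause matches every row in the table.
bad = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: the driver binds the value as a parameter, never as SQL.
good = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```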

Also, in the assignment given to the users, I don't see anything like "make sure it's safe against injection."

Overall, interesting study but bad title.

0

Bunkeryou t1_j13q8b5 wrote

For now. This technology will keep building on itself and become increasingly refined over time.

−2

lexartifex t1_j13piod wrote

Developers are definitely on the chopping block: way too expensive, way too scarce, the job is logic-based, and tons of time is still wasted on small mistakes and on plumbing and unclogging tasks.

−3

Uristqwerty t1_j13sro5 wrote

Developers' key value is their mindset for analyzing problems, and their ability to identify vagueness, contradiction, and mistakes in the given task, go back to the client, and talk through the edge cases and issues. AI might replace code monkeys who never even attempted to improve themselves, but as with every no-/low-code solution, management will quickly find that a) it's harder than it looks, as they don't have the mindset to clearly communicate the task in language the tool understands (this includes using domain-specific business jargon that the AI won't have trained on, or references to concepts prevalent in that specific company's internal email discussions), and b) a dedicated programmer has a time-efficiency bonus that makes it cheaper for them to do the work than a manager, so might as well delegate to the specialist anyway and free up time for managing other aspects of the business.

Thing is, developers are constantly creating new languages and libraries in an attempt to more concisely write their intentions in a way the computer can understand. Dropping back to human grammar loses a ton of specificity, and introduces a new sort of linguistic boilerplate.

2

overzealous_dentist t1_j14cabc wrote

You're talking about maybe 1% of developers here

2

lexartifex t1_j14ekym wrote

I think the number of developers doing less-automatable tasks is probably much greater than 1%, but yeah, the other comment seems to ignore that the human element they describe applies to many industries and professions. It isn't elimination but a "reduction in force" of "bloat" that I am talking about.

1

[deleted] t1_j13bnog wrote

[deleted]

−5

Doom-Slayer t1_j13ins3 wrote

Care to elaborate? I've messed around with ChatGPT and it only had a success rate of about 50% on basic pieces of code I got it to write for my area of expertise.

It would either reference functions that didn't exist (how it managed that, I have no idea), or output values fine... but then I would find that it had performed the calculations wrong. And if the concept was too novel, it was straight-up wrong no matter how many times I repeated myself.

1