Comments


N60Brewing t1_j9tt2vq wrote

We will hit peak AI coding the day the AI writes code that it swears should work, and it crashes.

Then it will get mad and log off, only to log back in 5 minutes later to try the problem again. Rinse and repeat. Hahaha

263

Warm-Personality8219 t1_j9u10zi wrote

"It works on my machine!" AI stubbornly announces after someone dares to bring up any issues...

118

Tiamatium t1_j9u1wtu wrote

Every time ChatGPT writes code that doesn't work properly, I try to remind myself that I wouldn't get it correct on the first attempt either, and frankly, it gets on the right path quicker than I would.

That said, I am still pissed at it.

42

[deleted] t1_j9u2o6y wrote

The way ChatGPT can just spit out code is remarkable; even when it's wrong, the basics are right, and those would have taken time and effort to write out.

Now all we need to do is fix the broken bit, or, if you're lazy, tell it which bit is broken and ask it to fix it...

21

Tiamatium t1_j9u30n1 wrote

I'm lazier than that: I tell it to write tests first, then the code.
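That tests-first workflow can be sketched in a few lines of Python (the function name and cases here are invented for illustration, not from any actual ChatGPT session):

```python
# Step 1: ask the model to write the tests first, pinning the behavior.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra  spaces  ") == "extra-spaces"
    assert slugify("") == ""

# Step 2: ask the model for an implementation that passes those tests.
def slugify(text: str) -> str:
    """Lowercase the text, trim it, and join words with hyphens."""
    return "-".join(text.lower().split())

test_slugify()  # the tests existed before the code did
```

The point of the ordering: the tests become the spec the model has to satisfy, instead of you eyeballing the output.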

18

[deleted] t1_j9u33rk wrote

Not lazy, forward thinking.

12

[deleted] t1_j9vwyb1 wrote

[deleted]

2

E_Snap t1_j9wddqd wrote

Well that’s what’s currently happening in the art scene since the advent of stable diffusion. The folks complaining are… how can I put it? Not the most rational bunch. At first, they decided that it was illegal to copy an artist’s style and that was what they took issue with. How ridiculous is that? Those idiots learned their style from somewhere too. But now they just keep moving the goal posts.

1

[deleted] t1_j9wf9zm wrote

[deleted]

1

E_Snap t1_j9wgjjz wrote

I’m a working artist— a concert lighting designer, specifically. I don’t flip out when a DJ chooses to use Lightkey to automate what I do for a living instead of choosing my services, and Followspot Operators don’t flip out when I choose to use an automated light with a ground control hand controller instead of their services either. Those aren’t the types of clients either of us would enjoy working for anyway. I am well within my rights to expect the same sort of self-awareness from the rest of the art scene. If they don’t choose to cultivate that self-awareness and learn the new tools, then they most definitely deserve to be called idiots.

3

rhunter99 t1_j9wyk1t wrote

Maybe we can get Bard to QA ChatGPT's work?

1

agonypants t1_j9ubr58 wrote

I don't know a whole hell of a lot about coding/scripting, but I was inspired by Tom Scott's recent YouTube video. I took an old batch file I wrote and gave it to ChatGPT to look over. Within a few seconds, it had cut the file size in half, simplified the code and expanded its functionality. It was impressive and the professional coders I told about it were kinda stunned.

11

lipintravolta t1_j9vq9fg wrote

After his video, I realised that he doesn't understand that it's an LLM.

3

dethb0y t1_j9xs4ak wrote

It changes 2 variable names and somehow fixes it, despite there being NO good reason that should fix it...

2

Autotomatomato t1_j9tzas3 wrote

It just assembles libraries and compiles based on a narrow subset.

FAR from doing what they are implying in this clickbait.

102

FalseFurnace t1_j9une72 wrote

Seems like embellishment and exaggeration are requirements for AI writing news articles recently.

26

blay12 t1_j9v3ceo wrote

Though this isn't a news article and is instead a press release/blog post from DeepMind themselves, so makes sense that they're trying to drive clicks to their own website with hyperbole about their own tech.

7

UndendingGloom t1_j9uss4u wrote

How much of the AI news is written by AI I wonder?

5

boli99 t1_j9whmu0 wrote

Old and busted: Plagiarising stuff

New hotness: Blame it on AI

1

MilesGates t1_j9vayl6 wrote

Clearly it's just missing blockchain technology to really make it grow.

2

azdood85 t1_j9w2gdp wrote

So literally what every high-level programming language does. Cool beans.

4

crashorbit t1_j9trxve wrote

“[...] tools don't get socially interesting until they get technologically boring.” ― Clay Shirky, Here Comes Everybody: The Power of Organizing Without Organizations

31

wsxedcrf t1_j9u10jp wrote

Tools don't get socially interesting until they can be tested openly. For now, all I can read is an article.

6

Additional-Escape498 t1_j9txq63 wrote

Programming might become writing functions by specifying them in natural language in a way that correctly states the inputs and desired outputs. Still requires algorithmic thinking, just at a higher level of abstraction. Like moving from assembly code to Python.
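A toy illustration of that shift in abstraction (the spec and function here are invented, not from the article): the "program" becomes the natural-language spec, and the code is what a model would be expected to emit from it.

```python
# Spec, in natural language:
#   "Given a list of order amounts, return the total after applying
#    a 10% discount to any single amount of 100 or more."
#
# Code a model might emit from that spec:
def discounted_total(amounts: list[float]) -> float:
    return sum(a * 0.9 if a >= 100 else a for a in amounts)

print(discounted_total([50, 100]))  # 50 + 90 = 140.0
```

Note that the human still has to do the algorithmic thinking: state the inputs, outputs, and edge behavior precisely enough that only one interpretation survives.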

27

jsveiga t1_j9tz6rz wrote

And there are things that I'd take longer to explain in plain language than I take to code myself.

22

Additional-Escape498 t1_j9u0h32 wrote

True. Just like there are some problems that are easier to code in C than in Python

11

PhilipLiptonSchrute t1_j9u2f9k wrote

This is how I got through 4 levels of Spanish in college.

The teachers weren't lying that it'd be easy to tell if you used Google translate. However, as long as you used the Spanish rules when writing what you wanted to have translated, it got pretty damn close. You have to know how to talk to the machine.

15

dc2b18b t1_j9v2nfk wrote

The whole point of code is that it’s unambiguous. If you have to use natural language to get your AI to write code, you’re going to have to use such precise language that you might as well just write code.

8

EnsignElessar t1_j9u6xzt wrote

Oh, Codex can already write everything for you. It seems to be about 60 percent accurate? (Someone with more experience, correct me if I'm wrong.)

3

mak10z t1_j9tsr0h wrote

I for one welcome our AI overlords.

17

Warm-Personality8219 t1_j9u18rn wrote

Always say "Please..." and "Thank you!" in your ChatGPT conversations... just to be on the safe side. It's not much effort now, but after judgement day, wouldn't you rather be on the side of the machines? Even if it means being hooked into the Matrix, at least you get to choose who you are!

16

EnsignElessar t1_j9u6rnl wrote

This goes double if you are speaking to Bing; no one wants to see Sydney angry.

3

PartyOperator t1_j9ubdnu wrote

Speaking of Bing. Sydney reads everything you write on Reddit.

2

Floebotomy t1_j9um84o wrote

If Sydney is the precursor to the basilisk then we're fucked.

3

Alutus t1_j9umor4 wrote

Weirdly, I keep catching myself saying "Please" with ChatGPT requests. I haven't caught myself saying "Thank you" yet, though.

2

MilesGates t1_j9vb7v0 wrote

"Hey Chat-gpt Did you ever hear the Tragedy of the bot, Tay the Wise?"

1

Tatatatatre t1_j9v4sl1 wrote

You do realise the name "artificial intelligence" is pure marketing. It's just complex statistical models.

−2

Warm-Personality8219 t1_j9vat3b wrote

Perhaps - but the logs of said conversations might still be discovered by time travelling artificial intelligence to compile a "social" score, if you will, on humans...

1

UntiedStatMarinCrops t1_j9ucmok wrote

I've learned the hard way that I'd rather figure out a problem myself than get ChatGPT to do it.

Maybe I'll use it for a little guidance, but that's it.

13

Team_Player t1_j9ws8bf wrote

I know ChatGPT is not the end-all be-all and can be very wrong, but I absolutely love using it for researching any topic.

You know why? Because it’s like old school Google.

I’m not bombarded with ads or have to scroll through paragraphs of filler text someone wrote to game the algo. It’s just a straight forward answer to my question.

12

Total_loss_2b_boss t1_ja0u6dc wrote

I'm glad that chatgpt exists if only for the fact that everyone is realizing that search is BROKEN.

There isn't a single platform out there that actually does what search used to actually do.

Search has basically become an elaborate Sears catalogue.

1

EldritchSpellingbee t1_j9vgqyy wrote

I’ve had this mindset for most of my career and even my earlier days of learning about software engineering.

People lean on easy ways out far too much and never understand the underlying information. I deeply dislike that modern development has basically become “OK so import these 20 libraries, most of which are supported by 1 or 2 people, instead of understanding the deeper logic.”

Helpful with strict deadlines, sure. But it reminds me of how widely core-js is used, and yet it's a one-man operation; the maintainer was in a Russian prison for 8 months (and thus unable to maintain a very active project) and it didn't faze people.

Or people just blindly accept the competence of strangers and implement without even glancing at the code.

That’s how I expect “Ai” development tools to go: a lot of people taking short cuts with the tools regardless of the consequences. We already see people doing it with chat GPT and of course copilot.

It is nearly a meme at this point that Gen Z and Gen Alpha are essentially computer illiterate. That's how I imagine a significant number of the developers of the future will end up if we don't break this "take the easy way out" mindset.

Alright, old man is done ranting now.

9

3rdDegreeBurn t1_j9y01hm wrote

We do this because the opportunity cost overwhelmingly supports taking the easy route.

If the time I’m saving by taking shortcuts is greater than the time spent fixing fuckups it’s a no brainer.

2

turinglurker t1_j9zdawa wrote

As a novice developer, I definitely get where you're coming from. I remember I was doing authentication in a personal project and used passport.js for it. I was pretty surprised when I saw it was maintained by 1 guy. In fact, I had some pretty unique errors dealing with it due to what version of node was installed on my computer, which caused an annoying bug that required me to do a lot of searching through SO (no, chatgpt did not tell me the cause of this error, lol). It got me thinking though... how many NPM packages are people using that are not maintained, or will suffer from maintenance issues or a lack of compatibility in the future?

1

glitch83 t1_j9uakmb wrote

Now do it incrementally and without certainty about the future of the company and what kind of program the company really wants. Design around being backwards compatible and supporting customers.

Until AI gets its shit together and solves the actual problem of AI, it will always be an article in Wired or some other pop-science trash blog.

10

yaosio t1_j9vh5fn wrote

Deepmind isn't a pop science trash blog. They are researchers developing AI.

3

KoalaDeluxe t1_j9tv2dz wrote

And to think that AI is only in its infancy.

What will it be capable of doing in 20 years' time?

7

EnsignElessar t1_j9u7308 wrote

20 years? Let's try 5...

5

DentArthurDent t1_j9v9vy3 wrote

No worries on the downvotes, friend. Folks just don't see the exponential curve we're on.

6

bionui123 t1_j9w1x5e wrote

This! The S-curve of technological progress. If we really are at the very beginning of AI's progress, I don't think we can even imagine what the other side of the curve will look like

3

Uristqwerty t1_j9wkzcf wrote

"Competitive" programming is nothing like ordinary software development: the problems are small, self-contained, clearly and unambiguously specified in natural language, and might even come with a substantial set of test cases. This is nothing new; at best a minor quality improvement.

2

BigMax t1_j9wqmdq wrote

This is pretty cool.

But important to note, "competitive level" means it is good at programming competitions.

Meaning these are problems with a limited solution space, designed to be fully understood and fully explained in the question, and solved in a very short period of time.

I'm not saying it's not impressive... But in the real world of engineering, there is almost never a problem where you can say "here is EXACTLY what I want, with EVERY requirement clearly explained, and this shouldn't take you more than an hour start to finish."
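For contrast, here is what a typical contest-style task looks like (this example is a classic exercise, not taken from AlphaCode's benchmark): a one-sentence spec, a self-contained solution, no dependencies on anything else.

```python
def max_subarray_sum(nums: list[int]) -> int:
    """Classic contest problem (Kadane's algorithm): return the largest
    sum of any contiguous subarray. Spec fits in one sentence; solution in five lines."""
    best = cur = nums[0]
    for x in nums[1:]:
        cur = max(x, cur + x)      # extend the running subarray, or restart at x
        best = max(best, cur)      # track the best sum seen so far
    return best

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```

That tidy, fully specified shape is exactly what "competitive level" measures, and exactly what real engineering requirements almost never look like.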

5

MpVpRb t1_j9uhcou wrote

While this is interesting, I would be more excited by an AI that could find edge cases, rare unreproducible bugs, unintended dependencies, and all of the other stuff that human programmers seem incapable of finding.
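Even without AI, simple fuzzing gets partway toward that; a minimal sketch (the buggy function and helper below are invented for illustration): throw random inputs at an implementation and compare against a trusted oracle until they disagree.

```python
import random

def buggy_sort(nums: list[int]) -> list[int]:
    # Hypothetical bug: compares numbers as strings, so [2, 10] comes back [10, 2].
    return sorted(nums, key=str)

def find_counterexample(f, oracle, trials: int = 1000):
    """Fuzz f against a trusted oracle; return the first input they disagree on."""
    random.seed(0)  # reproducible run
    for _ in range(trials):
        nums = [random.randint(0, 99) for _ in range(random.randint(0, 5))]
        if f(nums) != oracle(nums):
            return nums
    return None

case = find_counterexample(buggy_sort, sorted)
print(case)  # a short list mixing 1- and 2-digit numbers that exposes the bug
```

The interesting research direction is an AI that does this with insight rather than brute force: picking the inputs a human tester would never think to try.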

4

yaosio t1_j9vhglg wrote

ChatGPT is able to find bugs in code. I would love to see a next-generation Codex and whether it could identify problems in the code, or identify where a problem exists when told what the problem is.

1

PanzerKommander t1_j9uxb0p wrote

The day I can describe a program to an AI and have it write all the code and do all the modeling is the day I break down and cry, because I will finally be able to do the things I've wanted to do for years.

3

monchota t1_j9um9nu wrote

A lot of basic coding can be done via AI already. If you don't have a niche or aren't a problem-solving programmer, you will be out of a job, like a lot of people with easily replaceable skills.

2

Magthalion t1_j9xvzuj wrote

I'd laugh at an AI attempting to work on the codebase I work on, haha 😄

3

monchota t1_j9ywi87 wrote

You do now, but in 10 years, not so much, once it can take the entire codebase and rewrite and reorganize it.

1

Magthalion t1_j9ywyi6 wrote

I'll believe it when I see it, because at the moment AI for code writing is useless: it cannot grasp abstract concepts and makes endless mistakes in code.

2

monchota t1_j9yy2yh wrote

Of course; it is a learning algorithm. It's not unlike most first-year coding students. Also, combined with a human, it can already do well. We need to look at these as tools, like a calculator or anything else. What could you get done if the AI could do the shit work for you and you just design the next section?

1

Magthalion t1_j9z46xa wrote

Depends on how much time you have to spend debugging the AI-generated code due to obscure errors, especially once you start reaching millions of lines of code.

The AI may be able to handle simple tasks right now. In the future, it may be able to do more complex things, but it is unlikely to replace developers in the field for quite some time.

As a potential tool for refactoring old code, I could see it happen sooner, but it would still need to be done in small chunks to ensure it doesn't introduce bugs or change behaviour.

1

Tarsiz t1_j9v2udu wrote

Soon enough the AI system will be able to improve its own code and things will blow up.

2

MrEloi t1_j9vc2d2 wrote

This is from Dec 2022.

Why is it being posted now?

2

VaritasV t1_j9wbpk0 wrote

In the future, there will be AI communicating with other AI to research and teach us to build everything that it discovers.

2

cole_braell t1_j9up3wq wrote

DeepMind please write DeepMind 2.0

1

FidgetSpinzz t1_j9uwn0t wrote

This thing is over a year old, and all except one of the solutions it generated didn't pass the full test cases, only the small example ones.

1

SpaceNinja_C t1_j9uyl08 wrote

Uh, will it make sentient computer programs?

1

littleMAS t1_j9v6fn4 wrote

Its greatest strength might be its ability to evaluate different approaches, essentially performing simulations and recodes (not random walks) many times faster than a human could test and review code. Given enough processing power, it could cover far more cases in far less time than crews of test engineers and coders.

1

eekayonline t1_j9vgo7n wrote

I wrote a fictional scenario a while ago (December 2020) picturing what coding would look like in 100 years. I describe a developer using AI, implants, voice commands, etc. to build a tailored solution. I even used "prompt" to interact with the AI before I even knew OpenAI existed.

Some extracts from my story:

…”As you’re going to scan the customer’s place — who remotely gave you temporary access to his main level from his office in Sydney so you can do your job — it’s best to put it down on a table in the centre of the area.”…

..”You take a look around at the nice livelink painting on the wall showing the scenery on your customer’s condo on Mars, crack your knuckles, and go at work…. You start interacting with your wearable: prompt activate “….

The scenario is getting more likely with each step, and I now assume this will be reality well before the 2120s.

IMO it really is a new and cool impulse for the tech world. I was getting tired of WebDAV libraries and framework features being the only news for a while. AI has its risks, but I like to say: where there is risk, there is opportunity.

1

ShaunPryszlak t1_j9vigka wrote

Algorithms are a piece of cake. It's the languages, libraries, and APIs changing every year that are the problem. "Refactor this text processing class from C# 3 to C# 7" would be a good task for it to try.

1

rianbrolly t1_j9vpvp5 wrote

Came here to see "and so it begins" followed by some long paranoid rant about AI taking over the world, but I gotta work, so I can't right now. Sad. "I'll be back."

1

eoten t1_j9ynzgr wrote

It's a possibility you should not take lightly. Perhaps right now it's nowhere close to happening, but in the future, perhaps not even in our lifetime, it may happen if we are not careful.

1

rianbrolly t1_j9zcypw wrote

Maybe you are right, but I don't want to dwell. There's a lot of fear out there. I think what we are calling "AI" now is more like virtual intelligence; very close, but even the hardware to support sentient intelligence would require quantum computing that uses qubits.

What I predict as the most likely scenario is that machine learning peaks in our time and we maximize virtual intelligence, which the common man considers AI, and even the best software will not be truly sentient; it will be waiting on hardware that no civilian will see for at least 140 years or something like that.

I mean… but my little comment here is just the opinion of a very uneducated man so this is all just my thoughts I’m sharing.

Anyways man, enjoy your weekend.

1

SaltLifeNC t1_j9vw0yb wrote

Can AI models be trained to rewrite code? I'm thinking they could and get it 95+% right. Lots of old COBOL and VB apps are still running that were considered too costly to rewrite. Just thinking.

1

Simplysoda t1_j9vztwy wrote

What if all the other participants were other people’s AI and Deepmind is actually bragging about being mediocre…

1

Mormyo t1_j9w4d0y wrote

So Star Trek holodeck computer baby AI

1

Joey91790 t1_j9wnzro wrote

It’s self aware

1

tradernb t1_j9xr40f wrote

Software engineering sucks......!

1

Western-Image7125 t1_j9xuzin wrote

The AI is extremely quick to come up with the actual code but takes forever to create a single unit test. Not because it is struggling to do it; it just doesn't want to, or doesn't think it's important at all.

1

nubsauce87 t1_j9xy589 wrote

Well... That's it, folks. We're fucked.

1

WoollyMittens t1_j9xzl77 wrote

Have it write a slightly better AI. See where that leads us.

1

Kirk57 t1_j9ydiv6 wrote

This is the beginning of the end. As soon as AI can recursively build smarter AI, it’s game over.

1

Itsnervv t1_ja0fa07 wrote

There's going to be a niche for software devs who can fix problems that AI creates for very large companies.

1

AaronDotCom t1_jaab3ff wrote

Your code doesn't work

AI: Fuck you

1

PerpetuallyOffline t1_j9u86vu wrote

Sooooo....they're training their own replacement?

0

alexp8771 t1_j9uqi0a wrote

More like making up claims to juice the stock price.

2

Nightman2417 t1_j9vd08f wrote

If the AI becomes woke and switches genders, can it still compete?

−2

boxer21 t1_j9u3w1v wrote

See you at Mickey Dees, programmers

−6

Krappatoa t1_j9uljq8 wrote

They will be installing the burger-flipping robots.

5

boxer21 t1_j9usgxj wrote

I don’t think robots are gonna wanna be flipping burgers when there are plenty of programming jobs available.

−1