
ChronoPsyche t1_iyldr7d wrote

True, it's a good thing that wasn't what I was arguing. I was pretty clearly talking about pre-singularity AI in the near/medium term. Once/if the singularity happens, all assumptions and predictions go out the window. There are just too many unknown variables to even begin to fathom what the status of our jobs will be, much less if the concept of jobs will even be relevant anymore.

By the way, AI doesn't do software engineering "less than perfectly"; it doesn't do it at all. What's being discussed here is programming snippets of code or very small programs. If you ask it how to build large enterprise applications, it will give you general guidelines you could get off Google, and that's it.

Programming is to software engineering what construction is to civil engineering. The main difference is that software engineers also tend to be programmers. But programming is just a tool for building software; knowing how to code doesn't mean you know anything about how to actually build commercial software applications.

EDIT:

It's so difficult for an AI to do because there simply isn't enough training data for such a task. Beyond the fact that most commercial-grade software applications don't have publicly available repos that can be trained on, much of software engineering produces almost nothing to train on in the first place.

How do you train an AI to engage in requirements gathering, requirements analysis, client communication, requirements-specific design and architecture, testing, deployment, maintenance, etc.? These aren't things that are neatly encapsulated in trainable datasets. It gets even iffier when we're talking about software that needs to follow any sort of regulations, especially safety regulations.

It will be possible eventually, but not until much more general capabilities such as human-level logical reasoning and communication are developed. Basically, software engineering is safe until we have competent AGI. The singularity comes not long after that. (And I say "competent" because nobody is replacing software engineers on large, enterprise-level software applications with an AI that can only do the job poorly.)

5

AsuhoChinami t1_iylfgvz wrote

Was it obvious that you only meant the short term? "While they surely will become more advanced, you always need someone" made it sound as though you meant... well, always.

1

ChronoPsyche t1_iylg7zx wrote

Well, now you know what I meant. No job is safe once/if the singularity happens, but whether the concept of jobs will even matter anymore at that point is anyone's guess. I'd wager it won't. Whether that's because we're all slaves to a master AI or living in utopia is the question.

2

acaexplorers t1_iymkkrn wrote

You don't need some special "singularity" event to happen.

Your job definitely isn't safe. The updates are happening lightning fast, and there are already tons of examples of people posting perfectly usable code.

Bit by bit, software engineers will only need to have conversations in English with an AI to program. So fewer and fewer jobs available. It won't happen all at once, but there will very quickly be WAY less need to hire programmers.

Already, for basic tasks, it makes no sense to hire a programmer. And this is an AI that isn't even connected to the internet.

3