Submitted by PassingTumbleweed t3_10qzlhw in MachineLearning

Every day, there seems to be new evidence of the generalization capabilities of LLMs.

What does this mean for the future role of deep learning experts in academia and business?

It seems like there's a significant chance that skills such as PyTorch and Jax will be displaced by prompt construction and off-the-shelf model APIs, with only a few large institutions working on the DNN itself.
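
To make that concrete, here's a minimal sketch of what I mean by the prompt-plus-API workflow; the client library, model name, and prompt template are illustrative placeholders, not a specific recommendation:

```python
# Hypothetical sketch: solving a classification task by prompting an
# off-the-shelf LLM API instead of training a model in PyTorch/Jax.
# The model name and prompt template are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def classify_sentiment(review: str) -> str:
    prompt = (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # any instruction-tuned completion model
        prompt=prompt,
        max_tokens=1,
        temperature=0,  # deterministic output for classification
    )
    return response.choices[0].text.strip()

print(classify_sentiment("The battery died after two days."))
```

The "model" here is a string template plus an HTTP call, not a training loop, which is exactly the shift I'm asking about.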

Curious to hear others' thoughts on this.

76

Comments


uchi__mata t1_j6srpnd wrote

I don't see prompt construction obviating the need for coding skills. Even as the prompts improve, I still think you're going to want knowledgeable humans reviewing the scripts before using them in critical apps. But I do think tools like GPT will rapidly speed up prototyping and eliminate boilerplate dev for most engineers.

That said, model APIs strike me as a much more likely disruptor of workaday software dev, because as they prove themselves out, it'll just make financial sense for firms to have fewer people creating bespoke models versus pulling stuff off the shelf and modifying it as needed. In this world, data science largely becomes an orchestration task: ML ops/data engineering plus an understanding of business needs and the available data, translated into ML pipelines that solve problems. People working directly on model creation from scratch would mostly be academics and highly skilled CS/stats/math PhDs working at a handful of large tech companies and model API firms. This seems like the most probable future to me, since almost every innovation in tech goes this route eventually.

Basically, if a task doesn't require deep understanding of business needs, it's subject to commoditization.

82

Screye t1_j6tu8mc wrote

> in ten years?

10 years ago was 2012. Deep learning didn't even exist as a field back then.

Tempting as it might be, I'd recommend caution in predicting the future of a field that went from non-existence to near-dominance within its profession in the last 10 years.

49

gdahl t1_j6uc1bh wrote

Deep learning existed as a field in 2012. The speech recognition community had already adopted deep learning by that point. The Brain team at Google already existed. Microsoft, IBM, and Google were all using deep learning. As an academic subfield, researchers started to coalesce around "deep learning" as a brand in 2006, but it certainly was very niche at that point.

23

[deleted] t1_j6uj4vr wrote

[deleted]

4

gdahl t1_j6upct4 wrote

I would say the turning point was when we published the first successful large vocabulary results with deep acoustic models in April 2011, based on work conducted over the summer of 2010. When we published the paper you mention, it was to recognize that these techniques were the new standard in top speech recognition groups.

Regardless, there were deep learning roles in tech companies in 2012, just not very many of them compared to today.

8

PassingTumbleweed OP t1_j6tz0wq wrote

I agree everyone should take predictions with a huge grain of salt (obviously some clever person might find a way to make Open-ChatGPT on mobile... we can only hope). However, this does seem like a conversation worth having, since LLMs appear to be having a massive impact across many areas at once. I already find a lot of the insights here interesting!

2

lmericle t1_j6udkpc wrote

For the last freakin time, LLMs are not the be-all end-all of machine learning...

25

EducationalCicada t1_j6wkdlr wrote

I would even say that neural networks are not the be-all end-all of machine learning.

3

data_wizard_1867 t1_j72a77y wrote

I would even say machine learning is not the be-all end-all of solving problems with data.

1

fastglow t1_j6t6el1 wrote

"DL roles" have only existed for like a decade. Machine learning engineers will continue to be in demand, though the required skills will change.

12

evanthebouncy t1_j6wpf34 wrote

I made a bet in 2019 to _not_ learn any more about how to fiddle with NN architectures. It paid off. Now I just send data to a huggingface API and it figures out the rest.
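
Concretely, that workflow is about this long nowadays; a rough sketch using the `transformers` library (the checkpoint and labels are just illustrative):

```python
# Sketch of the "just send data to huggingface" workflow: an off-the-shelf
# pipeline replaces hand-tuned NN architecture work. The model choice is an
# illustrative assumption; pipeline() handles tokenization and inference.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # stock NLI checkpoint from the Hub
)

result = classifier(
    "The new update drains my battery within hours.",
    candidate_labels=["bug report", "feature request", "praise"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```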

What will change? What are my thoughts?

All well-identified problems become rat races. If there's a metric you can put on a problem, engineers will optimize it away. The comfort of knowing that what you're doing has a well-defined metric is paid for in the anxiety of a rat race where everyone optimizes that same metric.

What do we do with this?

Work on problems that don't have a well-defined metric. Work with people. Work with the real world. Work with things that defy quantification, that are difficult to reduce to a mere number everyone agrees on. That way you have some longevity in the field.

5

ok531441 t1_j6sv32e wrote

There's off-the-shelf stuff now, and we have easy-enough model APIs for a bunch of use cases. I don't know what you expect LLMs to change: be a better autocomplete or better search? Maybe, but that doesn't seem like a fundamental change.

3

Ulfgardleo t1_j6tnxul wrote

I'd wager a guess that most DL applications can't really make use of language models, and the cost of said models makes them infeasible for many use cases.

3

visarga t1_j6u0vyz wrote

I think the road to trusted AI is going to be long; even a great AI is useless unless we can verify that it aligns with our intentions and with the truth. So we are going to see lots of work around that.

3

MemeBox t1_j6wfbl3 wrote

Ha. So all people are useless? The walking, talking AGI that is the human form is completely useless?

1

visarga t1_j6x07jz wrote

I was actually saying the opposite: AIs need human validation to do anything of value. Generating tons of text and images without manually checking them is useless. So there is work to be done around AIs.

3

Cherubin0 t1_j6u6lj8 wrote

LLMs will already be seen as outdated by then.

3

ktpr t1_j6t8124 wrote

It'll look like something that you can't start preparing for right now because a lot of it hasn't been invented yet.

2

neanderthal_math t1_j6v9qoj wrote

OK, I’ll bite. : )

The vast majority of the coding, data ingestion, model discovery, and training that we currently do will go away.

The job will become much more interesting, because researchers will try to understand why certain architectures/training regimes are unable to perform certain tasks. Also, I think the architectures for some fundamental tasks like computer vision and audio are going to become modular. This whole business of training models end-to-end is going to be verboten.

2

gdahl t1_j6ubet7 wrote

Deep learning roles 10 years ago (in 2013) were pretty similar to what they look like now; there are just many more of them today. I'm sure there will be some changes and a proliferation of more entry-level roles and "neural network technician" roles, but it isn't going to be that different.

1

minhrongcon2000 t1_j6vox3u wrote

Maybe a resource-hungry industry that consumes 85% of the world's energy.

1

emotionalfool123 t1_j6w29e1 wrote

It will solve that problem by solving nuclear fusion. Everybody will get energy, as Oprah would say.

2

rePAN6517 t1_j6u13mg wrote

It won't be a job for humans at that point.

−1

MemeBox t1_j6wffzx wrote

In 10 years I'm not sure we will need humans at all, let alone DL specialists. Look at the progress curve: we are a hop, skip, and a jump from an Einstein in every home.

−1

bubudumbdumb t1_j6uux46 wrote

I would expect a lot of work around regulation. Probably formal qualification requirements will emerge for who can tell a legal jury how to interpret the behavior of ML models and the practices of those who develop them. In other words, there will be DL lawyers. Lawyers might get themselves automated out of courtrooms: if that's the case, humans will be involved only in DL trials, and LLMs will settle everything else, from tax fraud to parking tickets. Want to appeal the verdict of an LLM? You need a DL lawyer.

Coding might be automated, but it's really a question of how much good code there is out there to learn from.

Books, movies, music, and VR experiences will be prompted. Maybe even psychoactive substances could be generated and synthesized from prompts (if a DL lawyer signs off on the ML for it). Writing values will change: if words are cheap and attention is scarce, writing in short form is valuable.

The real question is who we are going to be to each other and, even more importantly, to kids up to age 6.

−2