
AdditionalPizza OP t1_it6o96e wrote

They may not be the be-all and end-all, though they sure look like a very significant step at the very least.

But as I've said in the comments before, this post is about the time before AGI. We don't need AGI to see massive disruptions in society. I believe LLMs are the way we will get there, but language models are already "good enough" to increase productivity across enough IT sectors that we will start seeing some really big changes soon.

Advancements like this are going to lead to more powerful LLMs too. I highly suggest reading this article from DeepMind, as the implications are important.

4

ftc1234 t1_it7c7j6 wrote

The problem is often the last-mile issue. Say you use LLMs to generate a T-shirt style or a customer service response. Can you verify correctness? Can you verify that the response is acceptable (e.g., not offensive)? Can you ensure that it isn't biased in its response? Can you make sure it's not misused by bad actors?

You can’t represent all that with just patterns. You need reasoning. LLMs are still a tool to be exercised with caution by a human operator. They can dramatically increase the output of a human operator, but their limitations are such that they’re still bound by the throughput of the human operator.

The problems we have with AI are akin to the problems we have with the internet. The internet was born and adopted in a hurry, but it had so many side effects (e.g., the dark web, cyber attacks, exponential social convergence, a conduit for bad actors, etc.). We aren’t anywhere close to solving those side effects. LLMs are still so limited in their capabilities. I hope society will choose to be thoughtful in deploying them in production.

2

AdditionalPizza OP t1_it7dt3m wrote

All I can really say is that issues like those are being worked on as we speak and have been since inception. Assuming it will take years and years to solve some of them is exactly what I'm proposing we question a little more.

But I'm also not advocating that fully automated systems will replace all humans in a year. I'm saying a lot of humans won't be useful at their current jobs when an overseen AI replaces them, and their skill level won't be able to advance quickly enough in other fields to keep up, rendering them unemployed.

3

ftc1234 t1_it7f3se wrote

I am postulating something in the opposite direction of your thesis. The limitations of LLMs and modern AI are such that the best they can do is enhance human productivity. That's not enough to replace it. So we’ll see a general improvement in the quality of human output, but I don’t foresee large-scale unemployment anytime soon. There may be a shift in the employment workforce (e.g., a car mechanic may be forced to close shop and operate alongside robots at the Tesla gigafactory), but large-scale replacement of human labor will take a lot more advancement in AI. And I have doubts whether society will even accept such a situation.

2

AdditionalPizza OP t1_it7hczg wrote

Yeah we have totally opposite opinions haha. I mean we have the same foundation, but we go different directions.

I believe increasing human productivity with AI will undoubtedly accelerate the rate at which we achieve more capable AI, and then the cycle continues until the human factor is unnecessary.

While I'm not advocating full automation of all jobs right away, I am saying there's a bottom rung of the ladder that will be removed, and when there are only so many rungs, eventually the ladder won't work. As in, chunks of corporations will be automated and there won't be enough jobs elsewhere to absorb the majority of the unemployed.

2