ExtraFun4319

ExtraFun4319 t1_je2nmcu wrote

>There is some serious cope going on in programming subs

There's cope going on in this sub, too. "AGI 2023!" is clearly cope to me, cope that comes from people who desperately want AI to rescue them ASAP.

And the fact that no serious AI scientist (or any AI scientist) believes such a thing (AFAIK) only bolsters my view.

3

ExtraFun4319 t1_jclafdx wrote

I REALLY hope you're wrong. What a terrifying future! And you will be, because in case you didn't know, not all people are selfish sociopaths; most actually care enough about their friends, partners, families, and humanity in general to keep spending time with them, even in the face of advanced chatbots. Imagine thinking that couples are just gonna break up or divorce, or that best friends are going to stop seeing each other.

And some of you guys LOOK forward to this?!?!?! How lonely and anti-social must one be to have no problem with this?

3

ExtraFun4319 t1_j9ijmok wrote

As someone who actually works in a related field, is pretty familiar with the actual field of AI itself, and has met and known people with all sorts of work backgrounds (which has given me insight into many fields), I am extremely doubtful that ChatGPT (in any capacity) will result in major layoffs.

The technology just isn't there, and I don't see it getting there (to the level where it'd cause the economic damage you're describing) anytime soon.

9

ExtraFun4319 t1_j6jk3xf wrote

This restaurant isn't even close to being fully automated yet, so deriving that conclusion from this post strikes me as a bit odd.

I think those numbers might rise in the next few years (think somewhat less than Covid numbers at most), but I'm highly skeptical there'll be this unemployment crisis that you're describing at some point this decade.

AI and robotics have indeed made significant progress over the past few years (especially AI), but the technology today is still nowhere near capable of performing the entirety of a large chunk of the workforce's jobs; fully replacing an employee's complete set of tasks is a much higher bar than merely augmenting the employee. And even the augmentation era has yet to fully get underway (although we're obviously seeing early signs of that with ChatGPT and the like).

And that's not even taking into consideration that mass adoption of new technology takes a good while and in many cases has to go through legal hurdles before being adopted at all.

3

ExtraFun4319 t1_j6ex9nu wrote

>In the meetings, Altman told policymakers that OpenAI is on the path to creating “artificial general intelligence,”

If they get there, it won't be as a private company.

Why do I think this? Personally, I believe it's painfully obvious that once private AI organizations come anywhere near something resembling AGI, they'll get taken over/nationalized by their respective national governments/armed forces. OpenAI won't be an exception.

There is absolutely no reason why the US government/military would just sit there and watch a tiny group of private citizens create something that dwarfs the power of nuclear weapons.

And no, I doubt the average US senator is up to date with what is happening in AI, but I'm almost positive that there are people in the government/military who are keeping a close eye on progress in this field, and I have no doubt that the gov/military will pounce when the time is right (assuming that time ever arrives).

Ballsy of Altman to tell lawmakers to their faces that they're on the path to creating something that would potentially eclipse their own power. But like I said, I highly, highly doubt that that will ever be the case.

39

ExtraFun4319 t1_iuzmqfa wrote

I assume there are people in government who keep up with tech progress (military leaders, DARPA scientists, etc.), so they would have a good idea of when the time for nationalization/confiscation would be right and would relay this belief to the lawmakers/head of state so that they could take action.

In any case, I expect politicians to raise their interest in AI as the technology advances, and future senators may not even need a heads up to introduce legislation to make AI-researching companies state owned.

3

ExtraFun4319 t1_iuzj858 wrote

I strongly believe that if world governments suspected that private organizations within their jurisdiction were close to solving AGI they'd either nationalize them or confiscate their technology/algorithms, etc.

There is NO WAY any government would just stay idle and allow a private company to develop a technology much more powerful than nukes. That's why I find it funny when I see people say that Google or Meta or some other company will be the first to achieve AGI. Maybe they will, but as a state-owned enterprise under the control of their government, not under the control of their CEOs/boards.

7

ExtraFun4319 t1_iustvpq wrote

>so basic income has to be implemented as soon as possible.

And if all jobs don't become obsolete this decade (which is an extreme take)?

Though I will admit that UBI should be a thing regardless of how many people are employed. Nobody should have to work to survive.

14

ExtraFun4319 t1_iuj390m wrote

That is one hell of an extreme position to take, considering that the US unemployment rate today is 3.5%. And while technology has made significant progress over the past few years, especially in AI, it is still nowhere near capable of performing the entirety of a large chunk of the workforce's jobs (the thing that would actually create the need for UBI); fully replacing an employee's complete set of tasks is a much higher bar than merely augmenting the employee.

And that's not even taking into consideration that mass adoption of new technology takes a good while and in many cases has to go through legal hurdles before being adopted at all.

1