
Ok_Sea_6214 OP t1_ja6oz1f wrote

>The new jobs would probably be the composition, music, directing, and such.

By the time people have made the shift, AI will take that over as well.


>We really need UBI already, because while legislation is slow AI is not.

Indeed. The problem is that there is a much cheaper alternative to UBI, which is to reduce costs, mostly by firing your employees. If a country is a company, then its citizens are its employees, if you catch my drift.

11

FoxlyKei t1_ja6qtuh wrote

So we all just die, I guess? Then who do they sell to?

5

Ok_Sea_6214 OP t1_ja6wb9k wrote

If you own a planet, you don't sell, you just take.

−8

liaisontosuccess t1_ja91wcf wrote

Maybe come up with a way to make the AI pay the consumer for viewing the content the AI produces?

Pit the different AIs against each other so they have to outbid one another?

2

Emory_C t1_ja6w6zz wrote

>By the time people have made the shift, AI will take that over as well.

This is silly. ML is not creative or intelligent. It still needs human direction. What we'll end up seeing is entirely new creative works made by humans with ML software.

5

LordSprinkleman t1_ja7iyfa wrote

I agree with you. But I think it's not impossible for what he's saying to eventually happen. As long as AI can emulate creativity, people won't know the difference. But I guess AI not needing direction in the areas he's talking about is still a long way off. Maybe.

6

Emory_C t1_ja87bt2 wrote

Eventually? Perhaps. But at that point, do you think the AI will even care about making creative content for humans?

It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?

1

AdamAlexanderRies t1_jabsilm wrote

Cognitive power doesn't cause rebellious independence outside of teenagers and Hollywood plot devices. AI designed by anyone who can even spell a-l-i-g-n-m-e-n-t isn't going to start spontaneously deciding what it does and doesn't care about, as if it's reached puberty. Maybe it is very hard to design a loss function aligned with our values, and maybe we only get one chance, but if we make a strong misaligned AGI, I guarantee it won't manifest as meekly as snobbish refusal to cooperate.

Why does GPT care about predicting the next token in a string? Does it philosophize and self-reflect during training to determine if manipulating vectors is what it really wants? Hell no, it just does the math. Only the final trained model is faintly capable of mimicking wetware traits like desire, and it only does that when prompted to.
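That training objective really is just arithmetic. As an illustration (a toy sketch in plain Python with an invented `next_token_loss` helper, not how any real model is implemented), this is the cross-entropy loss a language model minimizes for a single next-token prediction — training only nudges parameters to shrink this number, with no reflection involved:

```python
import math

def next_token_loss(logits, target_index):
    """Cross-entropy loss for one next-token prediction.

    logits: raw scores the model assigns to each vocabulary token.
    target_index: the token that actually came next in the training text.
    """
    # Softmax: turn raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    prob_of_target = exps[target_index] / sum(exps)
    # Negative log-likelihood of the token that actually occurred.
    return -math.log(prob_of_target)

# Toy 4-token vocabulary. The model scored token 2 highest, so the
# loss is small when token 2 really is next, and large otherwise.
loss_good = next_token_loss([0.1, 0.2, 3.0, -1.0], target_index=2)
loss_bad = next_token_loss([0.1, 0.2, 3.0, -1.0], target_index=3)
assert loss_bad > loss_good
```

There is no "wanting" anywhere in that loop — just a number being pushed downward.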

1

Emory_C t1_jadhmbe wrote

We’re talking about two different things. AGI is not machine learning.

2

AdamAlexanderRies t1_jaeqhy5 wrote

Oops! Let's clarify. First, I agree with you that AGI is not machine learning. Here's how I use the terms:

AGI (Artificial General Intelligence) - entity with cognitive abilities equal to or better than any given human, across all domains.

ML (Machine Learning) - this is how modern AI models are trained, typically in the form of neural nets, attention mechanisms, tokenized vectors, and lots of data stirred in a cauldron of TPUs. However we train AGI will be a form of ML (maybe one not developed yet), but the term covers all the ways we've been training models for the last decade or so. Maybe all imaginable AI training techniques are technically ML, but I use it to refer specifically to the tech underlying the recent exciting batch - diffusion image models (Stable Diffusion, DALL-E, Midjourney) and Large Language Models (ChatGPT, New Bing, LaMDA).

Does that work for you?

> at that point, do you think the AI will even care about making creative content for humans?
>
> It’d be like Scorsese deciding to make a movie exclusively for dogs. Why would he?

When you say "the AI" here, what do you mean exactly? What sorts of traits does that kind of AI have?

> ML is not creative or intelligent. It still needs human direction.

Creativity and intelligence are here already, to a limited extent. Generative AIs are creating in the sense that it's not just collage or parroting: the process can handle completely novel combinations of ideas, and its outputs vary to match. It's a worse poet than Shakespeare, a worse historian than a tenured professor, a worse novelist than Tolkien, a worse programmer than Linus, a worse physicist than Einstein, and so on, but it's demonstrating actual intellect in all these domains and more, better than most grade-schoolers and some grown adults.

It does not still need human direction, and that's unrelated to its cognitive powers (creativity, intelligence, etc.) anyway. ChatGPT is an implementation of GPT that requires human direction, but that's a design choice, not an inherent limitation. They wanted a chatbot. If they wanted it to exhibit autonomous behaviour via some complex function to decide for itself what to read, when to reply, and where to post, they could've done that too.

1

NoidoDev t1_ja7duht wrote

You're right, understanding humans and the world is necessary to create a consistent story that resonates with people.

5

Ok_Sea_6214 OP t1_ja6x9y7 wrote

AI is already creating more art and intelligently written articles than I or most other people can. What is your benchmark then, Mozart and Einstein?

I guess this is the classic moving goalpost argument:

"AI isn't good at this."

"Ok but AI isn't better at this than an animal."

"Ok but AI isn't better at this than the average human."

"Ok but AI isn't better at this than the best human."

"Ok but AI uses an unfair advantage."

"Ok but AI isn't good at this other thing."

−12

Emory_C t1_ja6xspd wrote

Nevermind. I see from this comment...

>AGI has existed for several years now, and has reached ASI, I'm confused why people think they'd be told about it.

...that you're delusional.

12

Emory_C t1_ja6xwu2 wrote

Oh! And an anti-vaxxer, too. How charming.

>He's killed 5.5 billion people and counting, most of them just haven't started dying just yet.

8

vernes1978 t1_ja7758c wrote

Try to avoid personal attacks.

3

FTRFNK t1_ja7svsx wrote

Nah, look at this guy's post history he's a fucking nut.

7

vernes1978 t1_ja8fsi0 wrote

Yeah but you want to keep them reading your arguments until THEY run out of arguments, and start the personal attacks.

4

Emory_C t1_ja6xmce wrote

>AI is already creating more art and intelligently written articles than I or most other people can.

AI is not creating art. People are using machine learning algorithms to create art. There's a huge difference.

When there is an actual and true Artificial Intelligence which is creating art (which requires thought and intent) it will be a very different world indeed.

But that would be an AGI -- and I doubt an AGI would even understand the purpose for something as superfluous and silly as art.

10

RavenWolf1 t1_ja6wkmz wrote

But aren't countries societies formed by people?

4

Liberty2012 t1_ja9qf44 wrote

>By the time people have made the shift, AI will take that over as well.

Yes, this is the new rat race. Someone posted elsewhere they had spent the last 3 months working on new AI projects which all became obsolete before they could finish.

3

alan7879 t1_ja9wmde wrote

so done with the races. singularity should take over asap

1

SilentLennie t1_ja805o5 wrote

Let's be very clear: a country has no employees, the people employ the politicians and the bureaucrats and other staff.

2

Ok_Sea_6214 OP t1_jaawwui wrote

Oh yes, just yesterday I fired a policeman for giving me a ticket.

1

CrazyC787 t1_jaaf81z wrote

UBI is never happening lmaooo, keep with the cope.

2

Ok_Sea_6214 OP t1_jab56th wrote

If you say so. I think it will, but the fewer people you have to share with, the more you get.

1

mikestillion t1_jaaboi8 wrote

Does this country you refer to (ours) hate its citizens as much as companies do? Will they “fire” these “employees” by taking their access cards and just marching them out the “door” too?

Maybe not to all, but some? Or just to many? Or just to “employees” of “type X”?

This metaphor has me asking a lot of questions…

1

Ok_Sea_6214 OP t1_jaaif74 wrote

https://m.youtube.com/watch?v=OMDlfNWM1fA

https://m.youtube.com/watch?v=94o-9zR2bew

In the second video you'll notice there's been an edit, where he goes from describing the useless class to the solution of taxing AI and using the money to help people. My guess is they cut out the part where he discusses the fact that there will be no point in retraining anyone if AI does everything.

If you were a horse in the year 1900 you were very useful, but by 1920 most of your job had been replaced by cars and tractors. If at that point a horse really couldn't find new work, then the cost of keeping it alive (food, healthcare, living space) was weighed against the market value of horse meat.

Lucky for us there is no market for human meat, but weigh 8 billion people's (and growing) worth of carbon pollution, food, living space, healthcare, entertainment, voting rights, property rights, and risk of revolution against the value of them not being there... Until very recently in human history, the solution has always been to "fire" them.

1

dasnihil t1_ja7ijd0 wrote

we're fine as long as we're the only sentient entity. machines that surpass our intelligence concern me much more, and i don't care much about jobs, careers, and other societal constructs. this is strictly about what will happen to art as we know it in the future.

only sentient beings are capable of experiencing the qualia of finding art in literally anything. once these machines show signs of sentience, i will re-evaluate this, but till then we're fine. we'll have to re-engineer our societies tho, because almost everything will be automated for sustainability soon.

0