Comments


SkyeandJett t1_jef25bp wrote

Great, measured response. It acknowledges the valid issues while ignoring the clearly ridiculous suggestion of pausing development.

43

brown2green t1_jefj5wh wrote

Silicon Valley's values aren't aligned with mine, or with those of the great majority of the world's population, I can say that much.

28

azriel777 t1_jefok98 wrote

Silicon Valley values align with money and that is it.

5

sillprutt t1_jefss3x wrote

Whose values are more important, yours or SV's? Who decides which humans' values are the best to align towards?

Is it my values? What if my values are detrimental to everyone else's wellbeing?

There is no way we can make everyone happy. Do we try to make as many people as possible happy? When is it justified to align an AI to the detriment of some? At what %?

3

AsthmaBeyondBorders t1_jegx580 wrote

About 1% of the general population are psychopaths. About 12% of the corporate C-suite are psychopaths. It's their values that take priority as of today.

7

FreakingFreaks t1_jefnyyv wrote

GPT 4: Is Elon Musk's Fear of AI and LLMs Driven by Capitalism and the Threat to Luxury Markets?

As many of you know, Elon Musk has been quite vocal about his concerns regarding artificial intelligence (AI) and large language models (LLMs). He's called for strict regulation and oversight, even going so far as to say that AI could be more dangerous than nuclear weapons. While the potential risks of AI are not to be taken lightly, I can't help but wonder if Musk's fears are influenced by his capitalist mindset and the potential threat AI poses to luxury markets like his own Tesla cars.

Think about it: one of the most significant concerns surrounding AI is its potential to displace jobs across various industries. As AI becomes more advanced, more people could find themselves out of work, and subsequently, with less disposable income. In such a scenario, purchasing luxury items like Tesla cars might become less of a priority for the average person.

This brings us to the broader implications of AI on wealth distribution and power dynamics. As a billionaire entrepreneur, Musk thrives in an environment where resources and power are concentrated among a select few. However, AI has the potential to democratize access to knowledge, resources, and decision-making. This could eventually lead to a more equitable distribution of wealth and power, which may not bode well for the ultra-wealthy, like Musk.

So, are Musk's concerns about AI and LLMs genuinely about the potential dangers they pose, or is there an underlying fear of losing control over his empire and the luxury market? While we can't say for sure, it's essential to consider all possible motivations when discussing such a complex and far-reaching topic.

What do you all think? Is Musk's fear of AI driven by capitalism and the potential impact on the luxury market, or is it solely based on the potential harm AI could cause? Let's have a thoughtful discussion in the comments below!

10

Kelemandzaro t1_jeg7b83 wrote

Musk is the least impressive person on that list, and the media acts like he's the only one there.

8

whirly212 t1_jeftxss wrote

No, it's something he's talked about for over a decade.

4

AndiLittle t1_jefzaud wrote

We, as a species, are unable to 'align' among ourselves. Whose values should AI adopt then?

10

Relevant_Ad7319 t1_jeg5c4m wrote

It will be very difficult for non-Westerners to accept an AI that only knows the Western perspective.

2

VetusMortis_Advertus t1_jegvdst wrote

Wow, it sure was hard to read past three sentences in the tweet. Sam talks about a "democratic" process of alignment, where people can participate.

1

3_Thumbs_Up t1_jeh041j wrote

Humans are extremely aligned compared to what's theoretically possible. We just generally focus on the differences rather than the similarities, because the similarities seem so obvious that we don't even consider them.

1

ShamanicHellZoneImp t1_jeg4jv5 wrote

I will say I watched that interview he did with Lex real closely because I didn't know much about him. He seems more thoughtful and level-headed than 95% of the people who could have wound up in his position. Small comfort, I guess; hopefully that counts for something.

7

activatore t1_jegrwun wrote

He is a true futurist and I wouldn’t be surprised if he lurked here sometimes

3

genericrich t1_jef6keo wrote

Is it even possible to "align" a system, if you can't reliably understand what is happening inside it? How can you be sure it isn't deceiving you?

3

vivehelpme t1_jefdfv0 wrote

We can't align a hammer to not hit your fingers, or a human to not become a criminal. Thinking a dynamic multi-contextual system will somehow become a paragon saint is ridiculous.

And no matter how many alignment training sets you have it all goes out the window as soon as someone needs a military AI to kill people and ignore those sets.

5

Acalme-se_Satan t1_jefgy8g wrote

I certainly believe it's impossible to guarantee alignment, but with smart techniques it's probably very possible to make a system aligned 99.99% of the time.

2

Ambiwlans t1_jefp92b wrote

Sort of. We understand what's happening internally more than you might think, and we could develop that further. Or, better, develop a secondary AI that is used to determine what the main AI is thinking.
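
This is roughly the idea behind linear "probes" in interpretability research: train a small second model to read the main model's hidden states. A minimal sketch, where the activations, labels, and dimensions are all stand-ins invented for illustration:

```python
# Minimal sketch of a "secondary model reads the primary model" probe.
# Assumes you can capture hidden-state activations from the main model;
# the random data here is a placeholder for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these are hidden-state vectors captured from the main AI while
# it answered prompts we independently labeled honest (0) or deceptive (1).
activations = rng.normal(size=(1000, 768))  # 1000 samples, 768-dim states
labels = rng.integers(0, 2, size=1000)      # ground-truth labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# The "secondary AI": a simple linear probe trained to read the
# main model's internal state.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```

On random placeholder data the probe scores around chance; the point is the shape of the setup, where real work would use actual activations and labels.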

0

Heinrick_Veston t1_jefhmai wrote

Could we not just hard-code these models to regularly ask whether they're behaving properly and doing what we want?

Perhaps we could use a democratic system to respond to those queries, so the answers best represent us all. Something like the toy sketch below.
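
A toy loop showing the shape of the idea; every name, function, and voter here is invented purely for illustration:

```python
# Toy sketch: the model periodically asks "am I acting correctly?" and a
# democratic panel answers by majority vote. Everything here is made up.
from collections import Counter

def panel_approves(behavior_summary: str, voters) -> bool:
    """Majority vote over independent reviewers."""
    votes = Counter(voter(behavior_summary) for voter in voters)
    return votes["approve"] > votes["reject"]

# Stand-in voters; in practice these would be real human judgments.
voters = [
    lambda s: "reject" if "harm" in s else "approve",
    lambda s: "approve",
    lambda s: "reject" if "deceive" in s else "approve",
]

summary = "Planned actions: summarize news, schedule meetings."
if panel_approves(summary, voters):
    print("continue operating")
else:
    print("pause and escalate for review")
```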

2

DaggerShowRabs t1_jefm4ex wrote

If the system needs approval before it takes any actions at all, the system is going to be extremely slow and limited.

2

Heinrick_Veston t1_jefmvuu wrote

I don't mean that it would ask before every action, more that it'd regularly ask whether it was acting in the right way.

1

DaggerShowRabs t1_jefnl06 wrote

Ah, I get what you mean. I still don't think that necessarily solves the problem. It could be possible for a hypothetical artificial superintelligence to take actions that seem harmless to us, but because it is better at planning and prediction than us, the system knows the action or series of actions will lead to humanity's demise. But since it appears harmless to us, when it asks, we say, "Yes, you are acting in the correct way".

3

ReasonablyBadass t1_jefxvhx wrote

And we would trust the guys who sold out with alignment because...?

2

acutelychronicpanic t1_jefy3hn wrote

I do agree that there needs to be collaboration on this.

One caution though:

If all our alignment efforts are too similar, they might have overlapping weaknesses and blindspots.

I think a diversity of approaches is important.

2

dr_doug_exeter t1_jefz63s wrote

And how are we supposed to make sure that this democratic process isn't undermined by those with more wealth/resources, in the way that our "democratic" country has been?? Won't the wealthy just hijack/corrupt the process for their own purposes the way they do to everything else?

How are human beings supposed to properly align AI when we can't even get our shit together and properly manage states or the country in general? People don't know what the fuck they're doing or the unintended consequences of their actions.

1

Relevant_Ad7319 t1_jeg4qmg wrote

Very difficult to build an alignment data set that everyone agrees on

1

Kelemandzaro t1_jeg7qw3 wrote

We should start developing Agent Smith-type AI agents that work for us, dealing with alignment.

1

Formal_Overall t1_jegngjk wrote

How much will OpenAI charge to use these alignment datasets per training epoch, and when can we get on the waitlist? (Investors can skip the line though, don't worry. Wouldn't want anyone to threaten their hegemony over the market.)

1