agorathird t1_jebonvf wrote

>This letter is basically the equivalent of the early-20th-century petition by scientists that asked to limit and regulate the proliferation of nuclear weapons. And yet, it's being sold as a capitalist stratagem to gain time.

Oh, if this is what the media is saying, then they're right for once. Capitalist gain, trying to buy more time to milk their accolades, whatever.

1

BigZaddyZ3 t1_jebvn3k wrote

Isn’t rushing a potentially society-destroying technology out the door, with no consideration for its future impact on humanity, also a very capitalist approach? If not more so? Seems like a “damned if you do, damned if you don’t” situation to me.

−2

agorathird t1_jebwpvf wrote

There's consideration from the people working on these machines. The outsiders and theorists who whine all day saying otherwise are delusional, not to mention the armchair 'alignment experts'.

Also, we live in a capitalist society. You can frame anything as the capitalist approach, but I don't think that framing gets at the core of the issue here.

Let's say we get a total six-month pause (somehow), and then a decade-long pause because no amount of reasonable discussion will make sealions happy. Great, now we get to fight climate change with spoons and sticks.

−3

BigZaddyZ3 t1_jebxmik wrote

Yeah… because plastic manufacturers totally considered the ramifications of what they were doing to the world, right? All those companies that were destroying the ozone layer totally took that into consideration before releasing their climate-destroying products to market, right? Cigarette manufacturers totally knew they were selling cancer to their unsuspecting consumers when they first put their products on the market, right? Social media companies totally knew their products would be disastrous for young people’s mental health, right? Get real, buddy.

Just because someone is developing a product doesn’t mean they have a full grasp of the consequences of releasing said product. For someone who seems so against capitalism, you sure put a large amount of faith in certain capitalists…

2

agorathird t1_jebykbw wrote

Suuure, that would track, if the businessmen running these companies weren't also the scientists developing the tech, lol. AI companies have the best of three worlds when it comes to the people at the helm. Also, social media is just an amplifier of the world we already live in. Most tech is neutral; thinking otherwise is silly. But I still don't think the example is comparable.

I'm not against capitalism. I love markets and stopped considering communism a long time ago, as most of its proponents conflict with my love for individualism. If you're a communist, then how do you not know the difference between a company's management and its developers?

1

BigZaddyZ3 t1_jec0gun wrote

I never said I was a communist… Your first comment had a heavy “anti-capitalist” tone to it.

And lol if you think AI companies are somehow immune to the pitfalls of greed and haste. You’re stuck in la-la land if you think that, pal. How exactly do you explain guys like Sam Altman (senior executive at OpenAI) saying that even OpenAI was a bit scared about the consequences?

1

agorathird t1_jec1fq5 wrote

I never said any of that. I just don't think sci-fi doomsday is what's being incentivized, especially if you have all the data in the world for prediction. But alas, no amount of discussion or internal risk analysis will satisfy some people.

Being scared doesn't mean you think you're incapable. Even so, I think Sam Altman tends not to put on a disagreeable face. Your public face should be "I'm a bit scared," so as not to rock the boat. Being sure of yourself can ironically create more alarmism.

This whole discussion is pointless though. The genie is out of the bottle; I'll probably get what I want, and you probably won't. The train continues.

0

blueSGL t1_jecv6ta wrote

> There's consideration from the people working on these machines.

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

>In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems.

If half the engineers who designed a plane told you there was a 10% chance it would drop out of the sky, would you ride it?

edit: as for the people from the survey:

> Population

> We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

0

agorathird t1_jecwk6a wrote

What does 'behind' mean? If it's not coming from someone who holistically knows how each arm of the company functions, then they're still working with incomplete information. Letting everyone know your safety protocols is an easy way for them to be exploited.

My criteria for what counts as a 'leading artificial intelligence company' are quite strict. If you're some random senior dev at Numenta, then I don't care. A lot of people who work around ML think themselves far more impactful and important than they actually are. (See: Eliezer Yudkowsky)

Edit: Starting to comb through the participants and a lot of them look like randoms so far.

This is more like getting random engineers (some just professors) who've worked on planes before (maybe) and asking them to judge specifications they're completely in the dark about. It could be the safest plane known to man.

Edit 2: Participant Jongheon Jeong is literally just a PhD student who appears to have a few citations to his name.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments.]

1

blueSGL t1_jecxney wrote

>a lot of them look like randoms so far.

...

>Population

>We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021.

I mean, exactly who do you want to tell you these things? I can pull quotes from people at OpenAI saying they're worried about what might be coming in the future.

−1

agorathird t1_jecybw0 wrote

>who published at the conferences NeurIPS or ICML in 2021.

Who? Conferences are a meme. Also, they still don't know about the internal workings of any of the companies that matter.

>I mean just exactly who do you want to tell you these things. I can pull quotes from people at OpenAI saying they are worried what might be coming in future.

Already addressed this with another commenter: no matter how capable they are, it freaks people out less if they appear concerned.

One of the participants is legit just a PhD student; I'm sorry, but I don't find your study credible.

[Got blocked :( Please don't spread disinformation if you can! I see you've linked that study a lot for arguments.]

2