
ouaisouais2_2 OP t1_it1o7kq wrote

Yes, I am proposing that the ***might*** matters more than ***you***, because that ***might*** is vastly more dangerous than the diseases we face today.

It is only a matter of time before AI enables the wildest forms of biological terrorism, which no company could predict. The individual developer isn't necessarily a "bad person", but we should collectively decide to halt these advancements and subject them to collective ethical consideration.

Edit: It is important to note that I don't blame you personally if you happen to run an AI enterprise. These problems are always systemic. I just wanted to know your motivations.

−1

beachmike t1_it2etm9 wrote

I completely disagree. We absolutely SHOULD NOT stop AI advancements and their benefits to mankind because of your hypothetical AI nightmares. All important technologies can be used for great good or great evil: the wheel, fire, nuclear power, computers, as well as AI. We don't "halt advancement" of these technologies because some evil people among us might abuse them.

3

ouaisouais2_2 OP t1_it2ie9o wrote

If "evil people" use ASI to its fullest extent even once, then it won't be an advancement.

Let's say a warmonger or a terrorist (Vladimir Putin, for example) got their hands on this. What would happen?

1

beachmike t1_it3aevx wrote

The same can be said of nuclear weapons. We don't shut down the nuclear energy industry because of the risk of nuclear weapons, even though nuclear reactors can produce material used to make them.

1

ouaisouais2_2 OP t1_it3y0lf wrote

We should also have waited a while before building those, but the Cold War got in the way. We avoided absolute calamity multiple times by sheer luck.

We could abolish the reactors and weapons that exist. That would require a lot of collaboration, mutual surveillance between countries, and more green energy. It's very, very ambitious, but if it succeeded, nuclear war would become an impossibility.

AI and ASI are different because they're fuelled by easily available materials: code and electricity. That puts mass destruction or mass manipulation within reach of much smaller groups. Not only nation states can join in, but also companies, cults, advocacy groups, and maybe even individuals.

So either we spend a fortune on spooky, oppressive surveillance systems to ensure nobody is using it dangerously, or we negotiate how to use it right: in some places, at certain times, in certain ways, as we slowly come to understand it more and more.

It would be great if we, as an international society, could approach AI, and especially ASI, extremely carefully. It is, after all, the final chapter of history as we know it.

1

beachmike t1_it6jo26 wrote

I totally disagree. Nuclear energy is a clean and very safe way to meet our energy needs. The last thing we should do is abolish nuclear reactors.

You say ASI is "different" because it's fueled with "easily available materials, code, and electricity." OK, build one.

1

ouaisouais2_2 OP t1_it8bole wrote

I presented ASI as a technology that is extremely risky to invent; you then brought up nuclear reactors in what seemed like an attempt to disprove me by saying "we use risky technology all the time and things work out anyway". Now you claim nuclear reactors are close to risk-free, which makes the comparison irrelevant. It would have been easier to just say you don't think ASI is that risky.

>OK, build one.

I didn't say it would be easy to build one. But once somebody builds it, it can easily be distributed and run by anyone who happens to own enough computing power.

Secondly, are you interested in gaining anything from this exchange, or are you just trying to slam-dunk on an idiot? You seem to be permanently stuck in keyboard-warrior mode.

1