Submitted by ouaisouais2_2 t3_y8qysb in singularity
Apollo24_ t1_it2ucvn wrote
Reply to comment by ouaisouais2_2 in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
That is not what you were suggesting in your post at all. You were asking why people don't try to stop AI development, not regulate it.
Anyway, let's suppose that's what you were suggesting. Of course there's nothing wrong with being extra cautious, but regulations on an international scale for this are inherently impossible. Not because of greed or capitalism; AI simply has such huge potential that any country slowing down its own progress would guarantee its economic disadvantage in the future, maybe even its destruction.
You'd probably get some EU countries to agree on such regulations, but that'd just make things worse for those countries later on.
ouaisouais2_2 OP t1_it39a33 wrote
It might not have been very clear, but I said "inhibit or manage".
>Not because of greed or capitalism, AI just has such huge potential, any country slowing down their own progress would assure their economic disadvantage in the future, maybe even their destruction.
That's exactly what I'd call a trademark of capitalism (mixed with the idiocy of warmongering in general). People are too afraid of immediate death or humiliation to step off the road of insanity.