ouaisouais2_2 t1_j6f0sio wrote
Reply to Acceleration is the only way by practical_ussy
I think there are a number of things wrong with this reasoning. I can point them out if you ask me to. Otherwise, thank you for a long post which clearly had effort and thought put into it.
ouaisouais2_2 t1_j0myebu wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
the richest miners are not those who find the ore first. the richest are those who follow the ones who found it first
ouaisouais2_2 OP t1_itageg9 wrote
Reply to comment by beachmike in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
👍
ouaisouais2_2 OP t1_it8bole wrote
Reply to comment by beachmike in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
I was presenting ASI as a technology that is extremely risky to invent; you then brought up nuclear reactors in what seemed to be an attempt to disprove me by saying "we use risky technology all the time but things work out anyway". Now you claim nuclear reactors are close to risk-free, which makes the comparison irrelevant. It would have been easier to just say you don't think ASI is that risky.
>OK, build one.
I didn't say it was easy to build one, but once it is built by somebody, it can easily be distributed and run by anyone who happens to own enough computing power.
Secondly, are you interested in gaining knowledge from this exchange or are you trying to slam-dunk on an idiot? You seem to be in keyboard warrior mode all the time.
ouaisouais2_2 OP t1_it6ey67 wrote
Reply to comment by Gilded-Mongoose in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
ok. that's all good, although i regret the lack of critique and concern
ouaisouais2_2 OP t1_it65jab wrote
Reply to comment by Gilded-Mongoose in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
what are you guys doing on this subreddit if not asking that question lmao
ouaisouais2_2 OP t1_it3y0lf wrote
Reply to comment by beachmike in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
We should also have waited a while before we built that, but the Cold War was in the way. We avoided absolute calamity multiple times by sheer luck.
We could abolish the reactors and the weapons that exist, which would require a lot of collaboration, surveillance between countries, and more green energy. It's very, very ambitious, but if it succeeded, nuclear war would be an impossibility.
AI and ASI are different because they're fuelled by easily available materials: code and electricity. That gives many smaller groups the capacity for mass destruction or mass manipulation, which means not only nation states can join in, but also companies, cults, advocacy groups and maybe even individuals.
So either we spend a fortune on spooky, oppressive surveillance systems to ensure nobody's using it dangerously, or we negotiate on how to use it right: in some places, at certain times, in certain ways, as we slowly understand it more and more.
It'd be great if we as an international society could approach AI, especially ASI, extremely carefully. It is, after all, the final chapter of History as we know it.
ouaisouais2_2 OP t1_it39a33 wrote
Reply to comment by Apollo24_ in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
It might not have been very clear, but I said: "inhibit or manage".
>Not because of greed or capitalism, AI just has such huge potential, any country slowing down their own progress would assure their economic disadvantage in the future, maybe even their destruction.
That's exactly what I'd call a hallmark of capitalism (mixed with the idiocy of warmongering in general). People are too afraid of immediate death or humiliation to step off a road of insanity.
ouaisouais2_2 OP t1_it3615d wrote
Reply to comment by Rogue_Moon_Boy in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>Pretty much every new technology ever in history was doomed as the end of the world initially.
I doubt that people literally predicted the extinction of humanity, or dystopias in all the colors of the rainbow. Besides, even if they did, that's no reason not to take serious predictions seriously.
We know there is a category of risk that only becomes possible with ASI or the wide application of narrow AI. We know it can get unfathomably bad in numerous ways. We know it can only get unfathomably good in relatively few ways. And it's highly uncertain how likely each of those outcomes is.
It's only reasonable to be more patient and to spend more time researching what risks we're accepting and how to lower them. At least over the extremely long term, I think that's the most reasonable course.
ouaisouais2_2 OP t1_it2lxdx wrote
Reply to comment by Apollo24_ in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
I'm suggesting that we slow it down, put it through more law-enforced security checks and make its application a major political subject, preferably on an international scale.
>Sure then, let's ban knifes as they can be used as weapons by irresponsible people.
No, that doesn't make sense. What does make sense is not selling atomic bombs to profit-hungry CEOs, terrorists or schizophrenic idiots.
ouaisouais2_2 OP t1_it2ie9o wrote
Reply to comment by beachmike in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
If "evil people" use ASI to its fullest extent even once, then it won't be an advancement.
Let's say a warmonger or a terrorist (Vladimir Putin, for example) got their hands on this. What would happen?
ouaisouais2_2 OP t1_it20avp wrote
Reply to comment by Effective-Sir7388 in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
how is this poverty defined?
ouaisouais2_2 OP t1_it1zzec wrote
Reply to comment by Apollo24_ in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
You aren't "solving life's problems" if you make a super powerful tool that is going to be used by irresponsible people whether you want them to or not.
ouaisouais2_2 OP t1_it1yqb9 wrote
Reply to comment by digitalthiccness in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
"bring back dead relatives". the authenticity of the relationship is completely broken at that point. hopefully, people don't do it for more than asking a question.
ouaisouais2_2 OP t1_it1yfud wrote
Reply to comment by Rogue_Moon_Boy in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
By "high-technology", I primarily meant AI. I admit that the term was a bit of a stretch.
I think, however, that you continue to underestimate the chaotic danger and uncertainty of the situation when it comes to AI.
Poverty, education and medical treatment are but rough proxies for well-being.
>Misery is just vastly overreported, because again, it generates more clicks.
... as it should be, generally. Pain and anxiety are far more important for human survival than pleasure and reassurance.
ouaisouais2_2 OP t1_it1q0dm wrote
Reply to comment by Quealdlor in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>Do you want to do things by yourself for the rest of your life or do you prefer robots and computers taking care of (at least some of) them?
No, I don't, but I wish we'd have more democratic ethical consideration when going into these things, so that we don't pull a black ball.
Also, I think slaves and serfs are mostly needed to keep an empire together in times of war. If we stop wars and the worst forms of economic exploitation, we might all be able to work without slave-like conditions. With lives like that, people will have more time to consider the changes they make to society.
ouaisouais2_2 OP t1_it1o7kq wrote
Reply to comment by beachmike in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
yes, I am proposing that the ***might*** is more important than ***you***, because the ***might*** is absolutely, ridiculously more dangerous than the diseases we currently face.
It is only a matter of time before AI allows for the wildest forms of biological terrorism, which no single company could predict. The individual developer isn't necessarily a "bad person", but we should collectively decide to halt the advancements and subject them to collective ethical consideration.
Edit: It is important to note that I don't blame you personally if you happen to run an AI enterprise. The problems are always systemic. I just wanted to know your motivations.
ouaisouais2_2 OP t1_it1n9d4 wrote
Reply to comment by Mortal-Region in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>"They simply don't think that'll happen. And history backs them up."
We've been replacing our strength with tools, our motor skills with machines, and now our brains with AI. I see no reason for there to be "jobs" in around 50 years. The only activity humans will need to do, given that they control the tools they have created, is to state their wishes, and I'm not so sure everyone will be allowed to have wishes.
>"Technological advancement has lead to enormous reductions in poverty."
I don't know what your definition of poverty is, but I have the impression that the ratio between the aristocratic 0.1%, the semi-comfortable middle class of 9.9%, and the 90% who are overexploited into misery has stayed roughly the same since the dawn of civilization. We have simply been able to make more people.
These two unhealthy patterns are likely to express themselves in the singularity in morbid and unpredictable ways. That is, if they aren't reversed.
TL;DR: how can so many in this subreddit be so nauseatingly positive about high-technology? Excuse the harsh words, but that's what I think.
Submitted by ouaisouais2_2 t3_y8qysb in singularity
ouaisouais2_2 t1_j6h83to wrote
Reply to comment by practical_ussy in Acceleration is the only way by practical_ussy
>These economic systems have ranged from simple hunter-gather societies to globally interconnected ones and although the differences might seem stark, since the very start humanity has always been interconnected and has been a global society.
Hunter-gatherer societies were definitely not a global society. Most couldn't even cross their own subcontinent, except by an extremely risky boat journey.
>all things must evolve for them to continue to exist in the universe. This law is universal at every level of the universe.
I've never heard of such a law, and it seems entirely made up. I might even argue that something isn't "the same thing" anymore once it evolves.
>Because everything must evolve it must adapt and this includes humans and its meta information.
I'm sorry, but the paragraph following this line makes me want to say "Jesse, what the fuck are you talking about?". I don't know if it's me who doesn't get it or if it's just poorly written.
>The point is that all information structures evolve and that includes human societies.
I don't disagree that societies recognized as human have evolved, but human societies might be more than information structures. I don't think it has been definitively decided whether every physical entity can be reduced to the concept of information, especially if you take subjective experience (qualia) into consideration.
>Technology as we think of it can be boiled down to a tool. A tool that optimizes something in the universe to accomplish some task. We like to think that our tools don’t control us and this is actually true at the local level but at the meat level technology controls everything because it is the form of information that can optimize itself at a speed biology and chemistry cannot.
Technology does indeed NOT control us, but humans control each other by threatening to destroy those who don't use it or make more of it. Technological development is therefore necessary for survival, but only in our global society as we know it. You could largely escape this dynamic by means of some grand revolution or world federalism.
>This is because capitalism is the system that leads to technology to faster and faster progress
I don't think so. There are a lot of theoretically possible societies that would seem very non-capitalist yet have furious technological development. Capitalism was fitting in its historical context: it allowed a lot of people to be united under the same country and enabled technological development, but it's also an imperfect compromise. The workers are relatively satisfied by being able to vote; the rich are satisfied by, well... being rich. It might not even be the most competitive system for its time in history, yet it was the only one that had a reasonable chance of appearing.
>We deserve capitalism not because of some moral consequence but because that is who we are as a species. Our purpose is to be another node in the technology evolution tree.
Might be your purpose, not mine :D
>We deserve because we selfishly refuse to die out and will continue to improve technology because without it we cannot exist.
You're making some overly generalizing metaphysical claims here.
>We cannot exist without technology and it cannot exist without us. We will follow the trees path to acceleration .
Seems like this was some kind of love letter to technology and capitalism. Few points were made other than that our relationship with technology "is meant to be" or something. All in all, not very interesting now that I've read it a second time.