Submitted by razorbeamz t3_zf0q7a in singularity
crua9 t1_iz9xt9h wrote
Reply to comment by HeinrichTheWolf_17 in What do you think of all the recent very vocal detractors of AI generated art? by razorbeamz
^
This
See, back in the day people were literally anti-writing because they thought it would make people forgetful. People were anti-car, anti-TV, and so on. People were anti-books, computers, internet, and now crypto. And like with everything else, there are people who are anti-AI.
Do they have worries? Yes. But are they founded? Not really. They think AI will kill us, but they never ask why. Like, do you go out of your way to kill random bugs and germs for no reason? Same here.
Now, should people worry about it taking their jobs? Yes. But that is a good thing. There needed to be an economic shift. It hasn't happened yet, but there need to be changes. Too many hard-working people can't really survive on what they have, and they've basically been turned into lifetime slaves. AI is likely to fix this.
freeman_joe t1_izbb373 wrote
We need UBI period.
crua9 t1_izbcci2 wrote
UBI, UBS, and UBN
Cryptonasty t1_izdgn71 wrote
Paired with a LVT
Steel_Waffles69 t1_izahn3e wrote
Very well said. I’m optimistic.
KidKilobyte t1_izc5evg wrote
Nick Bostrom would like a word with you.
I will include the following quote from Contact:
We pose no threat to them. It would be like us going out of our way to destroy a few microbes on some ant hill in Africa.
Interesting analogy. And how guilty would we feel if we went and destroyed a few microbes on an ant hill in Africa?
Commercial-Phrase-37 t1_izca465 wrote
AI will likely be used to concentrate wealth more effectively.
ninjasaid13 t1_izcmcr6 wrote
more effectively towards the rich.
any1particular t1_izakii8 wrote
^ This is the way.
mootcat t1_izh3vne wrote
It isn't that our demise is particularly desired, it's that it is ultimately an inconsequential side effect of AI exponentially scaling an objective.
Max Tegmark (I believe) compares it to us worrying about destroying an ant colony while constructing a highway. It isn't even a consideration.
crua9 t1_izhiomb wrote
Here is a back and forth:
Person A: Max Tegmark (I believe) compares it to us worrying about destroying an ant colony while constructing a highway. It isn't even a consideration.
Person B: Do you like your fridge or should we go back to ice boxes? Keep in mind fridges save lives because you can store medical stuff.
Person A: wants fridges over ice boxes
Person B: the biggest industry in the world and history was the ice industry. What killed it was the fridge.
So pick: kill the biggest industry humans have ever known, but in return countless people can get medical supplies, food can go to more places, and so on. Or keep that industry and all the people working in it employed, but have everyone who is alive today thanks to the fridge be dead.
There are always outcomes to every choice, sometimes good and sometimes bad. But a simple risk assessment shows way more lives saved and way more good coming with AGI. And like the fridge, even if you delay it, it will still come out at some point.
mootcat t1_izhwgxu wrote
Are you not aware of the existential risk that AGI/superintelligence poses?
I'm obviously pro AI, but it's also the greatest risk to humanity and all of life.
crua9 t1_izhwoib wrote
Yeah, it is a risk. But this is my viewpoint:
- It makes our life better (good)
- It doesn't really change anything (whatever)
- It makes things worse (well I guess now is a good time to die)
- It kills us all (we all die one day anyways, and it isn't like my life is getting better)