Mindrust
Mindrust t1_je87i5p wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>since he's almost certainly wrong you get utopia
It's good to be optimistic but what do you base this claim off of?
We don't know exactly what is going to happen with ASI, but both the orthogonality thesis and the instrumental convergence thesis are very compelling. When you take those two into account, it's hard to imagine any scenario that isn't catastrophic if value alignment isn't implemented from the start.
Mindrust t1_je86s4y wrote
Reply to comment by MichaelsSocks in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
>I'll take a 50% chance of paradise
That's pretty damn optimistic, considering Yudkowsky estimates a 90% chance of extinction if we continue on our current course.
>Issues like climate change are actually a threat to our species, and it's an issue that will never be solved by humans alone
I don't see why narrow AI couldn't be trained to solve specific issues.
Mindrust t1_jdrugqb wrote
Reply to comment by Low-Restaurant3504 in You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills - Yuval Harari on threats to humanity posed by AI by izumi3682
No, like the very real possibility of s-risks
Mindrust t1_j9mc5he wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
Because no one knows whether takeoff will be hard or soft.
The gap between AGI and ASI could be anything from several years to decades.
Mindrust t1_j4rn3j8 wrote
Reply to comment by SlouchyGuy in Inside an insect farm: Are mealworms a sustainable meat alternative? by vpuetf
The difference is that those things all live in the ocean and aren't crawling on my leg at 3 AM when I'm in bed.
Mindrust t1_j4rmr5k wrote
Reply to comment by APEHASKILLEDAPE in Inside an insect farm: Are mealworms a sustainable meat alternative? by vpuetf
Seriously, I don't understand why this is being pushed so hard recently. It will never be a thing in first-world countries.
Mindrust t1_je89g09 wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
> but too stupid to understand the intent and rationale behind its creation
This is a common mistake people make when talking about AI alignment: conflating intelligence with goals. It's the is-vs-ought problem.
Intelligence is good at answering "is" questions, but goals are about "ought" questions. It's not that the AI is stupid or doesn't understand; it just doesn't care, because your goal wasn't specified well enough.
Intelligence and stupidity: the orthogonality thesis