digitalthiccness
digitalthiccness t1_iwfqsxb wrote
>But what if we compress everything but the stuff needed for processing a certain event happening, then do the same for the next event, etc.?
Then it's not a 1:1 simulation of our universe.
digitalthiccness t1_it71y37 wrote
Reply to comment by HoneyWhistle in I want to read so badly but my brain won’t allow it by heroicgamer44
But it's one human feat they can't do! They've been to the moon, swum the English Channel, skied down Mt. Everest, invented cures for diseases, taken home gold at the Olympics, written the Great American Novel, and gone triple platinum with their debut album, but reading Goosebumps simply eludes them. It's maddening!
digitalthiccness t1_it6zkzr wrote
> I have therapy today but I don’t know how effective that’s ultimately going to be
I'd just focus on that for now. Talk to your therapist about it. It sounds like this is probably a mental health issue that this subreddit isn't qualified to deal with.
digitalthiccness t1_it20vrc wrote
Reply to comment by Rogue_Moon_Boy in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
> We have the least amount of war ever in the history.
Sure, but now all it'd take is one nasty one and the uninhabited surface of the planet will be glowing for several million years. Having the sword of Damocles hanging over mankind's head 24/7 isn't nothing.
>better technology has always turned out positive for us humans in the big picture, even given short term drawbacks.
So far, sure, but the more powerful technology becomes, the greater the chance that the initial drawbacks are more than we can survive. Civilization survived the invention of nuclear weapons (...so far) through little more than blind, stupid luck. There's no reason to think we will always survive great leaps in technological capability.
At this point I think we have no real choice but to push forward and try to progress while avoiding the dangers, but technological advancement is an existential threat and that threat should be respected.
digitalthiccness t1_it1z723 wrote
Reply to comment by ouaisouais2_2 in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
If an AI can recreate my dead loved ones in a way where I can't tell the difference, I'll take it. You're entitled to be creeped out about it, though.
digitalthiccness t1_it1obzz wrote
Reply to comment by ouaisouais2_2 in Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>how can so many in this subreddit be so nauseatingly positive about high-technology? Excuse the harsh words but that's what I think.
You do know where you are, right? Most people interested in the Singularity just want to be raptured by benevolent AI gods into eternal virtual heaven. They're not here because they think we're going to get turned into paperclips, they're here because Ray Kurzweil told them Skynet's gonna give them their dead relatives back.
digitalthiccness t1_it1id4n wrote
Reply to Why do companies develop AI when they know the consequences could be disastrous? by ouaisouais2_2
>Is it just a race to win the next sack of money?
You cracked it. /thread
digitalthiccness t1_iwfs0t5 wrote
Reply to comment by Ivan_The_8th in Would 1:1 simulation of our universe be possible? by Ivan_The_8th
>There's no need to process two events that do not influence each other in any way at the same time.
No two such events exist. To simulate anything perfectly, the set of factors you have to account for is everything.