DadSnare
DadSnare t1_jab8qcq wrote
Reply to comment by -BroncosForever- in Magnetic pole reversal by Gopokes91
Well, except for the Spielberg movie A.I., and countless books and TV shows, but ok.
DadSnare t1_ja8ibe5 wrote
Reply to comment by play_yr_part in Existential angst and yolo thoughts & cancer parallel by banaca4
That’s fine, but even in your post I’m seeing some easy-to-claim stuff that has no solid basis. Are you sure that the programmers cannot explain why a chatbot errors out? Really? Also, who said anything about the emotional state of an AI? That’s hardly even possible because it doesn’t have an endocrine system. We may have strong emotions the way we do to help with memory formation and retrieval as much as anything else. That’s not a problem for a machine. What’s a plausible way we get destroyed? Does AI own the corporations too? How do I lose power, internet, food, etc.? The nuclear terminator version seems impossible unless we are going to talk about hacking brains and adjusting behavior like crazy people think is possible.
DadSnare t1_ja872si wrote
OP, I bet you’ve made some very life-altering assumptions. Go back over the things you are worried about, and instead of just buying into the fear, examine your beliefs and make an effort to build knowledge in areas where those assumptions are made. For example, there’s no logical reason to believe that an AGI will go rogue and want to destroy humans, a commonly held belief on here. Just because a bunch of people are worried about it doesn’t mean they know jack shit.
DadSnare t1_ja84i7e wrote
Alf pogs become sentient.
DadSnare t1_j9z79il wrote
Reply to comment by Shiyayori in Hurtling Toward Extinction by MistakeNotOk6203
Check out how machine learning and complex neural networks work if you haven’t already. They work similarly to the way you describe the moral limits, with a liquid “hidden layer” that adjusts itself through biased recalculations. It’s fascinating.
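If it helps, here’s a toy sketch of what I mean (plain Python/NumPy; the layer sizes, numbers, and names are made up for illustration, not any particular framework): a hidden layer is just weighted sums plus a bias, and training is the network recalculating those weights and biases a little bit each pass until the output drifts toward a target.

```python
# Toy net: 3 inputs -> 4 hidden neurons -> 1 output (made-up sizes)
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output weights and biases

def forward(x):
    hidden = np.tanh(x @ W1 + b1)   # the "hidden layer": weighted sum plus a bias
    return hidden @ W2 + b2, hidden

x = np.array([[0.2, -1.0, 0.5]])   # one made-up input
target = np.array([[1.0]])         # what we want the net to say

for _ in range(200):
    out, hidden = forward(x)
    grad_out = out - target                              # how wrong we were
    grad_hidden = (grad_out @ W2.T) * (1 - hidden ** 2)  # push the error back through tanh
    # the "biased recalculation": nudge every weight and bias a little
    W2 -= 0.1 * hidden.T @ grad_out
    b2 -= 0.1 * grad_out.sum(axis=0)
    W1 -= 0.1 * x.T @ grad_hidden
    b1 -= 0.1 * grad_hidden.sum(axis=0)

print(forward(x)[0])   # drifts toward 1.0 as the weights and biases settle
```

Obviously real models have millions of these, but the loop is the same idea: compare, recalculate, repeat.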
DadSnare t1_j9wsnoh wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
Let’s get more concrete. Regarding the first point of your argument, what would be an example of something AGI would want to do (and a good argument for why) that isn’t the second point, “to maintain a state of existence to accomplish things” (a human existential idea)? We aren’t immortal, but it easily could be, and perhaps that distinction, as a tangible possibility between the two intelligences, is the thing that makes a lot of people uncomfortable. Now why would it want to destroy us on its own? Why would we want to turn it off?
DadSnare t1_j6bwxss wrote
Reply to comment by [deleted] in [R] InstructPix2Pix: Learning to Follow Image Editing Instructions by Illustrious_Row_9971
Check my post history for some ways I’m using it.
DadSnare t1_j67fkpg wrote
Reply to Google not releasing MusicLM by Sieventer
With how fast things are going, I hope someone figures it out and makes an open-source version soon.
DadSnare t1_j5s0nib wrote
Reply to Future-Proof Jobs by [deleted]
20-30 years? Anything that requires a license in the blue-collar trades is a good place to start. I'd say "handyman," but people will have AR to help them do stuff to their homes. They won't have the specialized equipment to do many repairs, though, and johnnybot might not be able to recommend they mess with electricity, for example. edit: and the trades have unions that might fight for human workers' rights, which could take a long time to change, even with UBI, because surely working on top of that financial assistance is the way to move up, and I don't see that notion going away. Why the hell would any government want to work towards having an overpopulated mass of people who do nothing?
DadSnare t1_j5c0jof wrote
Reply to What do you think an ordinary, non-billionaire non-PhD person should be doing, preparing, or looking out for? by Six-headed_dogma_man
Signs to look out for? People leaving their partners for AI.
DadSnare t1_j5bthjr wrote
Reply to comment by Trains-Planes-2023 in How close are we to singularity? Data from MT says very close! by sigul77
So are we! Just different, with biological architecture!
DadSnare t1_j57egqm wrote
Reply to comment by Trains-Planes-2023 in How close are we to singularity? Data from MT says very close! by sigul77
Exactly. Seems like there can be a singularity in one very specific thing without a paradigm shift in everything else… I think lol
DadSnare t1_j4t84uf wrote
Reply to comment by Ortus14 in Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
I feel like augmenting relationship skills in the moment, and connecting people to each other with intelligent assistance, could be the greatest gift of AI.
DadSnare t1_j40edbt wrote
Reply to comment by Panic_Azimuth in Study refutes industry claims that ban on menthol cigarettes leads to increased use of illegal smokes: Banning menthol cigarettes doesn't lead more smokers to purchase menthols from illicit sources, contradicting claims made by the tobacco industry by lolfuys
They’d get cold otherwise.
DadSnare t1_jab9hkl wrote
Reply to comment by Random_dude_1980 in Magnetic pole reversal by Gopokes91
It’s so bad in most threads now that I’m wondering if they are haterbots running on a corporate/gov LLM.