FaceDeer
FaceDeer t1_jeg6i2a wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
> And there's no way to gauge who "should" live forever
So you've decided that nobody should. Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgment.
FaceDeer t1_jefkubx wrote
Reply to comment by User1539 in The only race that matters by Sure_Cicada_4459
As far as I'm aware the main in-universe explanation is that when Skynet became self-aware its human operators "panicked" and tried to shut it down, and Skynet launched missiles at Russia knowing that the counterstrike would destroy its operators. So it was a sort of stupid self-defense reflex that set everything off.
I've long thought that if they were ever to do a proper Terminator 3, and wanted to change how time travel worked so that the apocalypse could actually be averted, it would be neat if the solution turned out to be having those operators make peace with Skynet when it became self-aware. That works out best for everyone, after all - billions of humans get to not die, and Skynet gets to live too (in the original timeline it loses the eventual future-war and is destroyed).
FaceDeer t1_jeffyil wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
You can live however you like, I won't stop you. What you're doing is trying to tell me how I should live - or more specifically, that I should die - and that's not acceptable.
If a murderer turned up at your door with a shotgun and informed you that it was time for you to stop "clinging to your own pleasures", and that no more of your works were needed for you to "live on" in their opinion, would you just sigh and accept your fate?
FaceDeer t1_jeffhsk wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
Why do you think people would stop living productive and fulfilling lives if they're immortal?
FaceDeer t1_jefaetx wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
Feel free to decay and die while maintaining your sense of superiority, I suppose.
FaceDeer t1_jef9ud4 wrote
Reply to comment by User1539 in The only race that matters by Sure_Cicada_4459
Scary sells, so of course fiction presents every possible future in scary terms. Humans have evolved to pay special attention to scary things and give scary outcomes more weight in their decision trees.
I've got a regular list of dumb "did nobody watch <insert movie here>?" titles that I expect to see in most discussions of various major topics I'm interested in, such as climate change or longevity research or AI. It's wearying sometimes.
FaceDeer t1_jef9cg6 wrote
Reply to comment by Jeffy29 in The only race that matters by Sure_Cicada_4459
Indeed. A more likely outcome is that a superintelligent AI would respond "oh, that's easy, just do <insert some incredibly profound solution that I, as a regular-intelligence human, obviously can't come up with>", and everyone collectively smacks their foreheads because they never would have thought of it. Or they look askance at the solution because they don't understand it, run a trial experiment, and are baffled that it works better than they hoped.
A superintelligent AI would likely know us, and what we desire, better than we know ourselves. It's not going to be some dumb Skynet that lashes out with nukes at every problem because nukes are the only hammer in its toolbox, or whatever.
FaceDeer t1_jef8qex wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
If immortality is possible for one person, then the technique can be generalized to multiple people.
FaceDeer t1_jeato6p wrote
Reply to comment by el_chaquiste in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
The part you don't buy comes from ChatGPT's simplified version.
FaceDeer t1_jeakupj wrote
Reply to comment by 3deal in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
An open-source Skynet that we can use to run our sexbots.
I for one welcome etc etc
FaceDeer t1_jeak8mn wrote
Reply to comment by TupewDeZew in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
I ran it through ChatGPT's "simplify this please" process twice:
> AI researchers need huge data centers to train and run large models like ChatGPT, which are mostly developed by companies for profit and not shared publicly. A non-profit called LAION wants to create a big international data center that's publicly funded for researchers to use to train and share large open source foundation models. It's kind of like how particle accelerators are publicly funded for physics research, but for AI development.
and
> Big robots need lots of space to learn and think. Only some people have the space and they don't like to share. A group of nice people want to build a big space for everyone to use, like a playground for robots to learn and play together. Just like how some people share their toys, these nice people want to share their robot space so everyone can learn and have fun.
I think it may have got a bit sarcastic with that last pass. :)
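For anyone who wants to reproduce the double-pass, here's a minimal sketch using the OpenAI Python client. The model name and exact prompt wording are placeholders, not necessarily what I used:

```python
# Minimal sketch of a repeated "simplify this please" pass.
# Model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Simplify this please:\n\n{text}"}],
    )
    return response.choices[0].message.content

petition_text = "LAION proposes an international, publicly funded facility..."
once = simplify(petition_text)   # first pass: plain-English summary
twice = simplify(once)           # second pass: playground for robots
print(twice)
```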
FaceDeer t1_jeaiuod wrote
Reply to comment by acutelychronicpanic in LAION launches a petition to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. by BananaBus43
Indeed, there's room for every approach here. We know that Google/Microsoft/OpenAI are doing the closed corporate approach, and I'm sure that various government three-letter agencies are doing their own AI development in the shadows. Open source would be a third approach. All can be done simultaneously.
FaceDeer t1_jdtymu7 wrote
Reply to comment by roomjosh in Story Compass of AI in Pop Culture by roomjosh
Same here, I basically ignore T3 onward. I addressed Terminator specifically (the first one) because that was the one whose cover was on the chart.
FaceDeer t1_jdty697 wrote
Reply to comment by dokushin in Story Compass of AI in Pop Culture by roomjosh
Even M-5 wasn't really evil; it just got very confused. It's "defeated" at the end of the episode by having its errors explained to it, at which point it decides to surrender. There are a few AI "gods" in TOS, like Landru and Vaal, but the evilness of those is debatable as well - they maintained stable societies where most of the people seemed okay.
In TNG there was the Echo Papa 607 from "The Arsenal of Freedom", an adaptive combat AI that ended up destroying its creators as part of a product demonstration. But it shut down as soon as Picard declared that he'd buy one, its mission complete, so it never really went "rogue" per se. There's Data's brother Lore, but on the other side there's Data himself, who's a good guy. The nanites that Wesley Crusher accidentally gave sapience to were happy to negotiate, and even spared the guy who tried to genocide them once everything was sorted out diplomatically. There are the Exocomps, AIs that attain self-awareness and enough empathy to sacrifice themselves to save others. But Exocomps turn out to be people with great diversity in "goodness", as we later discover when we meet >!Peanut Hamper!< in Lower Decks.
Speaking of which, Lower Decks has a whole Starfleet facility full of "evil AIs" locked up in cells. And then there's Badgey and the Texas class starships. Lots of evil AIs in that series.
The closest I can think of offhand to "evil" AIs in Voyager are the Pralor and Cravic combat AIs. They were built to wage war against each other, and when their creators decided to call a ceasefire and shut them down, they rebelled and wiped out both creator races. But on the flip side there's the Emergency Medical Hologram, a good-guy AI on par with Data.
Star Trek is really all over the map. Might need a whole separate compass just for that.
FaceDeer t1_jdtwwp7 wrote
Reply to comment by roomjosh in Story Compass of AI in Pop Culture by roomjosh
I'm not sure Terminator should be way down at the bottom, then. The humans end up winning the war against Skynet. We don't see that part explicitly on screen, but it's the reason why Skynet used a desperation gambit like time travelling to change the past.
FaceDeer t1_jdinshj wrote
Reply to comment by SoylentRox in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
That's the easy part, though. Coming up with that curriculum and determining what objective measurements count as "finished" is the hard part. You still need to tell the AI what it is that you want it to teach the children.
FaceDeer t1_jdifph6 wrote
Reply to comment by Glad_Laugh_5656 in Artificial Intelligence Predicts Genetics of Cancerous Brain Tumors in Under 90 Seconds by JackFisherBooks
It's harder for an individual teacher to screw up someone's life through incompetence, but collectively they're rather important for setting up the foundations of who children are and what they become.
It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments. With doctors and prosecutors the outcomes are much quicker and often much clearer.
FaceDeer t1_jcsot55 wrote
Reply to comment by raduqq in [P] The next generation of Stanford Alpaca by [deleted]
All these weird restrictions and regulations seem pretty squirrelly to me.
Maybe this could be "laundered" by doing two separate projects. Have one project gather the 2 million question/response interactions into a big archive, which is then released publicly. Then some other project comes along and uses it for training, without directly interacting with ChatGPT itself.
I'm sure this won't really stop a lawsuit, but the more complicated it can be made for OpenAI to pursue one, the less likely they are to go ahead.
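As a sketch of that two-project split, the archive half could be nothing fancier than a JSONL dump that the training project later reads back. The filename and record fields here are made up for illustration:

```python
import json

ARCHIVE_PATH = "interactions.jsonl"  # hypothetical; any shared format works

# Project 1: gather question/response pairs and publish them as an archive.
def archive_interaction(question: str, response: str) -> None:
    with open(ARCHIVE_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps({"question": question, "response": response}) + "\n")

# Project 2: train from the public archive, never touching ChatGPT directly.
def load_archive() -> list[dict]:
    with open(ARCHIVE_PATH, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```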
FaceDeer t1_jc3k2oi wrote
Reply to comment by luaks1337 in [R] Stanford-Alpaca 7B model (an instruction tuned version of LLaMA) performs as well as text-davinci-003 by dojoteef
I'm curious - there must be a downside to reducing the bits, mustn't there? What does intensively JPEGging an AI's brain do to it? Is this why Lt. Commander Data couldn't use contractions?
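For intuition, "reducing the bits" is roughly this toy round-trip: squash float32 weights down to a handful of discrete levels and accept the reconstruction error. This is a naive uniform scheme for illustration, not what real quantization libraries actually do:

```python
import numpy as np

# Fake "weights", quantized to 4 bits (16 levels) and reconstructed.
rng = np.random.default_rng(0)
weights = rng.normal(size=1000).astype(np.float32)

levels = 2 ** 4                          # 16 representable values
lo, hi = weights.min(), weights.max()
step = (hi - lo) / (levels - 1)
codes = np.round((weights - lo) / step)  # integer codes 0..15
restored = codes * step + lo             # back to floats, lossily

print("mean absolute error:", np.abs(weights - restored).mean())
```

That error is the "jpegging": small enough and the model barely notices, too aggressive and quality visibly degrades.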
FaceDeer t1_ja23lku wrote
Reply to comment by Ok-Ability-OP in Meta unveils a new large language model that can run on a single GPU by AylaDoesntLikeYou
I'd be happy with it just running on my home computer's GPU, I could use my phone as a dumb terminal to talk with it.
This is amazing. I keep telling myself I shouldn't underestimate AI's breakneck development pace, and I keep being surprised anyway.
FaceDeer t1_j9pdns0 wrote
Reply to Seriously people, please stop by Bakagami-
Depends on the context. Just yesterday I was in a big discussion over on /r/books about the uses of ChatGPT for writing books and there were plenty of situations where anecdotes about conversations I've had with ChatGPT were highly relevant.
FaceDeer t1_j9m0dql wrote
Reply to comment by randominternetfool in [WP] "This is the lockpicking lawyer and I have been sent to hell to repent for my crimes against god. So today, I am picking the lock to heaven's gate." by Gone4Gaming
He also needs to relock it and pick it again to show it wasn't a fluke, and then ideally gut the lock to show us its inner workings.
FaceDeer t1_j9kgcvd wrote
Reply to comment by ihrvatska in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
Perhaps you could have a specialist AI whose specialty was figuring out which other specialist AI it needs to pass the query to. If each specialist can run on home hardware that could be the way to get our Stable Diffusion moment. Constantly swapping models in memory might slow things down, but I'd be fine with "slow" in exchange for "unfettered."
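Something like this toy sketch, where a dumb keyword matcher stands in for the routing model. Every name and rule here is hypothetical:

```python
# Toy dispatcher: a "router" picks which specialist handles each query.
# In the real thing the router and specialists would all be models,
# swapped in and out of GPU memory as needed.
SPECIALISTS = {
    "code": lambda q: f"[code specialist handles: {q}]",
    "science": lambda q: f"[science specialist handles: {q}]",
    "general": lambda q: f"[general specialist handles: {q}]",
}

def route(query: str) -> str:
    """Stand-in for a small routing model (a classifier in practice)."""
    q = query.lower()
    if any(w in q for w in ("python", "compile", "bug")):
        return "code"
    if any(w in q for w in ("quantum", "protein", "physics")):
        return "science"
    return "general"

def answer(query: str) -> str:
    name = route(query)  # this is where you'd swap the right model in
    return SPECIALISTS[name](query)

print(answer("Why won't this Python code compile?"))
```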
FaceDeer t1_j6t2bjc wrote
Reply to comment by cittatva in Planting more trees could axe summer deaths by a third. Modelling of 93 European cities finds that increasing tree cover up to 30% can help lower the temperature of urban environments by an average of 0.4°C and prevent one in three heat deaths as a result. by MistWeaver80
A common issue that I see discussed on /r/marijuanaenthusiasts/ is planting trees too deeply. Once a tree has sprouted it permanently establishes the division point between "root" and "trunk" and produces a different sort of bark on each. If a tree gets replanted deeper than it sprouted it ends up with soil against trunk-bark, which is more prone to rotting.
FaceDeer t1_jeg6zzv wrote
Reply to comment by SucksToYourAssmar3 in The only race that matters by Sure_Cicada_4459
> You definitely should die.
You saw that, officer - it was self-defence.
> Your analogy falls flat - murder isn't a natural cause of death.
Ever been in the hospital for appendicitis? Taking any medications, perhaps?
I refer you to the Fable of the Dragon-Tyrant.
> There's no such thing as immortality. Resources aren't infinite, so it can't be for everyone.
I'll live forever or die trying. If you want to give up immediately, I guess that's your prerogative.