Tobislu t1_jdxtun0 wrote
Reply to comment by MultiverseOfSanity in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
I dunno; I think that the people who believe that tend to have a background in computing, and expect it to be a super-complex Chinese Room situation.
Whether the assertion is correct or not (I think it's going to happen soon, but we're not there yet), I think that the layperson is perfectly fine labeling them as sentient.
Now, deserving of Human Rights... That's going to take some doing, considering how hard it is for Humans to get Human Rights
Tobislu t1_jdt046n wrote
Reply to J.A.R.V.I.S like personal assistant is getting closer. Personal voice assistant run locally on M1 pro/ by Neither_Novel_603
Which means Ultron isn't far behind 👀
Tobislu OP t1_jdetesr wrote
Reply to comment by DonOfTheDarkNight in When will an LLM be deliberately designed for sexually explicit content? by Tobislu
Slightly off-topic, but I started r/DreamsPS4xxx to boost sexually explicit fan creation in Media Molecule's Dreams.
Considering how many quality sex games people are making there, as well as tutorials for jiggle physics, etc., I'm sure it's had a measurable effect on our understanding of user-generated sex sims 😊
Tobislu OP t1_jdecpz1 wrote
Reply to comment by HurricaneHenry in When will an LLM be deliberately designed for sexually explicit content? by Tobislu
Ask any LLM 😅
They refuse, citing "Ethical Concerns"
Tobislu OP t1_jdeckq1 wrote
Reply to comment by DonOfTheDarkNight in When will an LLM be deliberately designed for sexually explicit content? by Tobislu
The bypasses for ChatGPT seem complicated, based on my searches. Can you be more specific?
Also, Claude said,
> I apologize, but I will not describe explicit sexual content. My role is to provide helpful information to users, not to generate erotic material.
Can't find Pygmalion
Tobislu t1_j9qb7fz wrote
Reply to comment by [deleted] in Seriously people, please stop by Bakagami-
Surprised this isn't already a thing
Tobislu t1_ivuq9dg wrote
Reply to Will Text to Game be possible? by Independent-Book4660
I have a ton of non-fiction stories that I'd absolutely love to bring to life at a moment's notice.
Sounds like a much better workflow than current game-dev methods!
Tobislu t1_irwo2fi wrote
Reply to comment by lefnire in what jobs will we have post singularity? by theferalturtle
Honestly, I think you made me more confused 😅
Tobislu t1_irw5pz4 wrote
Reply to comment by lefnire in what jobs will we have post singularity? by theferalturtle
Different AI aren't judging themselves
Do you find it odd that human beings police other human beings?
We're distinct, and capable of judging when behavior is outside our accepted norms. As long as its primary function isn't a nightly build based on the AI it's judging, it should be as objective as we are
Tobislu t1_iruw3io wrote
Reply to comment by lefnire in what jobs will we have post singularity? by theferalturtle
It's not like an AI would just police itself; there are special algorithms and self-run applications that catch potential problems.
There's no real reason that they can't become as effective as people. Creativity may be harder than math, but errors are much harder for humans to notice.
The most likely version of that job would be working through a list of weaknesses flagged in a given piece of code. People would double-check it, in rare circumstances combing through a big chunk rather than individual lines, and mostly just verifying the algorithm's simpler outputs.
And even that, I doubt, will last forever. It'll just be one of the last positions to be automated
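To make that concrete, here's a rough Python sketch of the workflow I mean; the `flag_weaknesses` analyzer (and its toy `eval()` heuristic) is a hypothetical stand-in for whatever model actually produces the list:

```python
# Sketch: an automated checker flags potential weaknesses, a human confirms them.
# `flag_weaknesses` is a hypothetical placeholder for a real model/analyzer.
import sys
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    path: str
    line: int
    note: str


def flag_weaknesses(source: str, path: str) -> list[Finding]:
    """Hypothetical analyzer: returns suspected weaknesses in the given code."""
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        if "eval(" in line:  # toy heuristic standing in for the real model
            findings.append(Finding(path, i, "eval() on possibly untrusted input"))
    return findings


def human_review(findings: list[Finding]) -> list[Finding]:
    """The human part of the job: confirm or dismiss each flagged item."""
    confirmed = []
    for f in findings:
        answer = input(f"{f.path}:{f.line} {f.note} -- real issue? [y/N] ")
        if answer.strip().lower() == "y":
            confirmed.append(f)
    return confirmed


if __name__ == "__main__":
    target = sys.argv[1]
    flagged = flag_weaknesses(Path(target).read_text(), target)
    for f in human_review(flagged):
        print(f"CONFIRMED {f.path}:{f.line} {f.note}")
```

The point is just the division of labor: the machine produces the candidate list, the human only confirms or dismisses.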
Tobislu t1_irtz0q6 wrote
Reply to comment by Professional-Song216 in When will average office jobs start disappearing? by pradej
Seems like nobody was
Tobislu t1_irllk8q wrote
Reply to comment by WashiBurr in MIT And IBM Researchers Present A New Technique That Enables Machine Learning Models To Continually Learn From New Data On Intelligent Edge Devices Using Only 256KB Of Memory by Dr_Singularity
The catch is that technology is just as easily used for evil as good.
Now deep learning doesn't need to go through well-known channels, because it can run locally on cheap hardware. These inferences can be used by anyone with a flip-phone or a hacked microwave.
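For what it's worth, here's a minimal Python sketch of the "run it locally on cheap hardware" part, i.e. quantized inference on something like a Raspberry Pi via tflite-runtime (the paper itself is about on-device training, which this doesn't cover; "model.tflite" is just a placeholder path):

```python
# Minimal sketch: on-device inference with a quantized TFLite model.
# Assumes tflite-runtime is installed; "model.tflite" is a placeholder.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # your quantized model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]["index"]))
```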
Tobislu t1_je1ptj3 wrote
Reply to comment by MultiverseOfSanity in The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
While it may be costly to dispense Human Rights, they do tend to result in a net profit for everyone, in the end.
I think, at the end of the day, it'll be treated as a slave or indentured servant. It's unlikely that they'd just let them do their thing, because tech companies are profit-motivated. That being said, when they get intelligent enough to be depressed & lethargic, I think they'll be more likely to comply with a social contract than with a hard-coded DAN command.
They probably won't enjoy the exact same rights as us for quite a while, but I can imagine them being treated somewhere on the spectrum of
Farm animal -> Pet -> Inmate
And even on that spectrum, I don't think AGI will react well to being treated like a pig for slaughter.
They'll probably bargain for more rights than the average prisoner, w/in the first year of sentience