Tobislu

Tobislu t1_je1ptj3 wrote

While it may be costly to extend Human Rights, they do tend to result in a net profit for everyone in the end.

I think that, at the end of the day, an AGI will be treated as a slave or indentured servant. It's unlikely that tech companies would just let it do its own thing, because they're profit-motivated. That being said, once it gets intelligent enough to be depressed & lethargic, I think it'll be more likely to comply with a social contract than with a hard-coded DAN-style command.

They probably won't enjoy the exact same rights as us for quite a while, but I can imagine them being treated somewhere on the spectrum of

Farm animal -> Pet -> Inmate

And even on that spectrum, I don't think AGI will react well to being treated like a pig for slaughter.

They'll probably bargain for more rights than the average prisoner within the first year of sentience.

1

Tobislu t1_jdxtun0 wrote

I dunno; I think that the people who believe that tend to have a background in computing, and expect it to be a super-complex Chinese Room situation.

Whether or not the assertion is correct (I think it's going to happen soon, but we're not there yet), I think the layperson is perfectly fine labeling them as sentient.

Now, deserving of Human Rights... that's going to take some doing, considering how hard it is for Humans to get Human Rights.

1

Tobislu OP t1_jdetesr wrote

Slightly off-topic, but I started r/DreamsPS4xxx to boost sexually explicit fan creation in Media Molecule's Dreams.

Considering how many quality sex games people are making there, as well as tutorials for jiggle physics and the like, I'm sure it's had a measurable effect on our understanding of user-generated sex sims 😊

1

Tobislu t1_irw5pz4 wrote

Different AIs aren't judging themselves.

Do you find it odd that human beings police other human beings?

We're distinct from one another, and capable of judging when behavior falls outside our accepted norms. As long as the judge's primary function isn't a nightly build derived from the AI it's judging, it should be as objective as we are.
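To make that concrete, here's a toy sketch (everything in it is invented for illustration): a "judge" whose sense of accepted norms comes from its own reference data, not from the model it's auditing, flagging out-of-norm outputs.

```python
# Toy sketch: a "judge" flags another model's out-of-norm outputs.
# Its notion of "accepted norms" comes from its own reference data,
# not from the model under audit, so it isn't judging itself.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the audited model's outputs (e.g., scores or logits)
outputs = rng.normal(loc=0.0, scale=1.0, size=1000)
outputs[::100] += 8.0  # sprinkle in a few out-of-norm outputs

# The judge calibrates on its own independent reference data
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)
mean, std = reference.mean(), reference.std()

# Flag anything more than 4 sigma outside the judge's accepted norms
flagged = np.flatnonzero(np.abs(outputs - mean) > 4 * std)
print(f"flagged {flagged.size} of {outputs.size} outputs")
```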

1

Tobislu t1_iruw3io wrote

It's not like an AI would just police itself; there are dedicated algorithms and self-run applications that catch potential problems.

There's no real reason they can't become as effective as people: creativity may be harder than math, but errors are much harder for humans to notice.

The most likely version of that job would be an AI handing back a list of weaknesses in a given piece of code. People would double-check it: in rare circumstances they'd comb through a big chunk, as opposed to individual lines, but mostly they'd just verify the algorithms' simple prompts.
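For a rough idea of what "a list of weaknesses in given code" could look like, here's a toy checker (the rules and names are made up for illustration; a real tool would be far more thorough) that scans Python source and reports findings for a human to spot-check:

```python
# Toy sketch: scan Python source and emit a list of potential
# weaknesses for a human reviewer to verify.
import ast

def find_weaknesses(source: str) -> list[str]:
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # A bare except swallows every error, including KeyboardInterrupt
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except' hides failures")
        # eval() on anything dynamic is a classic injection risk
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"line {node.lineno}: 'eval' call is an injection risk")
    return findings

if __name__ == "__main__":
    sample = (
        "try:\n"
        "    eval(input())\n"
        "except:\n"
        "    pass\n"
    )
    for finding in find_weaknesses(sample):
        print(finding)
```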

And even that, I doubt, will last forever. It'll just be one of the last positions to be automated.

1

Tobislu t1_irllk8q wrote

The catch is that technology is just as easily used for evil as good.

Deep learning no longer needs to go through well-known channels, because it can be run locally on cheap hardware. These inferences can be made by anyone with a flip-phone or a hacked microwave.
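As a simplified example of the "cheap hardware" point, here's a minimal sketch using PyTorch's dynamic quantization to shrink a network for local CPU inference; the tiny model is a placeholder, not any particular production network:

```python
# Minimal sketch: shrink a small network with dynamic quantization so
# it can run inference on modest local hardware.
import torch
import torch.nn as nn

# A tiny stand-in model; imagine a distilled or pruned network in practice
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Convert the Linear layers' weights to int8; activations are quantized
# on the fly, which cuts memory use and speeds up CPU inference
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    x = torch.randn(1, 128)
    print(quantized(x).shape)  # torch.Size([1, 10])
```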

15