
archpawn t1_jduixph wrote

This is how placebomancy works in UNSONG. Someone did a study comparing prayers for rain addressed to a pagan deity versus to random letters pulled from a game of Scrabble, and both worked equally well. Though placebomancy isn't the only form of magic in it.

12

archpawn t1_jaq6ja9 wrote

I feel like this could be taken two ways. One is that robots become so cheap and prevalent that everyone gets one. The other is that they're so good at doing different things that one per person is enough. You won't need one to vacuum your floor, one to mow your lawn, one to cook you food, and one to drive you around.

8

archpawn t1_j7xukdj wrote

There was a Melancholy of Haruhi Suzumiya episode involving this. They were celebrating Tanabata, which involves wishing on two stars. Haruhi figured that their wishes would be granted in 16 and 25 years, which is when the light would reach the stars. Kyon pointed out that they'd also have to wait for the light to get back (so 32 and 50 years), but apparently Haruhi thinks they have FTL wish-granting powers, just not FTL perception.

2

archpawn t1_j495qg8 wrote

> I understand we need safeguards to keep ai from becoming dangerous,

I think this is all the more reason to avoid moral bloatware. Our current methods won't work. At best, we can get it to figure out the better choice in situations similar to its training data. Post-singularity, nothing will resemble the training data. All we'd be doing is hiding how dangerous the AI is, and making it less likely people would research methods that have a hope of working.

0

archpawn t1_j0r8qwo wrote

> If the computer is sentient how is that not violating the computer?

You're sentient. Do your instincts to enjoy certain things violate your rights? The idea here isn't to force the AI to do the right thing. It's to make the AI want to do the right thing.

> Who decides what output is acceptable?

Ultimately, it has to be the AI. Humans suck at it. We can't exactly teach an AI how to solve the trolley problem by training it on the problem if we can't even agree on an answer ourselves. And there are bound to be plenty of cases where we can agree, but we're completely wrong. But we have to figure out how to get the AI to work out what output is actually best, as opposed to what makes the most paperclips, or what its human trainers are most likely to think is best, or what scores highest on a model trained for that but operating so far outside its training data that the number is meaningless.

2

archpawn t1_j0r7z6c wrote

> I don't think "pretend you're an AGI" is sufficient, it will just pretend but not be any smarter.

You're missing my point. Pretending can't make it smarter, but it can make it dumber. If we get a superintelligent text prediction system, we'll still have to trick it into predicting someone superintelligent, or it will just pretend to be dumb.

1

archpawn t1_j0oswhp wrote

I think you're missing the point of what I said. If we get this AI to be superintelligent, but it still has the goal of text prediction, then all it will do is give super-accurate predictions. It's not going to give super smart results, unless you ask it to predict what someone super smart would say, in which case it would be smart enough to accurately predict it.

7

archpawn t1_j0ohcty wrote

What I find worrying is that all our progress in AI is in things like this, which can produce virtually any output. When we get a superintelligent AI, we don't want something that can produce virtually any output. We want to make sure what it produces is good.

It's also worth remembering that this is not an unbiased model. This is what they got after doing everything they could to train the AI to be as inoffensive as possible. It will avoid explicitly favoring any political party, but it's not hard to trick it into doing so by getting it to favor specific politicians.

14

archpawn t1_j0ogw06 wrote

Right now, the AI is fundamentally just predicting text. If you had a superintelligent AI do text prediction, it would still act like someone of ordinary intelligence. But once you convince it that it's predicting what someone superintelligent would say, it would do that accurately.

I feel like the problem is that once it's smart enough to predict a superintelligent entity, it will also be smart enough to know that the text you're trying to continue wasn't actually written by one.

11

archpawn t1_itol78w wrote

If we were limited to just this star system, built a Dyson sphere around the sun, and then used that to power minds running as efficiently as possible, that works out to supporting a population of about 2.5×10^31. That's the same as the population you'd get if every star in the observable universe had an Earth-like planet with a billion people on it.
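As a rough sanity check of that comparison, here's the arithmetic, assuming the 2.5×10^31 Dyson-sphere figure above and a star count of about 2.5×10^22 for the observable universe (the count is whatever makes the two scenarios line up, not a measured value):

```python
# Back-of-the-envelope check (illustrative; the star count is assumed, not measured)
dyson_sphere_minds = 2.5e31   # minds one sun's full output could support (figure from above)
stars = 2.5e22                # assumed number of stars in the observable universe
people_per_planet = 1e9       # one Earth-like planet per star, a billion people each

print(f"Dyson sphere around one star: {dyson_sphere_minds:.1e}")
print(f"Every star with a billion-person planet: {stars * people_per_planet:.1e}")
# Both come out to ~2.5e31, which is the point of the comparison.
```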

Admittedly, if you have FTL you could go beyond the observable universe, but it's not like we're limited to just one star system without it. We could populate the entire cluster before long.

2