
Shiyayori t1_j9w39q1 wrote

If an AGI has a suitable ability to extrapolate the results of its actions into the future, and we use some kind of reinforcement learning to train it on numerous contexts it should avoid, then it’ll naturally develop an internal representation of all the contexts it ought to avoid (which will only ever be as good as its training data and its ability to generalise across it). Anyway, it’ll recognise when the results of its actions would lead to a context that should be avoided.

I imagine this is similar to how humans do it, though it’s a lot more vague with us. We match our experience against our internal model of what’s wrong, create a metric for just how wrong it is, and then compare that metric against our goals and make a decision based on whether or not we believe that risk metric is too high.
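As a rough illustration of that loop (everything here is a made-up stand-in: `W_dyn`, `w_avoid` and `RISK_THRESHOLD` are random toys for the sake of the sketch, not trained models or a real proposal):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned components: a dynamics model that
# predicts the next state, and an "avoidance" model trained (e.g. via RL
# penalties) to score how strongly a state matches contexts to avoid.
W_dyn = rng.normal(size=(8, 8))
w_avoid = rng.normal(size=8)

def predict_next_state(state, action):
    # toy stand-in for a learned dynamics model
    return np.tanh(W_dyn @ state + action)

def avoidance_score(state):
    # toy stand-in for the "how wrong is this context" metric, squashed to [0, 1]
    return 1.0 / (1.0 + np.exp(-w_avoid @ state))

def risk_of_plan(state, actions, horizon_discount=0.9):
    """Extrapolate a candidate plan and return the worst discounted risk seen."""
    risk = 0.0
    for t, action in enumerate(actions):
        state = predict_next_state(state, action)
        risk = max(risk, (horizon_discount ** t) * avoidance_score(state))
    return risk

RISK_THRESHOLD = 0.8  # arbitrary cut-off, purely for illustration

state = rng.normal(size=8)
plan = [rng.normal(size=8) for _ in range(5)]
if risk_of_plan(state, plan) > RISK_THRESHOLD:
    print("plan rejected: predicted to enter a context that should be avoided")
else:
    print("plan accepted")
```

In practice both the dynamics model and the avoidance score would be learned, and the threshold itself would presumably be traded off against how valuable the goal is, rather than being a fixed constant.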

I think the problem might mostly be in finding a balance between solidifying its current moral beliefs and keeping them liquid enough to change and optimise. Our brains are pretty similar in that they become more rigid over time, and optimisation techniques that gradually reduce their stochasticity (simulated annealing being the classic example) are used for much the same reason.
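The annealing idea, stripped down to a toy (the objective, schedule and cooling rate here are arbitrary choices for illustration, not anything tuned):

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(x):
    # toy objective standing in for "how far current beliefs are from ideal"
    return np.sum((x - 3.0) ** 2)

x = rng.normal(size=4)          # current "beliefs"
temperature = 1.0               # how liquid the beliefs still are
cooling_rate = 0.99             # how quickly they solidify

for step in range(2000):
    # propose a random perturbation whose size shrinks as the temperature drops
    candidate = x + temperature * rng.normal(size=4)
    delta = loss(candidate) - loss(x)
    # always accept improvements; accept worse moves with a probability that
    # falls as the system cools (the usual simulated-annealing acceptance rule)
    if delta < 0 or rng.random() < np.exp(-delta / max(temperature, 1e-8)):
        x = candidate
    temperature *= cooling_rate

print(x)  # should land near the optimum at [3, 3, 3, 3]
```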

The solution might be in having a population of agents, each developing its own model, with a master model that weighs each agent’s usefulness on the input data against its rate of stochastic decline.
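Something in the spirit of that, again as a toy: linear agents, a hand-picked set of cooling rates, and a softmax standing in for the “master” weighting. None of this is a real proposal, just a sketch of the shape of it:

```python
import numpy as np

rng = np.random.default_rng(2)

class Agent:
    """A toy agent: a linear model with its own exploration 'temperature'."""
    def __init__(self, dim, cooling_rate):
        self.w = rng.normal(size=dim)
        self.temperature = 1.0
        self.cooling_rate = cooling_rate  # how fast this agent's beliefs solidify

    def predict(self, x):
        return x @ self.w

    def update(self, x, y):
        # random-search-style update, with step size set by the temperature
        candidate = self.w + self.temperature * rng.normal(size=self.w.shape)
        if np.mean((x @ candidate - y) ** 2) < np.mean((x @ self.w - y) ** 2):
            self.w = candidate
        self.temperature *= self.cooling_rate

# "Master" model: tracks each agent's recent usefulness and weights them by it.
agents = [Agent(dim=3, cooling_rate=c) for c in (0.90, 0.99, 0.999)]
true_w = np.array([1.0, -2.0, 0.5])

for step in range(500):
    x = rng.normal(size=(16, 3))
    y = x @ true_w
    errors = []
    for agent in agents:
        agent.update(x, y)
        errors.append(np.mean((agent.predict(x) - y) ** 2))
    # softmax over negative error (shifted for numerical stability):
    # more useful agents get more say in the mixture
    errs = np.array(errors)
    weights = np.exp(-(errs - errs.min()))
    weights /= weights.sum()

print("final mixture weights:", np.round(weights, 3))
print("temperatures (rate of stochastic decline):",
      [round(a.temperature, 4) for a in agents])
```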

Or maybe I’m just talking out my ass, who actually knows.

2

Shiyayori t1_j9veev5 wrote

  1. Technological singularities result in cultures disconnected from the desire to expand endlessly for no reason; it’s possible that after just a few generations, virtual reality is simply that much more preferable. Any expansion is done out of need, not want.

  2. Life as we are is simply that rare.

7

Shiyayori t1_j598cw1 wrote

I have autism. It’s a spectrum. It’s not the autism that’s the issue, it’s a manifestation of it within a fringe of the autistic population.

Besides that, it’s absurd to make the assumption that the suicide rate is directly a result of the autism itself and not how we’re treated because we’re autistic.

I’d rather be catered to in a post-scarcity world than altered into someone I’m not, and I’m not saying nothing should be done for those who have severe issues.

Also, “other mental illness?” was me pointing out how you referred to autism. It’s not a mental illness. I wasn’t suggesting mental illnesses shouldn’t be cured.

2

Shiyayori t1_j57aqal wrote

I think it’s much better to reframe the issue away from morality and towards what it really is.

Ultimately, we want AI to work for us, to do what we want it to, but also to understand us and the underlying intentions in what we’re asking of it.

It should have the ability to ignore aspects of requests and to add its own, based on its belief about what will lead to the best outcome.

It’s impossible to extrapolate every action infinitely far into the future, so it can never know with certainty what will result from those actions.

I’m under the belief that it’s not as hard as it looks. It should undergo some kind of reinforcement learning under various contexts, and with a suitable ability to extrapolate goals into the future, an AI would never misinterpret a goal in a ludicrous way like we often imagine.
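If I had to caricature what “not misinterpreting a goal” means, it’d be something like choosing between readings of a request by extrapolating each one and penalising the extreme outcomes. The interpretations and numbers below are entirely made up, and the scoring function stands in for what would really be a learned preference model:

```python
# Hypothetical candidate readings of a request, each with extrapolated
# outcome features: (how literally the request is satisfied, predicted
# side-effect severity). Both values are invented for illustration.
interpretations = {
    "do it as the user probably meant": (0.9, 0.1),
    "do it literally, at any cost":     (1.0, 0.9),
    "do nothing, to be safe":           (0.0, 0.0),
}

def score(satisfaction, side_effects, side_effect_weight=2.0):
    # A learned model of the requester's intent would sit here; this toy
    # version just penalises extrapolated side effects more heavily than
    # it rewards literal satisfaction.
    return satisfaction - side_effect_weight * side_effects

best = max(interpretations, key=lambda k: score(*interpretations[k]))
print(best)  # -> "do it as the user probably meant"
```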

But like a human, there will always be mistakes.

2

Shiyayori t1_iws1z3x wrote

Well, we don’t know the full scope of what creates consciousness. Even if we replace our neurones synthetically and copy every function they have, so that they act as they would if they were biological, there’s no telling what mechanisms could be lost in removing the underlying biological process itself.

For example, if consciousness is a byproduct of finely tuned entangled systems induced by our hormones and the many chemicals flooding our brain, then removing that and emulating the effect synthetically, without the cause, might collapse consciousness altogether.

I wouldn’t bet either way, but I think there’s a lot of room for discovery still, and it’s not as simple as merely synthesising the brain.

2

Shiyayori t1_ivofgfy wrote

Granted, I didn’t consider that when I typed the analogy; the point is that it’s arbitrary to assign any motive to an ASI, even that of survival. There’s no reason to believe it would care either way about its survival or the length of its existence in general.

I wasn’t claiming anything about what it would actually do, I was just trying to show a line of reasoning that justifies a possibility which contradicts your own.

3

Shiyayori t1_ivobcqr wrote

You say it like the emotions and goals of humans are intrinsic to consciousness and not just intrinsic to humanity. An ASI could just as easily find motive in expressing the full range of complexity the universe has to offer, be it through arrangements of atoms or the natural progressions of numerous worlds and stories.

There’s no reason to believe it would disregard humans, just as much as there’s no reason to believe it wouldn’t.

6