Shiyayori
Shiyayori t1_j9w39q1 wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
If an AGI has a suitable ability to extrapolate the results of its actions into the future, and we use some kind of reinforcement learning to train it on numerous contexts it should avoid, then it’ll naturally develop an internal representation of all the contexts it ought to avoid (which will only ever be as good as its training data and its ability to generalise across it). From there, it’ll recognise when the results of its actions would lead to a context that should be avoided.
I imagine this is similar to how humans do it, though it’s a lot more vague with us. We match our experience against our internal model of what’s wrong, create a metric for just how wrong it is, then compare that metric against our goals and make a decision based on whether or not we believe the risk is too high.
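A minimal sketch of that kind of risk gate, purely to make the idea concrete; the `predict_future`, `badness` and threshold names here are hypothetical stand-ins, not anything from a real system:

```python
# Hypothetical risk gate: extrapolate each candidate action, score the predicted
# context against a learned "avoid" model, and reject anything over a threshold.

def choose_action(state, candidate_actions, predict_future, badness, risk_threshold=0.3):
    """predict_future(state, action) -> predicted future context
    badness(context) -> learned score in [0, 1], trained on contexts to avoid"""
    acceptable = []
    for action in candidate_actions:
        future = predict_future(state, action)   # extrapolate the result of the action
        risk = badness(future)                   # how close is this to the avoid-set?
        if risk <= risk_threshold:               # the "is this too wrong?" comparison
            acceptable.append((risk, action))
    if not acceptable:
        return None                              # nothing acceptable: defer / do nothing
    return min(acceptable, key=lambda pair: pair[0])[1]
```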
I think the problem might mostly be in finding a balance between solidifying its current moral beliefs and keeping them liquid enough to change and optimise. Our brains are pretty similar in that they become more rigid over time, and schedules that decrease stochastically (annealing-style techniques) are often used in optimisation problems.
The solution might be in having an ensemble of agents, each developing its own model, with a master model that weighs each of them by their usefulness against the input data and their rates of stochastic decline.
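Just to make that concrete, here’s a toy sketch; the class names, the weighting rule and the decay schedule are all invented for illustration, not a claim about how it would actually be built:

```python
import numpy as np

# Toy ensemble: each agent has its own linear model whose plasticity (learning rate)
# decays over time, and a master model weights agents by their recent usefulness.

class Agent:
    def __init__(self, n_features, decay=0.999, rng=None):
        self.rng = rng or np.random.default_rng()
        self.w = self.rng.normal(size=n_features)   # this agent's internal model
        self.lr = 0.1                               # starts liquid...
        self.decay = decay                          # ...and gradually solidifies

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, error):
        self.w += self.lr * error * x               # simple gradient-style correction
        self.lr *= self.decay                       # rigidity increases with experience


class Master:
    def __init__(self, agents):
        self.agents = agents
        self.scores = np.ones(len(agents))          # running usefulness per agent

    def predict(self, x):
        preds = np.array([a.predict(x) for a in self.agents])
        weights = self.scores / self.scores.sum()
        return float(weights @ preds), preds

    def update(self, x, target):
        combined, preds = self.predict(x)
        errors = target - preds
        self.scores = 0.9 * self.scores + 0.1 / (1.0 + errors ** 2)  # reward accuracy
        for agent, err in zip(self.agents, errors):
            agent.update(x, err)
        return combined
```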
Or maybe I’m just talking out my ass, who actually knows.
Shiyayori t1_j9veev5 wrote
- Technological singularities result in a culture disconnected from the desire to expand endlessly for no reason; it’s possible that after just a few generations, virtual reality is simply that much more preferable, and any expansion is done out of need, not want.
- Life as we are is simply that rare.
Shiyayori t1_j68u8qq wrote
Reply to Myth debunked: Myths about nanorobots by kalavala93
I reckon a more modern idea would be designer cells and ‘viruses’ that work in our favour.
Shiyayori t1_j598cw1 wrote
Reply to comment by [deleted] in I think many of our wants will be solved by nanobots by [deleted]
I have autism. It’s a spectrum. It’s not the autism that’s the issue, it’s a manifestation of it within a fringe of the autistic population.
Besides that, it’s absurd to make the assumption that the suicide rate is directly a result of the autism itself and not how we’re treated because we’re autistic.
I’d rather be catered to in a post-scarcity world than altered into someone I’m not, and I’m not saying nothing should be done for those who have severe issues.
Also, “other mental illness?” was me pointing out how you referred to autism. It’s not a mental illness. I wasn’t suggesting mental illness shouldn’t be cured.
Shiyayori t1_j597i5m wrote
Cure autism? Other mental illness? That’s a bit rude…
Shiyayori t1_j57aqal wrote
Reply to The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I think it’s much better to reframe the issue from morality to what it really is.
Ultimately, we want AI to work for us, to do what we want it to, but also to understand us and the underlying intentions in what we’re asking of it.
It should have the ability to ignore aspects of requests and to add its own, based on its belief of what will lead to the best outcome.
It’s impossible to extrapolate every action infinitely far into the future, so it can never know with certainty what will result from those actions.
I’m under the belief that it’s not as hard as it looks. It should undergo some kind of reinforcement learning under various contexts, and with a suitable ability to extrapolate goals into the future, an AI would never misinterpret a goal in a ludicrous way like we often imagine.
But, like a human, there will always be mistakes.
Shiyayori t1_j4xevzt wrote
Reply to OpenAI's CEO Sam Altman won't tell you when they reach AGI, and they're closer than he wants to let on: A procrastinator's deep dive by Magicdinmyasshole
I wasn’t as anal about the expressions as you were, but when I watched it and heard his tone, it definitely felt like there was a lot he was trying not to say. I get the vibe there’s a lot going on in the background of AI.
Shiyayori t1_j1dr2go wrote
Reply to comment by Sieventer in How individuals like you can increase the quality, utility, and purpose of the singularity subreddit by [deleted]
Lmao I’m not that egotistical, I logged into another account and it literally doesn’t show up.
Shiyayori t1_j1d2szt wrote
Reply to How individuals like you can increase the quality, utility, and purpose of the singularity subreddit by [deleted]
Well, whenever I try to create an interesting discussion post, it just never appears on the subreddit, even though it’s in my history, so idk, I’ve just given up at this point.
Shiyayori t1_j0kisxm wrote
Reply to How to Deal With a Rogue AI by SFTExP
Just switch it off 💀
Shiyayori t1_iydswqh wrote
I don’t think people are gonna care about Reddit after the singularity, or even some time before that.
Shiyayori t1_iya60iv wrote
Reply to comment by gergnerd in Sci-fi-like space elevators could become a reality in the "next 2 or 3 decades" by Shelfrock77
What? That’s literally the opposite of the case; the materials are possible, and it would cost less than 50 billion to build…
Shiyayori t1_iy7e3ak wrote
Reply to AI invents millions of materials that don’t yet exist. "Transformative tool" is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries. by SoulGuardian55
So they test these new materials, get accurate data on what they actually do, and backprop that data against the AI’s predictions to refine its predictive capabilities.
It’s all convergent.
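Roughly the loop being described, as a sketch; the `model`, `synthesise_and_measure` and `select_top_k` names are hypothetical placeholders for whatever the real pipeline uses:

```python
# Hypothetical closed loop: predict properties for candidate materials, measure the
# most promising ones in the lab, then fine-tune the predictor on the ground truth
# so the next round of predictions is sharper.

def discovery_loop(model, candidates, synthesise_and_measure, select_top_k, rounds=5):
    for _ in range(rounds):
        predictions = {m: model.predict(m) for m in candidates}     # AI proposes
        chosen = select_top_k(predictions)                          # pick a batch to test
        measured = {m: synthesise_and_measure(m) for m in chosen}   # real-world data
        model.fit(list(measured), list(measured.values()))          # refine on ground truth
    return model
```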
Shiyayori t1_iws1z3x wrote
Reply to When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
Well, we don’t know the full scope of what creates consciousness. Even if we replace our neurones synthetically and copy every function they have, so that they act as they would if they were biological, there’s no telling what mechanisms could be lost in removing the underlying biological process itself.
For example, if consciousness is a byproduct of finely tuned entangled systems induced by our hormones and the many chemicals flooding our brain, then removing that and emulating the effect synthetically, without the cause, may cause a collapse of consciousness.
I wouldn’t bet either way, but I think there’s a lot of room for discovery still, and it’s not as simple as merely synthesising the brain.
Shiyayori t1_ivofgfy wrote
Reply to comment by OneRedditAccount2000 in How might fully digital VR societies work? by h20ohno
Granted, I didn’t consider that when I typed the analogy; the point is that it’s arbitrary to assign any motive to an ASI, even the motive of survival. There’s no reason to believe it would care one way or the other about its survival and the length of its existence in general.
I wasn’t claiming anything about what it would actually do, I was just trying to show a line of reasoning that justifies a possibility which contradicts your own.
Shiyayori t1_ivobcqr wrote
Reply to comment by OneRedditAccount2000 in How might fully digital VR societies work? by h20ohno
You say it like the emotions and goals of humans are intrinsic to consciousness and not just intrinsic to humanity. An ASI could just as easily find motive in expressing the full range of complexity the universe has to offer, be it through arrangements of atoms, or the natural progressions of numerous worlds and stories.
There’s no reason to believe it would disregard humans, just as much as there’s no reason to believe it wouldn’t.
Shiyayori t1_isqigan wrote
Reply to A new AI model can accurately predict human response to novel drug compounds by Dr_Singularity
Not sure how exaggerated the article is, but I certainly didn’t expect this so soon… incredible, it’s only 2022
Shiyayori t1_jeffcuu wrote
Reply to This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
dAIchotomy