AllEndsAreAnds t1_j9et8jk wrote
Reply to Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
It’s not the irreplaceability of a life that evokes moral consideration - it’s the ability to experience pain and pleasure. If it can experience pain and pleasure, it’s no different from any other animal or person, and abusing it is just abuse.
AllEndsAreAnds t1_j98h2ts wrote
Reply to comment by FusionRocketsPlease in How to definitely know if a system is conscious: by FusionRocketsPlease
You yourself vaguely gestured towards “imitate the human brain”, and you’re not alone in not understanding what consciousness is. Given that literally nobody knows what it is yet, isn’t it morally and intellectually prudent to assign it where it appears to be present, rather than deny it in spite of those appearances?
AllEndsAreAnds t1_j987lsg wrote
Given humanity’s absolutely abysmal record of correctly perceiving other beings’ intelligence and consciousness, we should err on the side of caution: assume consciousness, and work towards evidence of non-consciousness.
Innocent until proven guilty, but for consciousness: one error would have us extend moral consideration to beings that don’t deserve it, while the other would have us deny it to beings that do - a far worse outcome, and much in line with our history on this planet.
AllEndsAreAnds t1_j95zf8o wrote
Reply to comment by zesterer in Proof of real intelligence? by Destiny_Knight
I think you’re being reductive here to a degree that reduces human reasoning itself to some kind of blind interpolation.
Both brains and LLMs use nodes to store information, patterns, and correlations as states, which we call upon and modify as we experience new situations. This is largely how we acquire skills, define ourselves, reason, forecast future expectations, etc. Yet what stops me from saying “yeah, but you’re just interpolating from your enormous corpus of sensory data”? Of course we are - that’s largely what learning is.
I can’t help but think that if I were an objective observer of humans and LLMs, and therefore didn’t have human biases, I would conclude that both systems are intelligent and reason in analogous ways.
But ultimately, I get nervous seeing discussion go this long without direct reference to the actual model architecture, which I haven’t seen done but which I’m sure would be illuminating.
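For reference, a minimal sketch of the scaled dot-product attention at the heart of transformer LLMs - numpy only, with illustrative shapes and names rather than any particular model’s implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """One attention head: each position mixes information from every other
    position, weighted by learned query/key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (seq, seq) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over positions
    return weights @ V                                   # weighted blend of value vectors

# Toy example: 4 token positions, 8-dimensional embeddings (sizes are arbitrary)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ W_q, x @ W_k, x @ W_v)
print(out.shape)  # (4, 8)
```

The learned weight matrices are where the stored patterns and correlations live; attention is just how they get recombined when the model encounters a new input.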
AllEndsAreAnds t1_j8ugvf0 wrote
Reply to What if Bing GPT, Eleven Labs and some other speech to text combined powers... by TwitchTvOmo1
We’re so close to Enterprise-computer-level interfaces with technology.
AllEndsAreAnds t1_j7t5yvl wrote
Reply to I asked Microsoft's 'new Bing' to write me a cover letter for a job. It refused, saying this would be 'unethical' and 'unfair to other applicants.' by TopHatSasquatch
The irony. If only the models had spit that response out when AI was first being used to sway public opinion and sort demographics into echo chambers. If AI is to be democratized, that standard has to apply at the top - not just to the everyday user.
AllEndsAreAnds t1_j4997rv wrote
Reply to Don't add "moral bloatware" to GPT-4. by SpinRed
Those are two totally different AI architectures though. You can’t sweep from large language models into reinforcement learning agents and assume some kind of continuity.
Alignment and morals are not bloatware in a large language model, because the training data is human writing. The value we want to extract has to be greater than the negative impact the model is capable of generating, so it’s prudent to prune off some roads in pursuit of a stable and valuable product to sell.
In a reinforcement model like AlphaZero, the training data is previous versions of itself. It has no need for morals because it doesn’t operate on a moral landscape. That’s not to say that we won’t ultimately want reinforcement agents in a moral landscape - we will - but those agents, too, will be trained within a social and moral landscape where alignment is necessary to accomplish goals.
As a society, we can afford bloatware. We likely cannot afford the alternative.
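For concreteness, here is a rough sketch of what “the training data is previous versions of itself” means as a self-play loop. The `network` and `game` interfaces are hypothetical stand-ins for this illustration, not AlphaZero’s actual components:

```python
def self_play_training(network, game, iterations=100, games_per_iter=25):
    """Illustrative self-play loop (hypothetical `network`/`game` interfaces):
    the only training data the agent ever sees is games played by earlier
    copies of itself."""
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iter):
            state = game.initial_state()
            trajectory = []
            while not game.is_terminal(state):
                move = network.choose_move(state)   # the current version plays itself
                trajectory.append((state, move))
                state = game.apply(state, move)
            outcome = game.winner(state)            # e.g. +1 / -1 / 0 from player one's view
            examples.extend((s, m, outcome) for s, m in trajectory)
        network.train_on(examples)                  # the next version learns only from its own games
    return network
```

No human text ever enters that loop, which is why questions of “moral bloatware” simply don’t arise there the way they do for a model trained on human writing.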
AllEndsAreAnds t1_j3luzv9 wrote
Reply to comment by stockist420 in New Study Uncovers Potential Target for Stopping 90% of Cancer Deaths by Shelfrock77
The researchers themselves are quoted cautioning their readers that their research here will take 10-15 years to develop into a therapy. Just because this doesn’t happen on the timescales of the smartphone industry doesn’t mean it doesn’t happen.
AllEndsAreAnds t1_ir6jfjh wrote
Of course they made matrix multiplication into a game. Incredible, creative, impressive work.
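(Presumably this is about DeepMind’s AlphaTensor, which turns the search for faster matrix-multiplication algorithms into a single-player game over tensor decompositions.) As a small illustration of the kind of scheme that search rediscovers and extends, here is Strassen’s classic 2x2 trick - 7 multiplications instead of 8 - checked with plain numpy. This is not AlphaTensor’s method, just the sort of result it hunts for:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's 7-multiplication scheme for 2x2 matrices; the kind of
    decomposition AlphaTensor's game searches for at larger sizes."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)
    return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                     [p2 + p4,           p1 - p2 + p3 + p6]])

A = np.random.rand(2, 2)
B = np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the standard 8-multiplication product
```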
AllEndsAreAnds t1_jc4zh3w wrote
Reply to comment by Readityesterday2 in AI with built-in bias toward one nationality or regional group could lead to absolute misery and death. by yougoigofuego
That’s a poor analogy.
We don’t have calculators like that, and if we did, they would make buildings and bridges unsafe.
That’s exactly the point. Trusting powerful tools with bias you can’t disentangle is asking for a misalignment of incentives and inequity on who knows what scale.