
AllEndsAreAnds t1_jc4zh3w wrote

That’s a poor analogy.

We don’t have calculators like that, and if we did, it would make buildings and bridges unsafe.

That’s exactly the point. Trusting powerful tools whose biases you can’t disentangle is asking for misaligned incentives and inequity on who knows what scale.

6

AllEndsAreAnds t1_j98h2ts wrote

You yourself vaguely gestured towards “imitate the human brain”, and you’re not alone in not understanding what consciousness is. Given that literally nobody knows what it is yet, isn’t it morally and intellectually prudent to assign it where it appears to be present, rather than deny it despite those appearances?

3

AllEndsAreAnds t1_j987lsg wrote

Given humanity’s absolutely abysmal record of correctly perceiving other beings’ intelligence and consciousness, we should err on the side of caution: assume consciousness, and work towards evidence of non-consciousness.

Innocent until proven guilty, but for consciousness: the first error merely extends moral consideration to beings that may not warrant it, while the second denies it to beings that do - a far worse outcome, and one much in line with our history on this planet.

3

AllEndsAreAnds t1_j95zf8o wrote

I think you’re being so reductive here that human reasoning itself gets reduced to some kind of blind interpolation.

Both brains and LLMs use nodes to store information, patterns, and correlations as states, which we call upon and modify as we experience new situations. This is largely how we acquire skills, define ourselves, reason, forecast future expectations, etc. Yet what stops me from saying “yeah, but you’re just interpolating from your enormous corpus of sensory data”? Of course we are - that’s largely what learning is.

I can’t help but think that if I were an objective observer of humans and LLMs, and therefore didn’t have human biases, I would conclude that both systems are intelligent and reason in analogous ways.

But ultimately, I get nervous seeing a discussion go on this long without direct reference to the actual model architecture - something I haven’t seen done here, but which I’m sure would be illuminating.
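For concreteness, here’s a minimal sketch of the kind of mechanism I mean - a single self-attention step of a transformer-style LLM, with made-up dimensions and random weights (an illustration of the general technique, not any particular model):

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token states; w_*: learned (d_model, d_model) weight matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v        # project each token into query/key/value spaces
    scores = (q @ k.T) / (k.shape[-1] ** 0.5)  # similarity between every pair of positions
    weights = F.softmax(scores, dim=-1)        # how strongly each token attends to the others
    return weights @ v                         # each output is a weighted blend of stored features

# Toy usage with random weights, just to show the shapes involved.
d = 16
x = torch.randn(5, d)                          # 5 tokens, each a 16-dimensional state
out = self_attention(x, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
print(out.shape)                               # torch.Size([5, 16])
```

Whether you call that weighted blending of stored patterns “interpolation” or a primitive of “reasoning” is exactly where the interesting argument lies.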

9

AllEndsAreAnds t1_j4997rv wrote

Those are two totally different AI architectures though. You can’t sweep from large language models into reinforcement learning agents and assume some kind of continuity.

Alignment and morals are not bloatware in a large language model, because the training data is human writing. The value we want to extract has to outweigh the harm the model is capable of generating, so it’s prudent to prune off some roads in pursuit of a stable and valuable product to sell.

In a reinforcement learning model like AlphaZero, the training data is generated by playing against previous versions of itself. It has no need for morals because it doesn’t operate on a moral landscape. That’s not to say that we won’t ultimately want reinforcement agents in a moral landscape - we will - but those agents, too, will be trained within a social and moral landscape where alignment is necessary to accomplish goals.
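To make the contrast concrete, an AlphaZero-style training loop is shaped roughly like this (a loose sketch with placeholder functions, not the real implementation): the only data is games the current policy plays against itself, and the only signal is who won.

```python
import random

def self_play_game(policy):
    """Stand-in for one full game the current policy plays against itself."""
    moves = []                                 # in reality, filled in by tree search + the network
    outcome = random.choice([-1, 0, 1])        # loss / draw / win - the only reward there is
    return moves, outcome

def update(policy, games):
    """Stand-in for a gradient step nudging the network toward moves that led to wins."""
    return policy

policy = {}                                    # stand-in for the network's parameters
for iteration in range(100):
    games = [self_play_game(policy) for _ in range(64)]  # data generated purely by self-play
    policy = update(policy, games)             # no human text anywhere in the loop
```

Nothing in that loop ever touches human writing, so there is nowhere for morals to enter - unlike an LLM, whose entire training corpus is soaked in them.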

As a society, we can afford bloatware. We likely cannot afford the alternative.

4