jdmcnair t1_j9uggnz wrote
- I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that the chat models now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument is poorly thought through, though it's an understandable position if you're close enough to the trees to miss the forest. Yes, it's just matrix multiplication, but so, arguably, is the human brain. I'm not saying that these models are sentient, but I am saying that anyone who is completely convinced they are not is lacking in understanding or curiosity (or both).
- Thinking that anything happening now sets the limits of what these systems can become is like thinking a baby's behavior limits the adult that baby may become.
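The "just matrix multiplication" point can be made concrete. Here is a minimal sketch (my own illustration, not from the thread, with made-up tiny dimensions and random placeholder weights) showing that a neural-network layer of the kind LLMs stack by the dozen reduces to matrix multiplies plus a simple nonlinearity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))    # a toy "token embedding" (hypothetical size)
W1 = rng.standard_normal((4, 8))   # first layer's weight matrix (placeholder values)
W2 = rng.standard_normal((8, 4))   # second layer's weight matrix (placeholder values)

h = np.maximum(x @ W1, 0.0)        # matrix multiply, then ReLU nonlinearity
y = h @ W2                         # another matrix multiply
print(y.shape)                     # (1, 4)
```

A real transformer adds attention and normalization, but those too bottom out in the same primitive: multiply activations by learned weight matrices. The debate in this thread is over whether that description settles anything about experience.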
strongaifuturist OP t1_j9uo718 wrote
Well, to your first point: if it's unclear whether these systems lack sentience (and I'm not saying your position is unreasonable), a big part of that lack of clarity comes from the difficulty of knowing exactly what sentience is.
jdmcnair t1_j9usu83 wrote
True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether or not LLMs (or other varieties of AI models) are having a subjective experience during the processing of responses, even if intermittently. They certainly are shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that subjective experience is unknown, but I think simply answering "no, they are not" would be premature judgment.
strongaifuturist OP t1_j9v08bt wrote
You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if one were conscious, it’s not clear what you would do with that. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).
jdmcnair t1_j9v5fet wrote
Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.
But even though I don't have ironclad certainty that you actually exist and are having an experience like mine from your own perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical: to behave otherwise makes you a psychopath. I'm just saying that until we know more, it'd probably be wise to tread lightly and behave as though these models are capable of experience in a way similar to what we are.