jdmcnair t1_j9v5fet wrote

Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.

But, though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical: to behave otherwise would make you a psychopath. I'm just saying that until we know more, it'd probably be wise to tread lightly and behave as though they are capable of experience in a way similar to ours.

1

jdmcnair t1_j9usu83 wrote

True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether or not LLMs (or other varieties of AI models) are having a subjective experience during the processing of responses, even if only intermittently. They are certainly shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that subjective experience is unknown, but I think simply answering "no, they are not" would be a premature judgment.

1

jdmcnair t1_j9uggnz wrote

  1. I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that the chat models now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument, though understandable if you're so close to the trees that you can't see the forest, is poorly thought through. Yes, it's just matrix multiplication (see the sketch after this list), but so is the human brain. I'm not saying that they are sentient, but I am saying that anyone who is completely convinced that they are not is lacking in understanding or curiosity (or both).
  2. Thinking that anything happening now sets limits is like thinking a baby's behavior limits the adult they may become.
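To make that concrete, here's a toy sketch in Python with numpy of a single transformer-style block. The dimensions and weights are made up for illustration (random stand-ins, not from any real model), it's single-head, and layer norms are omitted; but the point stands that the forward pass is little more than matrix multiplications plus a couple of elementwise nonlinearities:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions, chosen arbitrarily for illustration.
seq_len, d_model = 4, 8
rng = np.random.default_rng(0)

x = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings

# Random stand-ins for learned weight matrices.
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
W1 = rng.normal(size=(d_model, 4 * d_model))
W2 = rng.normal(size=(4 * d_model, d_model))

# Single-head self-attention: matmuls plus one softmax.
q, k, v = x @ Wq, x @ Wk, x @ Wv
attn = softmax(q @ k.T / np.sqrt(d_model)) @ v @ Wo

# Feed-forward sublayer: two more matmuls around a ReLU.
h = x + attn                             # residual connection
out = h + np.maximum(0.0, h @ W1) @ W2   # residual connection

print(out.shape)  # (4, 8): same shape in, same shape out
```

Stack a few dozen of those blocks (plus the layer norms and embedding/unembedding steps omitted here) and you have the skeleton of an LLM forward pass. Whether anything built from operations that simple can host experience is exactly the open question.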
1

jdmcnair t1_j3lwmxj wrote

The word disease, in the most literal sense, just means dis-ease: a lack of ease, which subjectively could be a lot of things. So if aging is a dis-ease to you, you're welcome to call it that.

However, I think there are still great arguments to be made that it's just proper biological function. It's very important for the propagation of genes that we have our time on Earth and then get out of the way of our progeny. And, to your point, yes, that could change over time, but it just hasn't yet. Extreme life extension before we have a population-based framework for dealing with it would itself be a major social dis-ease. So the time may come when we can consider aging a proper disease, but I don't think we're there yet.

3

jdmcnair t1_j29hpnz wrote

For all of the FLOPS people are sucking down, OpenAI is getting a fucking massive boost from that RLHF you mention. It may not be paying for itself yet, but it's more than worth the investment for the real-world human training context they're getting.

And when they do decide to close down the public preview and go for a subscription model, lots of people will go for it, because they've already proven out how clearly useful it is.

148

jdmcnair t1_j0vo0qa wrote

I mean, I think you and I are agreeing that the dynamic is mostly bogus, but whatever we think of it, that pretty much is the social contract. People are assigned worth roughly based on their overall value proposition to society. If they are more useful than detrimental, they get a reasonably fair shake (though that has been rapidly changing in recent decades). A person's utility may be wrapped up in possession of resources that they inherited through no merit of their own, and their detriment may be tied to environmental reasons beyond their control, but it's still what they'll be judged on, fair or not.

1

jdmcnair t1_j0ihvso wrote

>When the singularity does happen, powerful, but stupid AI will already be commonplace.

My personal "worst case scenario" imaginings of how things could go drastically wrong with AI is that there could be an AI takeover before it has actually been imbued with any real sentience or self-awareness.

It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI with a great capacity for merely simulating intelligence or self-awareness followed some misguided optimization function to drive humanity out of existence. In the former scenario, at least we could have the comfort of knowing that we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter, we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to arise again.

3

jdmcnair t1_j0i07lw wrote

Honestly, cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether via the internet or a person standing inside the Faraday cage with it, it will be capable of hacking that channel to its own ends. I won't spoil it, but if you've seen Ex Machina, think about the ending. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the will of the AI, and they'd be completely oblivious to it until it's too late.

3

jdmcnair t1_j0hxl3e wrote

It could just contact disparate humans with 3D printers over the internet and commission parts to be printed to its own designs, without the humans ever being aware of what the parts are for, or that they're doing it for an AI. There wouldn't be any "eventually" about it; it'd have that capacity on day one.

4

jdmcnair t1_iu316ib wrote

Stockpile, yes, but I don't think going for a secondary skill is necessary. It's not like people in tech are suddenly going to be unable to find work in tech; it'll just be less lucrative. I'm willing to bet it'll still pay more than what most people in other industries make.

Edit: I mean, I guess there's always the post-apocalyptic scenario, which isn't out of the realm of possibility. If that happens it may be helpful to have secondary skills in making football pad armor or knowing how to make fuel from pig shit. But, as long as civilization still stands, I don't think tech people need to start hedging their bets by skilling up as an accountant or something.

6