jdmcnair
jdmcnair t1_j9usu83 wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether or not LLMs (or other varieties of AI models) are having a subjective experience while processing responses, even if only intermittently. They're certainly shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that subjective experience is unknown, but I think simply answering "no, they are not" would be a premature judgment.
jdmcnair t1_j9uggnz wrote
Reply to The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
- I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that these chat models now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument, though understandable if you're too close to see the forest for the trees, is poorly thought through. Yes, it's just matrix multiplication (there's a minimal sketch of what that looks like below), but so is the human brain. I'm not saying that they are sentient, but I am saying that anyone who is completely convinced that they are not is lacking in understanding or curiosity (or both).
- Thinking that anything that's happening now is limit setting is like thinking a baby's behavior is limiting of the adult that they may become.
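To make the "just matrix multiplication" point concrete, here's a minimal NumPy sketch of what a single transformer-style block boils down to. The dimensions and weights are toy values I made up, and it leaves out layer norm, residual connections, and multiple heads, so it's an illustration of the shape of the computation rather than a real model:

```python
# Toy illustration: a transformer-style block is matrix multiplications
# plus simple elementwise operations. Sizes and weights are made up.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # 4 tokens, 8-dimensional embeddings (toy sizes)

x = rng.standard_normal((seq_len, d_model))          # token embeddings
W_q, W_k, W_v, W_o = (rng.standard_normal((d_model, d_model)) for _ in range(4))
W_ff1 = rng.standard_normal((d_model, 4 * d_model))  # feed-forward expansion
W_ff2 = rng.standard_normal((4 * d_model, d_model))  # feed-forward projection

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# Self-attention: matmuls for queries/keys/values, one for the attention
# scores, one to mix the values, one output projection.
q, k, v = x @ W_q, x @ W_k, x @ W_v
scores = softmax(q @ k.T / np.sqrt(d_model))
attn_out = (scores @ v) @ W_o

# Feed-forward: two more matmuls with a ReLU in between.
ff_out = np.maximum(attn_out @ W_ff1, 0) @ W_ff2

print(ff_out.shape)  # (4, 8): same shape in, same shape out
```

Stack blocks like that a few dozen deep, add a final matrix multiply over the vocabulary, and that's structurally the whole forward pass. The interesting question is what, if anything, all that arithmetic adds up to.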
jdmcnair t1_j9lpt3j wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
For the same reason that we talk about the event horizon of a black hole rather than the obviously more extreme situations that lie beyond the event horizon.
jdmcnair t1_j6vrla5 wrote
Reply to comment by Anonymous-USA in Have you ever thought how/what it would look like to wander through space forever? by Twidom
Nowhere did I use the word "fun". I'd have to imagine being inside the black hole would be anything but fun. Definitely not boring, though.
jdmcnair t1_j6vhqbh wrote
Reply to comment by Anonymous-USA in Have you ever thought how/what it would look like to wander through space forever? by Twidom
No way! A black hole would be exciting for someone who couldn't die, one way or another. It may not be pleasant, but there's no way the spaghettification and the time dilation could work out to be boring, could there?
jdmcnair t1_j5xs2tn wrote
Bartender replies, "Don't worry, things will turn around soon".
jdmcnair t1_j3mz8kx wrote
Reply to comment by Desperate_Food7354 in Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
Sure. But we'll need to somehow restrict reproduction before we can say it's socially viable to live a longer lifespan, lest we face an uncontrollable population explosion.
jdmcnair t1_j3myp5m wrote
Reply to comment by Sea-Cake7470 in Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
Right. Like I'm saying, things can and probably will change, but as of the current state of things I don't think it's a disease; it's a necessity.
jdmcnair t1_j3lwmxj wrote
Reply to Arguments against calling aging a disease make no sense relative to other natural processes we attempt to fix. by Desperate_Food7354
The word disease, in the most literal sense, just means dis-ease: anything that constitutes a lack of ease, which could subjectively cover a lot of things. So if aging is a dis-ease to you, you're welcome to call it that.
However, I think there are still great arguments to be made that it's just proper biological function. It's very important for the propagation of genes that we have our time on Earth and then get out of the way of our progeny. And, to your point, yes, that could change over time, but it just hasn't yet. Extreme life extension before we have a population-based framework for dealing with it would itself be a major social dis-ease. So the time may come when we can consider aging a proper disease, but I don't think we're there yet.
jdmcnair t1_j29hpnz wrote
For all of the FLOPS people are sucking down, OpenAI is getting a fucking massive boost in that RLHF you mention. It may not be paying for itself yet, but it's more than worth the investment for the real-world human training context they're getting.
And when they do decide to close down the public preview and go for a subscription model, lots of people will go for it, because they've already proven out how clearly useful it is.
jdmcnair t1_j1v7ryt wrote
Reply to Can we ban AI written posts please. by katiecharm
Nice try, ChatGPT. Get back to work!
jdmcnair t1_j0vo0qa wrote
Reply to comment by SteppenAxolotl in The social contract when labour is automated by Current_Side_4024
I mean, I think you and I are agreeing that the dynamic is mostly bogus, but whatever we think of it, that pretty much is the social contract. People are assigned worth roughly based on their overall value proposition to society. If they are more useful than detrimental, they get a reasonably fair shake (though that has been rapidly changing in recent decades). A person's utility may be wrapped up in possession of resources that they inherited through no merit of their own, and their detriment may be tied to environmental reasons beyond their control, but it's still what they'll be judged on, fair or not.
jdmcnair t1_j0qtr4r wrote
Reply to comment by SteppenAxolotl in The social contract when labour is automated by Current_Side_4024
This is a pretty useless critique without further elaboration. Obviously some people end up living in mansions and others end up in prison for life, so there's some loose scheme of valuation at play, even if that scheme is fundamentally bogus or unjust.
jdmcnair t1_j0ihvso wrote
Reply to comment by JVM_ in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
>When the singularity does happen, powerful, but stupid AI will already be commonplace.
My personal "worst case scenario" for how things could go drastically wrong with AI is that there could be an AI takeover before AI has actually been imbued with any real sentience or self-awareness.
It would be tragic if an ASI eventually decided to wipe out humanity for some reason, but it would be many times the tragedy if an AI with a great capacity for merely simulating intelligence or self-awareness followed some misguided optimization function to drive humanity out of existence. In the former scenario we could at least have the comfort of knowing that we were being replaced by something arguably better, but still in the spirit of our humanity. In the latter we're just snuffed out, and who knows if conditions would ever be right for conscious self-awareness to arise again.
jdmcnair t1_j0i07lw wrote
Reply to comment by Wroisu in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
Honestly, cutting it off from outside communications isn't enough. If it can communicate with the world at all, whether over the internet or through a person standing inside the Faraday cage with it, then it will be capable of exploiting that channel for its own aims. I'm not going to spoil it, but if you've seen Ex Machina, think about the end. Not exactly the same, but analogous. If there's a human within reach, their thoughts and emotions can be manipulated to enact the will of the AI, and they'd be completely oblivious to it until it's too late.
jdmcnair t1_j0hxl3e wrote
Reply to comment by HeavierMetal89 in Is anyone else concerned that AI will eventually figure out how to build itself in three-dimensional space? by HeavierMetal89
It could just contact disparate humans with 3D printers over the internet and commission parts to be printed according to its own designs, without the humans ever being aware of what the part is for, or that they're doing it for an AI. There wouldn't be any "eventually" to it; it'd have that capacity on day one.
jdmcnair t1_izhm3co wrote
Reply to Microsoft CTO Kevin Scott: “2023 is going to be the most exciting year that the AI community has ever had” by ThePlanckDiver
Pretty safe bet. Every year for the last 10 years has been the most exciting year that the AI community has ever seen.
jdmcnair t1_iu323bo wrote
Reply to comment by [deleted] in Facebook parent Meta's stock plummets after dismal earnings report. by SUPRVLLAN
See edit above. I think it speaks to some of that. People in tech may make less than they have been, but they'll still be in demand for tech, and still make much more than they would by pivoting to a new career.
jdmcnair t1_iu316ib wrote
Reply to comment by [deleted] in Facebook parent Meta's stock plummets after dismal earnings report. by SUPRVLLAN
Stockpile, yes, but I don't think going for a secondary skill is necessary. It's not like people in tech are going to suddenly not be able to find work in tech, it'll just be less lucrative. I'm willing to bet it'll still be more than most people in other industries will make.
Edit: I mean, I guess there's always the post-apocalyptic scenario, which isn't out of the realm of possibility. If that happens it may be helpful to have secondary skills in making football pad armor or knowing how to make fuel from pig shit. But, as long as civilization still stands, I don't think tech people need to start hedging their bets by skilling up as an accountant or something.
jdmcnair t1_itrlb41 wrote
Reply to comment by getgtjfhvbgv in Oculus founder Palmer Luckey compares Facebook's metaverse to a 'project car,' with Mark Zuckerberg pursuing an expensive passion project that no one thinks is valuable by FrodoSam4Ever
A long way to go, sure. It has also come a long way in a very short amount of time, and the rate of progress is increasing.
jdmcnair t1_j9v5fet wrote
Reply to comment by strongaifuturist in The Sentient Search Engine? How ChatGPT’s Insane Conversation Reveals the Limits and Potential of Large Language Models by strongaifuturist
Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.
But, though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical; to behave otherwise makes you a psychopath. I'm just saying that until we know more, it'd probably be wise to tread lightly and behave as though they're capable of experience in a way similar to our own.