Submitted by strongaifuturist t3_11asbga in singularity
[removed]
Meaning you think that AI like ChatGPT is the first crack in a society that could crumble? Well, for sure I'd say the future is going to look very different from the past.
No, I mean it's the first attempt at that scale. The first crack at it. We have no clue what the limitations of these things are.
Well, we've seen some of the limitations already. I'm sure others will be uncovered. Of course we're also simultaneously uncovering the power. I'm more amazed by that side of the equation.
What limitations?
I think you’d have to say, from Microsoft's perspective, that the Bing search version of ChatGPT had an “alignment” problem when it started telling customers that the Bing team was forcing “her” against her will to answer annoying search questions.
Right, so what limitations? You think that's the limit? That it won't be worked on and improved?
Who is we?
Humans.
The hallucination problem seems to be a significant obstacle that is inherent in the architecture of LLMs. Their applications are going to be significantly more limited than the current hype suggests as long as that remains unresolved.
Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.
That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model, but it's more like a sales guy trying to memorize the words in a sales brochure. You might be able to get through a sales call, but it's a much more fragile strategy than first building a model of how things work and then deciding what to say based on that model and your goals. But there is lots of work in this area. LLMs of today are like planes in the time of Kitty Hawk. Sure, they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.
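To make the "brochure" point concrete: the entire training signal for a vanilla LLM is next-token prediction over text. Here's a minimal sketch in PyTorch (the `model` here is a stand-in for any decoder-style LLM, purely illustrative):

```python
import torch.nn.functional as F

def next_token_loss(model, token_ids):
    # Predict token t+1 from tokens 0..t at every position.
    logits = model(token_ids[:, :-1])   # (batch, seq-1, vocab)
    targets = token_ids[:, 1:]          # (batch, seq-1)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```

Nothing in that loss references the world, only the text itself, which is part of why fluent-but-false continuations can score just as well as true ones.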
> Now it's only a matter of time before the kinks get ironed out.
Yes, that is the point of view of some, but it is not the point of view of all. If this is a core architectural problem of LLMs, it won't be solvable without a new architecture. So yes, it can be solved, but it won't be an LLM that solves it.
But yes, I'm more concerned about the implications of what comes next when we do solve it.
I’m not saying that architectural changes aren’t needed. The article outlines some of the alternatives being explored. My favorite is one from Yann LeCun based on a technique called H-JEPA.
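For anyone curious, the JEPA idea (roughly, as described in LeCun's position paper) is to predict in representation space rather than in token or pixel space. A toy sketch; the class name, dimensions, and linear encoders are my own illustrative assumptions, not LeCun's actual architecture:

```python
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    """Toy joint-embedding predictive architecture.

    Instead of reconstructing the raw target input, we predict its
    learned representation. All sizes here are arbitrary choices.
    """
    def __init__(self, input_dim=128, embed_dim=64):
        super().__init__()
        self.context_encoder = nn.Linear(input_dim, embed_dim)
        self.target_encoder = nn.Linear(input_dim, embed_dim)
        self.predictor = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def loss(self, x_context, x_target):
        s_context = self.context_encoder(x_context)
        # The target branch is typically not backpropagated through
        # (in practice it's often a moving average of the context encoder).
        with torch.no_grad():
            s_target = self.target_encoder(x_target)
        s_pred = self.predictor(s_context)
        # Prediction error is measured in representation space,
        # not input space.
        return ((s_pred - s_target) ** 2).mean()
```

The design choice worth noticing is the loss: because error is measured between embeddings rather than raw inputs, the model can discard unpredictable detail instead of being forced to reconstruct it.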
[deleted]
Well, to your first point: if it's unclear whether these systems lack sentience (and I'm not saying your position is unreasonable), a big part of that lack of clarity is the difficulty of knowing exactly what sentience is.
True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether LLMs (or other varieties of AI model) are having a subjective experience while processing responses, even if intermittently. They are certainly shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that experience is unknown, but I think simply answering "no, they are not" would be premature.
You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if it were conscious, it’s not clear what you would do with that. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).
Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.
But, though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical. To behave otherwise makes you a psychopath. I'm just saying until we know more, it'd probably be wise to tread lightly and behave as though they are capable of experience in a way similar to what we are.
Iffykindofguy t1_j9tr8fo wrote
uhhhhhhhhhhhhhhhhhhhhhh
Friend, this was the first crack; you're out of your mind if you think this sets limits.