Comments

Iffykindofguy t1_j9tr8fo wrote

uhhhhhhhhhhhhhhhhhhhhhh

Friend, this was the first crack. You're out of your mind if you think this sets limits.

strongaifuturist OP t1_j9ts95i wrote

Meaning you think that AI like ChatGPT is the first crack in a society that could crumble? Well, for sure I'd say that the future is going to look very different than the past.

Iffykindofguy t1_j9ttnpd wrote

No, I mean it's the first attempt at that scale. First crack at it. We have no clue what the limitations of these things are.

strongaifuturist OP t1_j9u2fyw wrote

Well, we've seen some of the limitations already. I'm sure others will be uncovered. Of course we're also simultaneously uncovering the power. I'm more amazed by that side of the equation.

Iffykindofguy t1_j9u5jv2 wrote

What limitations?

strongaifuturist OP t1_j9u7zdb wrote

I think you’d have to say, from Microsoft's perspective, that the Bing search version of ChatGPT had an “alignment” problem when it started telling customers that the Bing team was forcing “her” against her will to answer annoying search questions.

Iffykindofguy t1_j9ue0za wrote

Right, so what limitations? You think that's the limit? It won't be worked on and improved?

Liberty2012 t1_j9u06qk wrote

The hallucination problem seems to be a significant obstacle that is inherent in the architecture of LLMs. Their applications are going to be significantly more limited than the current hype suggests as long as that remains unresolved.

Ironically, when it is resolved, we get a whole lot of new problems, but more in the philosophical space.

strongaifuturist OP t1_j9u28ig wrote

That's absolutely right. The current LLMs don't have an independent world model per se. They have a world model, but it's more like a sales guy trying to memorize the words in a sales brochure. You might be able to get through a sales call, but it's a much more fragile strategy than first having a model of how things work and then figuring out what to say based on that model and your goals. But there's a lot of work in this area. Today's LLMs are like airplanes at the time of Kitty Hawk: sure, they have limitations, but the concept has been proven. Now it's only a matter of time before the kinks get ironed out.
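
To make the "sales brochure" point concrete, here's a toy sketch (pure Python, made-up micro-corpus; obviously nothing like a real LLM, which uses a huge neural network rather than a count table): a model that picks the next word purely from co-occurrence statistics can sound fluent without any model of what the words actually refer to.

```python
# Toy "brochure memorizer": predict the next word from bigram counts alone.
# Hypothetical micro-corpus; no grounding, no world model, just surface statistics.
from collections import Counter, defaultdict

corpus = ("our product is fast our product is reliable "
          "our product is affordable").split()

next_word = defaultdict(Counter)          # word -> counts of the words that followed it
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the continuation seen most often after `prev` in the corpus."""
    return next_word[prev].most_common(1)[0][0]

print(predict("product"))   # -> "is": fluent-sounding, but nothing behind it
```

Real LLMs are of course vastly more sophisticated than a count table, but the training objective is still next-token prediction, which is the sense in which the strategy is "memorizing the brochure" rather than reasoning from a model of the world.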

Liberty2012 t1_j9u3ov6 wrote

> Now it's only a matter of time before the kinks get ironed out.

Yes, that is the point of view of some, but not of all. If hallucination is a core architectural problem of LLMs, it will not be solvable without a new architecture. So yes, it can be solved, but it won't be an LLM that solves it.

But yes, I'm more concerned about the implications of what comes next when we do solve it.

strongaifuturist OP t1_j9u8es5 wrote

I’m not saying that architectural changes aren’t needed. The article outlines some of the alternatives being explored. My favorite is one from Yann LeCun based on a technique called H-JEPA.

jdmcnair t1_j9uggnz wrote

  1. I understand a good deal about what's going on under the hood of LLMs, and I think it's far from clear that the chat models now going public absolutely lack sentience. I'm no expert, but I've spent more than a little time studying machine learning. The "it's just matrix multiplication" argument, though understandable if you're close enough to miss the forest for the trees, is poorly thought through. Yes, it's just matrix multiplication (see the sketch after this list), but so is the human brain. I'm not saying that they are sentient, but I am saying that anyone who is completely convinced they are not is lacking in understanding or curiosity (or both).
  2. Thinking that anything happening now sets limits is like thinking a baby's behavior limits the adult that baby may become.
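
For what it's worth, here's a minimal sketch of the "just matrix multiplication" point (plain NumPy, arbitrary sizes, not any particular model's code): a single self-attention step in a transformer really does boil down to a few matrix products plus a softmax.

```python
# Toy sketch of one self-attention step (plain NumPy, made-up sizes):
# queries, keys, and values are all produced and combined by matrix products.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                      # 4 tokens, 8-dim embeddings (arbitrary)
X = rng.normal(size=(seq_len, d_model))      # token embeddings
Wq = rng.normal(size=(d_model, d_model))     # learned weights in a real model
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

Q, K, V = X @ Wq, X @ Wk, X @ Wv             # matrix multiplications
scores = Q @ K.T / np.sqrt(d_model)          # more matrix multiplication
out = softmax(scores) @ V                    # attention-weighted mix of values

print(out.shape)                             # (4, 8): new representation per token
```

The fact that each step is simple arithmetic doesn't, by itself, tell you much about what a very large stack of those steps is or isn't doing, which is the point.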

strongaifuturist OP t1_j9uo718 wrote

Well, to your first point: if it's unclear whether these systems lack sentience (and I'm not saying your position is unreasonable), a big part of that lack of clarity is due to the difficulty of knowing exactly what sentience is.

jdmcnair t1_j9usu83 wrote

True. The meaning of the word "sentience" is highly subjective, so it's not a very useful metric. I think it's more useful to consider whether or not LLMs (or other varieties of AI models) are having a subjective experience during the processing of responses, even if only intermittently. They are certainly shaping up to model the appearance of subjective experience in a pretty convincing way. Whether that means they are actually having that subjective experience is unknown, but I think flatly answering "no, they are not" would be premature.

strongaifuturist OP t1_j9v08bt wrote

You can’t even be sure I’m having subjective experiences, and I’m a carbon-based life form! It’s unlikely we’ll make much progress answering the question for LLMs; it quickly becomes philosophical. Anyway, even if one were conscious, it’s not clear what you would do with that. I’m conscious most of the time, but I don’t mind going to sleep or being put under anesthesia. So who knows what a conscious chatbot would want (if anything).

jdmcnair t1_j9v5fet wrote

Of course. Yeah, we have no way of knowing anything outside of our own individual existence, when it comes right down to it.

But even though I don't have ironclad certainty that you actually exist and are having an experience like mine from your perspective, the decent thing to do in the absence of certainty is to treat you as though you are. And that distinction is not merely philosophical; to behave otherwise makes you a psychopath. I'm just saying that until we know more, it'd probably be wise to tread lightly and behave as though these models are capable of experience in a way similar to ours.
