Myxomatosiss t1_j77hgb3 wrote

This is a language model you're discussing. It's a mathematical model that calculates the correlation between words.

It doesn't think. It doesn't plan. It doesn't consider.

We'll have that someday, but it is in the distant future.

26

---AI--- t1_j7a3hl0 wrote

>It doesn't think. It doesn't plan. It doesn't consider.

I want to know how you can prove these things, because ChatGPT can most certainly at least "simulate" them. And if it can simulate them, how do you know it isn't "actually" doing them, or whether that question even makes sense?

Just ask it to do a task that a human would have to think, plan, and consider to complete. A very simple example is to ask it to write a bit of code. It can call and use functions before it has defined them, and it can open brackets planning ahead that it will need to fill in that function later.
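Here's a hypothetical illustration of the kind of output I mean (made up for this comment, not actual ChatGPT output): the helper gets used before its definition appears, which at least looks like planning ahead.

```python
def count_words(lines):
    total = 0
    for line in lines:
        total += parse_line(line)  # called here, defined only further down
    return total

def parse_line(line):
    return len(line.split())

print(count_words(["one two three", "four five"]))  # prints 5
```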

1

Myxomatosiss t1_j7abejl wrote

That's a fantastic question. ChatGPT is a replication of associative memory with an attention mechanism: it has learned to associate strings with other strings from a massive amount of training data. However, it doesn't contain a working buffer that it reasons through. We have a working space in our heads where we can replay information; ChatGPT does not. In fact, when you feed it an input, it runs through its associative calculations, arrives at an answer in a single pass, and then ceases to function until another call is made.

It doesn't consider the context of the problem because it has no context of its own; any context it has is inherited from its training set. To compare it with the Chinese room experiment: imagine that the people reading the output of the Chinese room found it to have some affect. Maybe it has a dry sense of humor, or is a bit of an airhead. That affect would come exclusively from the data set, not from some bias in the room.

I really encourage you to read more about neuroscience if you'd like to learn more. Brilliant minds have been considering intelligence since long before we were born, and every ML accomplishment has been inspired by their work.
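To make the "no working buffer" point concrete, here is a rough sketch (purely illustrative, not how ChatGPT is actually built): each call is a stateless pass, and any "memory" across turns exists only because the caller re-sends the conversation history.

```python
def fake_language_model(prompt: str) -> str:
    # Stand-in for a single stateless pass: text in, text out, nothing retained.
    return f"[reply based on {len(prompt)} characters of context]"

conversation = []  # the "working space" lives outside the model, in the caller

def chat(user_message: str) -> str:
    conversation.append(f"User: {user_message}")
    prompt = "\n".join(conversation)     # the whole history is replayed every turn
    reply = fake_language_model(prompt)  # the model itself remembers nothing between calls
    conversation.append(f"Assistant: {reply}")
    return reply

print(chat("What is a chair?"))
print(chat("What did I just ask you?"))  # only answerable because we re-sent the history
```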

1

---AI--- t1_j7au2sj wrote

The Chinese room experiment is proof that a Chinese room can be sentient. There's no difference between a Chinese room and a human brain.

> It doesn't consider the context of the problem because it has no context.

I do not know what you mean here, so could you please give a specific example of something you think ChatGPT and similar models will never be able to answer correctly?

2

Myxomatosiss t1_j7budz6 wrote

If you truly believe that, you haven't studied the human brain. Or any brain, for that matter. There is a massive divide.

Ask it for a joke.

But more importantly, it has no idea what a chair is. It has mapped associations between the word "chair" and other words, and it can connect them in a convincingly meaningful way, but that is only a crude replication of associative memory. It's lacking so many other functions of a brain.
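A toy sketch of what "mapping associations between words" looks like, with made-up numbers standing in for learned word vectors:

```python
import math

# Made-up 3-dimensional vectors standing in for learned word embeddings.
embeddings = {
    "chair":   (0.9, 0.1, 0.0),
    "table":   (0.8, 0.2, 0.1),
    "sit":     (0.7, 0.3, 0.0),
    "justice": (0.0, 0.1, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words "near" chair in this space are its associations; nothing here has any
# idea what a physical chair is.
for word in ("table", "sit", "justice"):
    print(word, round(cosine(embeddings["chair"], embeddings[word]), 3))
```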

1

spiritus_dei OP t1_j77kkcu wrote

Sounds a lot like COVID-19. Was that dangerous?

−27

Ulfgardleo t1_j77ribp wrote

A virus acts on its own. It has mechanisms to interact with the real world.

9

cedriceent t1_j785o2y wrote

It also sounds like a glass of water. Explain the similarities between COVID-19 and a language model in a way that makes them analogous.

7