
SandAndAlum t1_j9ik8xx wrote

The Chinese Room is just an exercise in shuffling complexity around and an argument from incredulity. Nothing is proven other than that the human in the room isn't the person being spoken to, which was already given in the premise.

1

DomesticApe23 t1_j9ikugc wrote

I'm sure that sounded really good in your head, but it doesn't seem to mean anything. Perhaps try using simple language to convey your ideas.

3

SandAndAlum t1_j9ikzze wrote

It's perfectly coherent, unlike the Chinese Room arguments.

1

DomesticApe23 t1_j9il9uy wrote

It may be coherent but it doesn't say anything. What do you mean by 'shuffling complexity around'? How is it an argument from incredulity? Say something worthwhile.

1

SandAndAlum t1_j9ilsm5 wrote

All of Searle's no-simulation arguments consist of building an information-processing machine out of silly parts, hiding how much information such a system would contain, and then saying 'look, those parts are silly! There can't be meaning here.' It's pointless and circular.

But neither you nor he has defined meaning, and neither of you says anything about whether meaning is an emergent property. Facile dismissals based on the presumption that it cannot emerge are what's hollow. Pointing out how tautological that argument is is not.

0

DomesticApe23 t1_j9im59f wrote

ChatGPT is literally a Chinese Room. It understands nothing, yet it delivers meaning well enough, just as the Chinese Room translates Chinese well enough. Your failure to understand the specifics of ChatGPT's software is exactly analogous to 'hiding how much information such a system would contain'.

1

SandAndAlum t1_j9imdzi wrote

I know what a transformer is. Define understanding and prove there isn't any in one.

It's also not a Chinese Room, because its output isn't indistinguishable from a human's, so the argument is doubly stupid.

1

DomesticApe23 t1_j9imi1p wrote

Yeah, I think I'll leave the sophomoric philosophy to you, mate; you're obviously very enamoured of your own opinions.

1

SandAndAlum t1_j9iml4q wrote

And yet you're the one sophomorically insisting on a conclusion with no supporting logic or evidence.

1

DomesticApe23 t1_j9imwch wrote

What conclusion is that?

1

SandAndAlum t1_j9in9v7 wrote

Your presupposition that understanding cannot emerge from a table of numbers and some rules for multiplying and adding them is the very conclusion you're arguing for: that no understanding or new meaning can emerge.

Your conclusion is identical to your assumption, so you're saying nothing, extremely arrogantly, and then falling back, even more arrogantly, on an argument from authority in which someone else did the same thing.
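
To be concrete about what 'a table of numbers and some rules for multiplying and adding them' means, here's a toy sketch (plain NumPy, made-up shapes, not ChatGPT's actual code) of the attention step at the core of a transformer:

```python
# Toy illustration only: assumed shapes and random weights, not any real model.
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a toy token sequence X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # multiply inputs by learned tables of numbers
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # more multiplying and adding
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: normalise each row
    return weights @ V                            # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)             # -> (4, 8)
```

Everything there is multiplication, addition, and a normalisation. The question at issue is whether understanding can emerge when billions of such operations are stacked, not whether any single one of them 'understands'.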

−1

CaseyTS t1_j9ioa8y wrote

You're so aggressive for literally no reason at all.

0

CaseyTS t1_j9io8yn wrote

The thing you were talking about in that comment was developing deep and unique insights about the human experience. Yes, you can do that with a generative model that has no subjective experience. It can intelligently and creatively synthesize information from vast amounts of documented human experience. That is literally what generative LLMs are designed to do: learn from humans and talk about it.

0

Rofel_Wodring t1_j9m2k22 wrote

What SandAndAlum means is that the Chinese Room thought experiment shuffles the responsibility for explaining humanity's (self-oriented and essentialist) view of consciousness onto the computer. It just takes human consciousness as a given that doesn't have to justify itself, and certainly not through reductionism.

Because if our mode of consciousness did have to justify itself by the same rules as the computer in the Chinese Room, we'd fail in the same way the computer would fail.

1

CaseyTS t1_j9inh7k wrote

I understood it. I think I get 'incredulous,' but I didn't google it.

0