
The people who thought up the Chinese Room argument were almost right - they just didn't realize it would be the human who didn't understand anything.


Isn't that exactly the Chinese room argument though? The human/computer in the room doesn't need to understand Chinese or English to translate?


The Chinese room metaphor is an argument about AI, or computation more generally, attempting to distinguish between performance metrics and "understanding".

There is the role of the human within the metaphor, and there is the argument/position of the metaphor as a whole. I don't think it makes much sense to talk about the "point/argument/position" of a single component of the metaphor. At least, that wasn't how it was originally structured.


The Chinese Room metaphor is also about consciousness, about thought in general.


The Chinese Room metaphor is yet another one of those arguments that fails to take into account that humans are a bag of functional proteins. There is no soul anywhere in the human body.

In other words: it's an argument that would work perfectly fine to defend the idea that humans have no consciousness.

Ergo it can only really prove either that humans have no consciousness/soul/... (whatever you call the magical human property) or that it doesn't exist at all.


That is a pretty simplistic grasp of the example. It doesn't claim that no system can have understanding or consciousness. It just demonstrates that performance alone is insufficient to conclude consciousness or understanding. A calculator or abacus can add 2 + 2 but isn't self-aware.


The thought experiment doesn't put a limit on the amount of intelligence the Chinese Room can display, though, and if the thesis is that you just need a big enough model, then the fact that it doesn't "understand" is not an impediment. It's not a decided matter, in any case, and is still controversial among philosophers of mind (who are less biased than AI researchers holding stock options in the companies they work for).


Maybe I'm reiterating the same point as you, but the experiment is an illustration of the distinction between understanding and performance. Neither the size of the model nor perfect performance negates the distinction.

IMO, the line of thinking in the metaphor is contingent on unanswered questions about self-awareness, which seems to be a definitional prerequisite for semantic understanding.


If we hypothesize perfect performance, the philosophical question is whether that's a distinction without (or with) a difference. Perfect performance would simulate semantic understanding, no? More relevant to the real world, though: how do we rate imperfect performance? If a Chinese Room that I have access to can make some logical leaps but not others, how do we rate that artificial intelligence? We have words to describe humans with insufficient semantic understanding, but those humans are not usually able to write/generate cromulent essays on basically every topic.


>Perfect performance would simulate semantic understanding, no?

The thought experiment argues the exact opposite. The idea is that you can produce perfect Chinese without understanding a single word of what you are saying. The argument is that syntax (procedure-based operation) and semantics (understanding) are distinct and separable.

As you extend the scope of the Chinese room from a single task (e.g. Chinese conversation) to human behavior in general, it converges with the question of whether philosophical zombies can exist.

In terms of the Chinese room, the distinction IS the difference.


That's what Searle's argument is, but it's a thought experiment, and there's no consensus among professional philosophers as to whether it's true.


I kinda interpreted it as: "Here's an example of something we would not call intelligence if we knew how it worked; therefore our definition of intelligence must somehow involve process or understanding, not just results."


The Chinese room has only humans inside and outside the room.


But the human-ness of the person inside the room is not important; they're just there to follow simple instructions. I think Searle (https://en.wikipedia.org/wiki/John_Searle) put a human in the room to help us empathize and see that the person in the room doesn't understand Chinese. It simplifies the argument by avoiding the introduction of computers into the setup.


The Chinese Room was a lame thought experiment. It requires a book so wondrous that any question looked up in it yields a convincingly human answer (it's just that both the question and the reply are in a language the person shuffling pages doesn't understand). A modern audience can see the Book as equivalent to the distillation of all that LLM training.
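To make that concrete: the Book is essentially a giant lookup table, and the person is a dumb interpreter for it. Here's a minimal Python sketch of the idea (the entries and the person_in_room helper are my own made-up illustrations, nowhere near the scale a convincing Book would need):

    # A toy "Chinese Room": the "book" is a lookup table, and the "person"
    # just shuffles symbols. No meaning is attached to either side.
    book = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天是晴天。",  # "How's the weather today?" -> "It's sunny today."
    }

    def person_in_room(question: str) -> str:
        # Pure syntax: match symbols, emit symbols, understand nothing.
        return book.get(question, "请再说一遍。")  # "Please say that again."

    print(person_in_room("你好吗？"))  # fluent reply, zero comprehension

The lookup runs flawlessly whether or not anyone involved understands a word of either column.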

It's odd to me to present an argument on human vs. machine intelligence and completely skip over what human intelligence is: exactly what about it is so distant from this "trained" (educated?) book.

(I think I am also speaking for Karl Pilkington when I say that I have no way to know whether the little man in my head who is busy flipping through the Calhoun Book has a clue as to what I am on about either.)


Huh? The human not understanding anything is kinda the point of the Chinese Room argument...


I refuse to understand the Chinese Room argument, I only deploy it.


The human in the Chinese Room argument represents an AI. Eh, that kind of complicates the joke...


I think it's the book together with the human in the Chinese Room that represents the AI.



