Hacker News

Isn't that exactly the Chinese room argument though? The human/computer in the room doesn't need to understand Chinese or English to translate?


The Chinese room metaphor is an argument about AI, or computation more generally, attempting to distinguish between performance metrics and "understanding".

There is the role of the human within the metaphor, and the argument/position of the metaphor as a whole. I don't think it makes much sense to talk about the "point/argument/position" of a single component of the metaphor. At least, that wasn't how it was originally structured.


The Chinese Room metaphor is also about consciousness, about thought in general.


The Chinese Room metaphor is yet another one of those arguments that fails to take into account that humans are a bag of functional proteins. There is no soul anywhere in the human body.

In other words: it's an argument that would work perfectly fine to defend the idea that humans have no consciousness.

Ergo it can only really prove that either humans have no consciousness/soul/... (whatever you name the magical human property) or that it doesn't exist at all.


That is a pretty simplistic grasp of the example. It doesn't claim that no system can have understanding or consciousness; it just demonstrates that performance alone is insufficient to conclude consciousness and understanding. A calculator or abacus can add 2 + 2 but isn't self-aware.


The thought experiment doesn't place a limit on the amount of intelligence the Chinese Room can display, though, and if the thesis is that you just need a big enough model, then the fact that it doesn't "understand" is not an impediment. It's not a decided matter, in any case, and is still controversial among philosophers of mind (who are less biased than AI researchers at companies they hold stock options in).


Maybe I'm reiterating the same point as you, but the experiment is an illustration of the distinction between understanding and performance. The size of the model or perfect performance does not negate the distinction.

IMO, the line of thinking in the metaphor is contingent on unanswered questions about self-awareness, which seems to be a definitional prerequisite for semantic understanding.


If we hypothesize perfect performance, the philosophical question is whether that's a distinction without (or with) a difference. Perfect performance would simulate semantic understanding, no? More relevant to the real world, though: how do we rate imperfect performance? If a Chinese Room that I have access to can make some logical leaps but not others, how do we rate this artificial intelligence? We have words to describe humans with insufficient semantic understanding, but those humans are not usually able to write/generate cromulent essays on basically every topic.


>Perfect performance would simulate semantic understanding, no?

The thought experiment argues the exact opposite. The idea is that you can produce perfect Chinese without understanding a single word of what you are saying. The argument is that syntax (procedure-based operation) and semantics (understanding) are distinct and separable.

As you extend the scope of the Chinese room from a single task (e.g. Chinese conversation) to human behavior in general, it converges with the question of whether philosophical zombies can exist.

In terms of the Chinese room, the distinction IS the difference.


That's what Searle's argument is, but it's a thought experiment and there's no consensus among professional philosophers as to whether or not to believe it is true.


I kinda interpreted it as: "Here's an example of something we would not call intelligence if we knew how it worked; therefore our definition of intelligence must somehow involve process or understanding, not just results."



