
Yes. The "system" understands Chinese in the same way a native speaker does. It's two different implementations (room system and native speaker) of the same computation ("understanding Chinese"). There is no externally observable difference between actually understanding Chinese and a perfect simulation of a system that understands Chinese. The fact that the person inside doesn't speak Chinese as a result is irrelevant in the same way that the L2 cache alone without the rest of a computer cannot run Minecraft is. If anything the Chinese Room thought experiment is an argument in favor of consciousness being computation. It pains me greatly that someone could come up with it and conclude the opposite.


The point of the experiment is to think about the individual in the room. You cannot say that's irrelevant, because it's the entire point.

The systems reply is trivial: sure, if the room-plus-person combination always produces a coherent response in Chinese, then the entire system understands Chinese. I'd go even further: if the person in the room does not understand Chinese, but the system does, then there is some other entity that understands Chinese - either a person or an advanced AI - feeding the inputs. From the system's perspective, the person in the room is then largely irrelevant.

But this is not the argument. Despite there being no discernible difference from the outside, the person in the room may either understand Chinese or they may not. So there is a distinction, from the perspective of the individual in the room, that does not depend on outside observation.

That's all there is to it. It shows that meaning and understanding are not the same thing as syntactic computation (an important point, to be sure), but it does not show whether one can exist without the other. By extension, it does not disprove consciousness as being this or that.


You might as well conclude that my fingers typing this post aren't conscious. It's a weird argument.

The analogy might be more valid if it argued that it's not possible for a third party to actually determine whether an entity/system is conscious (irrespective of whether the entity is conscious or not).


The argument about a third party is trivial, in my opinion. Someone responds correctly in Chinese, and the onus simply falls on that element of the system to be conscious or not. That's a separate argument, and I don't see how this thought experiment is particularly enlightening there. I think in that case people just confuse it with the Turing test.

Instead, the core matter is form versus meaning - something that is indeed not observable from the outside, and yet is a real distinction for the person inside the Chinese room.



