I've always disagreed w/ Searle re the Chinese Room. My guess is that Searle never built an adder circuit from logic gates: combining irrational elements together into something rational is the core magic of computer science.
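
Roughly what I mean, as a toy Python sketch of my own (bitwise operators standing in for the gates):

    # A half adder built from two "dumb" gates: XOR for the sum, AND for the carry.
    def half_adder(a, b):
        sum_bit = a ^ b   # XOR gate
        carry = a & b     # AND gate
        return sum_bit, carry

    # Neither gate knows arithmetic, yet together they add two bits:
    print(half_adder(1, 1))   # -> (0, 1), i.e. 1 + 1 = 0b10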

If you want to see someone asking humans questions where they consistently fail to be rational, to the extent that they sometimes seem to approximate a stochastic parrot, read Thinking Fast and Slow by Daniel Kahneman. (It might actually be interesting to give GPT-4 some of the questions in that book, to see how similar or different its answers are.)



I'm not sure why you disagree with the Chinese Room argument; I would be interested to hear why. I agree that Searle was solely a philosopher and did not take an engineering viewpoint.

Searle's main point is that if I have a book that tells me how to respond and I never learn Chinese, then I do not understand Chinese. If you see a flaw in this reasoning, I am very interested.

My point is just that LLMs are a compression of the content available on the internet, equivalent to a rule book. It is definitely fascinating how powerful LLMs are at summarization and at forming coherent responses to input.

I am a big fan of Kahneman and agree with you that it will be very interesting to ask GPT-4 the questions in that book.


> Searle's main point is that if I have a book that tells me how to respond and I never learn Chinese, then I do not understand Chinese. If you see a flaw in this reasoning, I am very interested.

You don't understand Chinese, but you are not the process. For the process to understand, it doesn't require any single component to understand -- that expectation is just a variation on the homunculus argument.

And while it might seem obvious that the bulk of understanding can't be contained in a book, you don't really have a book in the Chinese room. Not if the room does a competent job. You have some kind of information-dense artifact that encodes an enormous understanding of Chinese in an inert form. A sweeping library that covers uncountable nuances in depth.

Or to phrase it as a direct attack on the argument: The book does have semantics. You don't need qualia to have semantics, especially not the definition of qualia where nobody can prove they exist.


Thanks! I appreciate the explanation. I think that you put your finger on the major assumption of the argument.


> Searle's main point is that if I have a book that tells me how to respond and I never learn Chinese, then I do not understand Chinese. If you see a flaw in this reasoning, I am very interested.

To a degree, I feel like the Chinese Room argument is begging the question. When I imagine Searle sitting in a room, with a book of instructions and paper and everything he needs to execute GPT-4's equivalent, I basically see an actual computer. That is literally what he is; there is no difference. So then to ask, "Does this system understand Chinese?" is literally exactly the same question as "Does GPT-4 understand Chinese?" You haven't actually illuminated the question in any meaningful way, except to give people not familiar with how microprocessors work a better intuitive understanding. (Which, upon reflection, probably is a fairly useful thing to do.)

I looked a bit at the "1990's version" of his argument on the Wikipedia page you quoted. Going back to my earlier example, this is sort of what his argument sounds like to me:

A1) Electronic gates are just on and off switches.

A2) Numbers and addition are semantic.

A3) On and off switches are neither constitutive of, nor sufficient for, semantics.

Therefore, computers cannot add; they only simulate the ability to add.

Now I'm not up on the fine details of what "syntactic vs semantic" means in philosophy, so maybe A2 isn't accurate. But in a sense it doesn't matter, because that communicates how I feel about Searle's argument: "I've made some distinction between two classes of things that you don't understand; I've defined one to be on one side, and the other to be on the other side; and therefore computers can't understand."

My best guess as to the "syntactic / semantic" thing is this: in some sense, even his premise that "Programs are syntactic" isn't actually accurate. Computers operate on bits, which are operated on by gates; gates and bits themselves don't inherently have symbols. The symbols are an abstraction on the bits; even bits are abstractions on continuous voltages, and voltages are ultimately abstractions on quantum probabilities.

What a given set of voltages "means" -- whether they're numbers to be added, or words to be spell-checked, or instructions to be executed, or a JPEG to be decompressed -- depends entirely on how they're used. If you tell your computer to jump into the middle of a JPEG, it will happily try to execute it, and if you dump your program into your video buffer, you'll get a bunch of strange dots on your screen.
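
To make that concrete, here's a small Python sketch of my own (the four bytes are arbitrary): the same bits "mean" roughly pi, or a large integer, or junk characters, depending entirely on which reader you hand them to.

    import struct

    raw = b"\x40\x49\x0f\xdb"   # four bytes with no inherent meaning

    # Read as a big-endian 32-bit float, they "mean" (approximately) pi:
    print(struct.unpack(">f", raw)[0])   # 3.1415927...

    # Read as a big-endian unsigned 32-bit integer, they "mean" 1078530011:
    print(struct.unpack(">I", raw)[0])   # 1078530011

    # Read as latin-1 text, they "mean" '@I' plus a control character and 'Û':
    print(raw.decode("latin-1"))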

Furthermore, when you build an "adder" out of logic gates, you can build the gates such that they correspond to our intuitive idea of binary addition, with individual carries for each bit and so on. But this is inefficient, because then you have to wait for the carries to cascade all the way through the whole thing you're trying to add. Instead, you can brute-force a set of logic gates such that, given these 16 bits in, you just get the right 9 bits out (8 plus overflow); this will be a lot faster (in the sense that the signals have to go through fewer gates before stabilizing on the final answer), but the gates inside then don't make any sense -- they're almost a "compression" of the longer, carry-based method.
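
Here's a rough Python sketch of my own of both constructions, with bitwise operators standing in for gates and a lookup table standing in for the brute-forced circuit:

    def full_adder(a, b, cin):
        """One-bit full adder: just XOR, AND, OR."""
        s = (a ^ b) ^ cin
        cout = (a & b) | ((a ^ b) & cin)
        return s, cout

    def ripple_carry_add(x, y):
        """8-bit adder where every bit waits on the previous carry (slow but legible)."""
        carry, result = 0, 0
        for i in range(8):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result | (carry << 8)   # 9 bits out: 8-bit sum plus overflow

    # "Brute-forced" version: 16 bits in, 9 bits out, no legible structure inside.
    FLAT_TABLE = {(x, y): ripple_carry_add(x, y) for x in range(256) for y in range(256)}

    def flat_add(x, y):
        return FLAT_TABLE[(x, y)]

    # From the outside the two are indistinguishable -- same bits in, same bits out:
    assert all(flat_add(x, y) == ripple_carry_add(x, y) == x + y
               for x in range(256) for y in range(256))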

Does that mean that an adder made this way isn't "actually" adding? In the end it doesn't really matter: 16 bits come in, and 9 bits come out the way you want them to. It doesn't really matter that much what happened in the middle.

Putting all that together: It seems to me the "semantics" of a set of bits is based on how they end up interacting with the real world. If I can ask GPT-4 what's missing in my pancake recipe, and it can tell me "you're missing a leavening agent like baking powder", then it seems to me there must be semantic content in there somewhere, and all the arguments about syntax not being sufficient for semantics turn out to have been proven wrong by experiment.



