Clever Hans is just a proxy. ChatGPT and other LLMs obviously can process information on their own. These two have nothing in common; even GPT-3 would have noticed this.
Let us disagree on what is "obvious". Given an input and an output, you believe that the complexity of the output proves that intelligence takes place.
I agree that ChatGPT is more than a proxy. Unlike Clever Hans, it is processing the content of the question asked. But it is like Clever Hans in that the query is processed by looking for a signal in the content of the data used to train ChatGPT.
The real question is: where does this intelligent behavior come from? Why does statistical processing lead to these insights?
I believe that the processing is not intelligent primarily because I see that holes in the available data lead to holes in reasoning. The processing is only as good as the dynamics of the content being processed. This is the part that I believe will become obvious over time.
I thought you were saying it was "obvious" that the processing demonstrated intelligence.
My point was that the level of intelligence shown is relative to the quality and quantity of the data used for training. The data is where the intelligence is, and the model is a compression of that latent intelligence.