I tend to wonder whether people who make claims like that are confusing intelligence with consciousness. The claim as stated above could be a summary of a certain aspect of the hard problem of consciousness: that it's not clear how one could "coax consciousness out of a box of inanimate binary switches" - but the connection to "intelligence" is dubious. Unless of course one believes that "true" intelligence requires consciousness.
Intelligence can be expressed at a higher level of abstraction than the logic of the binary gates running the underlying software; the gates themselves aren't required to account for it.
Quarks don't need to account for atomic physics. Atomic physics doesn't need to account for chemistry. Chemistry doesn't need to account for materials science. It goes on and on. It's easy to look at a soup of quarks and go, "there's no way this soup of quarks could support my definition of intelligence!", but you go up the chain of abstraction and suddenly you've got a brain.
Scientists don't even understand yet where subjective consciousness comes into the picture. There are so many unanswered questions that it's preposterous to claim you know the answers on the strength of nothing more than a handwavy belief.
We already have irrefutable evidence of what can reasonably be called intelligence, from a functional perspective, from these models. In fact, the models already outperform a majority of humans on many kinds of tasks requiring intelligence; coding-related tasks are an especially good example.
Of course, they're not equivalent to humans in all respects, but there's no reason that should be a requirement for intelligence.
If anything, the onus lies on you to clarify what you think can't be achieved by these models, in principle.