We already have irrefutable evidence of what can reasonably be called intelligence, from a functional perspective, in these models. In many respects they already outperform a majority of humans on tasks requiring intelligence; coding-related tasks are an especially good example.
Of course, they're not equivalent to humans in all respects, but there's no reason that should be a requirement for intelligence.
If anything, the onus lies on you to clarify what you think can't be achieved by these models, in principle.