Because: a) we know human-level intelligence is possible, since we can directly observe it; b) GPU development has clearly slowed a lot, but it hasn't stopped; c) even if it had stopped, that doesn't mean GPUs can never improve again; d) GPUs probably aren't the most efficient architecture for AI anyway, so there are opportunities for improvement even without process gains; e) we're clearly not that far off from AGI.
This is the sort of thing that might have made sense in 2005, not 2025.
Correction: (a) we can observe the effects of human-level intelligence, but we don't actually know what human intelligence is, because intelligence is best described as a side-effect of consciousness (i.e., the experience of qualia), and science has yet to provide a convincing explanation of what that is, exactly.
"Never" is an incredibly strong word, and in the bigger scheme of humanity, the universe, and the next billion or so years, _everything_ that we can imagine will happen.
It's an intellectually lazy argument to make, like saying "we will never walk on the moon" in the 1900s.
Especially when you consider that the author is apparently an "Assistant Professor at Carnegie Mellon University (CMU) and a Research Scientist at the Allen Institute for Artificial Intelligence (Ai2)".