That is nonsense. A competent human will give you an explanation grounded in logic. An LLM just strings together the words that are statistically most likely to follow the current input. That is why you get hallucinations with LLMs but not with humans.
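To make "statistically most likely" concrete, here is a minimal toy sketch of next-word sampling. Everything in it is made up for illustration: the bigram table, the vocabulary, and the `sample_next` helper. A real LLM computes these probabilities with a neural network conditioned on the whole context, not a hand-written lookup table.

```python
import random

# Toy "language model": for each previous word, a made-up probability
# distribution over possible next words. A real LLM conditions on the
# full context with a neural network, not a lookup table like this.
BIGRAM_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.4, "moon": 0.1},
    "cat":  {"sat": 0.6, "ran": 0.3, "flew": 0.1},
    "dog":  {"ran": 0.7, "sat": 0.2, "flew": 0.1},
    "sat":  {"down": 0.8, "up": 0.2},
    "ran":  {"away": 0.9, "up": 0.1},
    "flew": {"away": 1.0},
}


def sample_next(word: str) -> str | None:
    """Pick the next word by sampling from its probability distribution."""
    dist = BIGRAM_PROBS.get(word)
    if dist is None:
        return None
    words = list(dist.keys())
    probs = list(dist.values())
    return random.choices(words, weights=probs, k=1)[0]


def generate(start: str, max_words: int = 5) -> str:
    """Generate text by repeatedly sampling the most plausible next word."""
    out = [start]
    for _ in range(max_words):
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)


if __name__ == "__main__":
    # e.g. "the cat sat down" -- fluent and plausible, but nothing in the
    # loop ever checks whether the sentence is actually true.
    print(generate("the"))
```

The sketch only ever asks "what word usually comes next?", never "is this statement true?", which is the mechanical reason such a system can produce confident-sounding output that is wrong.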