Once again affirming that prompt injection is social engineering for LLMs. To a first approximation, humans and LLMs have the same failure modes, and at system design level, they belong to the same class. I.e. LLMs are little people on a chip; don't put one where you wouldn't put the other.
They are worse than people: LLM combine toddler level critical thinking with intern level technical skills, and read much much faster than any person can.