I mean, sure that'd work, but doesn't it defeat most of the point in using an LLM?
The only way that works is if you escape _all_ user content. If you're telling an LLM to ignore all user content, then why are you using an LLM in the first place?
The approach isn't to ignore all "user" content at all. It is trained to follow instructions in normal text; only instructions contained in specially quoted text (that is, external text, like a website) are ignored. Quotation would apply to Bing's search abilities or ChatGPTs new Browsing Mode, which both load website content into the context window.
https://news.ycombinator.com/item?id=35929145