I agree, but I still suspect OpenAI and other LLM companies do stuff like that when an example of a hallucination becomes popular.
If I see some example of an LLM saying dumb stuff here, I know it's going to be fixed quickly. If I encounter an example myself and don't share it, it may be fixed by a model upgrade in a few years, or it may still exist.
Something about having to keep repeating "There is no seahorse emoji" reminded me of the Local 58 horror web series, where the program seems to be trying to get you to repeat "There are no faces" while showing the viewer faces: https://www.youtube.com/watch?v=NZ-vBhGk9F4&t=221
Edit: Come to think of it, training on a Q&A format is probably better: "Is there a seahorse emoji?" "No, there isn't."
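
A rough sketch of what I mean, as chat-style fine-tuning data. The file name, exact phrasing, and the JSONL chat layout are just assumptions modeled on common fine-tuning formats, not anything OpenAI has confirmed:

    import json

    # Hypothetical correction examples phrased as Q&A pairs, so the model
    # learns the denial as an answer to a question rather than as a
    # free-floating statement it has to keep repeating.
    qa_pairs = [
        ("Is there a seahorse emoji?",
         "No, there isn't. Unicode has fish, dolphin, and octopus emoji, "
         "but no seahorse."),
        ("What is the seahorse emoji?",
         "There is no seahorse emoji in Unicode."),
    ]

    with open("qa_corrections.jsonl", "w") as f:
        for question, answer in qa_pairs:
            # One chat-format training example per line (JSONL).
            record = {"messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]}
            f.write(json.dumps(record) + "\n")

The idea being that a bare "There is no seahorse emoji" in the training set has no conversational anchor, while the Q&A pairing ties the correction to the exact question users actually ask.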