Couldn't you just add a large number of repetitions of "There is no seahorse emoji." to the training set?

Edit: Come to think of it, training on a Q&A format is probably better - "Is there a seahorse emoji? No, there isn't."
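A minimal sketch of what that Q&A data could look like, assuming the OpenAI-style chat fine-tuning JSONL format; the paraphrase list and file name are made up for illustration:

    import json
    import unicodedata

    # Sanity-check the fact itself: Unicode defines no character named
    # "SEAHORSE", so lookup() raises KeyError. (Compare: "TROPICAL FISH"
    # is U+1F420 and resolves fine.)
    try:
        unicodedata.lookup("SEAHORSE")
    except KeyError:
        print("Confirmed: no SEAHORSE character in Unicode.")

    # Hypothetical paraphrases of the same question; the assumption is
    # that varied phrasings generalize better than verbatim repetition.
    QUESTIONS = [
        "Is there a seahorse emoji?",
        "What is the seahorse emoji?",
        "Show me the seahorse emoji.",
    ]
    ANSWER = "No, there isn't. Unicode does not include a seahorse emoji."

    # One chat fine-tuning record per paraphrase.
    with open("seahorse_corrections.jsonl", "w") as f:
        for q in QUESTIONS:
            record = {
                "messages": [
                    {"role": "user", "content": q},
                    {"role": "assistant", "content": ANSWER},
                ]
            }
            f.write(json.dumps(record) + "\n")

Varied paraphrases probably matter more than raw repetition count, since duplicating the exact same string mostly teaches the model that one string.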



If you had to do this for every falsity in the LLM, there wouldn’t be an end to it.


I agree, but I still suspect OpenAI and other LLM companies do stuff like that when an example of a hallucination becomes popular.

If I see an example of an LLM saying something dumb here, I know it's going to be fixed quickly. If I encounter an example myself and don't share it, it may be fixed by a model upgrade in a few years, or it may never be.


Something about having to keep repeating "There is no seahorse emoji" reminded me of the Local 58 horror web series, where the program seems to be trying to get you to repeat "There are no faces" while showing the viewer faces: https://www.youtube.com/watch?v=NZ-vBhGk9F4&t=221



