
Quoting a paragraph from OP (https://www.anthropic.com/research/tracing-thoughts-language...):

> Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response.
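A toy sketch of the gating that paragraph describes, purely for illustration: the class, the feature names, and the example strings below are made up here and are not Anthropic's actual circuits or API.

    from dataclasses import dataclass, field

    @dataclass
    class ToyAnswerCircuit:
        # Hypothetical stand-ins for the "known entity" recognition feature
        # and the facts the model actually has about a name.
        known_entities: set = field(default_factory=set)
        facts: dict = field(default_factory=dict)

        def answer(self, name: str) -> str:
            recognized = name in self.known_entities  # "known entity" feature fires
            if not recognized:
                return "I don't know."                # default refusal stays active
            # Recognition suppresses the refusal even when no facts exist:
            # this is the "misfire" the quoted paragraph describes.
            fact = self.facts.get(name)
            if fact is not None:
                return fact
            return f"{name} is a well-known figure who..."  # confabulated filler

    circuit = ToyAnswerCircuit(
        known_entities={"Jane Realperson", "John Nameonly"},
        facts={"Jane Realperson": "Jane Realperson is a chess grandmaster."},
    )
    print(circuit.answer("Jane Realperson"))  # recognized, facts available -> real answer
    print(circuit.answer("John Nameonly"))    # recognized, no facts -> confabulation
    print(circuit.answer("Qwzx Nobody"))      # unrecognized -> "I don't know."

The point of the sketch is only the ordering: the recognition check overrides the refusal before the model checks whether it has anything true to say.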



Fun fact, "confabulation", not "hallucinating" is the correct term what LLMs actually do.


Fun fact, the "correct" term is the one in use. Dictionaries define language after the fact, they do not prescribe its usage in the future.


Confabulation means generating false memories without intent to deceive, which is what LLMs do. They can't hallucinate because they don't perceive. 'Hallucination' caught on, but it's more metaphor than precision.



