
I think this is a difficult question, independent of the restrictions that OpenAI imposes on GPT-4.

The model does not know what it knows; that's why it sometimes hallucinates instead of saying it doesn't know. But to answer with the latest event it knows about, it would have to know which events it knows.



I thought that at first, but it doesn't have problems with facts other than dates, it does answer questions about dates distant from September 2021, and it uses the exact same canned response when you probe its limits. I don't think it's a natural limitation of the model.
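If anyone wants to reproduce that kind of probing themselves, here's a minimal sketch using the openai Python client. The model name and the probe questions are just placeholders I made up for illustration, not anything specific from the comments above, and it assumes you have the openai package installed and an API key set in OPENAI_API_KEY.

    # Rough sketch: ask the model about events on either side of its
    # stated knowledge cutoff and compare the responses.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical probe questions around the September 2021 cutoff.
    probes = [
        "What is the most recent event you know about?",
        "What happened in the news in August 2021?",
        "What happened in the news in October 2021?",
    ]

    for question in probes:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": question}],
        )
        print(question)
        print(response.choices[0].message.content)
        print("---")

Running something like this a few times is enough to see whether the refusal about dates is a fixed canned response or varies like a normal answer.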



