
An LLM produces tokens probabilistically, sampling from a distribution over its vocabulary conditioned on the context, which is why it can hallucinate; an actual graph model either contains a fact or it doesn't, so it wouldn't have that failure mode.
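
To make the contrast concrete, here's a minimal toy sketch (the names, probabilities, and facts are all assumed for illustration, not any real model's API): the sampler always assigns some probability mass to wrong tokens, so a false answer can be drawn, whereas the graph lookup returns nothing rather than guessing.

    import random

    # LLM-style generation: sample the next token from a probability
    # distribution. Every token gets some mass, including wrong ones,
    # so a plausible-but-false continuation can always be drawn.
    vocab_probs = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}  # assumed toy distribution

    def sample_next_token(probs):
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Graph-style retrieval: a fact is either an edge in the graph or
    # absent. There is no sampling step, so the failure mode is
    # "no answer", not a confident fabrication.
    capital_of = {"France": "Paris", "Germany": "Berlin"}  # assumed toy knowledge graph

    def lookup_capital(country):
        return capital_of.get(country)  # returns None rather than guessing

    print(sample_next_token(vocab_probs))  # occasionally "Berlin" -- a hallucination
    print(lookup_capital("France"))        # always "Paris"
    print(lookup_capital("Atlantis"))      # None: absent, never invented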

