
It's parsing. It's tokenizing. But calling it understanding is a stretch. It builds a pattern that it can use to compose a response; ensuring that the response is factual is not fundamental to LLM algorithms.
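To make that concrete, here is a toy sketch (a bigram model, nothing like a production LLM, and purely illustrative): the generation loop only asks "what token plausibly follows the pattern so far", and nothing in it checks whether the emitted text is true.

    import random
    from collections import defaultdict

    # Tiny "training corpus", tokenized by whitespace.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Record which tokens were observed to follow which.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        token = start
        out = [token]
        for _ in range(length):
            candidates = follows.get(token)
            if not candidates:  # dead end: no observed continuation
                break
            # Sample proportionally to observed frequency.
            token = random.choice(candidates)
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat ate the fish" -- fluent, truth-blind

The output can be perfectly fluent while the loop contains no notion of fact-checking; real LLMs are vastly more sophisticated pattern models, but their training objective (predict the next token) is similarly indifferent to truth.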

In other words, it's not thinking. The fact that it can simulate a conversation between thinking humans without itself thinking is remarkable, and it should tell us something about the human facility for language. But it's not understanding or thinking.



I know that "understanding" is a stretch, but I was referring to the "U" in NLU (natural-language understanding), which wasn't really understanding either.



