
>Brand new stuff, which it never saw in it's training, can come out.

Sort of. You can get LLMs to produce some new things, but these are statistical averages of existing information. It's kind of like a static "knowledge tree": it can do some interpolation, but even then, it's interpolation based on statistically occurring text.



The interpolation isn't really based on statistically occurring text. It's based on statistically occurring concepts. A single token can have many meanings depending on context and many tokens can represent a concept depending on context. A (good) LLM is capturing that.
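A quick way to see "statistically occurring concepts" rather than raw text: the same surface token gets a different vector depending on its context. This is a minimal sketch assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; any contextual encoder would illustrate the same point.

  import torch
  from transformers import AutoTokenizer, AutoModel

  tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
  model = AutoModel.from_pretrained("bert-base-uncased")

  def vector_for(sentence, word):
      # Return the contextual hidden state for the first occurrence of `word`.
      enc = tokenizer(sentence, return_tensors="pt")
      with torch.no_grad():
          hidden = model(**enc).last_hidden_state[0]          # (seq_len, dim)
      tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
      return hidden[tokens.index(word)]

  river = vector_for("i sat on the bank of the river.", "bank")
  money = vector_for("i deposited cash at the bank.", "bank")

  # Same token, noticeably different vectors: the model encodes the concept
  # in context, not just the text.
  print(torch.cosine_similarity(river, money, dim=0))

The two "bank" vectors diverge because the representation is built from the whole context, which is the sense in which the model operates on concepts rather than on tokens in isolation.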


Neither just text nor just concepts, but text-concepts: LLMs can only manipulate concepts insofar as they can be conveyed via text. But I think wordlessly, in pure concepts and sense-images, and serialize my thoughts to text. That I have thoughts I am incapable of verbalizing is what makes me different from an LLM, and, I would argue, actually capable of conceptual synthesis. I have been told some people think "in words", though.


Nope, you could shove in an embedding that doesn't correspond to any existing token, and it would work just fine.

(If not obvious: you'd inject it right after the embedding layer.)
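For the curious, here is a hedged sketch of what that looks like in practice, assuming the Hugging Face transformers library and GPT-2. The `inputs_embeds` argument bypasses the token-embedding lookup, so you can feed the model a vector that no vocabulary token maps to (here, a blend of two real token embeddings).

  import torch
  from transformers import AutoTokenizer, AutoModelForCausalLM

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  # Embed a normal prompt with the model's own embedding table.
  prompt_ids = tokenizer("The animal is a", return_tensors="pt").input_ids
  prompt_embeds = model.get_input_embeddings()(prompt_ids)       # (1, seq, dim)

  # Build a vector that sits between " cat" and " dog" in embedding space;
  # no token in the vocabulary maps to it exactly.
  emb = model.get_input_embeddings().weight
  cat = emb[tokenizer.encode(" cat")[0]]
  dog = emb[tokenizer.encode(" dog")[0]]
  made_up = ((cat + dog) / 2).reshape(1, 1, -1)

  inputs_embeds = torch.cat([prompt_embeds, made_up], dim=1)
  with torch.no_grad():
      logits = model(inputs_embeds=inputs_embeds).logits

  # The model still produces a sensible next-token distribution even though
  # the last "token" never existed in its vocabulary.
  print(tokenizer.decode([logits[0, -1].argmax().item()]))

This is also roughly how soft prompts and prefix-tuning work: learned vectors are prepended at the embedding layer without ever being real tokens.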



