
It's not some kind of first-order statistical gibberish.

It exhibits internal abstract modeling.

It's a bit silly to argue against it at this time.

To produce answers of the quality we see without such modeling, it would have to use orders of magnitude more memory than it actually does.

It's also easy to test yourself.

A simple way is to create a role-playing scenario with multiple characters, where the same situation is seen differently by different actors at different times, and then probe it with questions (e.g. somebody puts X into a bag labelled Y, another person doesn't see it, and you ask what each actor thinks is in the bag at a specific point in the scenario). A rough sketch of this kind of probe is below.
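
For what it's worth, here's a minimal sketch of that kind of false-belief probe as a script. `ask_llm` is a hypothetical placeholder for whatever chat-completion call you have access to, and the scenario text and questions are just my own example, not anything standardized:

```python
# Minimal sketch of a false-belief ("unexpected contents") probe for an LLM.
# ask_llm() is a hypothetical placeholder -- swap in whatever chat API you use.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real chat-completion call")

SCENARIO = """
Anna puts her keys into a box labelled 'COOKIES' and leaves the room.
While she is gone, Ben takes the keys out and puts chocolates in the box,
without telling anyone. Then Claire walks in; she has never seen the box before.
"""

# Each question targets a different actor's belief at a different point in time.
# A model that only pattern-matched on the label would give the same answer to
# all of them; keeping them apart requires tracking who saw what, i.e. some
# internal model of the scene.
QUESTIONS = [
    "What does Anna believe is in the box when she returns?",
    "What does Ben know is in the box?",
    "What will Claire guess is in the box, judging only by its label?",
    "What is actually in the box right now?",
]

if __name__ == "__main__":
    for question in QUESTIONS:
        answer = ask_llm(SCENARIO + "\n" + question + " Answer in one short sentence.")
        print(question, "->", answer)
```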

Or ask it for some crazy analogy.

Why am I even explaining this? Just ask it to give you a list of ways to probe an LLM to discover whether it builds abstract internal models or not; it'll surely give you a good list.


