
The statistical distribution of historical chess games is an approximate statistical model of an actual model of chess.

Its "internal abstract representation" isn't a representation; it's an implicit statistical distribution across historical cases.

Consider the difference between an actual model of a circle (e.g., radius + geometry) and a statistical model over 1 billion circles.

In the former case, a person with the actual model can say, for any circle, what its area is. In the latter case, the further you get outside the billion samples, the worse the reported area gets. And even within them, it'll often be a little off.
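This contrast is easy to demonstrate. Here's a minimal sketch (the sample range and the choice of a linear fit are my own, for illustration): an "actual model" computes area from the definition, while a statistical model fitted to samples with radii in [0, 1] drifts slightly inside that range and fails badly outside it.

```python
import math

def actual_area(r):
    # The actual model: area follows directly from the geometry.
    return math.pi * r * r

# "Training data": 100 circles with radii in (0, 1] and their true areas.
radii = [i / 100 for i in range(1, 101)]
areas = [actual_area(r) for r in radii]

# A statistical stand-in: ordinary least-squares line, area ~= a*r + b.
n = len(radii)
mean_r = sum(radii) / n
mean_a = sum(areas) / n
a = sum((r - mean_r) * (A - mean_a) for r, A in zip(radii, areas)) / \
    sum((r - mean_r) ** 2 for r in radii)
b = mean_a - a * mean_r

def fitted_area(r):
    return a * r + b

# Inside the sample range: a little off.
print(actual_area(0.5), fitted_area(0.5))
# Far outside the samples: true area is ~314, the fit badly underestimates.
print(actual_area(10), fitted_area(10))
```

The within-range error is the "even within them, it'll often be a little off" part; the extrapolation failure at r = 10 is what happens the further you get outside the billion samples.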

Statistical models are just associations across cases. They're good approximations of representational models for some engineering purposes; they're often also bad and unsafe.



It's not some kind of first-order statistical gibberish.

It exhibits internal abstract modeling.

It's a bit silly to argue against it at this time.

To produce answers of the quality we see, it'd have to use orders of magnitude more memory than it actually does.

It's also easy to test yourself.

A simple way is to create some role-playing scenario with multiple characters, where the same thing is seen differently by different actors at different times, and probe it with questions (i.e. somebody puts X into a bag labelled Y, another person doesn't see it, and you ask what different actors think is in the bag at a specific point in the scenario, etc.).
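A minimal sketch of such a false-belief probe (the scenario wording and expected answers are hypothetical examples; the resulting prompt could be sent to any chat LLM):

```python
# A Sally-Anne-style probe: one actor misses an event, so their belief
# should diverge from the actual state of the world.
scenario = (
    "Anna puts a coin into a bag labelled 'marbles' while Ben is out of "
    "the room. Ben returns and picks up the bag without opening it."
)

probes = [
    "What is actually in the bag?",
    "What does Anna think is in the bag?",
    "What does Ben think is in the bag?",
]

# What an abstract internal model should reproduce: Ben never saw the
# coin go in, so his belief tracks the label, not the contents.
expected = {
    probes[0]: "a coin",
    probes[1]: "a coin",
    probes[2]: "marbles",
}

prompt = scenario + "\n" + "\n".join(probes)
print(prompt)
```

If the model answers "marbles" for Ben while answering "a coin" for the other two questions, it is tracking per-actor beliefs rather than just the surface text.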

Or ask for some crazy analogy.

Why am I even saying this: just ask it to give you a list of examples of how to probe an LLM to discover whether it creates abstract internal models or not; it'll surely give you a good list.


Most things in life aren’t mathematical objects and therefore don’t have perfect theoretical models anyway. For example, what is a “chair”?



