
When humans say something, or think something or write something down, aren't we also "just predicting the next word"?


There is a lot more going on in our brains to accomplish that, and there is mounting evidence that there is a lot more going on in LLMs as well. We don't understand what happens in brains either, but nobody needs to be convinced that brains can think and plan ahead, even though we don't *really* know for sure:

https://en.wikipedia.org/wiki/Philosophical_zombie


I trust that you wanted to say something, so you decided to click the comment button on HN.


But do I just want to say something because my childhood environment rewarded me for speech?

After all, if it has a cause it can't be deliberate. /s


Sure, but current LLMs have to wait for someone's input and then respond.



