On a "pure" LLM, yes.

But it is possible that LLM providers circumvent this. For example, it might be the case that Claude, when set to concise mode, applies the instruction only to the summary and not to the thinking tokens. Or the provider could be augmenting your prompt. From my simple tests on ChatGPT, that doesn't appear to be happening: asking it to be terse cuts the CoT tokens short as well. Someone needs to test on Claude 3.7 with the reasoning settings (a sketch of such a test is below).
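
Something like this would do it. A minimal sketch using the Anthropic Python SDK with extended thinking enabled; the model ID, prompt, and token budgets are my own illustrative choices, not anything established in this thread. It sends the same question with and without a "be terse" system prompt and compares how long the visible thinking block comes out:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PROMPT = "How many prime numbers are there below 1000?"

    def thinking_size(system_prompt):
        # Build the request with extended thinking on. max_tokens must
        # exceed the thinking budget. Budget values here are illustrative.
        kwargs = dict(
            model="claude-3-7-sonnet-20250219",
            max_tokens=4096,
            thinking={"type": "enabled", "budget_tokens": 2048},
            messages=[{"role": "user", "content": PROMPT}],
        )
        if system_prompt:
            kwargs["system"] = system_prompt
        response = client.messages.create(**kwargs)
        # Measure the length of the returned thinking blocks, plus the
        # total output token count (which includes thinking tokens).
        thinking_chars = sum(
            len(block.thinking) for block in response.content
            if block.type == "thinking"
        )
        return thinking_chars, response.usage.output_tokens

    for label, system in [("default", None), ("terse", "Be extremely terse.")]:
        chars, out_tokens = thinking_size(system)
        print(f"{label:8s} thinking chars: {chars:6d}  output tokens: {out_tokens}")

If the terse run produces a much shorter thinking block, the terseness instruction is reaching the reasoning tokens and not just the final answer; if the counts are roughly equal, the instruction is only shaping the summary.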


