
Pretty much.

Everyone and their dog says "transformer LLMs are flawed", but words are cheap - and in practice, no one seems to have come up with something that's radically better.

Sidegrades, yes; domain-specific improvements, yes; better performance across the board? Haha, no. For how simple autoregressive transformers seem, they sure set a high bar.
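For reference, the "simple" part is essentially this loop: predict the next token given everything so far, append it, repeat. A minimal sketch, using GPT-2 via Hugging Face transformers and greedy decoding purely as illustrative choices (not anything the parent comment specifies):

  import torch
  from transformers import GPT2LMHeadModel, GPT2Tokenizer

  tok = GPT2Tokenizer.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

  ids = tok.encode("Transformers are hard to beat because", return_tensors="pt")
  with torch.no_grad():
      for _ in range(20):                      # generate 20 tokens
          logits = model(ids).logits           # [1, seq_len, vocab_size]
          next_id = logits[0, -1].argmax()     # greedy: pick most likely next token
          ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed it back in
  print(tok.decode(ids[0]))

That's the whole inference-time recipe; everything else is inside the forward pass.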


