There is a LOT of investment in model training right now: frameworks, specialized hardware (like Google's TPU), cloud services, etc., not to mention the GPU vendors themselves scrambling to develop chipsets that handle these workloads more efficiently.

It's going to take less and less time and money to train a useful model.


