
Again, I’m not in the “AI” industry, so I don’t fully understand the economics and don’t run open models locally.

What’s the cost to run this stuff locally, and what kind of hardware is required? When you say virtually nothing, do you mean that’s because you already have a $2k laptop or GPU?

I’m only asking because I don’t know. Would these local models run OK on my 2016 Intel Mac Pro, or do I need to upgrade to the latest M4 chip with 32 GB of memory for it to work correctly?

The large open-weight models aren't really usable for local running, even on current hardware, but multiple providers compete to run inference for you, so it's reasonable to assume there is, and will continue to be, a functioning marketplace.
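
For rough numbers (my own back-of-envelope, not figures from anywhere official): the memory needed just to hold the weights scales with parameter count times bytes per parameter, which is why the big models don't fit on consumer machines but small quantized ones do.

    # Illustrative estimate of memory for model weights alone.
    # The parameter counts and byte widths below are assumptions.
    def weight_gb(params_billion: float, bytes_per_param: float) -> float:
        # params_billion * 1e9 params * bytes, divided by 1e9 bytes/GB
        return params_billion * bytes_per_param

    print(weight_gb(70, 2.0))  # 70B at FP16  -> ~140 GB: out of reach locally
    print(weight_gb(7, 0.5))   # 7B at 4-bit  -> ~3.5 GB: fits a laptop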

Basically yes, the useful models need a modern-ish GPU to run inference at a usable speed. You can get smaller models (3B/7B parameters) running on older laptops; they just won't produce output at a useful speed.
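
If you want to try it yourself, something like llama-cpp-python makes the experiment cheap. A minimal sketch, assuming you've already downloaded a quantized 7B GGUF file (the model path here is made up):

    # Minimal local-inference sketch with llama-cpp-python.
    # Assumes a quantized GGUF model on disk; the path is hypothetical.
    from llama_cpp import Llama

    llm = Llama(model_path="models/7b-q4.gguf", n_ctx=2048)
    out = llm("Why does quantization help on old hardware?", max_tokens=64)
    print(out["choices"][0]["text"])

On an older machine this will run on CPU and you can judge the tokens-per-second yourself; that firsthand speed test is the cheapest way to answer the "do I need to upgrade" question.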


