
Maybe the solution is a discrete SoC for ML? A CPU and GPU on a card with shared memory, like Apple's M1.


I personally wouldn't bother. If you're not doing deep learning, existing hardware is already good enough; I can't say nobody would get value out of it, but I'm not seeing the need myself. I'd much rather focus on the things that are actually costing me time and money, like data integrity.

Like, I would guess that the potential benefit to my team's productivity from eliminating (over)reliance on weakly typed formats such as JSON from our information systems could be orders of magnitude greater.
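
A tiny sketch of the failure mode I mean; the field names and the check are hypothetical, just to illustrate:

    # Illustrative only: raw JSON happily round-trips the wrong types,
    # while a declared schema turns the same mistake into a local failure.
    import json
    from dataclasses import dataclass

    payload = '{"user_id": "42", "amount": "19.99"}'  # strings where numbers were meant
    record = json.loads(payload)  # parses fine; the bug surfaces much later

    @dataclass
    class Payment:
        user_id: int
        amount: float

        def __post_init__(self):
            # dataclasses don't enforce annotations, so validate explicitly
            if not isinstance(self.user_id, int) or not isinstance(self.amount, float):
                raise TypeError("payload does not match the declared schema")

    try:
        Payment(**record)
    except TypeError as err:
        print(err)  # caught at the boundary instead of deep in the pipeline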


I can't imagine that the overlap between those using Scikit-Learn and those willing to buy and integrate ML-specialized hardware is that high. I think a lot of real-world usage of simpler ML libraries like Scikit-Learn is deploying small models onto an already existing x86 or ARM system that has cycles to spare for some basic classification or regression.
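
For what it's worth, a minimal sketch of that deployment pattern, assuming scikit-learn and joblib are available on the box (the model and file name are just placeholders):

    # Offline: fit a small model and persist it.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    import joblib

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    joblib.dump(clf, "classifier.joblib")

    # On the existing x86/ARM box: load once, predict on spare CPU cycles.
    model = joblib.load("classifier.joblib")
    print(model.predict(X[:5]))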


I mean, Amazon and Google are already doing that... and there are companies making ML ASICs.

Problem is... the ASICs are really good for certain classes of ML problems but aren't really all that general.



