Creator of inference.net / schematron here.

There is growing emphasis on efficiency as more companies adopt LLMs in their products and scale them up.

Developers might be fine paying GPT-5-Super-AGI-Thinking-Max prices to use the very best models in Cursor, but (despite what some may think about Silicon Valley) businesses do care about efficiency.

And if you can fine-tune an 8B-parameter Llama model on GPT-5 outputs in under 48 hours and save $100k/mo, you're going to take that opportunity.
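The savings math is straightforward. A rough sketch of the comparison, where the token volume and per-million-token prices are purely hypothetical assumptions for illustration (not real quotes from any provider):

```python
# Illustrative arithmetic only: the workload size and prices below are
# hypothetical assumptions, not real quotes for any model or provider.

def monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Dollar cost for a given monthly token volume at a per-1M-token price."""
    return tokens_per_month / 1_000_000 * price_per_million

TOKENS = 10_000_000_000  # assumed workload: 10B tokens/month

frontier = monthly_cost(TOKENS, 12.00)  # assumed frontier-model price, $/1M tokens
small = monthly_cost(TOKENS, 0.30)      # assumed fine-tuned 8B price, $/1M tokens

print(f"frontier model: ${frontier:,.0f}/mo")
print(f"fine-tuned 8B:  ${small:,.0f}/mo")
print(f"savings:        ${frontier - small:,.0f}/mo")
```

At those assumed numbers the small model runs at a few percent of the frontier bill, which is where six-figure monthly savings come from at scale.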
