> However, both are platform-specific and only support specific models from the company
This is not true, as you are surely aware. Google AI Edge supports a lot of models, including any LiteRT model from Hugging Face, PyTorch ones, etc. [0]. Additionally, it's not even platform-specific; it works on iOS [1].
Why lie? I understand that your framework does more stuff like MCP, but I'm sure that's coming for Google's as well. I guess if the UX is really better it can work, but I would also say Ollama's use cases are quite different, because on desktop there's a big community of hobbyists who cook up their own little pipelines or just chat with LLMs using local models (apart from the desktop app devs). But on phones, IMO that segment is much smaller. App devs are more likely to use first-party frameworks rather than third-party ones. I wouldn't even be surprised if Apple locks down some APIs at some point for safety/security reasons.
Whoa—that's way too aggressive for this forum and definitely against the site guidelines. Could you please review them (https://news.ycombinator.com/newsguidelines.html) and take the spirit of this site more to heart? We'd appreciate it. You can always make your substantive points while doing that.
Note this one: "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
Thanks for the feedback. You're right to point out that Google AI Edge is cross-platform and more flexible than our phrasing suggested.
The core distinction is in the ecosystem: Google AI Edge runs TFLite models, whereas Cactus is built for GGUF. This is a critical difference for developers who want to use the latest open-source models.
One major outcome of this is model availability. New open-source models are released in GGUF format almost immediately, whereas finding or reliably converting them to TFLite is often a pain. With Cactus, you can run new GGUF models the day they drop on Hugging Face.
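If it helps to see that workflow concretely, here's a minimal sketch of the day-one GGUF path on desktop using huggingface_hub and llama-cpp-python. The repo id and filename are hypothetical placeholders, and Cactus's own mobile API is of course different from this:

    # Sketch: pull a GGUF straight from Hugging Face and run it locally.
    # The repo id and filename below are hypothetical placeholders.
    from huggingface_hub import hf_hub_download
    from llama_cpp import Llama

    path = hf_hub_download(
        repo_id="some-org/some-model-GGUF",
        filename="some-model.Q4_K_M.gguf",
    )
    llm = Llama(model_path=path, n_ctx=2048)
    print(llm("Hello!", max_tokens=32)["choices"][0]["text"])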
Quantization level also plays a role. GGUF has mature support for quantization far below 8-bit. This is effectively essential for mobile. Sub-8-bit support in TFLite is still highly experimental and not broadly applicable.
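To put rough numbers on that, here's a back-of-the-envelope estimate of weight memory for a 3B-parameter model (the bits-per-weight figures are approximate averages, and this ignores KV cache and runtime overhead):

    # Approximate weight memory at different GGUF quant levels.
    # Bits-per-weight values are rough averages, not exact format specs.
    params = 3e9
    for name, bits in [("F16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q2_K", 2.6)]:
        gib = params * bits / 8 / 2**30
        print(f"{name:8s} ~{gib:.1f} GiB")

On a phone with 6-8 GB of RAM shared with the OS and the host app, the gap between ~5.6 GiB at F16 and under 2 GiB at Q4 is the difference between not running at all and running comfortably.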
Last, Cactus excels at CPU inference. While TFLite is great, its peak performance often relies on specific hardware accelerators (GPUs, DSPs). GGUF runtimes are designed for strong performance on standard CPUs, offering a more consistent baseline across the wide variety of devices that app developers have to support.
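As a small illustration of what the CPU-only path looks like (again via llama-cpp-python on desktop; the model path is a placeholder and the values are examples, not tuned recommendations):

    # Force CPU-only inference: no layers offloaded, explicit thread count.
    from llama_cpp import Llama

    llm = Llama(
        model_path="model.Q4_K_M.gguf",  # placeholder path
        n_gpu_layers=0,                  # keep all layers on the CPU
        n_threads=4,                     # e.g. the device's performance-core count
    )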
GGUF is more suitable for the latest open-source models, I agree there. Q2/Q4 quantization will probably be critical as well, if we don't see a jump in RAM. But then again, I wonder when/if MediaPipe will support GGUF as well.
PS, I see you are in the latest YC batch? (below you mentioned BF). Good luck and have fun!
I would say that while Google's MediaPipe can technically run any TFLite model, it turned out to be a lot more difficult in practice with third-party models compared to the "officially supported" models like Gemma-3n. I was trying to set up a VLM inference pipeline using a SmolVLM model. Even after converting it to a TFLite-compatible binary, I struggled to get it working, and then, once it did work, it was super slow and was obviously missing some hardware acceleration.
I have not looked at OP's work yet, but if it makes the task easier, I would opt for that instead of Google's "MediaPipe" API.
[0] https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inf...
[1] https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inf...