As someone building another competitor in the field, I'll relay some reasons why some of our customers ruled out OpenWebUI in their decision-making process:
- Instability when self-hosting
- Hard to get in touch with sales when looking for SLA-based contracts
- Cluttered product; multiple concepts seemingly serving the same purpose (e.g. function calling vs. MCP); most pre-MCP tools suffer from this
Broadly, I think other open source solutions are lacking in (1) integration of external knowledge into the chat, (2) simple UX, and (3) complex "agent" flows.
Both internal RAG and web search are hard to do well, and since we've started as an enterprise search project we've spent a lot of time making it good.
Most (all?) of these projects have UXs that are quite complicated (e.g. exposing every model param like Top P front-and-center without any explanation, no clear distinction between admin/regular user features, etc.). For broader deployments this can overwhelm people who are new to AI tools.
Finally, trying to do anything beyond a simple back-and-forth with a single tool call isn't great with a lot of these projects. So something like "find me all the open source chat options, understand their strengths/weaknesses, and compile that into a spreadsheet" will work well with Onyx, but not so well with other options (again partially due to our enterprise search roots).
Open WebUI isn't Open Source anymore. It has an egregious CLA for anyone who wants to contribute back to it (which I wouldn't do anyway, because it isn't Open Source...)
Onyx Devs: This looks awesome, I will definitely add it to my list of things to try out... close to the top! Thanks, and please keep it cool!
Searching "Phoronix ${cpuModel}" will take you to the full review for that model, along with the rest of the build specs.
With the default build in a standard build environment, clock speed tends to matter more. With tuning, one could probably squeeze more out of the higher-core-count systems.
That's using the same config as the server systems (allmodconfig), but it has the 9950X listed there, and on that config it takes 547.23 seconds instead of 47.27. That puts all of the consumer CPUs as slower than any of the server systems on the list. You can also see the five-year-old 2.9GHz Zen2 Threadripper 3990X ahead of the brand new top-of-the-range 4.3GHz Zen5 9950X3D because it has more cores.
You can get a pretty good idea of how kernel compiles scale with threads by comparing the results for the 1P and 2P EPYC systems that use the same CPU model. Builds generally get ~75% faster when you double the number of cores, and that's including the cost of the cross-socket latency introduced when you go from a 1P to a 2P system.
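To put rough numbers on that ratio (hypothetical figures, just for illustration): if a 1P system finishes the build in 100 seconds, the matching 2P system lands around 100 / 1.75 ≈ 57 seconds, rather than the 50 seconds perfect scaling would give.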
Oh good catches! I must have grabbed the wrong chart from the consumer CPU benchmark, thanks for pointing out the subsequent errors. The resulting relations do make more sense (clock speed certainly helps, but there is wayyyy less of a threading wall than I had incorrectly surmised).
It varies a lot depending on how much you have enabled. The distro kernels that are designed to support as much hardware as possible take a long time to build. If you make a custom kernel where you winnow down the config to only support the hardware that's actually in your computer, there's much less code to compile so it's much faster.
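As a rough sketch of that winnowing, assuming you're building on the machine the kernel will run on and starting from your distro's config:

    # Start from the distro config for the running kernel
    cp /boot/config-"$(uname -r)" .config
    # Disable every module that isn't currently loaded on this machine
    make localmodconfig
    # Take defaults for any options new to this kernel version
    make olddefconfig
    # Build with one job per CPU thread
    make -j"$(nproc)"

One caveat: localmodconfig only keeps modules that are loaded right now, so plug in any hardware you care about before running it.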
I recently built a 6.17 kernel using a full Debian config, and it took about an hour on a fast machine. (Sorry, I didn't save the exact time, but the exact time would only be relevant if you had the exact same hardware and config.) I was surprised how slow it still was. It appears the benefits of faster hardware have been canceled by the amount of new code added.
They are very likely involved. It's shocking how many people use it as a Signal alternative. Telegram did the marketing well. I suspect authorities around the world like it too, because Telegram most likely gives police no-fuss access.
Now I'm curious -- is there a way to do this that avoids downloading any more than strictly necessary?
The command above downloads the whole repo history. You could pass --depth=1 to skip the history, but it still downloads the latest version of the entire repo tree.
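You can get closer to minimal with a blobless partial clone plus sparse checkout -- a sketch, assuming the server supports partial clones (GitHub and GitLab do) and that you only need one subdirectory (the URL and path here are placeholders):

    # Skip history and file contents; --sparse checks out only top-level files
    git clone --depth=1 --filter=blob:none --sparse https://example.com/repo.git
    cd repo
    # Fetch blobs only for the paths you actually need
    git sparse-checkout set path/to/subdir

With --filter=blob:none the clone still fetches the full tree listing, but file contents come down lazily, only as the sparse checkout touches them.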