
What does this do that OpenWebUI (or one of the many other solutions) does not?


As someone building another competitor in the field, I'll relay some reasons why some of our customers ruled out OpenWebUI in their decision-making process:

- Instability when self-hosting

- Hard to get in touch with sales when looking for SLA-based contracts

- Cluttered product; Multiple concepts seemingly serving the same purpose (e.g. function calling vs. MCP); Most pre-MCP tools suffer from this

- Trouble integrating it with OIDC

- Bad docs that are mostly LLM generated


Broadly, I think other open source solutions are lacking in (1) integration of external knowledge into the chat (2) simple UX (3) complex "agent" flows.

Both internal RAG and web search are hard to do well, and since we've started as an enterprise search project we've spent a lot of time making it good.

Most (all?) of these projects have UXs that are quite complicated (e.g. exposing front-and-center every model param like Top P without any explanation, no clear distinction between admin/regular user features, etc.). For broader deployments this can overwhelm people who are new to AI tools.

Finally, trying to do anything beyond a simple back-and-forth with a single tool call isn't great with a lot of these projects. So something like "find me all the open source chat options, understand their strengths/weaknesses, and compile that into a spreadsheet" will work well with Onyx, but not so well with other options (again partially due to our enterprise search roots).


OpenWebUI isn't Open Source anymore. Open WebUI has an egregious CLA if I want to contribute back to it (which I wouldn't do anyway, because it isn't Open Source...)

Onyx Devs: This looks awesome, I will definitely add it to my list of things to try out... close to the top! Thanks, and please keep it cool!


What are compile times like right now, with modern hardware?


Phoronix includes a "Timed Linux Kernel Compilation" test as part of their reviews using the default build config.

Here is one comparing some modern high end server CPUs: https://www.phoronix.com/benchmark/result/amd-5th-gen-epyc-9... (2P = dual socket)

Here is one comparing some modern consumer CPUs: https://www.phoronix.com/benchmark/result/amd-ryzen-9-9900x-...

Searching "Phoronix ${cpuModel}" will take you to the full review for that model, along with the rest of the build specs.

With the default build in a standard build environment the clock speed tends to matter more. With tuning one could probably squeeze more out of the higher core count systems.


Note that those two links are using different configs. Here's the link for Threadripper 9995WX:

https://www.phoronix.com/review/amd-threadripper-9995wx-trx5...

That's using the same config as the server systems (allmodconfig), but it has the 9950X listed there, and on that config it takes 547.23 seconds instead of 47.27. That puts all of the consumer CPUs as slower than any of the server systems on the list. You can also see the five year old 2.9GHz Zen2 Threadripper 3990X in front of the brand new top of the range 4.3GHz Zen5 9950X3D because it has more cores.

You can get a pretty good idea of how kernel compiles scale with threads by comparing the results for the 1P and 2P EPYC systems that use the same CPU model. It's generally getting ~75% faster by doubling the number of cores, and that's including the cost of introducing cross-socket latency when you go from 1P to 2P systems.


Oh good catches! I must have grabbed the wrong chart from the consumer CPU benchmark, thanks for pointing out the subsequent errors. The resulting relations do make more sense (clock speed certainly helps, but there is wayyyy less of a threading wall than I had incorrectly surmised).

Here is the corrected link for the 9950X review with allmod instead of def for equal comparison (I couldn't find the def chart in the server review): https://www.phoronix.com/benchmark/result/amd-ryzen-9-9900x-...



It varies a lot depending on how much you have enabled. The distro kernels that are designed to support as much hardware as possible take a long time to build. If you make a custom kernel where you winnow down the config to only support the hardware that's actually in your computer, there's much less code to compile so it's much faster.

I recently built a 6.17 kernel using a full Debian config, and it took about an hour on a fast machine. (Sorry, I didn't save the exact time, but the exact time would only be relevant if you had the exact same hardware and config.) I was surprised how slow it still was. It appears the benefits of faster hardware have been canceled by the amount of new code added.
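As a rough sketch of the trimmed-down approach (assuming a kernel tree already checked out in linux/; localmodconfig drops modules that aren't currently loaded, and the -j value is just one job per CPU thread):

    $ cd linux/
    $ make localmodconfig        # shrink the config: disable modules not currently loaded
    $ time make -j"$(nproc)"     # build with one job per CPU thread and time it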


I believe you are referring to GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.


The link appears to be broken, it redirects me to the main page.



For tmux users: you can use the lock-command option with something like cmatrix for a quick and dirty screensaver.
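A quick sketch (the 300-second timeout and cmatrix's -s "screensaver mode" flag are just example values; the equivalent set lines can go in ~/.tmux.conf to make it permanent):

    $ tmux set-option -g lock-command "cmatrix -s"   # run cmatrix as the lock screen
    $ tmux set-option -g lock-after-time 300         # auto-lock after 5 minutes idle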


My most used function is probably the one I use to find the most recent files:

    lt () { ls --color=always -lt "$@" | head; }


It's hidden in the "Copy" drop down at the top right.

https://http3-explained.haxx.se/~gitbook/pdf?limit=100


What are the chances that Telegram is an op by the FSB?


They are very likely involved. It's shocking how many people use it as a Signal alternative. Telegram did the marketing well. I suspect authorities around the world like it too, because Telegram most likely gives police no-fuss access.


What are the chances that Signal is an op run by the NSA?



This must be the dream resource of every physics teacher.


   $ git clone --no-checkout $URL/repo.git
   $ cd repo/
   $ git sparse-checkout init
   $ git sparse-checkout set subdirectory_i_want
   $ git checkout main


Now I'm curious -- is there a way to do this that avoids downloading any more than strictly necessary?

The command above downloads the whole repo history. You could do a depth=1 to skip the history, but it still downloads the latest version of the entire repo tree.


You could do a blobless or treeless clone https://github.blog/open-source/git/get-up-to-speed-with-par...

Combined with --depth=1 and the --no-checkout / --sparse-checkout flow that the GP already described.
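For example, combining a blob filter with the flow the GP described might look roughly like this (use --filter=tree:0 instead for a treeless clone; URL and directory names are the same placeholders as above):

    $ git clone --filter=blob:none --depth=1 --no-checkout $URL/repo.git
    $ cd repo/
    $ git sparse-checkout set subdirectory_i_want
    $ git checkout main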

I just tested on the emacs repo, left column is disk usage of just the `.git` folder inside:

  Shallow clones (depth=1):
  124K: Treeless clone depth=1 with no-checkout
  308K: Blobless clone depth=1 with no-checkout
  12M: Treeless clone depth=1 sparse checkout of "doc" folder
  12M: Blobless clone depth=1 sparse checkout of "doc" folder
  53M: Treeless clone depth=1 non-sparse full checkout
  53M: Blobless clone depth=1 non-sparse full checkout
  53M: Regular clone with depth=1

  Non-shallow clones:
  54M: Treeless clone with no-checkout
  124M: Blobless clone with no-checkout
  65M: Treeless clone sparse checkout of "doc" folder
  135M: Blobless clone sparse checkout of "doc" folder
  107M: Treeless clone with non-sparse full checkout
  177M: Blobless clone with non-sparse full checkout
  653M: Full regular git clone with no flags

Great tech talk covering some of the newer lesser-known git features: https://www.youtube.com/watch?v=aolI_Rz0ZqY


git-archive downloads only the strictly necessary files, but it is not universally supported.

https://git-scm.com/docs/git-archive
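Where the server does allow it, a minimal sketch (reusing the placeholder URL and directory from the sparse-checkout example above) would be:

    $ git archive --remote=$URL/repo.git HEAD subdirectory_i_want | tar -x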


The Rust app just calls a few git commands too [1]

Could've been a shell script[2]

[1] https://github.com/zikani03/git-down/blob/cb2763020edc81e464...

[2] https://textbin.net/ja17q8vga4

