
They should've tested other embedding models; there are better (and cheaper) options than OpenAI's.


Which do you suggest?



You should use RTEB instead. See here for why: https://huggingface.co/blog/rteb

Here is that leaderboard https://huggingface.co/spaces/mteb/leaderboard?benchmark_nam...

Voyage-3-large seems like SOTA right now


yep


The Qwen3 600M and 4B embedding models are near state of the art and aren't too computationally intensive.
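To make the retrieval use case concrete, here is a minimal sketch of how embedding vectors are compared once you have them. The vectors below are toy stand-ins; the commented-out SentenceTransformer call and the Hugging Face model id "Qwen/Qwen3-Embedding-0.6B" are assumptions about how you'd load the 600M model in practice, not something stated in this thread.

```python
# Toy sketch of embedding-based retrieval: rank documents by cosine
# similarity to a query vector. In practice the vectors would come from
# an embedding model, e.g. (assumed API, sentence-transformers library):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
#   vecs = model.encode(texts)
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Return document indices sorted by descending cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)

# Toy 3-d "embeddings": doc 0 points nearly the same way as the query.
query = np.array([1.0, 0.0, 0.0])
docs = np.array([
    [0.9, 0.1, 0.0],   # similar to query
    [0.0, 1.0, 0.0],   # orthogonal
    [-1.0, 0.0, 0.0],  # opposite
])
print(cosine_rank(query, docs))  # → [0 1 2]
```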



