I guess that's better than nothing. But now I'm unsure what your original comment was about: if your project doesn't use Jepsen testing to "prove" it works fine, how is it relevant to bring up on a submission about a Jepsen test of some other software?
If everyone making a database/message queue/whatever distributed system shared their project on every Jepsen submission, we'd never have any discussion of the actual software in question.
I'm not seeing full self-hosting yet, and "Book a call" link is an instant nope for many techies.
I understand that you need to make money. But you'll need a proper self-hosting offering with paid support as well before you're considered, at least by me.
I'm not looking to have even more stuff in the cloud.
Postgres is a far better fit than Kafka if you want a large number of durable streams. But a flexible OLTP database like PG is bound to require more resources, and polling loops (not even long polling!) are not a great answer for following live updates.
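To make the polling concern concrete, here is a minimal sketch of the pattern in question, assuming a hypothetical events(seq BIGSERIAL PRIMARY KEY, payload JSONB) table in the spirit of the article; the schema and connection string are illustrative, not from any real system:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/lib/pq" // Postgres driver
)

// follow tails the hypothetical events table by repeatedly querying for
// rows past the last sequence number seen. Every idle consumer still
// costs one query per interval; that overhead is the point above.
func follow(db *sql.DB, lastSeq int64) error {
	for {
		rows, err := db.Query(
			`SELECT seq, payload FROM events WHERE seq > $1 ORDER BY seq LIMIT 100`,
			lastSeq)
		if err != nil {
			return err
		}
		for rows.Next() {
			var seq int64
			var payload []byte
			if err := rows.Scan(&seq, &payload); err != nil {
				rows.Close()
				return err
			}
			fmt.Printf("event %d: %s\n", seq, payload)
			lastSeq = seq
		}
		rows.Close()
		time.Sleep(250 * time.Millisecond) // fixed poll interval, no long polling
	}
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(follow(db, 0))
}
```

Postgres's LISTEN/NOTIFY can cut the idle-poll cost, but notifications themselves are not durable, so consumers still need a catch-up query like the one above after reconnecting.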
Plug: If you need granular, durable streams in a serverless context, check out s2.dev
s2.dev looks cool... I jumped around the home page a bit but couldn't quite grasp what it is quickly. But if it is about decoupling the Kafka approach and client-side libraries from the use of Kafka specifically, I am cheering for you.
Could you see the s2.dev protocol being a good fit on top of services that use SQL the way the article does, assigning event sequence numbers? Or is s2 fundamentally the component that assigns event numbers?
I feel like we tried to do something similar to yours, but for SQL DBs; I'm not sure:
> foyer draws inspiration from Facebook/CacheLib, a highly-regarded hybrid cache library written in C++, and ben-manes/caffeine, a popular Java caching library, among other projects.
Foyer is a great open source contribution from RisingWave
We built an S3 read-through cache service for s2.dev so that multiple clients could share a Foyer hybrid cache with key affinity: https://github.com/s2-streamstore/cachey
Yes, currently it has its own /fetch endpoint that makes S3 GET(s) internally. One potential gotcha, depending on how you're using it: an exact byte "Range" header is always required so that the request can be mapped to page-aligned byte-range requests on the S3 object. With that constraint, though, it is feasible to add an S3 shim.
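For illustration, a hedged sketch of what a client call against that /fetch endpoint could look like; the query parameter names ("bucket", "key") are assumptions for the example, not cachey's documented API, but the exact byte Range header matches the requirement described above:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchRange asks the cache service for an exact byte range of an S3
// object. The Range header is mandatory so the service can map the
// request onto page-aligned ranges of the underlying object.
func fetchRange(base, bucket, key string, start, end int64) ([]byte, error) {
	url := fmt.Sprintf("%s/fetch?bucket=%s&key=%s", base, bucket, key)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Read the first 4 KiB page of a (hypothetical) object via the cache.
	data, err := fetchRange("http://localhost:8080", "my-bucket", "segments/000001", 0, 4095)
	if err != nil {
		panic(err)
	}
	fmt.Printf("got %d bytes\n", len(data))
}
```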
It is also possible to stop requiring the header, but I think it would complicate the design around coalescing reads: the layer above foyer would have to track concurrent requests to the same object.
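That tracking is essentially request coalescing. As an illustration of the concern (not cachey's actual implementation), Go's singleflight package shows the shape of it: concurrent callers asking for the same key share one in-flight fetch instead of each triggering an S3 GET. fetchFromS3 here is a hypothetical stand-in:

```go
package main

import (
	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// fetchFromS3 is a hypothetical placeholder for the real S3 GET path.
func fetchFromS3(key string) ([]byte, error) {
	// ... issue the actual S3 request here ...
	return []byte{}, nil
}

// getObject deduplicates concurrent fetches: callers that arrive while a
// fetch for key is already in flight wait for and share its result
// rather than issuing their own request to S3.
func getObject(key string) ([]byte, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		return fetchFromS3(key)
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```

Without an exact range on each request, the coalescing key also becomes fuzzier: two overlapping reads of the same object are neither clearly the same request nor clearly distinct.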
Pros: unlimited streams with the durability of object storage; JetStream can only do a few thousand topics.
Cons: no consumer groups yet, though it's on the agenda.