
For those interested, TigerBeetle[0] is a high-performance, purpose-built accounting database. It supports:

- double-entry transfers

- two-phase transfers (i.e. reserve an amount and post it later)

- linked transfers (move money across multiple accounts atomically)
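To make the "linked transfers" idea concrete, here is a minimal sketch of a chain of transfers applied atomically: if any step in the chain would fail, none of them take effect. This is an illustration of the concept only, not TigerBeetle's actual API (the function and variable names are made up).

```python
# Hypothetical sketch: a linked chain of transfers succeeds or fails as a unit.

def apply_linked(transfers, balances):
    # Stage all mutations on a copy; commit only if every step is legal.
    staged = dict(balances)
    for src, dst, amount in transfers:
        if staged.get(src, 0) < amount:
            return False  # one failing transfer voids the whole chain
        staged[src] = staged.get(src, 0) - amount
        staged[dst] = staged.get(dst, 0) + amount
    balances.clear()
    balances.update(staged)
    return True

balances = {"a": 100, "b": 0, "c": 0}
ok = apply_linked([("a", "b", 60), ("b", "c", 60)], balances)
# The second transfer only works because the first landed in the same chain.
```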

Worth looking at.

[0] - https://www.tigerbeetle.com/ & https://github.com/coilhq/tigerbeetle



Thanks!

What we realized for TigerBeetle is that double-entry is often lifted directly into the networked world of distributed systems as-is, as a ledger database.

However, most distributed financial systems of record also spend a lot of their time talking to other distributed financial systems of record. So there are all these different entities (e.g. banks), all running different infrastructure, all wanting not only to track money within their own system but also to move money safely between systems.

Historically, double-entry is great at tracking transactions within a system or entity, but it's not great at this intersection between double-entry and distributed systems, because of the way that networks fail.

We were seeing that these systems all end up with the equivalent of a two-phase commit coordinator, which is a lot of work to implement correctly on top of double-entry. And everyone is building these ad hoc ledgers that are not only ledgers but also two-phase commit coordinators.

So what we've done for TigerBeetle [1] is to take double-entry and marry it with distributed systems, to make it really easy to track transactions as money enters and leaves a system, by providing two-phase double-entry transfers out of the box. For example, each account carries not only a debits/credits balance, but also pending/posted debits/credits balances.
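A rough sketch of what two-phase double-entry with pending/posted balances looks like, assuming each account tracks debits and credits in both states. The field and function names here are illustrative, not TigerBeetle's actual schema:

```python
# Hypothetical two-phase double-entry sketch: phase 1 records the transfer
# as pending on both accounts; phase 2 moves the amounts to posted.
from dataclasses import dataclass

@dataclass
class Account:
    debits_pending: int = 0
    debits_posted: int = 0
    credits_pending: int = 0
    credits_posted: int = 0

@dataclass
class Transfer:
    debit_account: Account
    credit_account: Account
    amount: int

def create_pending(t):
    # Phase 1: reserve the amount; nothing is settled yet.
    t.debit_account.debits_pending += t.amount
    t.credit_account.credits_pending += t.amount

def post_pending(t):
    # Phase 2: settle, moving the amounts from pending to posted.
    t.debit_account.debits_pending -= t.amount
    t.debit_account.debits_posted += t.amount
    t.credit_account.credits_pending -= t.amount
    t.credit_account.credits_posted += t.amount

a, b = Account(), Account()
t = Transfer(debit_account=a, credit_account=b, amount=10)
create_pending(t)   # money is now in flight between the two systems
post_pending(t)     # the remote system confirmed; settle it
```

The point of the split is exactly the cross-system case described above: while another system is deciding, the funds sit in the pending balance rather than disappearing or double-counting when the network fails mid-transfer.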

And then to package this all up as a mission-critical, safe and performant (1M TPS) open source database that the whole ecosystem can partner with and build on.

[1] I did a deep dive into TigerBeetle in a recent talk at the Recurse Center called "Let's Remix Distributed Database Design!", going into the storage fault research that we've implemented for TigerBeetle. For example, the safety reasons for why we didn't pick Raft as our consensus protocol, the latency reasons for why we don't use LevelDB or RocksDB, and how our testing is inspired by FoundationDB's Deterministic Simulation Testing — https://www.youtube.com/watch?v=rNmZZLant9o


Interesting. And Zig too? Hmm.


Thanks! I don't think we could have done TigerBeetle the way we did without Zig. It's been two years now, and looking back, the choice has worked out well for our design decisions. For example:

- single-threaded control plane (thread-per-core architecture),

- static memory allocation at startup (we never call malloc() or free() thereafter, so no use-after-frees) for extreme memory efficiency (TigerBeetle can address 100 TiB of storage using only 1 GiB of statically allocated memory),

- explicit memory alignment expressed in the type system (for Direct I/O), and of course,

- Zig's comptime, which is insane. We use it for all kinds of things, like optimizing the construction of Eytzinger layouts, or eliminating length prefixes in our on-disk LSM table formats.
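For readers unfamiliar with the term: an Eytzinger layout stores a sorted array in breadth-first (heap-style) order, so a binary search walks the implicit tree with cache-friendly memory accesses. A small sketch of the construction in Python (TigerBeetle builds these at comptime in Zig; this is just the idea):

```python
# Sketch of an Eytzinger (BFS) layout. The node at index i has children at
# 2*i + 1 and 2*i + 2; an in-order traversal of that implicit tree assigns
# the sorted keys to their BFS positions.

def eytzinger(sorted_keys):
    out = [None] * len(sorted_keys)
    it = iter(sorted_keys)

    def fill(i):
        if i < len(out):
            fill(2 * i + 1)      # left subtree gets the smaller keys
            out[i] = next(it)    # then this node takes the next key
            fill(2 * i + 2)      # right subtree gets the larger keys

    fill(0)
    return out

eytzinger([1, 2, 3, 4, 5, 6, 7])  # -> [4, 2, 6, 1, 3, 5, 7]
```

Searching this layout visits indices 0, 1 or 2, 3..6, and so on, so each level of the search touches a contiguous, predictable region of memory instead of jumping across the whole array.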


Also one of the bigger Zig codebases out there for you PL nerds.


After the Zig compiler that is, which dwarfs TB's code base. The Zig core team are all machines. ;)




