This particular iteration is unbounded, but the next step is to pass in a GADT argument to specify which headers the application wants, so only those are parsed into a heterogeneous tuple.
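A minimal sketch of that next step — the constructor names and the toy association list standing in for the raw buffer are my own, not the httpz API:

```ocaml
(* A GADT listing which headers the caller wants; selecting with it
   returns exactly those values, with each one already typed. *)
type _ want =
  | Nil : unit want
  | Content_length : 'rest want -> (int * 'rest) want
  | Host : 'rest want -> (string * 'rest) want

(* Toy "parser" over an assoc list standing in for the raw headers. *)
let rec select : type a. a want -> (string * string) list -> a =
  fun w headers ->
    match w with
    | Nil -> ()
    | Content_length rest ->
        (int_of_string (List.assoc "content-length" headers),
         select rest headers)
    | Host rest ->
        (List.assoc "host" headers, select rest headers)

let () =
  let hs = [ "host", "example.org"; "content-length", "42" ] in
  let (len, (host, ())) = select (Content_length (Host Nil)) hs in
  assert (len = 42 && host = "example.org")
```

Headers the application never asked for are simply skipped, and the result tuple's shape is fixed at compile time by the `want` value.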
In a conventional GCed language, you need to minimise heap allocations to avoid putting too much pressure on the garbage collector. The OxCaml extensions allow values to be passed 'locally' (that is, on the call stack) as an alternative to heap allocation. When the function returns, the values are automatically deallocated, and the type system guarantees this is safe.
This means that I can pass in a buffer, parse it, do my business logic, and then return, without ever allocating anything into the global heap. However, if I do need to allocate into it (for example, a complex structure), then it's still available.
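Roughly what that looks like in OxCaml's mode syntax (this is a sketch, not the real httpz code — `buffer`, `parse`, and `respond` are illustrative names, and it needs the OxCaml compiler, not stock OCaml):

```ocaml
(* [local_] marks the value as stack-allocated; the compiler rejects
   any attempt to let it escape the function's region. *)
let handle (buf : buffer) ~len =
  let local_ result = parse buf ~len in
  respond result
  (* [result] is freed when [handle] returns; no GC involvement *)
```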
It's kind of like Rust in reverse: OxCaml has a GC by default, but you can write very high-performance code that effectively never touches it. There's also emerging support for data-race-free parallelisation.
The webserver I'm putting together also uses io_uring, which provides zero-copy buffers from kernel to userspace. The support for one-shot effect handlers in OCaml allows me to directly resume a blocked fiber straight from the io_uring loop, and then this httpz parser operates directly on that buffer. Shared memory all the way with almost no syscalls!
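That resume path can be sketched in stock OCaml 5 effects, with the io_uring completion loop stubbed out by a direct resume (the names here are illustrative, not the real server):

```ocaml
(* Requires OCaml >= 5: one-shot effect handlers. *)
open Effect
open Effect.Deep

type _ Effect.t += Await_read : string Effect.t

(* A parked one-shot continuation, waiting on a ring completion. *)
let parked : (string, unit) continuation option ref = ref None

let run_fiber (f : unit -> unit) : unit =
  match_with f ()
    { retc = (fun () -> ());
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Await_read ->
            Some (fun (k : (a, unit) continuation) ->
              parked := Some k)   (* block the fiber, return to the loop *)
        | _ -> None) }

let last_request = ref ""

let () =
  run_fiber (fun () ->
    (* the fiber blocks here until the "ring" hands it a buffer *)
    let buf = perform Await_read in
    last_request := buf);
  (* event loop: a completion arrives; resume the fiber directly on it *)
  match !parked with
  | Some k -> continue k "GET / HTTP/1.1"
  | None -> ()
```

In the real server the `continue` happens straight from the io_uring submission/completion loop, handing the fiber the kernel-shared buffer with no intermediate copy.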
(author here) I'm just adding data-race-free parallelism support to this right now so I can switch my website over to using it! For those familiar with OCaml syntax, the OxCaml parse function is fun:
val parse : buffer -> len:int -> #(status * request * header list) @ local
This takes in a buffer and returns an unboxed tuple on the stack, so there's no GC activity involved beyond stack management for each HTTP request.
Doesn't (honest question) the operating system kernel prevent data races in memory accesses at the level of system calls like brk? I wonder at what level the operating system handles such things?
I just use Docker devcontainers with Anthropic's own Dockerfile as a base. That gives me a persistent sandbox that has its ports open, works in any container environment (remote or local), and works in any IDE that supports devcontainers...
So what if Claude Code makes a mistake and tears up the sandbox? What happens to all the persisted state (aside from the container image)?
The linked fly.io article discusses why containers aren't a good fit for sandboxes that need persistent state and how sprites.dev addresses the challenges.
I read the linked fly.io article and didn't see where it mentions why containers aren't a good fit for sandboxes that need persistent state. You can definitely do all the same snapshotting directly on your local Docker volumes, although granted you'd need ZFS- or LVM-backed volumes (which is probably what sprites.dev does under the hood).
I think there are tradeoffs here. Maybe your one-person vibe-coded app doesn't need change management, IaC, any of that. No Dockerfile: start with whatever one fly.io wrote for you and beat it with an agent until it works well enough. And it's pretty cool that you can then just serve it directly. Is it dev or prod? Yes.
On the other hand, I really don't think editing PHP files over FTP in prod was ahead of its time -- I was there, man, and it sucked. I just know that eventually I'll be really confused about why something doesn't work, and I'll wish I had some record of what changed over time. I want my IDE. I want VCS!
The html5lib conformance tests when combined with the WHATWG specs are even more powerful! I managed to build a typed version of this in OCaml in a few hours ( https://anil.recoil.org/notes/aoah-2025-15 ) yesterday, but I also left an agent building a pure OCaml HTML5 _validator_ last night.
This run has (just in the last hour) combined the html5lib expect tests with https://github.com/validator/validator/tree/main/tests (which are a complex mix of Java RELAX NG stylesheets and code) in order to build a low-dependency pure OCaml HTML5 validator with types and modules.
This feels like formal verification in reverse: we're starting from a scattered set of facts (the expect tests) and iterating towards more structured specifications, using functional languages like OCaml/Haskell as convenient executable pitstops while driving towards proof reconstruction in something like Lean.
The JS is exposed in the full page's context the same as if you included a <script> under a <div> instead of <svg>. In much the same way, whether the <script> is before or after the <svg> tag doesn't matter - it's just a script working on a single DOM (with different namespaces for certain elements) either way.
What was the SVG that didn't work? In Jon's example in the original post, the SVG he embeds there was one he wrote in around 2005. That's a pretty impressive run for it to render 20 years on...
Another extremely cool feature of Eon is that it uses Cap'n Proto as the capability-based RPC interface for management. There's a schema that any client can implement here https://github.com/RyanGibb/eon/blob/main/lib/cap/schema.cap... including provisioning ACME TLS certificates directly via DNS negotiation instead of the usual HTTP dance.
Author of Eon here. There are still some open questions I have about managing the lifetimes of these certificates. Renewal is supported via a Cap'n Proto callback, and there's some ad-hoc integration with the NixOS nginx module to restart it on certificate renewal. https://github.com/RyanGibb/eon/blob/3a3f5bae2b308b677edfb3f...
This doesn't work in the general case, e.g. for Postfix and Dovecot, and is only becoming more pertinent with short-lived certificates. It would be great if the service manager could use these capabilities directly. I think GNU Shepherd's integration with Guile Goblins and OCapN is a step in the right direction here: https://spritely.institute/news/spritely-nlnet-grants-decemb...
Are you thinking of _new_ fresh water sources that have emerged in recent years? If you have a candidate lat/lon where this might have happened, we can take a look at the 2024 and earlier embeddings to see if we can spot it.