Your response is a non-sequitur that does not answer the question you yourself posed, and you are replying to yourself with a chatbot. Since it is a non-sequitur, presumably no one verified whether the LLM's output was hallucinated, so it is probably also wrong in some way. LLMs are token predictors, not fact databases; the idea that one would be reproducing a “historical exploit” is nonsensical. Do you believe what it says because it says so in a code comment? Please remember what LLMs are actually doing and set your expectations accordingly.
More generally, people don’t participate in communities to have conversations with someone else’s chatbot, and especially not to have to vicariously read someone else’s own conversation with their own chatbot.
It used to be the case that a web developer could be reasonably expected to actually learn and know pretty much all of CSS, but it has reached the point where it is actually not possible for a single person to “learn” CSS in the way you could in the 2000s or 2010s.
Just as one example, there are now, by my count, at least eight[0] layout models (column, anchor, positioned, flow, float, table, flex, and grid), plus several things that sit in some ambiguous middle place (the inline versions of block types, sticky positioning, masonry grid layout, subgrid, `@container`, paged media), each of which is different and each of which interacts with the others in various confounding ways. Flow collapses margins; table elements can’t have margins at all, but tables can have `border-spacing`, which is like `gap`, but different. Flex has a different default `min-inline-size` than flow, and `flex-basis` overrides `inline-size` if it isn’t `auto`, which is its initial value, until you use the recommended `flex` shorthand, at which point it becomes `0%`, unless you redefine it explicitly. Table layout[1] uses a special shrink-wrapping algorithm, which the CSS authors noted back in CSS 2 might deserve an option to behave more like a regular block-level element, and then that just never happened. Grid is a mix of implicit and explicit placements with competing ways to do the same things (named areas, number ranges, templates on the parent, properties on the child) and a bunch of special sizing algorithm keywords like `minmax` and `fit-content` which only work in grid, some of which also work in flex, most of which don’t work in flow, but some of them do now, but they didn’t before.
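To make the `flex-basis` dance concrete, here is a minimal sketch (the class names are made up for illustration, and each rule assumes the element is a flex item):

```css
/* With flex-basis at its initial value (auto), inline-size is used: */
.a { inline-size: 200px; }                     /* main-size basis: 200px */

/* An explicit flex-basis overrides inline-size: */
.b { inline-size: 200px; flex-basis: 150px; }  /* main-size basis: 150px */

/* The recommended shorthand quietly resets flex-basis to 0%: */
.c { inline-size: 200px; flex: 1; }            /* expands to flex: 1 1 0% */

/* …unless you spell the basis back out explicitly: */
.d { inline-size: 200px; flex: 1 1 auto; }     /* inline-size is used again */
```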
You can select your elements with the old CSS 3 selectors, or `:where`, or `:is`, or `&`, or `:has` (but not if they’re nested), or `@scope`, or `@layer`. Definitely don’t try to put trailing commas on your selector lists, though, since that’s not syntactically valid in CSS, until it is, in some future revision.
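For anyone who hasn’t hit this yet, the `:is`/`:where` split is a specificity trap; a sketch (selectors invented for illustration):

```css
/* :is() takes the specificity of its most specific argument,
   so this rule weighs roughly like `nav a` plus a class: */
:is(nav, .sidebar) a { color: red; }

/* :where() always contributes zero specificity, so this rule
   weighs like a bare `a` and loses to almost anything: */
:where(nav, .sidebar) a { color: blue; }
```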
To make sure your site works correctly with all scripts, all the directional keywords now have logical versions with `inline` and `block` keywords. Unless it’s a transform[2]. Or a gradient[3]. That’ll probably eventually be fixed; just keep checking the spec periodically until you have to re-learn that something that used to be false is now true. Which is how “learning” CSS works. There is never an end.
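A small sketch of the mismatch (class name invented for illustration):

```css
.card {
  /* Logical: these flip automatically under direction: rtl or a
     vertical writing-mode. */
  margin-inline-start: 1rem;
  padding-block: 0.5rem;

  /* Physical only: gradients still have no inline/block keywords,
     so this edge stays "left-to-right" regardless of writing mode. */
  background: linear-gradient(to right, #eee, #fff);
}
```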
And this is just the tip of the iceberg. There are also all the CSS units, colours, the whole animation engine, forms, pseudo-classes, pseudo-elements, containment, paints, filter effects, environment variables (yes, those are a thing), maths functions, overflow, scroll snaps, backgrounds and borders, feature queries, font features, writing modes, the different-but-not-really CSS of SVG, the half-forgotten weirdo things like `border-image` and `clip-path`, or the half-dozen other major and minor CSS features which I am not even thinking of right now.
CSS doesn’t suck “because we don’t bother learning it”. CSS sucks because its core strength is its core weakness. It is infinitely flexible and extensible, and that means it has been flexed and extended to fulfil every design trend and address every edge case. Then it needs to support all of those things forever. Making CSS do what you want as a web developer has probably never been easier, but “learning” CSS has never been harder.
[0] Please, for my own sanity, resist the urge to pedantically nitpick in the responses about whether everything in my list is actually a “layout model”. I am aware that some of these things overlap more than others. This is just my list. You can make your own list. It’s fine.
[1] Tables also create their own anonymous layout block such that a child `<caption>` element is drawn outside the putative `<table>` in the actual layout. Framesets do a similar thing with `<legend>`. These are all things that are the result of having to retroactively shoehorn weirdo features into CSS in a backwards-compatible way, but that doesn’t make it any less insane to learn.
With the actual layout models, I see it more as an evolution thing. Someone starting on CSS today does not have to learn all eight at once; just master grid. It was designed to be the last one, to rule them all.
As someone who uses Debian and very occasionally interacts with the BTS, what I can say is this:
As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.
Again, because of the email-only design, one must memorise commands or reference a text file to take actions on bugs. This may be fine for power users, but it’s a horrible UX for most people. I can only assume that there is some analogue to the `reportbug` command for maintainers that actually offers some amount of UI assistance. As a user, I have no idea how to close my own bugs, or even how to find out which bugs I’ve created, so the burden falls entirely on the package maintainers to do all the work of keeping the bug tracker tidy (something that developers famously love to do…).
The search/bug view also does not work particularly well in my experience. The way that bugs are organised is totally unintuitive if you don’t already understand how it works. Part of this is a more general issue for all distributions of “which package is actually responsible for this bug?”, but Debian BTS is uniquely bad in my experience. It shows a combination of status and priority states and uses confusing symbols like “(frowning face which HN does not allow)” and “=” and “i” where you have to look at the tooltip just to know what the fuck that means.
> As far as I know, it is impossible to use the BTS without getting spammed, because the only way to interact with it is via email, and every interaction with the BTS is published without redaction on the web. So, if you ever hope to receive updates, or want to monitor a bug, you are also going to get spam.
Do the emails from the BTS come from a consistent source? If so, it's not a good solution, but you could sign up with a unique alias that blackholes anything that isn't from the BTS.
The spam issue is probably one of the stronger arguments against email-centred design for bug trackers, code forges and the like. It’s a bit crazy that in order to professionally participate in modern software development, you’re inherently agreeing that every spammer with a bridge to sell you is going to be able to send you unsolicited spam.
There's a reason most code forges offer you a fake email that will also be considered as "your identity" for the forge these days.
> Also, allowing CSS inside SVG is not a great idea because the SVG renderer needs to include full CSS parser, and for example, will Inkscape work correctly when there is embedded CSS with base64 fonts? Not sure.
For better or worse, CSS parsing and WOFF support are both mandatory in SVG 2.[0][1] Time will tell whether this makes it a dead spec!
This is possible, but only in the stupid way of using a `<foreignObject>` to embed HTML in your SVG (which obviously only works if your SVG renderer also supports at least a subset of HTML). SVG 2 fixes this by adding support for `inline-size`[0], so now UAs just need to… support that.
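For reference, the SVG 2 version of this is just a declaration on the text element itself (class name invented; UA support is still incomplete at the time of writing):

```css
/* SVG 2: giving a <text> element an inline-size turns it into
   wrapped text, no <foreignObject> required — assuming your
   renderer actually implements this part of the spec. */
text.caption {
  inline-size: 180px;
}
```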
> - cannot embed font glyphs - your SVG might be unreadable if the user doesn't have the font installed. You can convert letters to curves, but then you won't be able to select and edit text. It's such an obvious problem, yet nobody thought of it, how?
Somebody did think of it. SVG 1.1 added the `<font>` element[1]; SVG 2.0 replaced this with mandatory WOFF support.[2] A WOFF is both subsettable and embeddable using a data URI, and is supported by all the browser UAs already, so it’s obvious why this was changed, but embeddable SVG fonts have existed for a long time (I don’t know why/how they got memory-holed).
> - browsers do not publish, which version and features they support
It should be possible to use CSS `@supports` for most of this and hide/show parts of the SVG accordingly in most places.[3] The SVG spec itself includes its own mechanism for feature detection[4], but since it is for “capabilities within a user agent that go beyond the feature set defined in this specification”, it’s essentially worthless.
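A sketch of the `@supports` approach, assuming hypothetical `.fancy`/`.fallback` groups inside the SVG and using a property/value pair the fancier rendering depends on:

```css
/* Show the fallback rendering by default… */
.fancy { display: none; }

/* …and swap in the fancier group only where the UA claims to
   understand the feature it relies on: */
@supports (vector-effect: non-scaling-stroke) {
  .fancy { display: inline; }
  .fallback { display: none; }
}
```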
There are obvious unsolved problems with SVG text, but they are more subtle. For example, many things one might want to render with SVG (like graphs) make more sense with an origin at the bottom-left. This is trivial using a global transform `scaleY(-1)`, except for text. There is no “baseline” transform origin, nor any CSS unit for the ascent or descent of the line box, nor any supported `vector-effect` keyword to make the transformation apply only to the position and not the rendering. So unless the text is all the same size, and/or you know the font metrics in advance and can hard-code the correct translations, it is impossible to do the trivial thing.
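The closest thing to a workaround I know of is counter-flipping every label individually; a sketch (class name invented, and it assumes your renderer supports per-element CSS transforms with `transform-box`):

```css
/* Flip the whole chart so y grows upward from a bottom-left origin: */
svg.chart { transform: scaleY(-1); }

/* Counter-flip each label in place so the glyphs render upright.
   The pivot is the text's fill box, not its baseline, so labels
   still shift vertically unless they are all the same size — which
   is exactly the missing "baseline" origin complained about above. */
svg.chart text {
  transform: scaleY(-1);
  transform-box: fill-box;
  transform-origin: center;
}
```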
There are other issues in a similar vein where scaling control is just ludicrously inadequate. Would you like to have a shape with a pattern fill that dynamically resizes itself to fill the SVG, but doesn’t distort the pattern, like how HTML elements and CSS `background` work? Good luck! (It’s possible, but much like the situation with text wrapping, requires egregious hacks.)
Some of the new `vector-effect` keywords in SVG 2 seem like they could address at least some of this, but those are “at risk” features which are not supported by UAs and may still be dropped from the final SVG 2 spec.
As others have noted, this is not actually a Lua engine written in Rust. It is a wrapper over existing C/C++ implementations of Lua. There is, however, an actual Lua engine written in Rust. It is called piccolo.[0]
Last year I tried to extend Retro68 to support Palm OS and made quite a lot of progress, but in the end it proved too much work and I abandoned the attempt. I suppose now is as good a time as any to mention that it exists (in a very ugly state with lots of unsquashed commits) in case anyone wants to pick up the mantle.[0] At the least, it has an up-to-date and functioning copy of the Palm OS Emulator which, unlike cloudpilot, retains the debugger code so you can actually debug apps with it.[1] (I also reverse-engineered the dana HAL; this is as far as I know the only open-source version of POSE which supports that hardware.)
The thing that blocked me from being able to make any more progress was a bug in GCC that causes it to ICE when generating PC-relative code[2], and I absolutely could not understand GIMPLE nor the GCC internals quickly, nor could I commit any more energy to trying to learn them. (Generating PC-relative code is essential for Palm OS because, unlike Mac OS, code sections are in read-only memory and cannot have relocations, so this mode being broken means it can’t really work.)
It is fair to say that I have no idea how close to working things actually are in that fork, since GCC/binutils are an absolute nightmare[3] and I am no compiler engineer. Retro68 has some scary-looking hacks to deal with exception handling which wouldn’t work as-is with Palm OS, at the least. (Retro68 is also itself quite a pile of hacks which were clearly made without a good understanding of how GCC works, and without any apparent care to making sure it is easy to rebase atop newer versions of GCC.)
If I were to restart the work again I would probably just have GCC do the bare minimum of emitting code with whatever relocations it can, and then just make Elf2Mac rewrite machine code. Elf2Mac is itself fairly clearly an attempt to avoid having to touch binutils as much as possible, since it redoes a lot of the work that libbfd normally does, which makes sense, because working on GCC/binutils is just awful.[4]
[1] I tried to make the debugger work with the modern GDB remote protocol instead of the Palm-specific protocol which requires the ancient prc-tools version of GDB, but GDB is also broken <https://sourceware.org/bugzilla/show_bug.cgi?id=32120>, so that never worked very well. GDB’s remote protocol also does not support receiving symbols from the remote on-demand for whatever reason—it can only receive an ABI-compatible binary with DWARF symbols, which makes it next to impossible to get symbols out of the ROM, which uses MacsBug format. Working with DWARF also sucks.
[3] Just as a tiny example: never in my life have I ever considered that someone would solve the “tabs versus spaces” debate by making the rule “two spaces per indent, unless it is eight spaces, in which case use a tab”. What IDE even supports this??
[4] To be clear, I don’t mean to denigrate all the hard work that has been done over decades to create these tools. The GCC toolchain is a triumph and I am sure that my relative intelligence has something to do with why I struggle with it, compared to all the compiler people who happily work with it every day. Nevertheless, it is a forty-year-old codebase, and everyone seems quite content to continue to work more or less within constraints that made sense in the 1980s, and perhaps not so much in 2025.
Yes, I saw that, it would have been really great if the source code had ever been released, instead I had to start from scratch…
My implementation does not require editing SDK headers and the goal was to support multiseg. If I had stopped at 32k single seg it would probably have been working. But I never got quite as far as being able to e.g. test libgcc, so who knows.
Click the second link. The source code is there. First comment. I only didn’t release it initially because I was really busy and sorting out a clean reproducible process to build it took too long. As soon as I was able to, I posted it.
And I have since made it not require any SDK changes.
Oh, how frustrating. I have no idea how I missed that since I feel like I spent quite a while looking for some later update that included the source. Well, thank you for making sure to release it! I did rewrite most of the Retro68 CMake code too, perhaps for similar reasons, so I can understand how that could have been a problem. At least the newer versions of GCC do not have race conditions in their Makefiles, unlike prc-tools-remix. :-)
The work I did was intended to eventually merge and live alongside the existing stuff in Retro68 instead of just blowing it away, with the hope that nothing like this would ever happen again to anyone else, but of course I failed to actually finish the work.
I never submit to OSS. It is the same as editing Wikipedia -- every time I try, it is a political mess and nonsense galore. (My reasoning: if you are paying me for work, you are welcome to criticize, request amendments, etc. If you are not paying me for work, you thank me profusely for the free work I offered and take it... or don't. I am uninterested in your opinions in that case, or in requests for changes unless they are bugs.) Anyways, I never had goals of upstreaming anything. I was just trying to help others who wanted a working toolchain. My patches work well. People (not just me) have used them. There are also patches to PilRC I released that add some more bitmap compression modes and fix bugs with multi-depth fonts.
Is your POSE 64-bit fixed or still 32-bit? ISTR running into this problem trying to compile the original from source and have yet to find the round tuit.
It is 64-bit fixed (along with a bunch of other show-stopper bugs in the ancient FLTK code, and I got rid of the bizarre UI they used only for *nix and replaced it with the UI they used for Windows). There were a couple of 64-bit safety issues in the prc compiler too, which I also fixed.
Good to hear (I have my own 64-bit fixed pilrc but POSE was the missing piece). What do you mean by the Windows UI, though? Likely I'd compile this on my MacBook.
When opening POSE without a previous session, on Windows it would open a reasonable window with some buttons (New, Open, Download, Exit). On *nix, they instead decided to open a blank window that said “Right click on this window to show a menu of commands”. (And then, due to programming errors and bitrot, actually trying to use the context menu would access invalid memory and crash.) So I replaced that bad UI with the less bad one from Windows. :-)
This is how it should be done. But it still doesn't protect users fully, because an attacker can try to brute-force the passwords they're interested in. It requires much more effort, though.
Where I live, ILLs do not work for video games because the format identification for video games is “Electronic”, and their software is programmed to suppress the request button for these items because it is interpreted as “no physical media”. I emailed the people who run the system, they said it is a known issue, and as far as I can tell that just means they aren’t going to fix it, since it has been this way for at least three years.
btrfs is OK for a single disk. All the raid modes are not good, not just the parity modes.
The biggest reason raid btrfs is not trustable is that it has no mechanism for correctly handling a temporary device loss. It will happily rejoin an array where one of the devices didn’t see all the writes. This gives a 1/N chance of returning corrupt data for nodatacow (due to read-balancing), and for all other data it will return corrupt data according to the probability of collision of the checksum. (The default is still crc32c, so high probability for many workloads.) It apparently has no problem even with joining together a split-brained filesystem (where the two halves got distinct writes) which will happily eat itself.
One of the shittier aspects of this is that it is not clearly communicated to application developers that btrfs with nodatacow offers less data integrity than ext4 with raid, so several vendors (systemd, postgres, libvirt) turn on nodatacow by default for their data, which then gets corrupted when this problem occurs, and users won’t even know until it is too late, because they never enabled nodatacow themselves.
The main dev knows this is a problem but they do seem quite committed to not taking any of it seriously, given that they were arguing about it at least seven years ago[0], it’s still not fixed, and now the attitude seems to just ignore anyone who brings it up again (it comes up probably once or twice a year on the ML). Just getting them to accept documentation changes to increase awareness of the risk was like pulling teeth. It is perhaps illustrative that when Synology decided to commit to btrfs they apparently created some abomination that threads btrfs csums through md raid for error correction instead of using btrfs raid.
It is very frustrating for me because a trivial stale-device bitmap written to each device would fix it totally, and a write-intent bitmap like md’s would fix it more intelligently, but I had to be deliberately antagonistic on the ML for the main developer to even reply at all after yet another user was caught out losing data because of this. Even then, they just said I should not talk about things I don’t understand. As far as I can tell, this is because they thought “write intent bitmap” meant a specific implementation that does not work with zone append, and I was an unserious person for not saying “write intent log” or something more generic. (This is speculation, though—they refused to engage any more when I asked for clarification, and I am not a filesystem designer, so I might actually be wrong, though I’m not sure why everyone has to suffer because a rarefied few are using zoned storage.)
A less serious but still unreasonable behaviour is that btrfs is designed to immediately go read-only if redundancy is lost, so even if you could write to the remaining good device(s), it will force you to lose anything still in transit/memory if you lose redundancy. (Except that it also doesn’t detect when a device drops through e.g. a dm layer, so you can actually ‘only’ have to deal with the much bigger first problem if you are using FDE or similar.) You could always mount with `-o degraded` to avoid this, but then you are opening yourself up to inadvertently destroying your array due to the first problem if you have something like a backplane power issue.
Finally, unlike traditional raid, btrfs tools don’t make it possible to handle an online removal of an unhealthy device without risking data loss, because in order to remove an unhealthy but extant device you must first reduce the redundancy of the array—but doing that will just cause btrfs to rebalance across all the devices, including the unhealthy one, potentially taking corrupt data from the bad device and overwriting the good copy on the good device, or losing the whole array if the unhealthy device fails totally during the two required rebalances.
There are some other issues where it becomes basically impossible to recover a filesystem that is very full because you cannot even delete files any more but I think this is similar on all CoW filesystems. This at least won’t eat data directly, but will cause downtime and expense to rebuild the filesystem.
The last time I was paying attention a few months ago, most of the work going into btrfs seemed to be all about improving performance and zoned devices. They won’t reply to any questions or offers for funding or personnel to complete work. It’s all very weird and unfortunate.
> The biggest reason raid btrfs is not trustable is that it has no mechanism for correctly handling a temporary device loss. It will happily rejoin an array where one of the devices didn’t see all the writes. This gives a 1/N chance of returning corrupt data for nodatacow (due to read-balancing), and for all other data it will return corrupt data according to the probability of collision of the checksum. (The default is still crc32c, so high probability for many workloads.) It apparently has no problem even with joining together a split-brained filesystem (where the two halves got distinct writes) which will happily eat itself.
That is just mind-bogglingly inept. (And thanks, I hadn't heard THIS one before.)
For nocow mode, there is a bloody simple solution: you just fall back to a cow write if you can't write to every replica. And considering you have to have the cow fallback anyways - maybe the data is compressed, or you just took a snapshot, or the replication level is different - you have to work really hard or be really inept to screw this one up.
I honestly have no idea how you'd get this wrong in cow mode. The whole point of a cow filesystem is that it makes these sorts of problems go away.
I'm not even going to go through the rest of the list, but suffice it to say - every single broken thing I've ever seen mentioned about btrfs multi device mode is fixed in bcachefs.
Every. Single. One. And it's not like I ever looked at btrfs for a list of things to make sure I got right, but every time someone mentions one of these things I'll check the code if I don't remember (some of this code I wrote 10 years ago), and I have yet to see someone mention something broken about btrfs multi-device mode that bcachefs doesn't get right.
> The last time I was paying attention a few months ago, most of the work going into btrfs seemed to be all about improving performance and zoned devices. They won’t reply to any questions or offers for funding or personnel to complete work. It’s all very weird and unfortunate.
By the way, if that was serious, bcachefs would love the help, and more people are joining the party.
I would love to find someone to take over erasure coding and finish it off.
In my case it was a last-ditch effort to get them to explain what was keeping them from making raid actually safe. Others have offered more concrete support more recently[0], I guess you could try reaching out to them, though I suppose they are interested in funding btrfs because they are using btrfs.
I share the sentiments of others in this discussion that I hope you are able to resolve the process issues so that bcachefs does become a viable long-term filesystem. There likely won’t be any funding from anyone ever if it looks like it’s going to get the boot. btrfs also has substantial project management issues (take a look at the graveyard of untriaged bug reports on kernel.org as one more example[1]), they just manage to keep theirs under the radar.
The btrfs devs are mainly employed by Meta and SUSE, and they only support single devices (I haven't looked up recently whether SUSE supports multi-device filesystems).
Meta probably uses zoned storage devices, so that is why they are focusing on that.
Unfortunately I don't think Patreon can fund the kind of talent you need to sustainably develop a file system.
That btrfs contains broken features is IMO 50/50 the fault of upstream and the distributions.
Distributions should patch out features that are broken (like btrfs multi-device support, direct IO) or clearly put them behind experimental flags.
Upstream is unfortunately incentivised not to do this, in order to get testers.
Patreon has never been my main source of funding. (It has been a very helpful backstop though!)
But I do badly need more funding, this would go better with a real team behind it. Right now I'm trying to find the money to bring Alan Huang on full time; he's fresh out of school but very sharp and motivated, and he's already been doing excellent work.