
Wayland: thou shalt not screenshot without root.

D-Bus: YOLO!


I am not a fan of Wayland either; I'd rather be able to run programs from different users against the same application server. This works just fine on Xorg.

Works just fine on Wayland as well. It's simply a Unix socket that you can set permissions for; it's even easier than X11 with its magic cookies.
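For what it's worth, a minimal sketch of what that looks like, assuming the compositor socket is the usual wayland-0 under $XDG_RUNTIME_DIR, "alice" is a hypothetical second user, and a libwayland that accepts an absolute path in WAYLAND_DISPLAY:

    # let the second user traverse the runtime dir and connect to the socket
    setfacl -m u:alice:x "$XDG_RUNTIME_DIR"
    setfacl -m u:alice:rw "$XDG_RUNTIME_DIR/wayland-0"

    # then, as alice, point clients at the socket by absolute path
    WAYLAND_DISPLAY=/run/user/1000/wayland-0 some-wayland-app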

I actually think in practice the meaning has always been "things I dislike". Before AI you could see it applied to all kinds of things in media, WWE is slop, Soap Operas are slop, Genre Fiction is slop. It's almost exclusively a pejorative based on taste, intended to throw scorn on what other people enjoy. When a person uses it I stop listening, because essentially the speaker has stopped saying anything of value.

I have an allergic reaction to anything that feels manipulative, and "gamification" is probably the most manipulative. I've abandoned every learning app that prods you with points, achievements, and other noise.

I've kind of had enough of unnecessary policy ratcheting. It's a problem in every industry where a solution is not possible or practical, so the knob that can be tweaked is always turned. Same issue with corporate compliance: I'm still rotating passwords, with 2FA, sometimes three or four factors for an environment, and no one can really justify it except the fear that not doing more will create liability.

> I'm still rotating passwords

A bit off-topic, but I find this crazy. In basically every ecosystem now, you have to specifically go out of your way to turn on mandatory rotation.

It's been almost a decade since it's been explicitly advised against in every cybersec standard. Almost two since we've done the research to show how ill-advised mandatory rotations are.


PCI still recommends 90-day password changes. Luckily they've softened their stance to allow zero-trust to be used instead. They're not really equivalent controls, but clearly laid out as 'OR' in 8.3.9 regardless.

I think it's only a requirement if passwords are the sole factor, correct? Any other factor or zero-trust or risk-based authentication exempts you from the rotation. It's been a while since I've looked at anything PCI.

In any case, all my homies hate PCI.


But that would mean doing less, and that's by default bad. We must take action! Think of the children!

I tried at my workplace to get them to stop mandatory rotation when that research came out. My request was shot down without any attempt at justification. I don't know if it's fear of liability or if the cyber insurers are requiring it, but by gum we're going to rotate passwords until the sun burns out.


This was stated as a long-term goal long ago. The idea is that you should automate away certificate issuance and stop caring, and to eventually get lifetimes short enough that revocation is not necessary, because that's easier than trying to fix how broken revocation is.

The problem is when the automation fails, you're back to manual. And decreasing the period between updates means more chances for failure. I've been flamed by HN for admitting this, but I've never gotten automated L.E. certificate renewal to work reliably. Something always fails. Fortunately I just host a handful of hobby and club domains and personal E-mail, and don't rely on my domains for income. Now, I know it's been 90 days because one of my web sites fails or E-mail starts to complain about the certificate being bad, and I have to ssh into my VPS to muck around. This news seems to indicate that I get to babysit certbot even more frequently in the future.

I set it up last year and haven't had to interact with it in the slightest. It just works all the time for me.

Really? I've never had it fail. I simply ran the script provided by LE, it set everything up, and it renewed every time until I took the site down for unrelated (financial) reasons. Out of curiosity, when did you last use LE? Did you use the script they provided or a third-party package?

I set it up ages ago, maybe before they even had a script. My setup is dead simple: A crontab that runs monthly:

    0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:

    #!/bin/sh
    # renew any certificates that are close to expiry
    certbot renew
    # restart services so they pick up the new certificate files
    service lighttpd restart
    service exim4 restart
    service dovecot restart
... and so on for all my services

That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
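For what it's worth, certbot can also run the restarts as a deploy hook, so they fire exactly when a certificate is actually replaced rather than on a fixed schedule. A minimal sketch of that variant of the script (service names copied from above):

    #!/bin/sh
    # the hook only runs for certificates that were actually renewed on this run
    certbot renew --deploy-hook "service lighttpd restart; service exim4 restart; service dovecot restart"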


I don't know how old "letsencrypt-renew" is and what it does. But "modern" ACME clients are meant to be run daily. The actual renewal process only starts with 30 days left, so if something doesn't work it retries at least 29 times.

I haven't touched my OpenBSD (HTTP-01) acme-client in five years: acme-client -v website && rcctl reload httpd

My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
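With certbot, for example, the daily-retry setup is just a cron entry; `certbot renew` is a no-op until a certificate is inside its renewal window, so a transient failure simply gets retried the next day. A sketch (path and time are arbitrary):

    # /etc/cron.d/certbot: run daily; certbot decides whether anything needs renewing
    0 3 * * * root certbot renew --quiet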


> I don't know how old "letsencrypt-renew" is and what it does.

It's the five lines below "the script:"


Yes, same for me. Every few months some kind internet denizen points out to me that my certificate has lapsed; running it manually usually fixes it. LE software is pretty low quality; I've had multiple issues over the years, some of which culminated in entire systems being overwritten by LE's broken Python environment code.

If it's happening regularly, wouldn't it make sense to add monitoring for it? E.g. my daily SSL renewal check sanity-checks the validity of the certificates actually used by the affected services using openssl s_client after each run.
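Something along these lines is usually enough to catch a service still presenting an old certificate (host names here are placeholders):

    #!/bin/sh
    # warn if the certificate actually being served expires within 14 days
    for hostport in www.example.org:443 mail.example.org:993; do
        echo | openssl s_client -connect "$hostport" -servername "${hostport%:*}" 2>/dev/null \
            | openssl x509 -noout -checkend $((14*24*3600)) \
            || echo "certificate on $hostport expires soon (or check failed)" >&2
    done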

I did manage to set it up and it has been working OK, but it has been a PITA. Also, for some reason they contact my server over HTTP, so I must open port 80 just to do the renewal.

That would be because you set up the HTTP-01 challenge as your domain verification method.

https://letsencrypt.org/docs/challenge-types/


Since there is no equivalent HTTPS way of doing the same thing?

We could have just fixed OCSP stapling instead. Or better yet scrap the CA nonsense entirely and just use DANE.

Enforcing an arbitrary mitigation to a problem the industry does not know how to solve doesn't make it a good solution. It's just a solution the corporate world prefers.

Except this isn't really viable for any kind of internal certs, where random internal teams don't have access to modify the corporate DNS. TLS is already a horrible system to deal with for internal software, and browsers keep making it worse and worse.

Not to mention that the WEBPKI has made it completely unviable to deliver any kind of consumer software as an offline personal web server, since people are not going to be buying their own DNS domains just to get their browser to stop complaining that accessing local software is insecure. So, you either teach your users to ignore insecure browser warnings, or you tie the server to some kind of online subscription that you manage and generate fake certificates for your customer's private IPs just to get the browsers to shut up.


Private CAs and certs will still be allowed to have longer lives.

This doesn't help that much, since you still have to fiddle with installing the private CA on all devices. Not much of a problem in corporate environments, perhaps, but a pretty big annoyance for any personal network (especially if you want friends to join).

It also ignores the real world, as the CA/Browser Forum admits they don't understand how certificates are actually used in practice. They're just breaking shit to make the world a worse place.

They are calibrated for organizations/users that have higher consequences for mis-issuance and revocation delay than someone’s holiday blog, but I don’t think they’re behaving selfishly or irrationally in this instance. There are meaningful security benefits to users if certificate lifetimes are short and revocation lists are short, and for the most part public PKI is only as strong as the weakest CA.

OCSP (with stapling) was an attempt to get these benefits with less disruption, but it failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.


> OCSP failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.

OCSP is going end-of-life because it makes it too easy to track users.

From Lets Encrypt[1]:

We ended support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, it could accidentally be retained or CAs could be legally compelled to collect it. CRLs do not have this issue.

[1]: https://letsencrypt.org/2025/08/06/ocsp-service-has-reached-...


> They're just breaking shit to make the world a worse place.

Well, it's the people who want to MITM that started it; a lot of effort has been spent on a Red Queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones, you could avoid spending a lot on security.


Everything that uses TLS is publicly routable and runs a web service, don't you know.

Well, you could also give every random server you happen to configure an API key with the power to change any DNS record it wishes... what could go wrong?

#security


That’s why the HTTP-01 challenge exists - it’s perfect for public single-server deployments. If you’re doing something substantial enough to need a load balancer, arranging the DNS updates (or centralizing HTTP-01 handling) is going to be the least of your worries.

Holding public PKI advancements hostage so that businesses can be lazy about their intranet services is a bad tradeoff for the vast majority of people that rely on public TLS.


and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?

There are more things on the internet than web servers.

You might say “use DNS-01”; but that's reductive: I'm letting any node control my entire domain (and many of my registrars don't even allow API access to records, let alone an API key that's limited to a single record; even cloud providers don't have that).

I don't even think mail servers work well with the Let's Encrypt model unless it's a single server for everything without redundancies.

I guess nobody runs those anymore though, and, I can see why.


I've operated things on the web that didn't use HTTP but used public PKI (most recently, WebTransport). But those services are ultimately guests in the house of public PKI, which is mostly attacked by people trying to skim financial information going over public HTTP. Nobody made IRC use public PKI for server verification, and I don't know why we'd expect what is now an effectively free CA service to hold itself back for any edge case that piggybacks on it.

> and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?

The certificate you get for the domain can be used for whatever the client accepts it for - the HTTP part only matters for the ACME provider. So you could point port 80 to an ACME daemon and serve only the challenge from there. But this is not necessarily a great solution, depending on what your routing looks like, because you need to serve the same challenge response for any request to that port.
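As a sketch of the simplest single-node version of that (domain made up): certbot's standalone mode binds port 80 itself just for the duration of the challenge, so nothing else on the box has to speak HTTP.

    # nothing needs to be listening on port 80 beforehand; certbot answers the
    # challenge itself and then exits
    certbot certonly --standalone -d irc.example.net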

> You might say “use DNS-01”; but thats reductive- I’m letting any node control my entire domain (and many of my registrars don’t even allow API access to records- let alone an API key thats limited to a single record; even cloud providers dont have that).

The server using the certificate doesn't have to be the one going through the ACME flow, and once you have multiple nodes it's often better that it isn't. It's very rare for even highly sophisticated users of ACME to actually provision one certificate per server.


FWIW, there are ways to use DNS-01 without an API key that can control your entire domain.

https://hsm.tunnel53.net/article/dns-for-acme-challenges/


Putting DNS API keys on every remote install is indeed problematic.

The solution however is pretty trivial. For our setup I just made a very small server with a couple of REST endpoints.

Each customer gets their own login to our REST server. All they do is ask "get a new cert".

The DNS-01 challenge is handled by the REST server, and the cert then supplied to the client install.

So the actual customer install never sees our DNS API keys.
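From the client install's point of view that can be as small as a single authenticated request; a sketch with a made-up endpoint and auth scheme:

    # ask the central service (which holds the DNS API keys) for a fresh cert
    curl -fsS -u "customer42:$API_TOKEN" \
        "https://certs.example.internal/v1/new-cert?domain=site42.example.com" \
        -o /etc/ssl/private/site42.pem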


If it doesn’t run a web service, or isn’t publicly routable - why do you need it to work on billions of users browsers and devices around the world?

Are we pretending browsers aren’t a universal app delivery platform, fueling internal corporate tools and hobby projects alike?

Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP over TLS; and TLS secures far more, from APIs and email to VPNs, IoT, and non-browser endpoints? Both are bunk; take your pick.

Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).

Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.

And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah

Inspired.


> Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).

It does none of these. Putting more elbow grease into your ACME setup with existing, open source tools solves this for basically any use case where you control the server. If you're operating something from a vendor you may be screwed, but if I had a vote I'd vote that we shouldn't ossify public PKI forever to support the business models of vendors that don't like to update things (and refuse to provide an API to set the server certificate programmatically, which also solves this problem).

> Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.

Yes, but unironically. If rotating certs is a once a year process and the guy who knew how to do it has since quit, how quickly is your org going to rotate those certs in the event of a compromise? Most likely some random service everyone forgot about will still be using the compromised certificate until it expires.

> And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah

Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.


> Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.

No, it's literally nothing, since you can just create whatever TLS cert you want and just MITM anyway.

What do you think you're protecting from? Passive snooping via port-mirroring?

Taps are generally more sophisticated than that.

How do I establish trust with Google? How do they establish trust with me? I mean, we're not using the system designed for it, so clearly it's not possible; otherwise they would have enabled this option at the minimum.


Because the service needs to be usable from non-managed devices, whether that be on the internet or on an isolated wifi network.

Very common in mobile command centres for emergency management, inflight entertainment systems and other systems of that nature.

I personally have a media server on my home LAN that I let my relatives use when they’re staying at our place. It has a publicly trusted certificate I manually renew every year, because I am not going to make visitors to my home install my PKI root CA. That box has absolutely no reason to be reachable from the Internet, and even less reason to be allowed to modify my public DNS zones.


Sure, but in those examples - automation and short-lifetime certs are totally possible.

Except when it's not, because the system rarely (or never) touches the Internet.

It might never 'touch' the internet, but the certificates can be easily automated. They don't have to be reachable on the internet, they don't have to have access to modify DNS - but if you want any machine in the world to trust it by default, then yes - there'll need to be some effort to get a certificate there (which is an attestation that you control that FQDN at a point-in-time).

and we're back to: How do I create an API token that only enables a single record to be changed on any major cloud provider?

Or.. any registrar for that matter (Namecheap, Gandi, Godaddy)?

The answer seems to be: "Bro, you want security so the way you do that is to give every device that needs TLS entire access to modify any DNS record, or put it on the public internet; that's the secure way".

(PS: the way this was answered before was: "Well then don't use LE and just buy a certificate from a major provider", but, well, now that's over).


There are ways to do this as pointed out below - CNAME all your domains to one target domain and make the changes there. There’s also a new DCV method that only needs a single, static record. Expect CA support widely in the coming weeks and months. That might help?

One answer I've seen to this (very legitimate) concern is using CNAME delegation to point _acme-challenge.$domain to another domain (or a subdomain) that has its own NS records and dedicated API credentials.
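Concretely, the main zone only ever needs a single static CNAME, and the API credentials live with the delegated zone; a sketch with placeholder names:

    # one-time record in the main zone (no API access to it needed afterwards):
    #   _acme-challenge.example.com.  CNAME  example.com.acme.challenge-zone.net.
    #
    # sanity-check the delegation before pointing the ACME client at it:
    dig +short CNAME _acme-challenge.example.com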

It’s a stupid policy. To solve the non-existent problem with certificates, we are pushing the problem to demonstrating that we have access to a DNS registrar’s service portal.

It's not really a stupid problem, it's the BygoneSSL problem: https://www.certkit.io/blog/bygonessl-and-the-certificate-th...

It costs more to Let's Encrypt.

Yeah, the best/worst part of this is that nobody was stopping the 'enlightened' CA/Browser Forum from issuing shorter certificates for THEIR fleets, but no, we couldn't be allowed to make our own decisions about how we best saw the security of the communications channel between ourselves and our users. We just weren't to be allowed to be 'adult' enough. The ignorance about browser lock-in, too, is rad. I guess we could always, as they say, create a whole browser from scratch to obviate the issue, one with sane limitations on certificate lifetimes.

I fear this reflects two misunderstandings.

First, one of the purposes of shorter certificates is to make revocation easier in the case of misissuance. Just having certificates issued to you be shorter-lived doesn't address this, because the attacker can ask for a longer-lived certificate.

Second, creating a new browser wouldn't address the issue because sites need to have their certificates be acceptable to basically every browser, and so as long as a big fraction of the browser market (e.g., Chrome) insists on certificates being shorter-lived and will reject certificates with longer lifetimes, sites will need to get short-lived certificates, even if some other browser would accept longer lifetimes.


I always felt like #1 would have better been served by something like RPKI in the BGP world. I.e. rather than say "some people have a need to handle ${CASE} so that is the baseline security requirement for everyone" you say "here is a common infrastructure for specifying exactly how you want your internet resources to be able to be used". In the case of BGP that turned into things like "AS 42 can originate 1.0.0.0/22 with maxlength of /23" and now if you get hijacked/spoofed/your BGP peering password leaks/etc it can result in nothing bad happening because of your RPKI config.

The same in web certs that could have been something like "domain.xyz can request non-wildcard certs for up to 10 days validity". Where I think certs fell apart with it is they placed all the eggs in client side revocation lists and then that failure fell to the admins to deal with collectively while the issuers sat back.

For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.


> "domain.xyz can request non-wildcard certs for up to 10 days validity"?

You could be proposing two things here:

(1) Something like CAA that told CAs how to behave.

(2) Some set of constraints that would be enforced at the client.

CAA does help some, but if you're concerned about misissuance you need to be concerned about compromise of the CA (this is also an issue for certificates issued by the CA the site actually uses, btw). The problem with constraints at the browser is that they need to be delivered to the browser in some trustworthy fashion, but the root of trust in this case is the CA. The situation with RPKI is different because it's a more centralized trust infrastructure.

> For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.

I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.


The RPKI-alike is more akin to #1, but avoids the step of trying to bother trusting compromised CAs. I.e., if a CA is compromised you revoke and regenerate the CA's root keys, and that's what gets distributed, rather than relying on individual revocation checks for each known questionable key or just sitting back for 45 days (or whatever period) to wait for anything bad to expire.

> I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.

Same reasoning between us I think, just a difference in interpreting what it was saying. Kind of like sarcasm - a "yes, you can do it just as they say" which in reality highlights "no, you can't actually do _it_ though" type point. You read it as solely the former, I read it as highlighting the latter. Maybe GP meant something else entirely :).

That said, I'm not sure I 100% agree it's really related to the strictest major browser does alone though. E.g. if Firefox set the limit to 7 days then I'd bet people started using other browsers vs all sites began rotating certs every 7 days. If some browsers did and some didn't it'd depend who and how much share etc. That's one of the (many) reasons the browser makers are all involved - to make sure they don't get stuck as the odd one out about a policy change.


Thanks for Let's Encrypt btw. Irks about the renewal squeeze aside, I still think it was a net positive move for the web.


Some users will be able to opt-in to automatically getting a new cert every 7 days at some point [1].

[1]: https://letsencrypt.org/docs/profiles/#shortlived


I don't feel the problem of a rogue CA misissuing is addressed by the shorter lifetime either; the tradeoff isn't worth it. The best assessment of the whole CA problem is summed up by Moxie: https://moxie.org/2011/04/11/ssl-and-the-future-of-authentic...

And, well, the create-a-browser bit was a joke; it's what I've seen suggested for those who don't like the new rules.


I just post the password semi-publicly on some scratchpad (like maybe a secret gist that's always open in the browser, or for 2FA a custom web page with the generator built in) if any of those policies get too annoying. Brings the number of factors back to one and bypasses the "can't use previous 300000 passwords" BS. Works every time.

HTML is the API.

GNU utils are battle-tested, well reviewed, and STABLE. That's really what I want in an OS: stability. Rust solves only one class of security issues; it cannot solve logic errors, of which there will be many in a new software project.

I just don't see what there is to gain from suffering through years of instability, waiting for a userspace suite to mature and reach feature parity, when we have a well-understood and safe toolset now.

Maybe in five years, when the Rust coreutils is complete, I'd be okay with Ubuntu replacing userland with it. But we're not there yet, and it's a problem we shouldn't have to tolerate.

Also, I can't stand that we're leaving GPL code behind for MIT.


Luckily, the existence of uutils doesn’t change the fact that GNU coreutils exists. In fact, it’s helped improve the stability of the GNU coreutils by clarifying intended behavior and adding test cases. So if you prefer them, you should stick to them. Nobody is taking anything from you.

So I guess to properly clarify, I absolutely do not mind that someone wants to build coreutils in Rust. I don't have a problem with Rust Coreutils existing.

The problem, and the real issue I have, is that this project is being used as the default in major Linux distros. Eager adoption of this project, and making it the production target, does take things away from me. The interface has changed and stability is affected. Correctness is now measured against this incomplete implementation first, not the known-correct and stable GNU coreutils.


That’s not what is happening. One distro is kicking the tires on using this by default. The purpose is exactly because the GNU versions are being treated as the proper versions. Divergences from them are being fixed, so that this new version follows those. You can only do that by actually trying them out, because it’s impossible for the test suite to cover every behavior.

> That's not what is happening. One distro is kicking the tires on using this by default.

Many people call Ubuntu flavors distributions. This includes Ubuntu developers.

Ubuntu made it default. The tire kicking analogy was incorrect.

> The purpose is exactly because the GNU versions are being treated as the proper versions. Divergences from them are being fixed, so that this new version follows those. You can only do that by actually trying them out, because it’s impossible for the test suite to cover every behavior.

You should assume everyone understands how Ubuntu's decision would benefit this project. You should assume most Ubuntu users do not care.


> Many people call Ubuntu flavors distributions. This includes Ubuntu developers.

You seem mad that a Linux distribution (Ubuntu) is trying this software out. Why do you care so much? Do you expect some of the programs you use to break? Have they?

If you don't want to use uutils, I have good news. You can opt out. Or use Ubuntu LTS. Or use a different distribution entirely. I suspect you're mad for a different reason. If all the tests passed, would you still be mad? Do you feel a similar way about angry projects like Alpine Linux, which ship code built on musl? All the same compatibility arguments apply there. Musl is also not 100% compatible with glibc. How about LLVM? Do you wish we had fewer web browsers?

Or maybe, is it a Rust thing in particular? Like, if this rewrite were in C, C++, or Go, would you feel the same way? Are you worried more components of Linux will be ported to Rust? (And if so, why?)

Ultimately the strength (and weakness) of Linux is that you're not locked in to anything. I don't understand how the existence of this software could make your life worse. If anything, it sounds like it might be helping to clarify your stance on OS stability. If you want to make a principled stand there, there are plenty of stable Linux distributions which will mirror your values (e.g. Debian, Ubuntu LTS, etc). Or you can just opt out of this experiment.

Given all of that, the tone I'm inferring from your comments seems disproportionate. What's going on? Or am I misreading you?


You thought I was angry? What would you call Linus Torvalds when someone broke user space?[1]

You seemingly confused blunt responses to repetitive, condescending, specious, or false statements with anger at Canonical.

I made no objection to any software existing.

I like Rust. It was unfortunate this experiment supported stereotypes of Rust fanatics promoting Rust without respect for stability.

I reject the view that users should have to wait two years for bug fixes and features, silently accept all experiments, or silently switch to a distribution with less third-party support and, inevitably, other issues.

The opt-out process I saw required --allow-remove-essential. It would be irresponsible to recommend this.

A more responsible way to conduct this experiment would have been opt-in first, then phased, then opt-out for everyone. And waiting until all tests passed would have been better, of course.

[1] https://lkml.org/lkml/2012/12/23/75


It is expressly described as an experiment. Making it the default does not preclude it being an experiment. It’s how you get broad enough usage to see if it’s ready. If it isn’t by the time for LTS, then it’ll be unmade as the default. That’s what an experiment is.

> It is expressly described as an experiment. Making it the default does not preclude it being an experiment.

Calling something an experiment does not make it exempt from criticism.

> It’s how you get broad enough usage to see if it’s ready.

My understanding was that it was known not to be 100% compatible. And what did I say you should assume?

> If it isn’t by the time for LTS, then it’ll be unmade as the default.

People use non-LTS releases for non-experimental purposes.


Of course it’s not exempt from criticism. But suggesting something is permanent and final when it expressly is not is a poor criticism.

All software has bugs. Plus, not every bug is in the test suite. There are open bugs in all of the software shipped by every distro. Software can be ready for use even if there are known bugs in corner cases. Regular coreutils has open bugs as well.


> But suggesting something is permanent and final when it expressly is not is a poor criticism.

No one did this.

> All software has bugs. Plus, not every bug is in the test suite. There are open bugs in all of the software shipped by every distro. Software can be ready for use even if there are know bugs in corner cases. Regular coreutils has open bugs as well.

Stop speaking as if other people know nothing of software development. GNU does not break compatibility knowingly and with no user benefit.


This project is not knowingly breaking compatibility. It expressly considers the GNU behavior to be the correct one.

Canonical broke compatibility knowingly and with no user benefit when they made these utilities default in Ubuntu 25.10. The point was that saying the GNU utilities have bugs was specious.

Ubuntu is using uutils experimentally in a non-LTS release. This kind of widespread testing will speed up the development process. Won't be long before it catches up and surpasses GNU coreutils. Then what? You want people to not use it? why?

One of the major problems with C (one which, like a lot of C's issues, Rust just doesn't have) is that it's getting more difficult to find young, eager programmers willing to maintain a C codebase. The hassle of C outweighs the rewards, especially when Rust exists. So, ceteris paribus, development on the Rust version will outpace the C version, and you'll get more and smarter eyes on the codebase.

Best to put the C code out to pasture, i.e. in maintenance mode only, with a deprecation plan in place.


It sounds like your beef is with Ubuntu for shipping some of this code. Not with the project for existing and fixing all the compatibility issues that you seem to care a great deal about.

If you want a purely gnu userland with gpl code and strong stability guarantees, Ubuntu is almost certainly the wrong distribution for you. Plenty of Linux distributions are far more stable, and won’t replace coreutils, maybe forever. (And if this is aiming to be bug for bug compatible, they won’t ever have to.)

As for the GPL, this isn't new; there have been BSD/MIT-licensed alternatives to coreutils for decades. You know, in FreeBSD and friends. It's only aiming for 100% Linux compatibility that's new. And, I guess, shipping it in Linux. But let's be real, the GPLv3 is a pretty toxic license. By trying so hard to preserve user freedom, it becomes a new tyranny for developers. If you build a web-based startup today hosted on top of Linux, you might be in breach of the GPL. What a waste of everyone's time. The point of open source to me is that nobody can tell me what I'm allowed to do with my computer. And that includes RMS.


well, sudo-rs had a few privilege escalation CVEs recently. So there has been some recent evidence in favor of the stability argument. I think it’s worthwhile to RiiR in general but I’ll be waiting a few more years for things to mature.

> well, sudo-rs had a few privilege escalation CVEs recently. So there has been some recent evidence in favor of the stability argument.

it would probably be a lot stronger an argument if sudo hadn’t also had a few privilege escalation CVEs recently.


87.75% compatibility, as measured by a comprehensive but incomplete test suite. They want 87.75% compatibility to be an accurate measure, but we know that in reality the real number is lower.

Also, I have major issues with dumping GPL userspace utilities for an MIT-licensed suite that is known not to be feature complete, only, and literally only, because it was written in Rust. This does not make sense, and this is not good for users.


The question is going to be how much of that unknown/untested percentage actually matters. I mean, there's even a question of how much the 12.25% of known test regressions actually matter.

> Also, I have major issues with dumping GPL userspace utilities for an MIT-licensed suite that is known not to be feature complete, only, and literally only, because it was written in Rust. This does not make sense, and this is not good for users.

Thinking about it, I guess I have to agree. This allows Ubuntu to avoid releasing security-fix patches if they so choose. You can't do that with GPLed code. It means they can send out binary security fixes and delay the source code release for as long as they like, or indefinitely. Which is pretty convenient for a company that sells extended security support packages.


> This allows ubuntu to avoid releasing security fixing patches if they so choose. You can't do that with GPLed code. It means they can send out binary security fixes and delay the source code release for as long as they like or indefinitely

The GPL does not state that the source code for any modification must be released immediately; it doesn't even set any kind of time limit, so it technically doesn't prevent indefinite delays either.


> there's even a question of how much the 12.25% of known test regressions actually matter.

I would think that the regression tests are actually the most worthwhile targets for the new project to validate against: they represent real-world usage and logic corner cases that are evidently easy to get wrong. These are not the kind of bugs that Rust is designed to eliminate.


I agree. But I don't know whether the 12.25% of test regressions are regression tests or unit tests from the GNU coreutils.

I believe Ubuntu simply copied and transposed a bunch of tests from GNU coreutils, and that's where these ultimately came from. That doesn't really mean that all these tests arose due to regressions (for sure some probably did).


To be clear, Ubuntu did nothing. This is a third party implementation that Ubuntu decided to ship in their OS.

To me moving from MIT to GPL is a downgrade regardless of features. Not everything is about features. Some people also care that their work can't be re-utilized as a tool by Big Corp in their march forward to subjugate their users.

You meant moving to MIT from GPL?

Yes. Sorry, I'm retarded.

Part of this project has been writing a lot of new tests, which are run on both GNU coreutils and rust coreutils. Some of these tests have found bugs in the original GNU coreutils.

This does not make sense to you because you are looking at it from a technological standpoint. The reason to rewrite coreutils (or sudo) in Rust is not technological, as there is no merit. Coreutils are titanium-rock-stable tools that no one asked to rewrite.

And this is precisely why the worst Rust evangelists aim to rewrite it: virtue signaling with no suffering of the opposing party is not good enough.


Also, I don't really get why coreutils would be a worthwhile candidate for a Rust rewrite. A rewrite of curl/wget or sudo I can understand, but what's the security benefit to improved memory safety for CLI tools that are only ever run with same-user privileges? Even if there's an exploitable bug there, there's no avenue for privilege escalation.

> CLI tools that are only ever run with same-user privileges?

You don't think these are ever run with sudo/runas/pkexec/run0 or otherwise invoked by a program running as root?

That said I do think things like sudo, ssh, gpg, maybe systemd, http servers like nginx and apache etc. are more valuable to replace with tools written in rust (or more generally a "memory safe language"). But that doesn't mean rewriting coreutils isn't valuable.


Because the reasons to replace coreutils with the Rust rewrite are not technological, they are political. And thus aiming to rewrite something very core and stable is the correct approach to enrage the opposite party.

> comprehensive, but incomplete

????


The gnu project is more than welcome to make its own moves away from C.

The GNU project can't go to the men's room without a thumbs up from Stallman, who is so disconnected from how real people do their computing that by his own statement he hasn't written any material amount of code in almost 20 years and can't even figure out how to update his own website, instead relying on volunteers to do so.

Stallman comes from the era when C was good enough, because computing was not a hostile environment like it is today.

GNU is never going to "rewrite it in rust" as long as he's living, and probably for several years afterwards.

In other words, it's a social problem not a technical one.


In fact, it is not a problem at all.

Let new generations of Free Software orgs come along and supplant GNU with a GBIR (GNU But In Rust), but don't insist on existing, established things that are perfectly good for who and what they are to change into whatever you prefer at any given moment.


I would assume some good faith on their part. Verification would be valuable, but so would timely release of information. If the reports are true, active harm is being done to those organizations, and it would be valuable for the public to know sooner rather than later. If you attempt to verify the information, but it's taking more time and resources than you have to do the job quickly, releasing the information with attribution to a reputable source is the least harmful option.

> but so would timely release of information. If the reports are true, an active harm to those organizations are being done, and it would be valuable for the public to know sooner than later.

I do not believe that that is The Guardian’s goal with this reporting. If it were, wouldn’t it make more sense to list the organizations (provide actionable information), rather than spending time telling a story?

I also have a hard time seeing the harm or the size thereof without knowing more context about any of the organizations, what they do, and how much they rely or depend on Facebook to be effective.

If I were an organization that had my Facebook account suspended unfairly or unjustly, I would simply find a different way to stay in touch with others. Meta does not owe me anything.


I wouldn't be surprised to see Zig in the kernel at some point.

I would be. Mostly because while Zig is better than C, it doesn't really provide all that much benefit, if you already have Rust.

I personally feel that Zig is a much better fit for the kernel. Its C interoperability is far better than Rust's, it has a lower barrier to entry for existing C devs, and it doesn't have the constraints that Rust does. All whilst still bringing a lot of the advantages.

...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.

Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.


From a safety perspective there isn't a huge benefit to choosing Zig over C with the caveat, as others have pointed out, that you need to enable more tooling in C to get to a comparable level. You should be using -Wall and -fsanitize=address among others in your debug builds.

You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).

The C interop is good but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well you're restricted by the ABI so you end up writing C-in-Zig which you may as well be writing C.

It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.

But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.


I’m not so sure. The big selling point for Rust is making memory management safe without significant overhead.

Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.

I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

And it seems unlikely they’d go through all the effort of porting safer Rust code into less safe Zig code just for ergonomics.


> Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.

Not like Rust does, no, but that's the point. It brings both non-nullable pointers and bounded pointers (slices). Those solve a lot of problems by themselves. Tracking allocations is still a manual process, but with `defer` semantics there are many fewer footguns.

> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

The jump from 2 to 3 is smaller than the jump from 1 to 2, but I generally agree.


> I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.

That was where my argument was supposed to go. Especially a third language whose benefits over C are close enough to Rust's benefits over C.

I can picture an alternate universe where we'd have C and Zig in the kernel, then it would be really hard to argue for Rust inclusion.

(However, to be fair, the Linux kernel has more than C and Rust, depending on how you count, there are quite a few more languages used in various roles.)


IMHO Zig doesn't bring enough value of its own to be worth bearing the cost of another language in the kernel.

Rust is different because it both:

- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities,

- and reduces the cognitive burden for contributors by allowing the invariants that must be upheld to be encoded in the type system.

That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)


Pardon my ignorance, but I find the claim "removing the nastiest class of security vulnerabilities" to be a bold one. Is there ZERO use of "unsafe" Rust in kernel code??

Aside from the minimal use of unsafe being heavily audited and the only entry point for those vulnerabilities, it allows for expressing kernel rules explicitly and structurally whereas at best there was a code comment somewhere on how to use the API correctly. This was true because there was discussion precisely about how to implement Rust wrappers for certain APIs because it was ambiguous how those APIs were intended to work.

So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).

In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)


Not zero, but Rust-based kernels (see redox, hubris, asterinas, or blog_os) have demonstrated that you only need a small fraction of unsafe code to make a kernel (3-10%) and it's also the least likely places to make a memory-related error in a C-based kernel in the first place (you're more likely to make a memory-related error when working on the implementation of an otherwise challenging algorithm that has nothing to do with memory management itself, than you are when you are explicitly focused on the memory-management part).

So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as an anecdotal evidence, the Android team found memory defects to be between 3 and 4 orders of magnitude less in practice over the past few years).


It removes a class of security vulnerabilities, modulo any unsound unsafe (in compiler, std/core and added dependency).

In practice you see several orders of magnitude fewer segfaults (like in Google Android CVE). You can compare Deno and Bun issue trackers for segfaults to see it in action.

As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.


“by removing the nastiest class of security vulnerabilities” and “reduce the likelihood” don’t seem to be in the same neighborhood.

If you are reducing the likelihood of something by 99%, you are basically eliminating it. Not fully, but it’s still a huge improvement.

It reminds me of this fun question:

What’s the difference between a million dollars and a billion dollars? A billion dollars.

A million dollars is a lot of money to most people, but it’s effectively nothing compared to a billion dollars.


Dividing their number by 1000[1] is technically the latter, but in practice it's pretty much the former.

[1]: this the order of magnitude presented in the recent Android blog post: https://security.googleblog.com/2025/11/rust-in-android-move...

> Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.


In theory they are the same statement; in practice there is 0.01% chance someone wrote unsound code.

"Unsafe" rust still upholds more guarantees than C code. The rust compiler still enforces the borrow checker (including aliasing rules) and type system.

You can absolutely write drivers with zero unsafe Rust. The bridge from Rust to C is where unsafe code lies.

And hardware access. You absolutely can't write a hardware driver without unsafe.

There are devices that do not have access to memory, and you can write a safe description of such a device's registers. The only thing that is inherently unsafe is building DMA descriptors.

Zig as a language is not worth it, but as a build system it's amazing. I wouldn't be surprised if Zig gets in just because of the much better build system than C ever had (you can cross-compile not only across OSes, but also across architectures and C stdlib versions, including musl). And with that comes the testing system and seamless interop with C, which makes it really easy to start writing some auxiliary code in Zig... and eventually it may just be accepted for any development.

I agree with you that it's much more interesting than the language, but I don't think it matters for a project like the kernel that already has its build system sorted out. (Especially since, no matter how nice and convenient Zig makes cross-compilation if you start a project from scratch, even in Rust thanks to cargo-zigbuild, it would require a lot of effort to migrate the Linux build system to Zig, only to realize it doesn't support all the needs of the kernel at the start.)


Zig at least claims some level of memory safety in their marketing. How real that is I don't know.

About as real as claiming that C/C++ is memory safe because of sanitizers IMHO.

I mean, Zig does have non-null pointers. It prevents some UB. Just not all.

Which you can achieve in C and C++ with static analysis rules, breaking compilation if pointers aren't checked for nullptr/NULL before use.

Zig would have been a nice proposition in the 20th century, alongside languages like Modula-2 and Object Pascal.


I'm unaware of any such marketing.

Zig does claim that it

> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free

which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".


Runtime checks can only validate code paths taken, though. Also, C sanitizers are quite good as well nowadays.

That's a library feature (not intended for release builds), not a language feature.

It is intended for release builds. The ReleaseSafe target will keep the checks. ReleaseFast and ReleaseSmall will remove the checks, but those aren't the recommended release modes for general software. Only for when performance or size are critical.

DebugAllocator essentially becomes a no-op wrapper when you use those targets.

I have heard different arguments, such as https://zackoverflow.dev/writing/unsafe-rust-vs-zig/ .
