> I hope this churn in .NET builds is temporary, because a lot of people might be looking to go back to something stable, especially after the recent supply chain attacks on the Node ecosystem.
Can you elaborate a bit? This article talks about the internal machinery of building .NET releases. What does that have to do with "this churn", whatever that is?
My guess is that if you build with .NET Framework you can just run your builds forever, but if your source code is based on newer .NET you have to update to a new version each year and deal with all the work of upgrading your entire project. That also means everyone on your team is upgrading their dev environment, and now there are new things in the language and the runtime to deal with, deprecations and all that. Plus, lots of packages don't update as quickly when a version change occurs, so chances are you'll end up doing more work yourself and using as few dependencies as possible, if any, which is itself a lot of work. Instead, if you do need to depend on something, it's best for it to be a big Swiss Army knife kind of library.
I think Node is just more flexible, and unless .NET Framework-style forever releases or much longer-term support make a comeback, there's no good trade-off in moving away from Node, since you don't even get more stability.
> if your source code is based on newer .NET you have to update to a new version each year
.NET has a really refreshingly sane release life cycle, similar to nodejs:
- There's a new major release every year (in November)
- Even numbers are LTS releases, and get 3 years of support/patches
- Odd numbers get 18 months of support/patches
This means if you target LTS, you have 2 years of support before the next LTS, and a full year of overlap where both are supported. If you upgrade every release, you have at least 6 months of overlap.
There are very few breaking changes between releases anyway, and they're often in infrastructure stuff (config, startup, project structure) as opposed to actual application code.
I think it's important to remember that .NET projects can use code built for older releases, to an almost absurd degree, and as long as you don't reach back across the .NET Framework divide, you largely don't need to change anything to move projects to newer frameworks. They mostly just work.
The .Net platform is honestly the most stable it has ever been.
Recent experience report: I updated four of my team's five owned microservices to .net 10 over the past two weeks. All were previously on .net 8 or 9. The update was smooth: for the .net 9 services, I only had to update our base container images and the csproj target frameworks. For the .net 8 services, I also had to update the Mvc.Testing reference in their integration tests.
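For anyone curious, the per-service change was roughly of this shape, plus bumping the FROM tags in the Dockerfiles (e.g. mcr.microsoft.com/dotnet/aspnet:8.0 to :10.0). This is a minimal sketch; the project file name and exact package version are illustrative, not the actual diff:

    <!-- OrderService.csproj (illustrative): bump the target framework -->
    <Project Sdk="Microsoft.NET.Sdk.Web">
      <PropertyGroup>
        <!-- was net8.0 or net9.0 -->
        <TargetFramework>net10.0</TargetFramework>
      </PropertyGroup>
    </Project>

    <!-- Integration test csproj: only the .NET 8 services needed this bump -->
    <ItemGroup>
      <PackageReference Include="Microsoft.AspNetCore.Mvc.Testing" Version="10.0.0" />
    </ItemGroup>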
It's hard for me to imagine a version increment being much easier than this.
Back when .NET Framework was not in the frozen state it is in now, every release came with a list of breaking changes. Modern .NET breaking changes are not worth talking about. Keeping up with the state of the art is more interesting... but that's what it takes to be a solution for today and to stay relevant.
AKA people who are willing to be honest about its faults. .NET gets a lot of social commentary about how stable and robust it is, yet for some reason, on every project where I've had the displeasure of brushing up against a .NET solution, it's always "we're updating", and the update process magically takes longer than building the actual feature.
Or what is a pretty standard feature in other tech stacks needs some bespoke solution that takes 3 dev cycles to implement... and of course there are going to be bugs.
And it's ALWAYS been this way. For some reason .NET has acolytes who have _always_ viewed .NET as the pinnacle of programming frameworks. .NET, Core, .NET Framework, it doesn't matter.
You always get the same comments. For decades at this point.
Except the experience and outcomes don't match the claims.
Just to preempt the reply: I've been pretty familiar with .NET since the 2000s.
What do you mean? The .Net ecosystem has been generalized chaos for the past 10 years.
A few years ago even most people actively working in .Net development couldn't tell what the hell was going on. It's better now. I distinctly recall when .Net Framework v4.8 had been released and a few months later .Net Core 3.0 came out and they announced that .Net Standard 2.0 was going to be the last version of that. Nobody had any idea what anything was.
.Net 5 helped a lot. Even then, MS has been releasing new versions of .Net at a breakneck pace. We're on .Net 10, and .Net Core 1.0 was 9 years ago. There's literally been a major version release every year for almost a decade. This is for a standard software framework! v10 is an LTS version of a software framework with all of 3 years of support. Yeah, it's only supported until 2028, and that's the LTS version.
The only chaos occurred in the transition from .NET Framework to .NET (Core). Upgrading .NET versions is mostly painless now because the breaking changes tend to only affect very specific cases. Should take a few minutes to upgrade for most people.
Except it is a bummer when one happens to have such specific cases.
It never takes a few minutes in a big corp: everything has to be validated, the CI/CD pipelines updated, and now, with .NET 10, IT has to clear permission to install VS 2026.
If you can't get permission to update or change your IDE, the company's processes aren't working at all, tbh. Same if CI/CD is owned by another department that doesn't give a shit.
That is pretty standard at most Fortune 500 companies whose main business is not selling software, where most development is done via consulting agencies.
In many cases you get assigned virtual computers via Citrix/RDP/VNC, and there is a whole infra team responsible for handling tickets of the various contractors.
Similar story at my prior job. Heck, we still had one package that was only built using 32-bit .Net Framework 1.1. We were only just starting to see out-of-memory errors due to exhausting the 2 GB address space in ~2018.
I love the new features of .Net, but in my experience a lot of software written in .Net has very large code bases with a lot of customer specific modifications that must be supported. Those companies explicitly do not want their software framework moving major supported versions as quickly as .Net does right now, because they can't just say "oh, the new version should work just fine." They'd have to double or triple the team size just to handle all the re-validation.
Once again, I feel like I am begging HN to recognize not everyone is at a 25 person microservice startup.
I might be missing something, but the combination of 'we mustn't break anything' and 'we can't test it without 2-3x the team size' sounds like release deadlock until you can test it.
The migrations at the places I've worked have always been a normal ticket/epic. You plan it into the release, do the migration, do the other features planned, run the system tests, fix everything broken, retest, fix, repeat until OK, release.
Otherwise you're hoping you know exactly how things interact and what could possibly have broken, and I doubt anyone knows that. Everyone has, at some point, broken things that at first sight seemed completely unrelated to their changes. Especially in large systems it happens constantly. Probably more than 1% of our merges break the nightly in unexpected places, since no one has the entire system in their head.
Or you're keeping a dead product just barely alive via surgical precision and a lot of prayers that the surgeon remains faultless prior to every release.
On the migrations... read the comments throughout this thread... there are many, and none mention any significant pain points at all, just hypothetical ones from people like you who aren't actually actively using it.
As to the CI/CD pipelines... I just edited my .github/workflows/* to bump the target version, and off to the races... though if you're deploying to bare metal as opposed to containers, it does take a couple of extra steps.
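To be concrete, the edit was on this order (a rough sketch, assuming a typical setup-dotnet workflow; the file, job, and step names here are illustrative):

    # .github/workflows/build.yml (illustrative)
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-dotnet@v4
            with:
              dotnet-version: '10.0.x'   # was '9.0.x'
          - run: dotnet build --configuration Release
          - run: dotnet test --configuration Release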
As to the "permission to install..." that's what happens when companies don't trust the employees that already write the software that can make or break the company anyway... Defelopers should have local admin privs, or a jump box (rdp) that does... or at the very least a linux environment you can remote-develop on that, again, has local admin privs.
I'm currently in a locked-down environment for a govt agency, and it hasn't been an issue. Same for past environments, which include major banking institutions.
I have no idea about most .NET developers. At my current job (a public software company in the US with thousands of employees) it's up to engineers to decide when to upgrade. We upgraded our main monolith app to .NET 10 in the first week.
I've been using .NET since late 2001 (ASP+), including in govt and banking, and have rarely had issues getting timely updates for my local development environment. In the past decade it's become more likely that the dev team controls the CI/CD environment and often the deployment server(s)... though I prefer containerized apps over bare-metal deployments.