Your phrasing implies that the Greenfield rewrite is always a mistake.
Depending on the project, it might actually be the only sane option: you're still required to make significant changes, features have to be added continuously, and the project has already accumulated so much technical debt that even minor changes, such as mapping an additional attribute, can take days.
As an easily explained example: I remember an Angular frontend I had to maintain a while ago.
The full scope of the application was to display a dynamic form with multiple inputs whose availability depended on the choices already made (so if question 1 had answer a, questions 2 and 3 had to be answered, and so on).
While I wouldn't call such behavior completely trivial, on a difficulty scale it was definitely towards the easy end - but the actual code was so poorly written that any tiny change led to regressions.
It was quite silly that we weren't allowed to invest the ~2 weeks it would have taken to recreate it.
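To make that interdependence concrete, here is a minimal sketch of the kind of rule involved (the question IDs, answer values, and Rule shape are made up for illustration, not the actual application's code):

    // Sketch of interdependent form questions (hypothetical IDs and values).
    type Answers = Record<string, string | undefined>;

    interface Rule {
      requires: string[];                  // questions that become mandatory
      when: (answers: Answers) => boolean; // condition on earlier answers
    }

    const rules: Rule[] = [
      // If question 1 was answered with "a", questions 2 and 3 must be answered.
      { requires: ["q2", "q3"], when: (a) => a["q1"] === "a" },
    ];

    function missingAnswers(answers: Answers): string[] {
      return rules
        .filter((rule) => rule.when(answers))
        .flatMap((rule) => rule.requires)
        .filter((q) => answers[q] === undefined);
    }

    // q1 answered "a", q2 answered, q3 still missing:
    console.log(missingAnswers({ q1: "a", q2: "yes" })); // ["q3"]

Nothing fancy - which was exactly the point: the behavior was simple even though the codebase made it look hard.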
Another example that comes to mind is a backend rewrite of a multi-million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied requests through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
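That approach (a strangler-fig style migration) is simple to sketch. This is not their code - the project in question was Java - just a minimal TypeScript/Node illustration with made-up hostnames and routes: the new service answers the routes it has already reimplemented and forwards everything else to the legacy API.

    import * as http from "http";

    const LEGACY_HOST = "legacy-api.internal";                  // hypothetical old PHP API
    const reimplemented = new Set(["/v1/users", "/v1/orders"]); // routes already ported

    const server = http.createServer((req, res) => {
      const path = (req.url ?? "/").split("?")[0];

      if (reimplemented.has(path)) {
        // Route has been rewritten: handle it in the new backend.
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ handledBy: "new-backend", path }));
        return;
      }

      // Not ported yet: proxy the request through to the legacy API unchanged.
      const upstream = http.request(
        { host: LEGACY_HOST, path: req.url, method: req.method, headers: req.headers },
        (legacyRes) => {
          res.writeHead(legacyRes.statusCode ?? 502, legacyRes.headers);
          legacyRes.pipe(res);
        }
      );
      req.pipe(upstream);
    });

    server.listen(8080);

As routes get ported they move out of the proxy path, until nothing is forwarded anymore and the old API can be switched off.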
But yeah, lots of greenfield rewrites end in disaster. I'll give you that!
My company spent a billion dollars and several years on a rewrite of a core product a few years ago.
I'm convinced in hindsight that we could have just refactored in place and been just as well off. Sure, some code would still be the ugly mess that made us jump to the big rewrite in the first place, but we would have had working code to ship all along. More importantly, while we fixed a lot of problems in the rewrite, we introduced other problems we didn't anticipate, and fixing them means either another rewrite or an in-place refactor. The in-place refactor gives one advantage: if whatever new hotness we choose doesn't work in the real world, we discover that before it becomes a difficult-to-change architectural decision.
A few years after a greenfield rewrite, the codebase is going to drift back to the same level of quality it had before the rewrite, because code quality is a function of the competency of the team, not the tech stack.
The only time it really makes sense to do a rewrite is when either a new architecture/technology is going to be used that will raise team competency, or the team's competency has improved significantly but is being held back by the legacy application.
In both of those situations, though, you could and should absolutely cut the application into pieces and rewrite it in place, in digestible, testable chunks.
The only time it makes sense to do a wholesale greenfield rewrite is for political reasons. For example, maybe you have a one-time funding opportunity with a hard deadline (rewrite before an acquisition or IPO, etc.).
I think it is safe to say we have improved as a company between when the code was started and the rewrite. And we have improved a lot since then.
We have also improved a lot as an industry. The rewrite was started in C++98 because C++11 was still a couple of years away. Today C++23 has a lot of nice things, but some of the core APIs are still C++98 (though fewer every year) because that is all we had the ability to use. And of course Rust wasn't an option back then, but if we were to start today it would get a serious look.
Once we did a major rewrite from Perl/C++/CORBA into Java, during the glory days of Java application servers: three years of development that eventually went nowhere, or maybe it did, who knows now.
In hindsight, cleaning up that Perl and C++ code, even given where both languages stand today, would have been a much better outcome than everything else that was produced out of that rewrite.
But hey, we all got to improve our CVs during that rewrite and got assigned better roles at the end, so who cares. /s
> Another example that comes to mind is a backend rewrite of a multi-million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied requests through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
Their next example was exactly what you asked for: a 2-year rewrite.
Bonus points from me because they didn't wait for the whole rewrite to be done, and instead started using the new project by replacing only parts of the original one.
If you are not architected around the bridge, it is really hard to add it later. Probably the biggest advantage of microservices is that they have built-in bridges. Monoliths often have no obvious way to break things up: every time you think you want to, you discover some other headache.
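One way to read "built-in bridges": a service boundary forces callers through an interface, so whatever sits behind it can be replaced without touching the callers. A monolith can carve the same seam deliberately - a rough sketch, with a made-up InvoiceService as the example boundary:

    // Callers depend on the interface, not a concrete module, so the
    // implementation can later be swapped for a call to a rewritten service.
    interface InvoiceService {
      totalFor(customerId: string): Promise<number>;
    }

    // Today: the existing in-process implementation.
    class LegacyInvoiceService implements InvoiceService {
      async totalFor(customerId: string): Promise<number> {
        // ...existing monolith logic...
        return 0;
      }
    }

    // Tomorrow: the same interface backed by the rewritten service over HTTP.
    class RemoteInvoiceService implements InvoiceService {
      constructor(private baseUrl: string) {}
      async totalFor(customerId: string): Promise<number> {
        const res = await fetch(`${this.baseUrl}/invoices/total?customer=${customerId}`);
        const body = await res.json();
        return body.total as number;
      }
    }

    // Call sites never change; only the wiring does.
    async function printTotal(svc: InvoiceService, customerId: string) {
      console.log(await svc.totalFor(customerId));
    }

The headache you describe is exactly what happens when that seam was never drawn: there is no interface to swap behind, just call sites everywhere.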