I experienced this a few years ago. I was hired with a team of consultants (I know.. I know..) because the system in question would have data randomly disappear from its databases every few weeks, and it would take the operations team a month to notice, so by then the issue was harder to rectify.
The engineering team in question had proposed to rewrite an old Java app with AWS Amplify and replace their Postgres database with DynamoDB. The whole thing was then duct-taped together with Lambdas and spread across multiple regions. They had not bothered writing a single test, never mind having any kind of build and deployment pipeline.. they didn’t even have basic error reporting.
After doing a deeper dive, we discovered that engineers didn’t have a local environment and only had access to staging; however, they could easily connect to the production database from their machines just by updating environment variables. It turned out that one of them had forgotten to switch back after debugging something on production from his machine, and he had a background cleanup job running at intervals which was wiping data..
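To illustrate how thin that safety margin was (all names here are invented; this is just a sketch): when the only thing separating staging from production is an environment variable, a destructive background job can't tell the difference unless it checks explicitly before doing anything, e.g.:

```typescript
// Hypothetical guard a cleanup job could run before touching any data.
// DB_ENDPOINT and APP_ENV are invented variable names for illustration.
const DB_ENDPOINT = process.env.DB_ENDPOINT ?? "";
const APP_ENV = process.env.APP_ENV ?? "local";

function assertSafeToWipe(): void {
  const looksLikeProd = APP_ENV === "production" || /prod/i.test(DB_ENDPOINT);
  if (looksLikeProd) {
    throw new Error(
      `Refusing to run cleanup against ${DB_ENDPOINT} (APP_ENV=${APP_ENV})`
    );
  }
}

assertSafeToWipe();
// ...only now run the actual cleanup logic against DB_ENDPOINT
```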
It was a complete nightmare: the schemaless nature of DynamoDB made it harder to understand what was happening, and the React UI would crash every 15 minutes due to an infinite loop.
The operations team had learned to use the Chrome console and clear local storage manually before the window freezes..
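For anyone curious, that workaround amounts to something like this, typed into the DevTools console (both calls are standard browser APIs; the reload is my assumption about the full ritual):

```typescript
// Wipe everything the app has stored for this origin, then start from a clean slate.
localStorage.clear();
location.reload();
```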
Being charitable, in order for one of those new technologies to become mature and boring, it requires guys like that to actually use it for things. So while it might be misguided, we thank them for their sacrifice of getting caught on every sharp edge so those edges might be dulled (or at least documented) later.
And if the new tech is beneficial and adds enough value to make replacing the old tech worth it, then by all means go for it.
However, I can't count how many companies I've seen decide to get into "the cloud" only to do lift-and-shifts, and they are now running their stack in slower and more expensive ways.
Over the last decade I worked for a fintech that did analytics for the investment banking industry, and between 2016 and 2020 the number of people who were shocked we weren't trying to shove blockchain in somewhere was surreal.
Oh yeah, SOMEONE has to be first.
It just doesn't have to be YOU!
If you want to be successful in your career, when you are put in charge of a big new project (tight timeline, high management visibility, etc.), you dig into your existing tried & true toolkit to get the job done. There are so many other variables; why needlessly add more risk no one asked for?
But yes, I'm glad there are maniacs out there.. I just don't want to work with them.
Granted, I remember Symfony was very good, even in versions 1.x. It truly made working with PHP not suck as much as it did back in the day. I don't remember if there was ever a product out of which it was born.
> you dig into your existing tried & true toolkit to get the job done
Haha. Yes. But when the C-suite is made up of top-level management pushed out of the S&P 500, they always assume it’s their tried and true toolkit from another company. Believe me, it’s never the hammer the current engineering staff is holding. I’m slow clapping so hard for the business school graduates right now…
I've seen projects akin to "Excel sucks because it's written in X, I am very smart, we will rewrite it to run in the browser only with this new framework I read about, backed by microservices in Kotlin/Rust/some new fad, running serverless on AWS (using this new thing that's just out of beta that I saw at re:Invent)".
And we need to stop all new feature work on Excel since it's legacy, so give me 80% of the dev team to do the above. Oh btw, they don't know the alphabet soup of stuff I decided to use, so we will start firing them as well, as I need to hire for these special skills.
Eventually you need to admit COBOL is dead and rewrite. Eventually some library/framework is dead and you need to rewrite. However, there are lots of options, and often your best one is an in-place rewrite of just small parts at a time, slowly getting rid of the legacy code over a couple of decades - there is no real hurry here.
Eventually styles will change and you will have to redo the UI. This will happen much more often than the above. Your program may look very different, but if you have a good architecture this is a superficial change. It may still be expensive, but none of your core logic changes. Normally you keep the old and new UI running side by side (depending on the type of program this may mean different builds; other times it is just a front end) until you trust the new one (depending on details it may be an all-at-once switch or one screen/widget at a time).
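A minimal sketch of what "side by side until you trust the new one" can look like when both UIs ship in the same build (the flag name and render functions below are invented for illustration):

```typescript
// Invented example: flag-gated entry point that keeps the legacy UI as the default
// while the new UI is rolled out and verified.
function renderLegacyUi(): void {
  console.log("rendering legacy UI");
}

function renderNewUi(): void {
  console.log("rendering new UI");
}

const useNewUi = process.env.NEW_UI === "1"; // could equally be a per-user flag
(useNewUi ? renderNewUi : renderLegacyUi)();
```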
Just because it is popular doesn't mean it isn't dead - or that it shouldn't be. COBOL was really innovative in its day, but many of those innovations proved to be bad ideas. However, switching to something else is very hard and expensive - thus it continues on.
Well, I'm not sure I'd say Windows is a good example of that anymore; in fact, I was going to use it to argue the very point: as they rewrite the UI in new frameworks, we've lost a lot of features (not even talking about speed and reliability).
I can, off the top of my head, name at least half a dozen of "those guys" and describe in detail the wreckage they left behind. Including one that was a major contributor to an 80%-of-the-billings client finding an alternative agency resulting in almost 100 people losing their jobs. :sigh:
Your phrasing implies that the greenfield rewrite is always a mistake.
Depending on the project, it might actually be the only sane option if you're still required to make significant changes to the application and features have to be continuously added - and the project has already become so full of technical debt that even minor changes such as mapping an additional attribute can take days.
As an easily explained example: I remember an Angular frontend I had to maintain a while ago.
The full scope of the application was to display a dynamic form with multiple inputs that were interdependent on the selected choices (so if question 1 had answer A, questions 2 and 3 had to be answered, etc.).
While I wouldn't call such behavior completely trivial, on a difficulty scale it was definitely towards the easy end - but the actual code was so poorly written that any tiny change led to regressions.
It was quite silly that we weren't allowed to invest the ~2 weeks to recreate it.
Another example that comes to mind is a backend rewrite of a multi million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
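That setup is essentially the strangler-fig pattern: the new backend sits in front and forwards anything it hasn't reimplemented yet to the old API. A rough sketch of the idea (in TypeScript rather than Java, with invented routes and hostnames):

```typescript
import http from "node:http";

// Invented address of the legacy API that still handles unmigrated routes.
const LEGACY_API = { host: "legacy.example.internal", port: 80 };

// Routes that have already been rewritten in the new backend (invented examples).
const migrated = new Set(["/v2/users", "/v2/orders"]);

const server = http.createServer((req, res) => {
  if (migrated.has(req.url ?? "")) {
    // Serve from the new implementation.
    res.writeHead(200, { "content-type": "application/json" });
    res.end(JSON.stringify({ handledBy: "new-backend", path: req.url }));
    return;
  }

  // Everything else is transparently forwarded to the old API.
  const upstream = http.request(
    { ...LEGACY_API, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
});

server.listen(8080);
```

As more routes move into the new code, the set of proxied paths shrinks until the old API can be switched off.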
But yeah, lots of Greenfield rewrites end in disaster. I give you that!
My company spent a billion dollars and several years on a rewrite of a core product a few years ago.
I'm convinced in hindsight that we could have just refactored in place and been just as well off. Sure, there would be some code that is still the ugly mess that made us jump to the big rewrite in the first place, but we would have had working code to ship all along. Much more importantly, we fixed a lot of problems in the rewrite - but at the same time we introduced other problems we didn't anticipate, and fixing them means either another rewrite or an in-place refactor. The in-place refactor gives one advantage: if whatever new hotness we choose doesn't work in the real world, we will discover that before it becomes a difficult-to-change architecture decision.
A few years after a greenfield rewrite, the codebase is going to go back to the same level of quality it was before the rewrite, because the level of code quality is a function of the competency of the team, not the tech stack.
The only time it really makes sense to do a rewrite is when either a new architecture/technology is going to be used that will impact team competency, or the team's competency has improved significantly but is being held back by the legacy application.
In both of those situations though, you could and should absolutely cut the application into pieces, and rewrite in place, in digestible, testable chunks.
The only time it makes sense to do a wholesale greenfield rewrite is political. For example, maybe you have a one-time funding opportunity with a hard deadline (rewrite before acquisition or IPO, etc.).
I think it is safe to say we have improved as a company between when the code was started and the rewrite. And we have improved a lot since then.
We have also improved a lot as an industry. The rewrite was started in C++98 because C++11 was still a couple of years away. Today C++23 has a lot of nice things, but some of the core APIs are still C++98 (though fewer every year) because that is all we had the ability to use. And of course Rust wasn't an option back then, but if we were to start today it would get a serious look.
Once we did a major rewrite from Perl/C++/CORBA into Java, during the glory days of Java application servers: three years of development that eventually went nowhere, or maybe it did, who knows now.
In hindsight, cleaning up that Perl and C++ code, even given where both languages stand today, would have been a much better outcome than everything else that was produced out of that rewrite.
But hey, we all got to improve our CVs during that rewrite and got assigned better roles at the end, so who cares. /s
> Another example that comes to mind is a backend rewrite of a multi million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
Their next example was exactly what you asked for: a 2-year rewrite.
Bonus points from me because they didn't wait for the whole rewrite to be done, and instead started using the new project by replacing only parts of the original one.
If you are not architected around the bridge, it is really hard to add it later. Probably the biggest advantage of microservices is that they have built-in bridges. Monoliths often have no obvious way to break things up. Every time you think you want to, you discover some other headache.
Turns out brand new stuff doesn't always survive, and even if it does you don't know its tradeoffs & pain points yet.
Everything is perfect & bug-free when it has no product use.
Seen it many times, and seen the wreckage later.