A monolith is fine if you have a fairly simple process for manipulating your data. Like posting an article, then a comment, with some moderation thrown in.
But when you start adding business rules, transforming your data, and moving it around, your monolith becomes too complex and often too expensive to run: lots of moving parts tightly coupled together, long-running transactions wrapping multiple joined tables, etc. Rolling out new features becomes very challenging as well.
Something like event sourcing is more complex to set up upfront than a monolith, but at least it offers a way to add scale and features later without creating a combinatorial explosion...
Ironically, business rules are often much easier to implement in a monolith, since they tend to require access to basically the entire database and have impact across your whole code base.
Not saying it needs to be spaghetti all over the code, mind you. Just that it's easier to have a module within the monolith than a dedicated service.
Especially since hunting down race conditions across microservices must be such fun. Microservices throw away every benefit of a single code base in a single language (and its guarantees). They sometimes make sense, but arguably that "sometimes" is quite rare.
The conversation in this (overall) thread is dancing around issues of experience, competence, and maturity. And the age ceiling forcefully pushed by people like Paul Graham of this very HN. When your entire engineering team consists of "senior developers" with 3 years of experience (lol) and most don't even know what a "linker" does, the fundamental feature of the wunder-architecture is obscure and not understood.
Building effective monoliths absolutely demands competent DB experts, schema designers, and developers. The problem that microservices solved was the scarcity of this sort of talent when demand overshot supply by orders of magnitude.
(In a way, the monolith vs. microservices debates are echoes of the famous impedance mismatch between object graph runtimes and relational tables and DBMSs.)
Why do we need "scale"? The 2nd cheapest Hetzner offering can probably serve a basic CRUD app to a hundred thousand people just fine, with the DB running on the same machine. And you can just buy a slightly more expensive machine if you need scale; horizontal scaling is actually very rarely necessary.
Stack Overflow runs on only a couple of (beefy) machines.
This is a fallacy. Adding a network boundary does not make your application less complex. If you can't make a "monolith" make sense, thinking you can do it in a microservices architecture is hubris. If you think long-running transactions / multiple tables are difficult, try doing that in a distributed fashion.
One of the main "problems" with proposing microservices is that, trivially, there is nothing a microservice can do that cannot be done by a monolith designed with discipline. Over the years my monoliths have grown to look an awful lot like a set of microservices internally, except that they can still benefit from passing things around internally rather than over networks.
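To illustrate what I mean (a sketch with hypothetical module names, not from any real code base): the "internal microservices" are just modules behind narrow interfaces, where the transport between them is a plain in-process method call:

```ruby
# Service-like modules with narrow interfaces; the "transport" between them
# is an ordinary method call on in-memory data, not a serialized network hop.
module Billing
  def self.charge(order)
    # ...talk to the payment gateway here...
    { order_id: order[:id], status: :charged }
  end
end

module Shipping
  def self.dispatch(order)
    { order_id: order[:id], status: :dispatched }
  end
end

module Checkout
  # The only coupling is the two module interfaces above; either module
  # could later be extracted behind a network client with the same signature.
  def self.complete(order)
    receipt = Billing.charge(order)
    Shipping.dispatch(order) if receipt[:status] == :charged
  end
end

Checkout.complete({ id: 7 })
```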
(Meaning generally that, performance-wise, they clean the clock of any microservice-based system. Serializing a structure, shipping it over a network with compression and encryption, deserializing it on the other end, performing some operation, serializing the result, shipping it back with compression and encryption, deserializing it, and possibly linking it back up to internal data structures has a hard time competing with "the data is already in L1, go nuts".)
I've even successfully extracted microservices from them when that became advantageous, and it was a matter of hours, not months, because I've learned some pretty solid design patterns for that.
If you can't design a well-structured monolith, you even more so can't design a microservice architecture.
It's not wise to try to learn too many lessons about what is good and bad from undisciplined, chaotic code bases. Chaos can be imposed on top of any nominal architecture. Chaotic microservices are not any more fun than a chaotic monolith; they're just unfun in a different way. The relevant comparison is a well-structured monolith versus a well-structured microservice architecture, and that's a much more nuanced question.
A few comments point out that replacing a monolith with microservices doesn't reduce complexity. I agree 100%.
That's why I mentioned the Event Sourcing pattern, not "microservices". Think of a single event log as the source of truth where all the data goes, with many consumer processes working in parallel alongside it, picking only those events (and the embedded data) that concern them, reacting to them, then passing them on without knowing what happens later. Loosely coupled, small, self-sufficient components that you can keep adding one next to another, without increasing the complexity of the overall system.
Maybe Event Sourcing/CQRS can be called "microservices done right", but that's definitely not those microservices (micro-monoliths?) everyone is talking about.
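As a minimal sketch of the shape being described (in-memory stand-ins and hypothetical event names; a real system would use a durable log):

```ruby
# One append-only log as the source of truth; each consumer keeps its own
# cursor and reacts only to the event types that concern it.
class Consumer
  def initialize(name, handles:)
    @name = name
    @handles = handles
    @cursor = 0
  end

  def poll(log)
    log[@cursor..].each do |event|
      puts "#{@name} handled #{event[:type]}" if @handles.include?(event[:type])
    end
    @cursor = log.size
  end
end

log = []
billing  = Consumer.new("billing",  handles: ["order_placed"])
shipping = Consumer.new("shipping", handles: ["payment_succeeded"])

log << { type: "order_placed",      order_id: 1 }
log << { type: "payment_succeeded", order_id: 1 }

# Each consumer works independently; adding a new one touches nothing else.
[billing, shipping].each { |c| c.poll(log) }
```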
ES has the potential but is too immature of a pattern to be simple. It’s a shame, but let’s not pretend.
For instance, an immutable event log is illegal in many cases (PII and right-to-erasure rules). So you have to either do compaction on the log or keep the sensitive data in an outside mutable store.
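For illustration, here's the "outside mutable store" option as a sketch (hypothetical names; the point is that the immutable log only holds an opaque reference):

```ruby
require "securerandom"

# PII lives in a separate, mutable store keyed by a random reference;
# the append-only log never contains the PII itself.
PII_STORE = {} # stand-in for a real mutable database

def record_signup(log, name:, email:)
  pii_ref = SecureRandom.uuid
  PII_STORE[pii_ref] = { name: name, email: email } # deletable on request
  log << { type: "user_signed_up", pii_ref: pii_ref }
end

def forget_user(pii_ref)
  PII_STORE.delete(pii_ref) # erasure request honored; the log stays immutable
end

log = []
record_signup(log, name: "Ada", email: "ada@example.com")
forget_user(log.last[:pii_ref])
```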
Another issue is code evolution: if you change your event processing logic, replaying the log yields a different state. Maybe some users or orders will not be created at all. How do you deal with that? Prevent it with tooling/testing, or generate new events for internal actions?
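One common mitigation (a hedged sketch, not tied to any framework) is to version events and keep the old handlers around, so replaying an old event always goes through the logic that originally handled it:

```ruby
# Handlers keyed by [type, version]; replay never reinterprets an old event
# with newer logic, so replayed state stays stable across code changes.
HANDLERS = {
  ["order_placed", 1] => ->(state, e) { state[:orders] << e[:order_id] },
  # v2 added a validation rule; v1 events above keep their original behavior.
  ["order_placed", 2] => ->(state, e) {
    state[:orders] << e[:order_id] if e[:total].to_i > 0
  },
}

def replay(events)
  state = { orders: [] }
  events.each { |e| HANDLERS.fetch([e[:type], e[:version]]).call(state, e) }
  state
end

log = [
  { type: "order_placed", version: 1, order_id: 1 },
  { type: "order_placed", version: 2, order_id: 2, total: 0 }, # rejected by v2
]
p replay(log) # => {:orders=>[1]}
```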
Also, all the derived state is eventually consistent (so far so good), but for non-toy apps you absolutely need to use derived state to process events, which naively breaks determinism (now your event processing depends on the cursor of the derived state).
Check out Rama[1]. They’re solving this problem, and it’s super interesting but again let’s not fool ourselves – we’re far from mature and boring now.
Something like it could hopefully become boring in the future. Many of these features could probably be simplified or skipped entirely in later iterations of these patterns.
"passing it on not knowing what happens later" often is fundamentally not acceptable - you may need proper transactions spanning multiple things, so that you can't finalize your action until/unless you're sure that the "later" part was also completed and finalized.
An individual component participating in a complex operation spanning multiple steps indeed knows nothing about the grand scheme of things. But there will be one event consumer component specifically charged with following the progress of this distributed transaction (aka the saga pattern).
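A rough sketch of such a coordinator (hypothetical event and command names, an in-memory array standing in for a real broker):

```ruby
# A saga coordinator: the one consumer that tracks a multi-step operation,
# issuing the next command on success and a compensation on failure.
class OrderSaga
  def initialize(log)
    @log = log
    @state = {} # order_id => current step
  end

  def handle(event)
    id = event[:order_id]
    case event[:type]
    when "order_placed"
      @state[id] = :awaiting_payment
      @log << { type: "charge_requested", order_id: id }
    when "payment_succeeded"
      @state[id] = :awaiting_shipment
      @log << { type: "shipment_requested", order_id: id }
    when "payment_failed"
      @state[id] = :cancelled
      @log << { type: "order_cancelled", order_id: id } # compensating action
    end
  end
end

log = []
saga = OrderSaga.new(log)
saga.handle({ type: "order_placed", order_id: 42 })
saga.handle({ type: "payment_failed", order_id: 42 })
p log.map { |e| e[:type] } # => ["charge_requested", "order_cancelled"]
```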
> But when you start adding business rules, transforming your data, and moving it around, your monolith becomes too complex and often too expensive to run: lots of moving parts tightly coupled together, long-running transactions wrapping multiple joined tables, etc. Rolling out new features becomes very challenging as well.
Absolutely NONE of that has to happen if you structure your project/code well.
From the author of the Sequel gem. It can do everything that Rails can, faster and lighter by an order of magnitude, and it's expandable (turn on only the plugins you need). It does it all "the Ruby way", not "the Rails way": simplicity, clarity, and power. Generally, all software Jeremy Evans releases is of excellent quality and actively supported. Oh, and Roda is in active use by the US government and is subject to regular security audits...
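For a taste of the style (a minimal sketch; the opt-in plugins are the point):

```ruby
require "roda"

class App < Roda
  plugin :json # opt in to exactly the behavior you need

  route do |r|
    r.root do
      "Hello from Roda"
    end

    r.get "articles", Integer do |id|
      { id: id, title: "An article" } # the :json plugin renders this as JSON
    end
  end
end

# config.ru: run App.freeze.app
```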
The very definition of an obscure but powerful and quite beautiful web framework.
Yeah. So many API writers aim to force clients to do all the heavy lifting. The whole point of a good API is that it reduces heavy lifting. Anyone can write pass-through APIs that don't do anything.
> The whole point of a good API is that it reduces heavy lifting.
Isn’t that the opposite of what Torvalds is saying? He seems to be arguing for simplicity. APIs that do a bunch of magic for you are the opposite of simple and tend to be mountains of subtle bugs and unexpected behavior.
> APIs that do a bunch of magic for you are the opposite of simple
You're mixing up simplicity of the API with simplicity of the implementation. More often than not, you can have one but not both.
Modern Linux or Windows does a huge amount of magic when you call a kernel API like open (POSIX) / CreateFile (Windows), yet the API is simple and easy.
You can expose all the implementation details; your code will be simple, but hard to build upon. Speaking of data storage: once upon a time I programmed Nintendo consoles, and their file system API was probably very simple for Nintendo to implement, but using it wasn't fun. The SDK documentation specified delays, specified how to deal with corrupt flash memory, etc.
Or you can go the other way: you'll have to do a lot of work handling all the edge cases, and your code will be very complex, but this way you might make a system that's actually useful. Again on data storage: SQL servers have tons of internal complexity (even SQLite does), but the API, SQL, is high-level and easy to use even by non-programmers.
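SQLite makes the point nicely; the sqlite3 gem exposes that whole machinery behind a couple of calls (the table and data here are made up):

```ruby
require "sqlite3" # gem install sqlite3

db = SQLite3::Database.new(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ["Ada"])

# Parser, planner, pager, locking: all hidden behind one declarative query.
p db.execute("SELECT id, name FROM users") # => [[1, "Ada"]]
```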
I think that's the art of designing an API in the first place: give the caller enough primitives to get the job done, balancing the responsibilities the API advertises against the cognitive load on the developer to use it correctly.
Disagree in this case. It's about exposing a simple abstraction, which may mean a simple implementation or a complex one, depending on the impedance mismatch with what's going on under the hood.
There was an explosion on the offshore platform, and men were thrown off it into the water by the blast. Rescuers searched for them for a while before they were pronounced dead. This was reported in the Russian media.
The article's study found that "people really don't want to confront information that could potentially disrupt their worldviews."
This thread's comments against "anti-partisanship" articles show that people also don't want to confront information about their unwillingness to confront information... etc.
> The number of people in slavery would have still decreased
On what basis are you making this assumption? "The fall of Rome" was not an anti-slavery uprising. It was the gradual disappearance of the central authority (and all its benefits, such as roads, law and order, etc.).
About 10% of England's population as recorded in the Domesday Book (1086) were slaves. Compare that with the Roman Empire, where the slave population (including Rome and all provinces) is estimated at 10-15% of the total.
The biggest buyer of slaves going down would decimate the market, no? I imagine it would have played out exactly like the abolition of slavery in Britain or the US: there was still slavery across the world, but the number of people who suffered from it went down.
James C. Scott is a good illustration of the "no skin in the game" syndrome among intellectuals.
It would be very persuasive to see him personally bolting for freedom between warring warlords in present-day Libya, or improving his welfare in the lawless mafia-ruled cities of the ex-Soviet Union in the 1990s.
But no such chance, so his views may be safely ignored.
Funny thing is that the US nuclear-powered cruise missile program, Project Pluto, was abandoned in 1964, among other things in order "not to provoke" the Russians into developing something similar... "against which there was no known defence".
Project Pluto was also abandoned for many other reasons. Cruise missiles didn't have to be powered by a nuclear ramjet.
Although at the time there was no defence against them, there is now. Developing it will be expensive, and the system will have a rather low hit rate, but modern electronically scanned radars, UAVs, and AI-based target detection can easily be tuned to take down cruise missiles regardless of the payload.
GitHub is down for Russian developers because some anti-Putin political activists are eager to make a point.
It works like this:
- they push some content deemed illegal in Russia to a GitHub repo (something like instructions on committing suicide or on growing marijuana);
- then they themselves post a complaint to the Russian internet regulator, accompanied by a link to the illegal content;
- the ugly bureaucratic machine (which is mostly automatic) bans the whole of GitHub;
- at some point later in the day a human intervenes and unblocks the site, as has happened several times in the past.
However, by that time the media has picked up the story, and many oppressed Russian developers who don't know how to use a US proxy have received job offers.
Here you are. Posted by a Russian citizen in a commit labeled "Privet, Roskomnadzor!" Such a rebel! Demonstrating to the world the unspeakable horrors of Putin's regime.
Imagine a library offering a girl's diary. Or the diary of some person describing in detail how he raped her.
Now, because it causes her great emotional distress to have people reading it, she asks them to remove it from their index (or even remove it completely).