
Logical separation, the modules, is what allows you to preserve developer sanity. Physical separation, the (micro-)services, is what allows you to ship things flexibly. Somewhere on the distant high end, microservices also play a role in enabling scalability to colossal scales, needed only by relatively few very large companies.

The key problem of developing a large system is allowing many people to work on it without producing gridlock. A monolith, by its nature, produces gridlock easily once a sufficient number of people need to work on it in parallel. Hence modules, "narrow waist" interfaces, etc.

But the key thing is that "you ship your org chart" [1]. Modules allow different teams to work in parallel and independently, as long as the interfaces are clearly defined. Modules deployed separately, aka services, allow different teams to ship their results independently, as long as they remain compatible with the rest of the system. Solving these organizational problems is much more important for any large company than overcoming any technical hurdles.

[1]: https://en.wikipedia.org/wiki/Conway%27s_law



> Physical separation, the (micro-)services is what allows you to ship things flexibly.

If you are willing to pay a price. Once you allow things to ship separately, you are locked into the API mistakes of the past, per Hyrum's law, until you can be 100% sure that all uses of the old thing are gone in the real world. This is often hard. By shipping things together you can verify they all work together before you ship, and, more importantly, when you realize something was a mistake there is a set time where you can say "that is no longer allowed" (this is a political problem: sometimes they will tell you no, we won't change, but at least you have options).
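A toy sketch of the Hyrum's law trap (hypothetical code, not from any real API): clients come to depend on incidental, unspecified behavior, so a perfectly reasonable "fix" in a separately shipped service silently breaks them.

```python
# Hypothetical v1 of a service endpoint: it happens to return users in
# insertion order, because Python dicts preserve insertion order (3.7+).
# That ordering was never promised -- it's incidental behavior.
def list_users_v1(db):
    return list(db.keys())

# A client that quietly depends on the unspecified ordering:
def newest_user(db):
    return list_users_v1(db)[-1]  # "last item = most recently added"

# v2 "improves" the API by sorting alphabetically -- a reasonable change
# in isolation that breaks every client shaped like newest_user().
def list_users_v2(db):
    return sorted(db.keys())

db = {}
db["zoe"] = 1  # added first
db["amy"] = 2  # added second
print(newest_user(db))           # relies on insertion order -> "amy"
print(list_users_v2(db)[-1])     # v2's notion of "last" -> "zoe"
```

If both sides ship together, the breakage shows up in pre-release testing; shipped separately, it shows up in production, on someone else's timeline.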

Everything else is spot on, but I am feeling enough pain from mistakes made 15 years ago (that looked reasonable at the time!) that I feel the need to point it out.


Scale? At what "scale" does Linux run, perhaps the most well-known monolithic software ever?

(And also the subject of perhaps the most well studied flamew^Wdiscussion about mono- versus microservice architecture.)

It only runs most of the servers and most of the mobile devices in the world. Where is that distant high end, where microservices unlock colossal scale, exactly?

Architecture matters. Just not always the way you think. But it serves as catnip for everyone who loves a good debate. Anyone who gets tired of writing code can always make a good living writing about software architecture. And good for them. There's a certain artistry to it, and they have diagrams and everything.


Linux can scale to a machine with hundreds of cores; the largest is 1000+ cores IIRC. It's because it scales horizontally, in a way, and can run kernel threads on multiple cores. But a NUMA configuration feels increasingly like a cluster the more cores you add, just because a single-memory-bus architecture can't scale too much; accessing a far node's memory introduces much more latency than accessing local RAM.
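To make the NUMA point concrete, on Linux you can inspect the node layout and pin a process to one node with numactl (the flags are real; `./app` is a placeholder for whatever you're running):

```shell
# Show NUMA topology: nodes, their CPUs and memory, and the relative
# access-cost matrix (10 = local; remote nodes are typically 20+,
# i.e. roughly double the latency -- the "cluster in a box" effect).
numactl --hardware

# Run a hypothetical ./app with both its CPU scheduling and its memory
# allocations confined to node 0, avoiding remote-memory accesses.
numactl --cpunodebind=0 --membind=0 ./app
```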

It's easy to run a thousand independent VMs. It's somewhat more challenging to run a thousand VMs that look externally like one service, and can scale down to 500 VMs, or up to 2000 VMs, depending on the load. It's really quite challenging to scale a monolith app to serve tens of millions of users on one box, without horizontal scaling. But it definitely can be done, see the architecture of Stackoverflow.com. (Well, this very site is served by a monolith, and all logged-in users are served by a single-threaded monolith, unless they rewrote the engine. Modern computers are absurdly powerful.)



