At the last company I was at, our search microservice was fast (average response was well under 100ms) and it didn't crash once while I was there. At a larger company, this may not be an accomplishment. At a startup, this is the bee's knees.
Meanwhile, the rest of our codebase (a monolith) crashed every few days for one reason or another. We had an on-call rotation not because that's what you're supposed to do, but because we actually needed it.
Now I'm not saying that microservices make sense for everyone. In general, I agree that they are used incorrectly. Microservices are hot, and software developers, generally speaking, like to use hot technologies. Yes, moving to a microservice was costly. We had to rewrite a lot of code, we had to set up our own servers, and we had to get permission from the guardians that be to do all of this. But for our use case, and I assume there are other use cases too, the benefits of detaching ourselves from the company's monolithic codebase far outweighed the costs of doing so.
TL;DR No argument is the end to every conversation. Few things are so black and white.
Sooner or later you get a feel for which bits are becoming at least API stable and could run independently. That's when I split them out.
Do it too soon and you end up choosing the wrong boundaries and tying yourself up in knots; do it too late and your monolith can become a mess whose pieces are difficult to detach.
Another option is to start with an umbrella app (Erlang/Elixir/OTP). It can run as a monolith, or as ... nano-services (I suppose) within the same codebase. When it is time to split them out, it is easier.
It does assume that you either start with devs familiar with OTP or you have generalist devs that can pick things up quickly.
True. There's another thread in here somewhere talking about premature generalization. I think that's what you're getting at with "Do it too soon and you end up choosing the wrong boundaries".
TBH microservices do a good job of making you much more dependent on your tools, and selecting the wrong tool for the job won't become clear until you've used that tool for years.
At the last place I was at, we had a microserviced monolith. I can't even begin to describe that thing in common engineering terms. (Note: it's better than it seems.)
You can get that benefit by dividing your system up into libraries with defined, documented, tested APIs. There's no need to introduce all the complexity and failure modes of distributed systems just to force good design.
When you need to scale, then you can easily throw your libraries behind an RPC framework and call it microservices, but there's no need to pay that cost until you actually face that problem.
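To make that concrete, here's a minimal Python sketch (all names hypothetical) of the move being described: a plain library with a defined API, used in-process first, then dropped behind the stdlib's XML-RPC layer without the callers' contract changing.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class SearchIndex:
    """A plain in-process library with a defined, documented API."""
    def __init__(self):
        self._docs = {}

    def add(self, doc_id, text):
        self._docs[doc_id] = text
        return True  # XML-RPC methods must return a marshallable value

    def search(self, term):
        return sorted(i for i, t in self._docs.items() if term in t)

# Phase 1: use it as a library. No network, no ops burden.
idx = SearchIndex()
idx.add("doc1", "fast search microservice")
assert idx.search("search") == ["doc1"]

# Phase 2, when scale actually demands it: the same class behind RPC,
# unchanged. Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_instance(SearchIndex())
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://{host}:{port}")
client.add("doc1", "fast search microservice")
result = client.search("search")
print(result)  # same API as the library, now over the wire
server.shutdown()
```

The point of the sketch is that the seam (the `SearchIndex` API) is the design work; the RPC wrapper is the cheap part you can defer.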
Just putting libraries that were never designed to scale up behind RPC usually won't help you scale. These libraries tend to work with mutable, stateful objects and don't have any groundwork in place for partitioning.
That doesn't mean you can't scale up from a monolith (even one without clean interfaces) - every startup growth story is a testament otherwise - but it's never as easy as strapping an RPC layer over your library.
One caveat is that if you need to fix a bug in your library in an API-compatible way, you can't reach into all the codebases that are using your library. You can deploy a new version of the microservice, though.
I mean, you _can_ if you organize your code such that you can. For example, Google's monorepo lets maintainers of a library find all internal usages and fix them. This is one of the benefits Dan Luu notes in http://danluu.com/monorepo/.
I think he means that you can't force all teams that use your library to recompile and pick up the updated code, while if you deploy it as a service, you recompile and redeploy, and everyone talking to your service gets the most up-to-date version.
This is a real problem - I recall that Sanjay Ghemawat et al. were working on it when I left Google, though I dunno if the solution they came up with is public yet. It's unlikely to seriously affect you unless you're Google-scale, though, by which time you've probably divided everything up into services and aren't taking advice from the Internet anyway. For companies that are a few teams working on a single product, it's easy enough to send a company-wide e-mail saying "Rebuild & redeploy anything that depends upon library X", and if you're doing continuous deployment, or you deploy only as a single artifact, the problem never affects you anyway.
You were initially replying to a suggestion explicitly qualified with "for small shops". Yes, you most definitely can force all teams that use your library to recompile and pick up the updated code - and it doesn't mean "a company-wide email". A realistic scenario would involve standing up, pointing at a specific person, and saying "Bob, the new version of my library will also work better for the performance problems you had, pick it up whenever you're ready" - and knowing that that's an exhaustive list of the people who need to be informed.
For starters, the vast majority of code is developed in-house at non-software companies. The vast majority of products are a single team working essentially in a silo, not "a few teams working on a single product".
When people are talking about small companies, it's misleading to think "smaller than Google". Smaller-than-Google is still an enormous quantity of development. Enterprisey practices make sense for scaling software in companies that are smaller-than-smaller-than-Google. If you hear "small company", think multiple steps further from that: a smaller-than-smaller-than-smaller-than-smaller-than-smaller-than-Google company.
> you recompile and redeploy and everyone talking to your service gets the most up-to-date version
Sure, but if you do that in place it will still break stuff that assumes it works like the last version, and if you do a versioned API or the like you still can't force all teams to adopt the new version.
If you need to change your microservice's API in a non-backwards compatible way, you have the exact same problem plus significant operational complexity.
Which is basically what you do for a traditional library as well. Tweak the header so anything being recompiled against it gets a different function signature. Then old apps continue to work, and newly built apps get the fix.
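The header trick is C-flavored, but the same move works in any library: keep the old entry point stable forever and give new callers an explicitly versioned one. A hypothetical Python sketch (the `parse` names and the config format are made up for illustration):

```python
def parse(text):
    """v1 API: kept forever, so existing callers never break."""
    return parse_v2(text, strict=False)

def parse_v2(text, strict=False):
    """New signature; newly written callers opt in explicitly."""
    result = {}
    for line in text.splitlines():
        if not line:
            continue
        parts = line.split("=", 1)
        if len(parts) != 2:
            if strict:
                raise ValueError(f"malformed line: {line!r}")
            continue  # v1 behavior: silently skip bad lines
        key, value = parts
        result[key.strip()] = value.strip()
    return result

assert parse("a=1\nb = 2") == {"a": "1", "b": "2"}
assert parse("oops") == {}  # old apps keep the old (lenient) behavior
# parse_v2("oops", strict=True) raises, so new callers get the fix
```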
Moderately ironically - this is a place where dynamically loaded libraries are particularly well suited. So long as the API hasn't changed, the library can be patched independently of all the other compiled code.
Of course, there are other limitations this imposes, but it does make it very simple to deploy a new library to all code which uses it.
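That's C/linker territory, but the shape of it - patch the library in place, keep the API, and dependents pick up the fix without being rebuilt - has a loose Python analog in module reloading (a sketch, not `dlopen`; the `greetlib` module is invented here):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale .pyc when the source changes fast

# Stand-in for an installed shared library: a module on the search path.
libdir = tempfile.mkdtemp()
sys.path.insert(0, libdir)
libfile = pathlib.Path(libdir, "greetlib.py")
libfile.write_text("def hello():\n    return 'v1: hello'\n")

import greetlib
assert greetlib.hello() == "v1: hello"

# "Patch" the library in place: same API (hello()), new behavior.
libfile.write_text("def hello():\n    return 'v1.0.1: hello, fixed'\n")
importlib.invalidate_caches()
importlib.reload(greetlib)

print(greetlib.hello())  # callers get the fix without being rebuilt
```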
Nothing about splitting your app into microservices _forces_ a good design. I've never seen microservices with well-defined seams. Every time, knowledge "leaked" between the apps, and any non-trivial change to the app required updating multiple repos, deployment synchronization, etc. Microservices are a tremendous burden that the vast majority of companies will not benefit from.
Except splitting into microservices is an unnecessarily complex design choice. That's almost always worse, and the cognitive load comes in when you now need to figure out how to get this stuff right. The scaling benefits also require that you get it right; small flaws in your system become massive issues.
Separating components wrong within the same code base is an easy fix. Getting them wrong between services is a much larger problem. I'm not sure why you'd be more likely to get that right with services than within the same code base.
"Logic" is vague, and there are several layers where you can implement this before even thinking about microservices.
It can be as simple as a class, or maybe a larger class as a single-file service, or an entire namespace with several classes, or a separate, easily referenced library. All the benefits of splitting out the "logic", without the ridiculous hassle of microservices.
I think this is actually a failure in mainstream programming languages, which make it far too easy to reach across what's meant to be a defined subsystem boundary and meddle where you shouldn't.
Definitely agree – the polyglot aspect can also be useful for companies where different parts of their problem fit different tools.
However, exercising proper software discipline and using languages with good/existent module systems, like OCaml or Go, can lead to the same modular results without the fixed overhead. If you don't have a full-time ops person or team, you almost always have no business running microservices.
If this were Medium, I'd highlight the hell out of that.
That's so true, and so nicely, succinctly put - it ought to be the reply to end every argument about whether microservices are good or bad.