
I believe that in the original Amazon service architecture, which grew into AWS (see the “Bezos API mandate” from 2002), backwards compatibility is expected for all service APIs. You treat internal services as if they were external.

That means consumers can keep using old API versions (and their types) with a very long deprecation window. This results in loose coupling. Most companies doing microservices do not operate like this, which leads to these lockstep issues.
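A minimal sketch of what "consumers can keep using old API versions" can look like in practice — the service keeps serving the deprecated v1 wire format, adapted from the new internal model, until the deprecation window closes. All names here are hypothetical, not from any real AWS service:

```python
# A service keeps serving a deprecated v1 schema alongside v2, so old
# consumers keep working during a long deprecation window.
# Hypothetical example; names are made up for illustration.

def _fetch(user_id):
    # Stand-in for the real data store (new internal model: split name fields).
    return {"id": user_id, "first": "Ada", "last": "Lovelace"}

def get_user_v1(user_id):
    """Old contract: a single 'name' field."""
    user = _fetch(user_id)
    # Adapt the new internal model back to the old wire format.
    return {"id": user["id"], "name": f"{user['first']} {user['last']}"}

def get_user_v2(user_id):
    """New contract: split name fields."""
    user = _fetch(user_id)
    return {"id": user["id"], "first": user["first"], "last": user["last"]}

ROUTES = {
    "/v1/users": get_user_v1,  # deprecated; removed only after a long window
    "/v2/users": get_user_v2,
}
```

The v1 handler is pure adapter code: the team can evolve the internal model freely, as long as the old wire format stays reproducible.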





Yeah, that's a bad thing, right? Maintaining backward compatibility until the end of time in the name of safety.

I'm not saying monoliths are better than microservices.

I'm saying that for THIS specific issue, you will not even need to think about API compatibility with monoliths. It's a concept you can throw out the window, because type checkers and integration tests catch this FOR YOU automatically, and the single deployment ensures that compatibility will never break.
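To make the "type checkers catch this for you" point concrete, here's a hedged sketch (hypothetical names, checkable with a tool like mypy): inside a monolith, a changed schema is a static error at every call site, while across a service boundary the same change is just JSON and fails only at runtime:

```python
# In a monolith, a changed schema is a type-check error at every call site;
# across a service boundary the same change only fails at runtime.
# Hypothetical example for illustration.

from typing import TypedDict
import json

class Order(TypedDict):
    order_id: int
    amount_cents: int  # renamed from 'amount' in a refactor

def total(order: Order) -> int:
    return order["amount_cents"]

# Monolith call site: a type checker flags {'amount': ...} as not matching
# Order, before anything ships.
# Microservice call site: the consumer just parses JSON off the wire; a stale
# producer still sending {'amount': ...} passes every static check and the
# lookup below only blows up in production.
payload = json.loads('{"order_id": 1, "amount": 500}')
# total(payload)  # KeyError at runtime, invisible to the type checker
```

That's the trade this subthread is about: the serialized boundary erases exactly the information a type checker needs.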

If you choose monoliths you are CHOOSING this convenience; if you choose microservices you are CHOOSING the possibility for things to break. AWS chose the latter, and chose to introduce a backwards compatibility requirement to deal with this problem.

I use "choose" loosely here. More likely the AWS people just didn't think about this problem at the time (it's not obvious), or they had other requirements that necessitated microservices. The point is, this problem is in essence a logical consequence of the choice.


> or they had other requirements that necessitated microservices

Scale

Both in people, and in "how do we make this service handle the load". A monolith is easy if you have few developers and not a lot of load.

With more developers it gets hard, as they start affecting each other across the monolith.

With more load it gets difficult, as the usage profile of a backend server becomes very varied and performance issues become hard to even find. What looks like a performance loss in one area might just be another, unrelated part of the monolith eating your resources.


Exactly, performance can make it necessary to move away from a monolith.

But everyone should know that microservices are more complex systems, harder to deal with, and come with a bunch of safety and correctness issues as well.

The problem is that not many people know this. Some people think going to microservices makes your code better, when, as I'm saying here, you give up safety and correctness as a result.


> Yeah. that's a bad thing right? Maintaining backward compatibility to the end of time in the name of safety.

This is what I don't get about some comments in this thread. Choosing internal backwards compatibility for services managed by a team of three engineers doesn't make a lot of sense to me. You (should) have the organizational agility to make big changes quickly; not a lot of consensus building should be required.

For the S3 APIs? Sure, maintaining backwards compatibility on those makes sense.


Backwards compatibility is for customers. If customers don't want to change APIs, you provide backwards compatibility as a service.

If you're using backwards compatibility for safety, and that prevents you from making a desired upgrade to an API, that's an entirely different thing. That is backwards compatibility as a restriction, and a weakness in the overall paradigm, while the other is backwards compatibility as a feature. Completely orthogonal things, imo.



