
> If you're using protobufs, for instance, and you share the interfaces in a repo. Updating Service A's interface(s) necessitates all services dependent on communicating with it to be updated as well (whether you utilize those changes or not).

This is not true! This is one of the core strengths of protobuf. Non-destructive protobuf changes, such as adding new API methods or new fields, do not require clients to update. On the server-side you do need to handle the case when clients don't send you the new data--plus deal with the annoying "was this int64 actually set to 0 or is it just using the default?" problem--but as a whole you can absolutely independently update a protobuf, implement it on the server, and existing clients can keep on calling and be totally fine.
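As a sketch of what that looks like in practice (the `Order` message and field names here are hypothetical, not from the original discussion): adding a field under a fresh field number is a non-destructive change, and proto3's `optional` keyword restores explicit presence tracking for the "was this actually set to 0?" problem.

```protobuf
syntax = "proto3";

message Order {
  string id = 1;

  // Without `optional`, a scalar int64 has no presence bit in proto3:
  // the server cannot distinguish "client sent 0" from "client sent nothing".
  int64 quantity = 2;

  // Newly added field. Old clients never send it; the server just sees the
  // default (empty string / has_coupon_code() == false) and must handle that.
  optional string coupon_code = 3;
}
```

Old clients keep serializing only fields 1 and 2; the server deserializes their messages without error, which is exactly the independent-update property described above.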

Now, that doesn't mean you can go crazy: deleting fields, reusing or changing field numbers, and renaming RPC methods will all break clients. But that is just the reality of building distributed systems.
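When a field really does have to be deleted, protobuf's `reserved` keyword guards against the breakage described above by preventing the number or name from being silently reused later (again using a hypothetical `Order` message):

```protobuf
syntax = "proto3";

message Order {
  // Field 2 ("quantity") was removed. Reserving its number and name makes
  // protoc reject any future attempt to reuse them, which would otherwise
  // cause old and new binaries to misinterpret each other's wire data.
  reserved 2;
  reserved "quantity";

  string id = 1;
}
```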

What you are talking about is simply keeping the API (whether a library or a service) backwards-compatible. There are plenty of strategies to achieve that, and it can be done with almost any interface layer (HTTP, protobuf, JSON, SQL, ...).

I was oversimplifying for the sake of example, but yes, you are correct. Properly managed protobufs don't require a client update on strict interface expansion, so they shouldn't always require a redeploy.



