1. You can't just rely on documentation ("we never said we would guarantee this or that") to push back on your users' claims that you introduced a breaking change. If you care more about your documentation than your users, they will turn their backs on you.
2. However, if you start guaranteeing too much stability, innovation and change become too costly or even impossible. In this instance, if the git team has to guarantee their hashes (which seems impossible anyway, because they depend on the external gzip program), then they can never improve on compression (see the sketch below).
Tough situation to be in.
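To make point 2 concrete, here's a minimal sketch (plain zlib standing in for whatever compressor git invokes): the same bytes compress to different outputs depending only on compressor settings, so any hash taken over the compressed archive is only as stable as the compressor.

    import hashlib, zlib

    data = b"the same tree contents\n" * 1000

    # Identical input, two compressor settings.
    fast = zlib.compress(data, 1)
    best = zlib.compress(data, 9)

    # The compressed bytes differ, so a hash over the archive differs...
    print(hashlib.sha256(fast).hexdigest() != hashlib.sha256(best).hexdigest())  # True

    # ...even though the actual content is bit-for-bit identical.
    print(zlib.decompress(fast) == zlib.decompress(best))  # True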
Someone once stated that every observable behaviour will be depended upon by someone sooner or later.
I can only imagine someone going to great lengths to avoid any "a stable order of operations was never guaranteed" discussion by just randomizing the order of execution or something similar (I bet someone will then use that as a seed for a PRNG).
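The defensive shuffling would look something like this (a sketch; the handler names are invented):

    import random

    def run_handlers(handlers):
        # Shuffle on every call so nobody can come to depend on a
        # stable execution order that was never guaranteed anyway.
        handlers = list(handlers)
        random.shuffle(handlers)
        for handler in handlers:
            handler()

Go does something like this on purpose with map iteration order, precisely so programs can't accidentally depend on it.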
edit: skipping the first paragraph led to me repeating Hyrum's law.
I once got a bug report from a user's manager about the values in our application's private database.
It was an internal user interface, intended for employees of our company. Once upon a time, we had a process where a new record had to be added manually to multiple internal systems, so the internal UI had its own copy of the data. But then we built a single source of truth for this data, that single source of truth had an API which our application would query, and so the updates to the old database table were abandoned; the table had only ever been for the application's internal use. Nobody ever bothered to remove it, though, so it sat there with a few hundred rows of stale data.
Two years later, we got the bug report. The user's manager was complaining that the dataset was incomplete, that it was impeding his work, and that it needed to be fixed ASAP.
It turned out that at some stage he had requested, and been granted, read-only access to that DB, and had been querying the records of user actions in it to track the volume and quality of his subordinates' work. Then at some later point he had realised he could join against the stale table to get readable labels, rather than opaque identifiers, for the types of data those subordinates were working on. Except, of course, the data was two years out of date, so he was seeing an increasing number of "missing" labels in his report.
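To make the failure mode concrete, here's a sketch with an invented schema (none of these table or column names are from the real system): the kind of join he was running degrades silently as new identifiers appear that the abandoned label table never learned about.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE actions (user TEXT, type_id INTEGER);
        CREATE TABLE type_labels (type_id INTEGER, label TEXT);  -- the stale copy
        INSERT INTO type_labels VALUES (1, 'Invoice'), (2, 'Refund');
        INSERT INTO actions VALUES ('alice', 1), ('bob', 3);     -- type 3 postdates the copy
    """)

    # The LEFT JOIN keeps every action row, but labels for newer
    # type_ids quietly come back NULL instead of failing loudly.
    for row in con.execute("""
        SELECT a.user, COALESCE(l.label, '(missing)') AS label
        FROM actions AS a LEFT JOIN type_labels AS l USING (type_id)
        ORDER BY a.user
    """):
        print(row)   # ('alice', 'Invoice'), then ('bob', '(missing)')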
Said user escalated all the way to a VP of engineering before accepting that no, a private database is not a supported interface of our product.
"Users will eventually use your database directly no matter how good your UI/API is" deserves law on its own tbh. Or maybe "the shittier your API/UI is the higher chance that users will just use database directly.
When I had to do user management on a multi-site WordPress instance years ago, I had to resort to using the database to manage user groups. It was a wild time.
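For anyone who hasn't had the pleasure: in a multisite install, per-site roles live in the shared wp_usermeta table as PHP-serialized arrays under keys like wp_capabilities (main site) and wp_<blog_id>_capabilities (the rest). A rough sketch of what "managing groups" ends up looking like (connection details are placeholders, default wp_ table prefix assumed):

    import pymysql

    # Connection details here are made up.
    conn = pymysql.connect(host="localhost", user="wp",
                           password="secret", database="wordpress")
    with conn.cursor() as cur:
        # Role assignments are PHP-serialized arrays, e.g.
        # a:1:{s:6:"editor";b:1;}, one meta row per site.
        cur.execute("""
            SELECT u.user_login, m.meta_key, m.meta_value
            FROM wp_users AS u
            JOIN wp_usermeta AS m ON m.user_id = u.ID
            WHERE m.meta_key LIKE %s
        """, ("wp\\_%capabilities",))
        for login, key, value in cur.fetchall():
            print(login, key, value)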
I hope you at least estimated how much work it would be to add a user-facing audit tracking and reporting feature. You could probably charge good money for that.
Considering this was an in-house tool for a very company-specific task, with three managers who could possibly use that information, it was just never going to be a high priority.
> I can only imagine someone going to great lengths to avoid any "a stable order of operations was never guaranteed" discussion by just randomizing the order of execution or something similar (I bet someone will then use that as a seed for a PRNG).
The issue is that they didn't look closely at what their users / API consumers were actually doing. Even a cursory look at CI, packaging systems, etc. would have shown that those were expecting the hashes to be stable. If they'd done that early enough, they might have been able to plan a transition to unstable hashes, or at least to emphasise the problem in the documentation.