Wait, does urllib3 not use semver? Don't remove APIs on minor releases, people. A major release doesn't have to be a problem or a major redesign; you can do major release 400 for all I care, just don't break things on minor releases.
Lots of things not using semver that I always just assumed did.
As an example, I always knew urllib3 as one of the foundational packages that Requests uses. And I was curious, what versions of urllib3 does Requests pull in?
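Its setup.py pins something along these lines (I'm going from memory, and the exact lower bound has shifted between Requests releases, so treat this as illustrative):

    urllib3>=1.21.1,<3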
That is exactly the kind of dependency specification I would expect to see for a package that is using semver: The current version of urllib3 is 2.x, so with semver, you set up your dependencies to avoid the next major-version number (in this case, 3).
So, it seems to me that even the Requests folks assumed urllib3 was using semver.
Fixing deprecations is unfortunately the lowest-priority kind of work for the majority of projects. Part of the problem is probably the lack of pressure to do it if the timeline is unclear. What if this is actually never removed? Why go through the pain?
IMO saying "we deprecate now and let's see when we remove it" is counterproductive.
A better way: deprecate now and say "in 12 (or 24?) months this WILL be removed".
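Concretely, in Python that just means putting the deadline in the warning text itself (names and dates here are made up):

    import warnings

    warnings.warn(
        "get_widget() is deprecated and WILL be removed in v3.0 (June 2026); "
        "use fetch_widget() instead",
        DeprecationWarning,
        stacklevel=2,
    )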
After 12/24 months, cut a new semver-major release. People notice the semver-major bump through the dependency management tools at some point, and maybe they have a look at the changelog.
If they don't, at some point they may want to use a new feature, and finally be incentivised to update.
If there's no incentive other than "do the right thing", it never gets done.
Having said that, I think LLMs are really going to help with chores like this, if e.g. deprecations and migration steps are well documented.
Alternative option: create a codemod CLI that fixes deprecations for the users, doing the right thing automatically. If migration is painless and quick, it's more likely people will do it.
> Fixing deprecations is unfortunately the lowest-priority kind of work for the majority of projects.
... and the right answer to that is to make it entirely their problem.
> Part of the problem is probably the lack of pressure to do it if the timeline is unclear. What if this is actually never removed?
In this case, the warnings said exactly what release would remove the API. Didn't help.
> Why go through the pain?
Because you're not a feckless irresponsible idiot? I don't think it's an accident that the projects they said didn't react were an overcomplicated and ill-designed management layer for an overcomplicated and ill-designed container system, a move-fast-and-break-things techbro company, and what looks to be a consolation project for the not-too-bright.
You probably get an extra measure of that if you're operating in the Python ecosystem, which is culturally all about half-assed, 80-percent-right-we-hope approaches.
The right answer is to remove it when you say you're going to remove it, and let them pick up the pieces.
It also helps if you design your API right to begin with, of course. But this is Python we're talking about again.
> After 12/24 months, cut a new semver-major release. People notice the semver-major bump through the dependency management tools at some point, and maybe they have a look at the changelog.
Deprecations in all forms are always a shitshow. There isn’t a particular pattern that “just works”. If anybody tells you about one, best case scenario it just worked for them because of their consumers/users, not because of the method itself.
The best I have seen is a heavy-handed in-editor strikethrough with warnings (assuming the code is actively being worked on), and even then it’s at best a 50/50 thing.
50% of the developers would feel that using an API with a strikethrough in the editor is wrong. And the other 50% will just say “I dunno, I copied it from there. What’s wrong with it??”
Deprecations via warnings don't reliably work anywhere, in general.
If you are a good developer, you'll have extensive unit test coverage and CI. You never see the unit test output (unless they fail) - so warnings go unnoticed.
If you are a bad developer, you have no idea what you are doing and you ignore all warnings unless program crashes.
You can turn warnings into errors with the `-Werror` option. I personally use that in CI runs, along with the `-X dev` option to enable additional runtime checks. Though that won't solve the author's problem, since most Python devs don't use either of those options.
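Roughly, the invocation looks like this (pytest also has its own -W option if you only want deprecations to fail the run):

    python -X dev -W error -m pytest
    pytest -W error::DeprecationWarning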
There was this one library we depended on, it was sort of in limbo during the Python 2 -> 3 migration. During that period it was maintained by this one person who'd just delete older versions when newer ones became available. In one year I think we had three or four instances where our CI and unit tests just broke randomly one day, because the APIs had changed and the old version of the library had been yanked.
In hindsight it actually helped us, because, in frustration, we ended up setting up our own Python package repo and started to pay more attention to our dependencies.
> If you are a good developer, you'll have extensive unit test coverage and CI. You never see the unit test output (unless they fail) - so warnings go unnoticed.
In my opinion test suites should treat any output other than the reporter saying that a test passed as a test failure. In JavaScript I usually have part of my test harness record calls to the various console methods. At the end of each test it checks to see if any calls to those methods were made, and if they were it fails the test and logs the output. Within tests, if I expect or want some code to produce a message, I wrap the invocation of that code in a helper which requires two arguments: a function to call and an expected output. If the code doesn't output a matching message, doesn't output anything, or outputs something else, then the helper throws and explains what went wrong. Otherwise it just returns the result of the called function:
    let result = silenceWarning(() => user.getV1ProfileId(), /getV1ProfileId has been deprecated/);
    expect(result).toBe('foo');
This is dead simple code in most testing frameworks. It makes maintaining and working with the test suite much easier: when something starts behaving differently it's immediately obvious rather than hidden in a sea of noise. It makes working with dependencies easier because it forces you to acknowledge things like deprecation warnings when they get introduced and either solve them there or create an upgrade plan.
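A rough Python equivalent with pytest, for anyone who wants the same discipline here (the deprecated function below is invented to mirror the JS example; pair it with a `filterwarnings = error` setting so unexpected warnings also fail the run):

    import warnings
    import pytest

    def get_v1_profile_id():
        # hypothetical deprecated API, standing in for user.getV1ProfileId()
        warnings.warn("getV1ProfileId has been deprecated", DeprecationWarning)
        return "foo"

    def test_acknowledges_the_deprecation():
        # pytest.warns fails the test if the expected warning is NOT emitted,
        # so the deprecation can't be silently ignored or silently disappear
        with pytest.warns(DeprecationWarning, match="deprecated"):
            result = get_v1_profile_id()
        assert result == "foo"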
When I update the Python version, Python packages, container image, etc. for a service, I take a quick look at the CI output, in addition to all the other checks I do (like a couple of basic real-world end-to-end tests), to "smoke test" whether something not caught by an outright CI failure caused some subtle problem.
So, I do often see deprecation warnings in CI output, and fix them. Am I a bad developer?
I think the mistake here is making some warnings default-hidden. The developer who cares about the user running their app in a terminal can add a line of code to suppress them for users, and be more aware of this whole topic as a result (and have it more evident near the entrypoint of the program, for later devs to see also).
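Something like this near the entrypoint is all it takes (the MYAPP_DEBUG flag is made up; the point is that the suppression is explicit and sits where later devs will see it):

    import os
    import warnings

    # hide deprecation noise from end users; keep it visible when debugging and in CI
    if not os.environ.get("MYAPP_DEBUG"):
        warnings.filterwarnings("ignore", category=DeprecationWarning)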
I think that turning warnings into errors, or hiding them, removes warnings as a useful tool.
Author here! Agreed that there are different levels of "engaged" among users, which is okay. The concerning part of this finding is that even dependent users that I know to be highly engaged didn't respond to the deprecation warnings, so they're not working for even the most engaged users.
Why does your codebase generate hundreds of warnings, given that every time one initially appeared, you should have stamped it out (or specifically marked that one warning to be ignored)? Start with one line of code that doesn't generate a warning. Add a second line of code that doesn't generate a warning...
Why is it that CI tools don't make warnings visible? Why are they ignored by default in the first place? Seems like that should be a rather high priority.
It isn't that easy. If you have a new warning on upgrade you probably want to work on it "next week", but that means you need to ignore it for a bit. Or you might still want to support a really old version without the new API and so you can't fix it now.
> If you have a new warning on upgrade you probably want to work on it "next week", but that means you need to ignore it for a bit.
So you create a bug report or an issue or a story or whatever you happen to call it, and you make sure it gets tracked, and you schedule it with the rest of your work. That's not the same thing as "ignoring" it.
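And in the meantime you can silence just that one known warning, with the ticket reference sitting next to it, rather than ignoring warnings wholesale (message and ticket number invented):

    import warnings

    # TODO(PROJ-1234): remove once we migrate off get_widget(), scheduled next sprint
    warnings.filterwarnings(
        "ignore",
        message=r"get_widget\(\) is deprecated",
        category=DeprecationWarning,
    )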
That's why you, very early on, release code that slows the API down once it has been deprecated. Every place you issue a deprecation warning, you also sleep 60. Problem solved.
Wild (and I guess most of the time bad) idea: on top of the warnings, introduce a `sleep` in the deprecated functions. At every version, increase the sleep.
Has this ever been considered?
The problem with warnings is that they're not really observable: few people actually read these logs, most of the time. Making the deprecation observable means annoying the library users. The question is then: what's the smallest annoyance we can come up with, so that they still have a look?
A deprecation warning is not actionable for typical end users. Why don't more warnings include calls to action?
Instead of a warning that says, "The get_widget method in libfoo is deprecated and will be removed by November 30", the warning could say:
"This app uses the deprecated get_widget method. Please report this bug to the app developer. Ask them to fix this issue so you can continue using this app after November 30."
The devil is in the details. It seems `getHeaders` v. `headers` is a non-security, non-performance-related issue. Why should people spend time fixing these?
The secret trick I've used on rare occasion, but when necessary, is the "ten second rule."
Users don't notice a deprecation warning. But they might notice adding a "time.sleep(10)" immediately at the top of the function. And that gives them one last grace period to change out their software before it breaks-breaks.
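A sketch of what that can look like if you maintain the library (the delay and names are arbitrary, not anyone's actual API):

    import functools
    import time
    import warnings

    def deprecated(message, slow_by=0):
        """Warn on every call and, optionally, add a deliberate delay."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                warnings.warn(message, DeprecationWarning, stacklevel=2)
                if slow_by:
                    time.sleep(slow_by)  # the "ten second rule": annoying, but not broken
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @deprecated("getHeaders() is deprecated; use the headers property instead", slow_by=10)
    def getHeaders():
        return {}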
A breaking change causes a full-stop to a service.
An intentional slowdown lets the service continue to operate at degraded performance.
I concur that it's less clear for debugging purposes (although any reasonable debugging infrastructure should allow you to break and see what function you're in when the program hangs; definitely not as clear as the program crashing because the called function is gone, however).
A breaking change in a dependency doesn’t cause a full-stop to a service at all. The old version continues to work. Making subtly harmful changes so that new broken versions sneak in is just a bad idea and totally unnecessary.
It's worked in the past. But it does require someone at your org to care that CI times are spiking, which is not always a thing you can rely upon.
In addition: if CI is the only place the issue shows up, and never in a user interaction... Why does that software exist in the first place? In that context, the slowdown may be serving as a useful signal to the project to drop the entire dependency.
ETA: To be clear, I don't do this as a substitute for a regular deprecation cycle (clear documentation, clear language-supported warnings / annotations, clear timeline to deprecate); I do it in addition before the final yank that actually breaks end-users.
> We ended up adding the APIs back and creating a hurried release to fix the issue.
So it was entirely possible to keep the software working with these. Why change/remove them in the first place? Is the benefit of the new abstraction greater than the downside of requiring everyone using the software to rewrite theirs?
OSS maintainers don't like maintaining ugly legacy APIs forever and want to refactor/remove legacy code to keep themselves sane and the project maintainable.
Every public API change is a cost on the user: for an extreme example, if every library I ever used renamed half its APIs every year to align with the latest ontology, then there would hardly be any point in saving my scripts, since I'd have to be constantly rewriting them all.
Of course, the reality is hardly ever as bad as that, but I'd say having to deal with trivial API changes is a reasonable basis for a user to dislike a given project or try to avoid using it. It's up to the maintainers how friendly they want to be toward existing user code, and whether they pursue mitigating options like bundling migrations into less-common bigger updates.
Yep. Or you can see it as, "This software doesn't really care about the users and their use cases. It prioritizes making things look pretty and easier on the dev side over maintaining functionality." Or in the worse but fairly common OSS case, CADT, but that doesn't seem to apply in this context.
I think this is a valid question for this specific case, but keeping the old API may not always be possible. That said, as a user I would probably prefer it if, under the hood, the old function called the new one, so they can deprecate the behavior without breaking the API. That way you can still emit the deprecation warning while only having one actual code path to maintain.
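Roughly what I mean, using the getHeaders/headers example from the thread (a sketch, not the library's actual code):

    import warnings

    class Response:
        def __init__(self, headers):
            self._headers = headers

        @property
        def headers(self):
            # the new API: the only real code path
            return self._headers

        def getHeaders(self):
            # legacy shim: warn, then delegate to the new API
            warnings.warn(
                "getHeaders() is deprecated; use the headers property instead",
                DeprecationWarning,
                stacklevel=2,
            )
            return self.headers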