This describes basically all their comments in this thread, which also seem to be all their comments ever on this 3-day-old, seemingly troll account.
You say he's narrow-minded, but you focus on the least relevant thing he said, speed, and suggest that, somehow, something with "fast" in its name will fix it?
Speed is the least concern because things like numpy are written in C; the overhead you pay is in the glue code and FFI. The lack of a standard distribution system is a big one. Dynamic typing works well for small programs and teams but does not scale when either dimension is increased.
But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're compiling a subset of it. No library can fix this.
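To make the "constraints" point concrete, here's a minimal sketch (my own example) of the dynamism an ahead-of-time compiler has to assume can happen anywhere:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def norm(p):
    # A compiler cannot assume p.x is a float here: any caller may have
    # replaced the attribute, the class, or even this function by now.
    return (p.x ** 2 + p.y ** 2) ** 0.5

p = Point(3.0, 4.0)
print(norm(p))  # 5.0

# Perfectly legal Python: rewrite the class at runtime.
Point.x = property(lambda self: "not a number anymore")
try:
    norm(p)
except TypeError as e:
    print("only detectable at runtime:", e)
```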
Very little of what you're claiming is relevant for FastAPI specifically, which for web apps isn't far in speed from an equivalent app written in Go. You need to research the specifics of the problem at hand instead of making broad but situationally incorrect assumptions. The subject here is web apps, and Python is very much a capable language in this niche as of the end of 2025, in terms of speed, code elegance, and support for static typing (FastAPI is fully based on Pydantic) - https://www.techempower.com/benchmarks/#section=test&runid=7...
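For anyone unfamiliar, the Pydantic-backed typing looks roughly like this; a minimal sketch, not a prescription for structuring an app:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
async def create_item(item: Item) -> Item:
    # FastAPI uses the Pydantic model to validate the request body before
    # this handler runs, and the annotations drive the generated OpenAPI docs.
    return item
```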
> But pure Python is inherently slow because of language design. It also cannot be compiled efficiently unless you introduce constraints into the language, at which point you're compiling a subset of it. No library can fix this.
A similar point was raised in the other Python thread on CPython the other day, and I’m not sure I agree. For sure, it is far from trivial. However, GraalVM has shown us how it can be done for Java with generics. At a high level: take the app, compile it, and run it. The compilation takes care of any literal use of generics, running the app takes care of initialising classes and memory, and runtime instrumentation can be added to handle invocations of generics that were otherwise missed. Obviously, there are a lot of details to get right for this to work. But it can be done.
Implying that the existence of your preferred tool in another programming language makes other, equally impressive tools something akin to a "[colossal] mistake that we'll pay for for years" "simply motivated by inertia" is way below the level of discussion I would expect from Hacker News.
I would have given the OOP the effort and due respect in formulating my response if it had been phrased in the way you're describing. It's only fair that comments that strongly violate the norms of substantive discourse don't get a well-crafted response back.
While this is true, it is often stunning to me how long it took to get to `uv run`.
I have worked with Python on and off for 20+ years and I _always_ dreaded working with any code base that had external packages or a virtual environment.
`uv run` changed that, and I migrated every code base at my last job to it. But it was too late for my personal stuff - I had already converted it or written net new code in Go.
I am on the fence about Python long term. I’ve always preferred typed languages and with the advent of LLM-assisted coding, that’s even more important for consistency.
Well said. I’m in the same boat of being on the fence about python. I’ve been burned too many times in the past.
And even if uv perfectly solved all of our woes, it would still seem worse than languages that solve packaging and deployment with first-party, built-in tools.
There’s only so much lipstick and makeup you can put on a pig…
Yeah, the difference between statically and dynamically typed languages is massive with LLM coding, and the gap seems to me to grow exponentially with larger codebases.
It's a UX issue. The author is correct: nobody cares about all the virtualenv mumbo-jumbo or whatever other techno-babble.
The user
just
wants
to run
the damn program.
> `uv run` and PEP 723 solved every single issue the author is describing.
PEP 723 eh? "Resolution: 08-Jan-2024"
Sure, so long as you somehow magically gain the knowledge to use uv, you will have been able to have a normal, table-stakes experience for a whole two years now. Yay, go Python ecosystem!
Is uv the default, officially recommended way to run Python? No? Remember to wave goodbye to all the users passing the language by.
I don't see your point. The kind of user who will struggle to type out `uv run` will find it even more difficult to type out `//usr/local/go/bin/go run "$0" "$@"; exit`. Neither approach is the "default, officially recommended way to run" scripts.
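For reference, a PEP 723 script run via uv looks like this (the dependency is just an example; the shebang form is from uv's docs as I remember them):

```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
import requests

# uv reads the metadata block above, provisions a matching interpreter
# and an ephemeral environment with the dependencies, then runs the script.
print(requests.get("https://example.com").status_code)
```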
I strongly encourage you to read the article to acquire the context for the conversation before commenting, which I assume did not happen here.
I don't agree: the user wants to run the program the way they want to, and is frustrated when it doesn't run.
If all dependencies were installed on the machine the script would run no problem. I have some scripts with dependencies that are installed on the system.
The author writes:
> The built in tooling within the go ecosystem is another large selling point. We don't need a .pyproject or package.json to configure ad-hoc formatting and linters, backed by pipelines to ensure consistency.
Maybe shebangs are not the solution to that problem? They're a convenience for running scripts as executables, but the user is supposed to set up the environment. He then continues to explain that Go has a great stdlib, which makes it perfect for scripting. This is the reason I usually reach for Python for complex scripts: the stdlib is big enough to solve most of my problems.
Now that Node includes SQLite the choice isn't as easy, but I wouldn't be pissed at Node and JavaScript if I had to set up the environment to make sure the script runs. I understand how it runs and where it gets its dependencies. If I forget to run `npm i` before running the script, that's my error; I prefer errors that remind me of my stupidity over magic.
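To illustrate the stdlib point, something like this runs on a bare Python install with zero environment setup (the URL and schema are made up):

```python
import json
import sqlite3
import urllib.request

# Fetch some JSON and cache it locally: three stdlib modules, no dependencies.
with urllib.request.urlopen("https://example.com/data.json") as resp:
    rows = json.load(resp)

con = sqlite3.connect("cache.db")
con.execute("CREATE TABLE IF NOT EXISTS items (name TEXT, value REAL)")
con.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [(r["name"], r["value"]) for r in rows],
)
con.commit()
con.close()
```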
Libraries, yes. Tooling around packages/building/managing runtimes? I'm not convinced. Perl has been using CPAN for like two decades now, and I wouldn't consider that ecosystem to exactly be an example of "there's only one way to do it". I feel like you're extrapolating in the wrong direction; older languages are less likely to have first-party tooling for this, so they're more likely to have multiple ways of doing it, but I don't think there's much evidence that a language that started out with first-party tooling will always follow that same trend. I don't think the issue with Python is its age as much as that its tooling was just really not good for a long time. People will probably want to replace subpar tooling regardless of official status if there's a better alternative, but I'd expect that in the presence of good-enough first-party tooling, people will eventually stop bothering.
I do actually think Go is a bit of an illustrative example here because it started out with just `go get` and liberal use of vendoring, then accumulated a variety of attempted replacements (e.g. godep and dep, which confusingly were not the same thing), but eventually the first party tooling around modules became a thing and after some time it seems like pretty much everyone dropped those interim tools and standardized on the official tooling. I feel like this actually shows that the proliferation of tooling can actually be stopped even if it didn't exist early on, provided that there's a process for making it official.
I've never worked on a super big project, so when it comes to Python dependencies the issue I always had is some C/C++ packages trying to build locally and failing. While this is mostly a problem on Windows, I've encountered it on Mac as well. I assume uv doesn't have a way to solve this?
By a _substantial_ margin, because the best bang-for-your-buck strategy with smartphones for a long time has been to buy used or refurbished popular flagships from the last one or two years. As much as I like what Xperias are doing with a headphone jack and an SD card slot, the used market for them is almost non-existent. Even if you somehow manage to get a good deal, it will be even more difficult to find a good case and accessories like a reliable magnetic wallet; the market just isn't there.
I myself have settled on using a Pixel with a headphone jack DAC dongle and an external hard drive.
There are some mostly reliable ones out there on the pricier end, but the catch is that they are almost exclusive to flagships. For the extra-cautious, some even have "Find My Device" compatibility baked in.
In this case the advantage is operators for running Postgres.
With Docker Compose, the abstraction level you're dealing with is containers, which means in this case you're saying "run the postgres image and mount the given config and the given data directory". When running the service, you need to know how to operate the software within the container.
Kubernetes at its heart is an extensible API server, which allows so-called "operators" to create custom resources and react to them. In the given case, this means that a Postgres operator defines, for example, a PostgresDatabaseCluster resource, and then contains control loops to turn these resources into actual running containers.
That way, you don't necessarily need to know how postgres is configured and that it requires a data directory mount. Instead, you create a resource that says "give me a postgres 15 database with two instances for HA fail-over", and the operator then goes to work and manages the underlying containers and volumes.
Essentially, operators in Kubernetes allow you to manage these services at a much higher level.
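To make that concrete, here's a sketch using the Kubernetes Python client against the Zalando postgres operator's CRD; the group and field names are from that operator's docs as I remember them, so double-check before relying on this:

```python
from kubernetes import client, config

config.load_kube_config()

# The operator watches for this custom resource and creates the
# StatefulSets, services, and volumes needed to satisfy it.
cluster = {
    "apiVersion": "acid.zalan.do/v1",
    "kind": "postgresql",
    "metadata": {"name": "acid-demo-cluster"},  # name carries the teamId prefix
    "spec": {
        "teamId": "acid",
        "numberOfInstances": 2,          # primary plus one replica for HA
        "volume": {"size": "10Gi"},
        "postgresql": {"version": "15"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="acid.zalan.do",
    version="v1",
    namespace="default",
    plural="postgresqls",
    body=cluster,
)
```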
Docker Compose (ignoring Swarm, which seems to be obsolete) manages containers on a single machine. With Kubernetes, the pod that hosts the database is a pod like any other (I assume). It gets moved to a healthy machine when a node goes bad, respects CPU/mem limits, works with generic monitoring tools, can be deployed from GitOps tools, etc. All the k8s goodies apply.
When it comes to a DB, moving the process around is easy; it's the data that matters. The reason bare-metal-hosted DBs are so fast is that they use direct-attach storage instead of networked storage. You lose those speed advantages if you move to distributed storage (Ceph/etc).
You don’t need to use networked storage; the Zalando postgres operator just uses local storage on the host. It uses a StatefulSet underneath so that pods stay on the same node until you migrate them.
But if I'm pinning it to dedicated machines, then Kubernetes does not give me anything, yet I still have to deal with its tradeoffs and moving parts - which from experience are more likely to bring me down than actual hardware failure.
It’s not like anyone’s recommending you set up k8s just to use Postgres. The advice is that, if you’re already using k8s, the Postgres operator is pretty great, and you should try it instead of using a hosted Postgres offering or having a separate set of dedicated (non-k8s) servers just for Postgres.
I will say that even though the StatefulSet pins the pod to a node, it still has advantages. The StatefulSet can be scaled to N nodes, and if one goes down, failover is automatic. Then you have a choice as an admin to either recover the node, or just delete the pod and let the operator recreate it on some other node. When it gets recreated, it resyncs from the new primary and becomes a replica and you’re back to full health, it’s all pretty easy IMO.
I run PostgreSQL+Patroni on Kubernetes where each instance is a separate StatefulSet pinned to dedicated hosts, with data on local ZFS volumes, provisioned by the OpenEBS controller.
I do this for multiple reasons; one is that I find it easier to use Kubernetes as the backend for Patroni, rather than running/securing/maintaining yet another etcd cluster. But I also do it for observability: it's much nicer to be able to pull all the metrics and logs from all the components. Sure, it's possible to set that up without Kubernetes, but why would I, if I can have the logs delivered just one way. Plus, I prefer how self-documenting the whole thing is. No one likes YAML manifests, but they are essentially running documentation that can't get out of sync.
The assumption is that you’re already using Kubernetes, sorry.
Docker compose has always been great for running some containers on a local machine, but I’ve never found it to be great for deployments with lots of physical nodes. k8s is certainly complex, but the complexity really pays off for larger deployments IMO.
I hate that this is starting to sound like a bot Q&A, but the primary advantages are secure remote configuration, platform agnosticism, multi-node orchestration, built-in load balancing and a services framework, way more networking control than Docker, better security, self-healing, and the list goes on. You have to read more about it to really understand the advantages over Docker.
The author is a bit uncharitable. "I have nothing to hide" usually is a shorthand for "it would be imprudent and inconvenient to dedicate my limited time and resources to an abstract good like privacy. This would not be the case if I had something illegal or reputation-ruining to hide." Nobody denies the value of privacy, but practicality beats purity in the eyes of a person who doesn't have a particular ideological conviction.
It can also mean: I prefer the police catching murderers, and I'm fine when wife cheaters get caught in the dragnet.
Privacy advocates never admit that there is not only a "next" government abusing surveillance, but also a "current" one, which uses surveillance for beneficial purposes.
I am a privacy advocate, and am also disappointed at how narrow-minded some of the arguments of privacy advocates are.
"Banning encrypted chat will just mean the bad people moved to banned platforms". Perhaps, but some bad people have to operate where victims are (Facebook stalkers, eBay cons, ...)
"Police should be forced to just do... actual police work."
It's pretty reasonable for police to want to increase the chances and speed of resolution.
We should champion and defend privacy, in spite of the good reasons to weaken it. There's no need for strawmen.
Right, but that practicality is predicated on the ability to switch to a more privacy-focused posture later on. And the point of the blog post is that when you finally reach for it, it won't be there.
Not arguing with that specific claim, but the author claims to hold a "special kind of contempt" for ordinary people making practical day-to-day choices. That attitude is much more hostile.
The catch is that there could be a version mismatch between the Python installed on the end user's computer and the version the script was developed against. This problem can be solved with uv; there isn't really a Python-native way to handle it.