The problem with TLA+ is that it is completely disconnected from the actual programming: even when everything is done correctly, it doesn't prevent later code changes from breaking the model, unless work processes are in place to update the related model, revalidate it, and only then push the changes to production.
As mentioned previously, I think the only tools that are really valuable are those that can produce code, naturally with multiple possible languages as backends, which is then used as a library from the application code.
Something like Lean, F*, or Dafny, even if that isn't exactly the same as TLA+.
The point of TLA+ is to act as a simple "toy model" of the actual system in which you can still discover relevant failure modes by automated means. Real-world code is a lot more complex, so "end to end" proofs, while viable in a sense (static type checking is a kind of proof about the code), have very different and more modest goals than something like TLA+.
Executable "toy models" are excellent in practice for property-based regression tests. The invariant is that for all inputs, the behavior of the oracle (the "toy model") matches the behavior of the system under test. AWS calls this approach "lightweight formal methods"[1] and it's a good one. The problem is that, afaict, there's no way to plug a TLA+ oracle into e.g. a Rust proptest suite.
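The oracle pattern itself is simple, though. Here's a minimal Python sketch (plain random sampling stands in for a real property-based testing library like proptest or Hypothesis; the `Store` class is a stand-in for whatever complex implementation you'd actually test):

```python
import random

# "Toy model" oracle: a plain dict models a key-value store.
def oracle_apply(state, op):
    kind, key, *rest = op
    if kind == "put":
        state[key] = rest[0]
        return None
    return state.get(key)  # "get"

# Stand-in system under test: same interface, presumably more complex
# internals in a real system (here just another dict).
class Store:
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def random_op(rng):
    key = rng.choice("abc")
    if rng.random() < 0.5:
        return ("put", key, rng.randrange(100))
    return ("get", key)

# The invariant: for every random sequence of operations, the system
# under test behaves exactly like the oracle.
def check_equivalence(trials=200, ops_per_trial=20, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        model, sut = {}, Store()
        for _ in range(ops_per_trial):
            op = random_op(rng)
            expected = oracle_apply(model, op)
            if op[0] == "put":
                actual = sut.put(op[1], op[2])
            else:
                actual = sut.get(op[1])
            assert actual == expected, (op, expected, actual)
    return True
```

A real PBT framework would add shrinking of failing inputs and smarter generators, but the shape is the same: generate operations, apply them to both model and system, compare.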
Which in my eyes renders it a useless academic exercise, regardless of how renowned the people behind TLA+ happen to be.
Scenario proof modeling that is bound to human translation errors is about as useful in practice as doing PowerPoint-driven software architecture.
Every step of the flow requires an expert in TLA+ and in the actual programming languages and frameworks being used, to manually validate that everything still means exactly the same thing across all layers.
There's plenty of real-world production use of TLA+, including by Amazon. The "toy model" approach may have limitations of a sort but it's far from purely academic - building simplified models of a complex system is routine practice.
The obvious difference with PowerPoint design is that non-trivial failure modes can be surfaced automatically if they're reflected in the toy model - PowerPoint slides don't do this.
You don't even have to use TLA+ itself for this purpose; a Lean development could do the same - but then you would have to put together much of the basic formalism (which ultimately works by combining the "modalities" of time, state, and non-determinism) on your own.
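To give a sense of what "putting together the basic formalism" means, here's a rough Lean 4 sketch of the core TLA-style notions - behaviors as infinite state sequences, with "always" as one temporal modality. All the names here are mine, not from any existing library:

```lean
-- A transition system: an initial-state predicate and a next-state
-- relation (the relation, rather than a function, captures non-determinism).
structure System (σ : Type) where
  init : σ → Prop
  next : σ → σ → Prop

-- A behavior is an infinite sequence of states.
def Behavior (σ : Type) := Nat → σ

-- A behavior satisfies a system if it starts in an initial state and
-- every step is allowed by the next-state relation.
def satisfies {σ : Type} (sys : System σ) (b : Behavior σ) : Prop :=
  sys.init (b 0) ∧ ∀ n, sys.next (b n) (b (n + 1))

-- The "always" modality: a state predicate holds at every point in time.
def always {σ : Type} (P : σ → Prop) (b : Behavior σ) : Prop :=
  ∀ n, P (b n)

-- An invariance claim then reads: every behavior of sys always satisfies P.
def invariant {σ : Type} (sys : System σ) (P : σ → Prop) : Prop :=
  ∀ b : Behavior σ, satisfies sys b → always P b
```

TLA+ bakes all of this (plus fairness, stuttering, refinement, and a model checker) into the tool; in Lean you'd be rebuilding it, but you'd gain a full proof assistant in exchange.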
Or it might be that the model doesn't really guard against all possible human failures when translating TLA+ into Java, C++ template metaprogramming, or whatever.