Is it just me or is this piece essentially devoid of any sort of real analysis?
They ran a linter, did a one-liner pretty-print on the output... and then closed with the observation that there are a lot of false positives and potentially 'a bug or two'.
Anyone who has ever run a linter on any sizeable code base could have made that observation without even bothering to run the linter in the first place.
Not really, though. Perhaps the output really is almost entirely false positives, but perhaps he just picked a specific bad example and claimed a generalization that isn't true. Or maybe there is a simple trick to turn off the false positives. Or maybe the next version is much better...
FWIW, I did not see a lot of false positives in the tests I did with clang. On the contrary, I would say that clang tends to be rather conservative, and doesn't report warnings unless they are almost certainly bugs.
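For what it's worth, here's a minimal, hypothetical example of the kind of conservative finding I mean -- running clang --analyze (or scan-build) over it should report a potential leak on the early-return path, which is almost certainly a real bug:

    #include <stdlib.h>

    /* Hypothetical minimal example: the analyzer explores both branches
     * and should report that 'p' leaks on the early-return path -- the
     * kind of finding that is almost certainly a real (if unlikely) bug. */
    int f(int flag)
    {
        int *p = malloc(sizeof(*p));
        if (!p)
            return -1;
        *p = 42;
        if (flag)
            return *p;  /* leak: p is never freed on this path */
        free(p);
        return 0;
    }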
I'd certainly be interested in what others think -- if so inclined please leave a comment on the article, or email me directly.
What's the difference between clang-tidy and cc --analyze?
FWIW, I just this week ran cc --analyze on a codebase that I inherited. It had a few false positives, but it mostly found real (albeit unlikely) bugs. Overall, it was an immensely helpful experience, as it found some bugs in areas of the code which I've never had to explore before, and it kind of forced me to understand those areas.
Static analysis tools tend to get run on the kernel quite often and make for fantastic starting contributions. The issue is, as mentioned, that there are a lot of false positives, and it's often not clear which reports are real. You need a ton of domain knowledge to figure out if some warnings are actually bugs or not.
It would be cool if there were a single place that stored upstream warnings, be they from compilers on various architectures or from static analysis tools like sparse, where maintainers and experienced contributors could flag what is a bug, what isn't, and what requires further investigation. That would probably eliminate a ton of duplicated effort.
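To make the domain-knowledge point concrete, here's a hypothetical sketch (queue_push and its ownership convention are made up for illustration). Whether the report is a leak or a false positive depends entirely on an ownership rule the tool can't see:

    #include <stdlib.h>

    struct buf { char data[64]; };

    /* Hypothetical API: takes ownership of 'b' on success (returns 0),
     * leaves ownership with the caller on failure. */
    extern int queue_push(struct buf *b);

    int produce(void)
    {
        struct buf *b = malloc(sizeof(*b));
        if (!b)
            return -1;
        if (queue_push(b) != 0) {
            free(b);    /* we still own b on failure */
            return -1;
        }
        return 0;       /* a tool may flag b as leaked here, but
                         * ownership moved to the queue -- only someone
                         * who knows the convention can tell that this
                         * is a false positive */
    }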
> You need a ton of domain knowledge to figure out if some warnings are actually bugs or not.
That isn't necessarily true. Coverity has its share of false positives, but for the most part they're quickly dispatched by studying the surrounding code, without needing intimate knowledge of how it works. On the other hand, it readily finds legitimate bugs that are triggered only by certain inputs along certain code paths, exactly the kind that are easily missed by a human.
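To illustrate the sort of path-dependent bug I mean, here's a hypothetical sketch (do_work is made up) where the defect only fires when an error and a retry flag line up:

    #include <stdlib.h>

    extern int do_work(char *buf);   /* hypothetical helper */

    /* Sketch of a path-dependent bug: the use-after-free (and the
     * double free) only happen when do_work fails AND retry is set --
     * a combination a human reviewer skims past but a path-sensitive
     * tool enumerates mechanically. */
    int process(int retry)
    {
        char *buf = malloc(128);
        if (!buf)
            return -1;
        int rc = do_work(buf);
        if (rc != 0 && retry) {
            free(buf);
            rc = do_work(buf);   /* use after free */
        }
        free(buf);               /* double free on the same path */
        return rc;
    }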
However, it can be seen as an opportunity to improve clang-tidy to produce fewer false positives. I've used the clang static analyzer in some of my projects and it was very useful; e.g., clang-modernize (now part of clang-tidy) is great.