
It’s funny to me, as a professional statistician, because most methods popularized by Fisher et al. in the early 1900s are wildly inappropriate for practical problems, especially policy decision science or causal inference.

All the theory behind t-testing, Wald testing, using the derivatives of the log likelihood near the MLE point estimate to also estimate standard errors when no analytical solution exists, ANOVA, instrumental variables, etc.
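To make that last bit concrete, here is a minimal sketch of the Wald/MLE recipe (a toy exponential-rate model of my own, assuming numpy/scipy; not from any particular study): maximize the log likelihood numerically, then read approximate standard errors off the curvature at the MLE.

  # Maximize the log likelihood numerically, then use the curvature
  # (observed information) at the MLE to approximate standard errors.
  # Toy exponential model, parameterized on the log scale so the
  # optimizer never hits the rate > 0 boundary.
  import numpy as np
  from scipy.optimize import minimize

  rng = np.random.default_rng(0)
  data = rng.exponential(scale=2.0, size=50)   # true rate = 0.5

  def neg_log_lik(params):
      log_rate = params[0]
      # exponential log likelihood: n*log(rate) - rate*sum(x)
      return -(len(data) * log_rate - np.exp(log_rate) * data.sum())

  fit = minimize(neg_log_lik, x0=[0.0], method="BFGS")
  mle_log_rate = fit.x[0]
  # BFGS's inverse Hessian of the negative log likelihood approximates
  # the asymptotic covariance of the estimator
  se = np.sqrt(fit.hess_inv[0, 0])
  wald_z = (mle_log_rate - np.log(0.5)) / se   # Wald test of H0: rate = 0.5
  print(np.exp(mle_log_rate), se, wald_z)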

It is in no sense exaggerative or incendiary to say that this whole collection of stuff is truly garbage statistics: it is insanely rife with counter-intuitive results, there are common situations where minor violations of the assumptions can easily lead to statistically significant results of the wrong sign, and common practical needs (like model selection without doing a bunch of pairwise or subset-selection calculations, or correcting for multicollinearity in large regressions where calculating something like variance inflation factors is totally intractable) are difficult or impossible to meet.
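Here is a toy illustration of the wrong-sign failure mode (my own construction, assuming statsmodels; it uses plain omitted-confounder bias as the violated assumption): the treatment genuinely helps, but the naive regression reports a significantly negative effect.

  # Sicker people get treated more often; treatment truly raises the outcome
  # by +1, but the unadjusted OLS fit reports a significant *negative* effect.
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(1)
  n = 2000
  severity = rng.normal(size=n)                                # confounder
  treat = (severity + rng.normal(size=n) > 0).astype(float)    # sicker -> treated
  outcome = 1.0 * treat - 3.0 * severity + rng.normal(size=n)  # true effect is +1

  naive = sm.OLS(outcome, sm.add_constant(treat)).fit()
  adjusted = sm.OLS(outcome, sm.add_constant(np.column_stack([treat, severity]))).fit()
  print(naive.params[1], naive.pvalues[1])        # significantly negative
  print(adjusted.params[1], adjusted.pvalues[1])  # recovers roughly +1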

Modern Bayesian approaches fully and entirely subsume these techniques, and not just for large data (in fact, using Bayesian methods is even more critical for small data), and not because of modern computing frameworks, but because, from the very first principles of null-hypothesis significance testing, that whole field of stats/econometrics is fundamentally incapable of giving evidence or estimates that address the very questions the whole field is built around.
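For a sense of what a drop-in replacement can look like even without a fancy framework, here is a hedged sketch of a small-data comparison of two rates using a conjugate Beta-Binomial posterior (toy numbers of my own, assuming scipy):

  # Small-sample A/B comparison: posterior probability that B's rate
  # exceeds A's, under flat Beta(1, 1) priors.
  import numpy as np
  from scipy import stats

  a_success, a_n = 7, 20
  b_success, b_n = 12, 22

  post_a = stats.beta(1 + a_success, 1 + a_n - a_success)
  post_b = stats.beta(1 + b_success, 1 + b_n - b_success)

  draws = 100_000
  prob_b_better = np.mean(post_b.rvs(draws, random_state=2) >
                          post_a.rvs(draws, random_state=3))
  print(f"P(rate_B > rate_A | data) ~= {prob_b_better:.3f}")
  # The NHST route answers a different question entirely: "how surprising
  # is this difference if the two rates were exactly equal?"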

NHST basically solves a type of inference problem that nobody ever actually has in reality, and which is almost never even approximately close enough to the real question to avoid being misleading.

NHST is like the stats analogue of Javascript: a horrible historical accident that gained market traction despite being utterly and unequivocally a bad choice for the very problem domain it’s intended for. The historical accident of adoption and momentum has set professional computer science back by decades, and will keep doing so until Javascript is eventually wholesale replaced with something whose first principles are actually appropriate.

That same reckoning is now underway in many fields of statistics, as the fundamental unreliability of NHST estimation becomes better understood and drop-in Bayesian replacements become more available.



I don't disagree with anything you've written. The only thing I'd take issue with is placing NHST at the feet of statisticians. Scientists deserve a fair share as well. :-p



