It Is Time to Stop Teaching Frequentism to Non-Statisticians (2024)

37 points by Tomte | 35 comments | 5/24/2025, 5:27:49 PM | arxiv.org ↗

Comments (35)

robwwilliams · 1h ago
Yes, it's old, but even worse, it is not a well-argued review. Yes, Bayesian statistics are slowly gaining the upper hand at higher levels of statistics, but you know what should be taught to first-year undergrads in science? Exploratory data analysis! One of the first books I voluntarily read in stats was Mosteller and Tukey's gem, Data Analysis and Regression. Another great book is Judea Pearl's The Book of Why.
nxobject · 43m ago
On the subject of prioritizing EDA:

I need to look this up, but I recall that in the 90s a social psychology journal briefly had a policy of "if you show us you're handling your data ethically, you can just show us a self-explanatory plot if you're conducting simple comparisons instead of NHST". That was after some early discussions about statistical reform in the 90s - Cohen's "The Earth is round (p < .05)" kicked things off, I think.

wiz21c · 1h ago
Definitely. It always amazes me that in many situations, I'm applying some stats algorithm just to conclude: let's look at these data some more...
perrygeo · 1h ago
Frequentist stats aren't wrong. It's just a special case that has been elevated to unreasonable standards. When the physical phenomenon in question is truly random, frequentist methods can be a convenient mathematical shortcut. But should we be teaching scientists the "shortcut"? Should we be forcing every publication to use these shortcuts? Statistics' role in the scientific reproducibility crisis says no.
kccqzy · 1h ago
Frequentist methods are strictly less general. For example, Laplace used probability theory to estimate the mass of Saturn. But with a frequentist interpretation we have to imagine a large number of parallel universes where everything remains the same except for the mass of Saturn. That's overly prescriptive of what probability means. In Bayesian statistics, by contrast, what probability means is strictly more general. You can manipulate probabilities even without fully defining them (maximum entropy), subject to intuitive rules (the sum rule, the product rule, Bayes' theorem), and the results of such manipulation are still correct and useful.
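The sum rule, product rule, and Bayes' theorem mentioned above can be sketched numerically. A minimal grid-approximation example in Python (a hypothetical coin-bias inference standing in for any unknown parameter, such as Laplace's Saturn mass; the flat prior and the data are assumptions for illustration):

```python
import numpy as np

# Hypothetical example: infer a parameter theta (here a coin's bias,
# standing in for any unknown quantity) over a discrete grid.
theta = np.linspace(0.01, 0.99, 99)   # candidate parameter values
prior = np.ones_like(theta)           # flat, maximum-entropy-style prior
prior /= prior.sum()                  # sum rule: total probability is 1

heads, tails = 7, 3                   # assumed observed data
likelihood = theta**heads * (1 - theta)**tails

# Product rule, then Bayes' theorem: posterior is proportional to
# likelihood * prior, renormalized by the total evidence.
posterior = likelihood * prior
posterior /= posterior.sum()

estimate = float((theta * posterior).sum())   # posterior mean
```

No imagined ensemble of parallel universes is needed: the posterior directly expresses a degree of belief over theta given the observed data.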
perrygeo · 16m ago
Drawing a sample of Saturns from an infinite set of Saturns! It's completely absurd, but that's what you get when you take a mathematical tool for coin flips and apply it to larger scientific questions.

I wonder if the generality of the Bayesian approach is what's prevented its wide adoption? Having a prescribed algorithm ready to plug in data is mighty convenient! Frequentism lowered the barrier and let anyone run stats, but more isn't necessarily a good thing.
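The "prescribed algorithm ready to plug in data" point is easy to illustrate. A hedged sketch, assuming SciPy and made-up data: the entire frequentist recipe for a one-sample test collapses into a single library call.

```python
import numpy as np
from scipy import stats

# Made-up sample for illustration only.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=1.0, size=50)

# The plug-in frequentist recipe: one call in, one p-value out.
# No prior, no model of belief, no modeling decisions required.
result = stats.ttest_1samp(data, popmean=0.0)
```

That convenience is exactly the lowered barrier described above: a Bayesian analysis of the same data would first demand a prior and a likelihood model before producing any number.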

bmacho · 2h ago
Article is from 2012, compare [0] and [1].

The pdf got replaced for some reason (a bug, sensitive information in the metadata, or something like that), but the article seems to have stayed the same, except the date.

[0]: https://arxiv.org/pdf/1201.2590v1.pdf

[1]: https://web.archive.org/web/0if_/https://arxiv.org/pdf/1201....

NewsaHackO · 3h ago
It’s weird how random people can submit non-peer-reviewed articles to preprint repos. Why not just use a blog site, medium or substack?
jxjnskkzxxhx · 3h ago
> Why not just use a blog site, medium or substack?

Because it looks more credible, obviously. In a sense it's cargo cult science: people observe this is the style of science, and so copy just the style; to a casual observer it appears to be science.

nickpsecurity · 56m ago
Professional science has been doing that for a long time, if you consider that many published works were never independently tested and replicated. If it comes from a scientist and uses scientific language, many people just repeat it from there.
jxjnskkzxxhx · 47m ago
Overly reductionist. At the same time, a proper rebuttal isn't worth the time for someone who's clearly not looking to understand.
groceryheist · 2h ago
Two reasons:

1. Preprint servers create DOIs, making works better citable.

2. Preprint servers are archives, ensuring works remain accessible.

My blog won't outlive me for long. What happened to GeoCities could also happen to Medium.

SoftTalker · 2h ago
Who would want to cite a random unreviewed preprint?
mitthrowaway2 · 2h ago
You don't get a free pass to not cite relevant prior literature just because it's in the form of an unreviewed preprint.

If you're writing a paper about a longstanding math problem and the solution gets published on 4chan, you still need to cite it.

NooneAtAll3 · 1h ago
tbf, you cite the paper that described and discussed said solution in the more appropriate form
mousethatroared · 57m ago
You cite the form you encountered and if you're any good of a researcher you will have encountered the original 4chan anon post, Borges' short story, or Chomsky's linguistic paper.
bowsamic · 1h ago
It happens way more than you'd expect. In my PhD I used to cite unreviewed preprints that were essential to my work but that, for whatever reason, simply hadn't been pushed to publication. It's more common for long review-style papers.
amelius · 2h ago
Maybe other pseudoscientists who agree with the ideas presented and want to create a parallel universe with alternative facts?
mousethatroared · 34m ago
And people who care more for gatekeeping will stick to academic echo chambers. The list of community driven medical discoveries encountering entrenched professional opposition is quite long.

Both models are fallible, which is why discernment is so important.

billfruit · 3h ago
Why the gatekeeping? Only what is said matters, not who says it.
tsimionescu · 2h ago
That's a cute fantasy, but it doesn't work beyond a tiny scale. Credentials are critical to help filter data - 8 billion people all publishing random info can't be listened to.
SoftTalker · 2h ago
> 8 billion people all publishing random info can't be listened to.

Yet it's what we train LLMs on.

verbify · 1h ago
There's a paper Textbooks are all you need - https://arxiv.org/abs/2306.11644

> We introduce phi-1, a new large language model for code, with significantly smaller size than competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on 8 A100s, using a selection of ``textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as phi-1 that still achieves 45% on HumanEval

We train on the internet because, for example, I speak a fairly niche English dialect influenced by Hebrew, Yiddish and Aramaic, and there are no digitised textbooks or dictionaries that cover this language. I assume the base weights of models are still using high quality materials.

birn559 · 2h ago
Which are known to be unreliable beyond basic things that most people with some relevant experience get right anyway.
tsimionescu · 2h ago
It's what we train LLMs on to make them learn language, a thing that all healthy adult human beings are experts on using. It's definitely not what we train LLMs on if we want them to do science.
BlarfMcFlarf · 2h ago
Peer review specifically checks that what is being said passes scrutiny by experts in the field, so it is very much about what is being said.
SJC_Hacker · 2h ago
Then why isn't it double-blind?
BDPW · 1h ago
Often reviewing is conducted double-blind for exactly this reason. This can be difficult in small fields where you can more or less guess who's working on what, but the intent is definitely there.
mcswell · 1h ago
I've reviewed computational linguistics papers in the past (I'm retired now, and the field is changing out from under me, so I don't do it any more). But all the reviews I did were double-blind.
birn559 · 2h ago
Whether what is said has any merit can be very hard to judge beyond things that are well known.

In addition, peer reviews are anonymous for both sides (as far as possible).

ujkiolp · 2h ago
i would filter your dumb shit
jxjnskkzxxhx · 45m ago
> news.ycombinator.com/user?id=billfruit

> Why the gatekeeping. Only what is said matters, not who says it.

Tell me you have zero media literacy without telling me you have zero media literacy.

watwut · 1h ago
Yeah, that is why 4chan became famous for being the source of trustworthy and valuable scientific research. /s
constantcrying · 36m ago
>It’s weird how random people can submit non peer reviewed articles to preprint repos.

It is weird how people use a platform exactly how it is supposed to be used.

brudgers · 2h ago
Previous submission comments, https://news.ycombinator.com/item?id=32341770