It would be interesting to consider how much knowledge would never have been uncovered if you were King of Science. All those subtle, barely seen interactions in nature that on further investigation turned out to be something rather special.
Such as? It would also be interesting to explore how many dead ends we wouldn't have wasted time on, and so what other things might have been discovered sooner.
Scientists aren't stupid. No one has seen a paper where a predictor explained 1% of the variance in an outcome and, based solely on a significant p value, decided that was a great foundation for an entire career. The problem, as described by the parent comment, doesn't really exist in funding structures or the scientific literature. It does occur to some degree in media coverage of science.
One could make the case that this has occurred in GWAS, but not because small effect sizes are inconsequential; the statistical methods just weren't able to separate the wheat from the chaff for a while.
An allele that is responsible for 2% of the variation in disease risk might seem inconsequential, but 25 of those together can serve as a polygenic risk score that can predict disease and target treatment.
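Mechanically, a PRS is just a weighted sum: each variant's risk-allele dosage (0, 1, or 2 copies) times its per-allele effect size, summed across variants. Here's a toy sketch in Python; all the weights and genotypes are made up for illustration, not real GWAS estimates:

    import random

    random.seed(0)

    n_variants = 25
    # Hypothetical per-allele weights (e.g. log-odds from a GWAS), each tiny on its own.
    weights = [random.uniform(0.02, 0.10) for _ in range(n_variants)]

    def polygenic_risk_score(dosages, weights):
        # PRS = sum over variants of (dosage_i * beta_i)
        return sum(d * w for d, w in zip(dosages, weights))

    # Two hypothetical individuals: one carrying few risk alleles, one carrying many.
    low = [random.choice([0, 0, 1]) for _ in range(n_variants)]
    high = [random.choice([1, 2, 2]) for _ in range(n_variants)]

    print(polygenic_risk_score(low, weights))   # small aggregate score
    print(polygenic_risk_score(high, weights))  # clearly larger score

No single term moves the score much, but the aggregate separates the two individuals cleanly, which is the whole point of combining many small-effect alleles.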
> Scientists aren't stupid. No one has seen a paper where a predictor explained 1% of the variance in an outcome and, based solely on a significant p value, decided that was a great foundation for an entire career. The problem, as described by the parent comment, doesn't really exist in funding structures or the scientific literature.
Of course they're stupid. Everyone is stupid. That's why we have a "scientific method" and a formal discipline of logic to overcome fallacious reasoning and cognitive biases. If people weren't stupid we wouldn't need any of these disciplines to check our mistakes.
And yes, what you describe does happen all of the time. We literally just had a thread on HN about the failure of the amyloid hypothesis in Alzheimer's and the decades of work wasted on it. Many researchers are still trying to push it as a legitimate therapeutic target despite every clinical trial to date failing spectacularly. As Planck said, science advances one funeral at a time.
Which isn't to say that small effect sizes aren't legitimate research targets either, but if you're after a small effect size, the rigour should be scaled proportionally.
So your example of decades being wasted chasing an initial tiny effect size, happening all the time, was... an example of a failed mechanistic hypothesis that wasn't based on a tiny effect size.
I wasn't trying to post about the effect size specifically, but about general incentives and dead ends. If you want a specific example, though, look no further than aspirin for myocardial infarction:
> A commonly cited example of this problem is the Physicians Health Study of aspirin to prevent myocardial infarction (MI).4 In more than 22 000 subjects over an average of 5 years, aspirin was associated with a reduction in MI (although not in overall cardiovascular mortality) that was highly statistically significant: P < .00001. The study was terminated early due to the conclusive evidence, and aspirin was recommended for general prevention. However, the effect size was very small: a risk difference of 0.77% with r2 = .001—an extremely small effect size. As a result of that study, many people were advised to take aspirin who would not experience benefit yet were also at risk for adverse effects. Further studies found even smaller effects, and the recommendation to use aspirin has since been modified.
Long-term aspirin use has its own risks, like GI bleeds, and the small MI benefit clearly doesn't outweigh those risks.
It's hard to parse that example, because the citation it contains is to a meta-analysis that reports the effect size of aspirin for MI in the PHS as an odds ratio of much greater magnitude. Digging a bit more, here's the actual result: the relative risk reduction was 44%, not 0.77%.
https://www.nejm.org/doi/full/10.1056/NEJM198907203210301
> There was a 44 percent reduction in the risk of myocardial infarction (relative risk, 0.56; 95 percent confidence interval, 0.45 to 0.70; P<0.00001) in the aspirin group (254.8 per 100,000 per year as compared with 439.7 in the placebo group).
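For what it's worth, the 0.77% isn't pulled from nowhere either: both numbers follow from those same rates, they're just different measures. The 44% is a relative risk reduction, while the 0.77% quoted upthread is an absolute risk difference. A back-of-the-envelope check in Python, assuming roughly 5 years of follow-up (the paper's exact figures differ slightly due to age adjustment):

    # Event rates quoted above, per 100,000 person-years.
    aspirin_rate = 254.8 / 100_000
    placebo_rate = 439.7 / 100_000

    rr = aspirin_rate / placebo_rate
    print(f"relative risk ~ {rr:.2f}")           # ~0.58 (paper reports 0.56, age-adjusted)
    print(f"relative reduction ~ {1 - rr:.0%}")  # ~42%, i.e. the '44%' figure

    # Absolute risk difference over ~5 years of follow-up:
    years = 5
    diff = (placebo_rate - aspirin_rate) * years
    print(f"absolute difference ~ {diff:.2%}")   # ~0.9%, the same order as the 0.77% quoted

So a 44% relative reduction in a rare outcome can coexist with a sub-1% absolute risk difference; the two figures are measuring different things.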
I'd agree if you had said from the start that you meant general incentives, especially in pharma development, but that is by and large a different conversation.