One problem with academic research publications is known as the “file-drawer” problem. If you do research on a subject and your results are not statistically significant, it is very hard to get them published. While I am certainly no fan of significance testing, this problem is not really caused by over-reliance on p-values: the same thing would happen if we used effect sizes or confidence intervals. If (say) 10 researchers study a field, the one whose results are most highly significant (or whose effects are largest) is the most likely to get published, and the most likely to get accepted at a higher-prestige journal.
This gives a distorted view of reality. It’s as if you had 10 friends tossing coins, and you only reported the one who got the most heads. Then you concluded that the coin was unfair. Or, to be more realistic, each friend would get a random sample of coins, toss all of them a bunch of times, and record the proportion of heads. Then the one with the highest proportion of heads would submit an article showing that all coins were biased. And no one would know that the other coin-tossers were out there.
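To see how strong this selection effect is, here is a minimal simulation sketch (my illustration, not from any actual study): ten hypothetical researchers each toss a perfectly fair coin, and only the largest proportion of heads gets “published”. The published results average well above 50% even though every coin is fair.

```python
import random

random.seed(1)

N_RESEARCHERS = 10     # hypothetical number of independent coin-tossers
N_TOSSES = 100         # tosses per researcher
N_REPLICATIONS = 1000  # how many times we repeat the whole "field"

published = []  # proportion of heads reported by the "winner" each time
for _ in range(N_REPLICATIONS):
    # Each researcher tosses a fair coin and records the proportion of heads.
    proportions = [
        sum(random.random() < 0.5 for _ in range(N_TOSSES)) / N_TOSSES
        for _ in range(N_RESEARCHERS)
    ]
    # Only the most extreme result gets "published".
    published.append(max(proportions))

print("True probability of heads:     0.50")
print(f"Average published proportion:  {sum(published) / len(published):.3f}")
```

With these (made-up) numbers, the published proportion comes out around 0.56 rather than 0.50, purely because the nine less extreme results stay in the file drawer.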
I consult with a lot of people who are doing dissertation research, and they don’t have this problem, at least not if their committee is fair. They write a proposal, which is critiqued. Once the proposal is accepted, they do the research. Provided that they have done what they said they would do, their dissertation gets approved, even if the results are not significant (or not large).
What if academic papers worked the same way?
You would get an idea for some research and write a proposal. Then you would send it to a journal editor, and it would be accepted, rejected, or given the always-popular “revise and resubmit”. Then you would gather data and do what you said you would do. And, provided you had done that, the paper would get published.