When I was in graduate school, one professor actually said “If you don’t understand the results, just report the p-value. That’s what matters.” This is statistical stupidity.
Why is it stupid to say that you should just report the p-value?
There are two big reasons why the above quote is stupid.
First, it encourages statistical ignorance. This is not a good idea, ever. But this was said by a professor! For a professor to encourage ignorance is profoundly wrong. It is true that people in fields other than statistics need not (and usually cannot) become fully expert in statistics. That’s what statisticians are for! But there is a big difference between 1) being able to select a statistical analysis, prepare the data, run the analysis (and probably amend it), and then summarize the results, and 2) being able to understand the results. If you don’t understand the results, find someone who does.
Second, it encourages use of the wrong thing! The p-value is not “what matters”. The p-value answers a very specific question (and only in certain circumstances); it is not usually the question that we are interested in. Here is the question that a p-value answers:
If, in the population from which this sample was randomly selected, the null hypothesis is true, then what is the probability that we would get a test statistic at least as extreme as the one we got?
The null hypothesis is nearly always that there is no effect (e.g. no difference between groups; no relationship between variables).
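To make that question concrete, here is a small simulation (with made-up numbers, purely for illustration) that computes a p-value directly from its definition, using a permutation test on the difference between two group means:

```python
import random
import statistics

random.seed(0)

# Hypothetical data (made-up numbers, purely for illustration).
group_a = [5.1, 4.8, 6.2, 5.5, 5.9, 4.7, 5.3, 6.0]
group_b = [5.6, 6.1, 6.8, 5.9, 6.4, 6.6, 5.8, 7.0]

observed = statistics.mean(group_b) - statistics.mean(group_a)

# Permutation test: if the null (no difference between groups) were true,
# the group labels would be arbitrary, so we can shuffle them and count how
# often a relabelled difference is at least as extreme as the observed one.
n_a = len(group_a)
pooled = group_a + group_b
n_perm = 10_000
extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n_a:]) - statistics.mean(pooled[:n_a])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / n_perm
print(f"observed difference: {observed:.3f}, p = {p_value:.4f}")
```

The p-value here is exactly the answer to the question above: the proportion of label-shufflings (stand-ins for repeated sampling under the null) that produce a difference at least as extreme as the one observed. Notice that it says nothing about how large or important the difference is.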
The certain circumstances include:
- that the sample is randomly drawn
- that we engaged in no ‘model hunting’; that is, that we had specific a priori hypotheses and tested these and no others
- that the assumptions of whatever statistics we used were met
and possibly others. It is rare (at least in many fields) for all of these to be true.
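The ‘model hunting’ requirement matters more than it might seem. A quick simulation (hypothetical numbers: an analyst testing 20 true null hypotheses) shows how badly multiple testing distorts the nominal 5% error rate. It relies on the standard fact that, under a true null with a continuous test statistic, the p-value is uniformly distributed on [0, 1]:

```python
import random

random.seed(1)

alpha = 0.05
n_tests = 20   # hypothetical number of hypotheses 'hunted' through

# Under a true null with a continuous test statistic, the p-value is
# uniform on [0, 1]. Simulate an analyst who runs n_tests independent
# tests of true nulls and reports anything with p < alpha.
n_sim = 100_000
at_least_one = 0
for _ in range(n_sim):
    p_values = [random.random() for _ in range(n_tests)]
    if min(p_values) < alpha:
        at_least_one += 1

rate = at_least_one / n_sim
print(f"P(at least one 'significant' result among {n_tests} null tests) = {rate:.2f}")
# Analytically: 1 - (1 - alpha)**n_tests, about 0.64, not 0.05.
```

So an analyst who hunts through 20 comparisons has roughly a 64% chance of finding at least one “significant” result even when nothing is going on.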
But even if we meet all of these requirements, this is not usually a question we are interested in. Usually the null is not exactly true, and usually we are interested in the size of the effect, not the p-value.
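One way to see why the effect size, not the p-value, is what matters: with a large enough sample, even a trivially small effect yields a tiny p-value. A sketch (the `z_test_p` helper and the numbers are mine, assuming a simple one-sample z-test on a 0.01 standard deviation effect):

```python
import math

def z_test_p(mean_diff, sd, n):
    """Two-sided p-value for a one-sample z-test of a mean difference."""
    z = mean_diff / (sd / math.sqrt(n))
    # Normal tail probability via the error function (standard identity).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A tiny, practically meaningless effect: 0.01 standard deviations.
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: p = {z_test_p(0.01, 1.0, n):.4g}")
```

The effect is identical in all three cases; only the sample size changes, and the p-value swings from nowhere-near-significant to vanishingly small. Reporting the p-value alone tells the reader more about your sample size than about your effect.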