The t-test is a statistical test of whether two sample means (averages) or proportions are equal. It was invented by William Sealy Gosset, who wrote under the pseudonym “Student” to avoid detection by his employer (the Guinness brewery). Guinness prohibited publication by its employees because another employee had divulged trade secrets in writing.
There are also one-sample versions of the t-test, which tell you whether a sample has a mean equal to some fixed value, but these are relatively little used.
When to use a t-test
A t-test can be used to compare two means or proportions. The t-test is appropriate when all you want to do is compare means, and when its assumptions are met (see below). In addition, a t-test is only appropriate when the means (or proportions) are good measures. See my earlier article for guidance on when to use the mean.
Matched and unmatched t-tests
There are two forms of the t-test. In the unmatched t-test, or independent t-test, it is assumed that the two samples are independent. In non-technical language, two samples are independent when knowing something about one does not affect what we know about the other. For example, the average heights of men and women, drawn randomly from a population, are independent, since knowing the height of a particular man tells us nothing about the height of any particular woman. In a matched t-test, the two samples are not independent; for example, the heights of husbands and wives are not independent, since taller men may be married to taller women. More obviously, the lengths of people’s right and left feet are dependent, because knowing the size of a right foot tells us a lot about the size of the left foot.
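The matched t-test handles the dependence by working on within-pair differences, which reduces it to a one-sample test. Here is a minimal sketch in Python rather than SAS or R (the foot-length numbers are made up purely for illustration):

```python
import math

# Hypothetical right- and left-foot lengths (cm) for five people.
right = [26.1, 24.8, 27.3, 25.0, 26.5]
left = [25.9, 25.0, 27.1, 24.7, 26.6]

# A matched (paired) t-test is just a one-sample t-test on the
# per-person differences, so the pairing is never broken.
diffs = [r - l for r, l in zip(right, left)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
t = mean_d / math.sqrt(var_d / n)  # compare to a t distribution with n - 1 df
```

On these made-up numbers t comes out around 0.83 on 4 degrees of freedom, consistent with no systematic left/right difference.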
Assumptions of the t-test
As noted above, the independent samples t-test assumes the two samples are independent. In addition, both forms of the t-test assume that the data (the within-pair differences, in the matched case) are approximately normally distributed, and the classical independent samples t-test further assumes that the variances of the two populations are equal. There are good ways to adjust for unequal variances, provided that the sample sizes of the two samples are approximately equal. However, if the variances are very different and the sample sizes are also very different, then the t-test is not a good choice. In addition, as noted above, the t-test only makes sense when the mean makes sense.
If not the t-test, then what?
If the t-test is not appropriate, then one alternative is a nonparametric test, such as Wilcoxon’s test. Another alternative is a permutation test, or a bootstrap. In my opinion, all three alternatives ought to be used more often.
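A permutation test is easy to carry out for small samples: under the null hypothesis the group labels are exchangeable, so every relabelling of the pooled data is equally likely. Here is a sketch in Python rather than R, using the weight data from the example below; with 5 observations per group there are only C(10, 5) = 252 relabellings, so we can enumerate them all exactly instead of sampling random shuffles:

```python
from itertools import combinations

# The weights from the SAS/R example below: 5 men and 5 women.
men = [140, 180, 188, 210, 190]
women = [120, 190, 129, 120, 130]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(men) - mean(women)  # 181.6 - 137.8 = 43.8
pooled = men + women
n = len(men)

# Count relabellings whose mean difference is at least as extreme
# as the observed one (two-sided).
count = 0
total = 0
for idx in combinations(range(len(pooled)), n):
    group1 = [pooled[i] for i in idx]
    group2 = [pooled[i] for i in range(len(pooled)) if i not in idx]
    total += 1
    if abs(mean(group1) - mean(group2)) >= abs(observed) - 1e-9:
        count += 1
p_value = count / total
```

On these data the two-sided permutation p-value works out to about 0.048, in the same neighborhood as the t-test's 0.038, with no distributional assumptions at all.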
The t-test in SAS
Suppose you wish to test whether men are heavier than women in a given population. If you sample 5 men and 5 women at random, you might get something like this:
Men: 140 180 188 210 190
Women: 120 190 129 120 130
You could read that into SAS® using

data ttest;
input sex $ weight @@;
datalines;
M 140 F 120 M 180 F 190 M 188 F 129 M 210 F 120 M 190 F 130
;
run;

and then run a t-test by using

proc ttest data = ttest;
class sex;
var weight;
run;
The t-test in R
In R, one could read the same data in using

sex <- c(rep('M', 5), rep('F', 5))
weight <- c(140, 180, 188, 210, 190, 120, 190, 129, 120, 130)

and then run a t-test using

t.test(weight ~ sex)
The output looks like
Welch Two Sample t-test
data: weight by sex
t = -2.4982, df = 7.851, p-value = 0.03758
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
sample estimates:
mean in group F mean in group M
          137.8           181.6
This is terser than the SAS output, but says essentially the same thing. However, by default, R uses the Welch t-test, which does not assume that the variances are equal. To get the test with the equal-variance assumption, you would use

t.test(weight ~ sex, var.equal = TRUE)
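Incidentally, with equal group sizes the pooled and Welch t statistics are algebraically identical; only the degrees of freedom differ. Here is a sketch in Python rather than R that checks this on the same data, using the standard pooled-variance and Welch-Satterthwaite formulas:

```python
import math

men = [140, 180, 188, 210, 190]
women = [120, 190, 129, 120, 130]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):  # sample variance, n - 1 denominator
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(women), len(men)
s1, s2 = var(women), var(men)
diff = mean(women) - mean(men)  # R orders the groups F, then M

# Classical (pooled) t-test: one common variance estimate, df = n1 + n2 - 2.
sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
t_pooled = diff / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df_pooled = n1 + n2 - 2

# Welch t-test: separate variances, Welch-Satterthwaite degrees of freedom.
se2 = s1 / n1 + s2 / n2
t_welch = diff / math.sqrt(se2)
df_welch = se2 ** 2 / ((s1 / n1) ** 2 / (n1 - 1) + (s2 / n2) ** 2 / (n2 - 1))
```

Both statistics come out to -2.4982, matching the R output above, while the degrees of freedom are 8 versus Welch's 7.851. With unequal group sizes the two statistics would differ as well.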