In any form of regression model, we often think of the effects as additive. That is, we suppose that the effect of one variable can be added to the effect of another to get an accurate model. This is never strictly true, but how true is it? Is it true enough? How can we tell?
The traditional method of testing this is to add an interaction term to the model and test it for statistical significance. If it is not significant, we conclude that we can remove it. This approach is flawed for several reasons:
1. It is heavily dependent on sample size. If you have a large sample, small effects will be significant. If you have a small sample, large effects won’t be.
2. An interaction between variables that are not perfectly reliable is measured less reliably than either of the main effects.
3. It doesn’t assess how much the interaction actually changes the model’s predictions.
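To make the traditional approach concrete, here is a minimal sketch of the significance test it relies on, using simulated data (the variable names, effect sizes, and sample size are all hypothetical choices, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Hypothetical data-generating process with a small interaction effect
y = 1.0 + 0.5 * x1 + 0.8 * x2 + 0.15 * x1 * x2 + rng.normal(size=n)

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Standard error of the interaction coefficient, then its t-statistic
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
t_interaction = beta[3] / se[3]
print(f"interaction coef = {beta[3]:.3f}, t = {t_interaction:.2f}")
```

Note how the t-statistic scales with the square root of the sample size: rerunning this with a much larger `n` will make even a tiny interaction "significant," which is exactly objection 1 above.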
Instead, we could graph each model’s predicted values against the observed values (two scatterplots) and then against each other (a third scatterplot) and see how big the differences actually are.
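The comparison above can be sketched as follows: fit the additive and interaction models, then compare their predictions directly. The data and effect sizes here are again hypothetical; the printed summaries stand in for the scatterplots, which one would draw with a plotting library:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
# Hypothetical data with a small interaction effect
y = 1.0 + 0.5 * x1 + 0.8 * x2 + 0.15 * x1 * x2 + rng.normal(size=n)

X_add = np.column_stack([np.ones(n), x1, x2])   # additive model
X_int = np.column_stack([X_add, x1 * x2])       # model with interaction

b_add = np.linalg.lstsq(X_add, y, rcond=None)[0]
b_int = np.linalg.lstsq(X_int, y, rcond=None)[0]

pred_add = X_add @ b_add
pred_int = X_int @ b_int

# How far apart are the two models' predictions, in the units of y?
diff = pred_int - pred_add
print(f"mean |difference| = {np.abs(diff).mean():.3f}")
print(f"max  |difference| = {np.abs(diff).max():.3f}")
# To visualize: scatter pred_add vs y, pred_int vs y, and pred_add vs pred_int
```

If the third scatterplot (`pred_add` vs `pred_int`) hugs the identity line, the interaction changes the predictions very little, regardless of what the significance test says.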