In regression models of many kinds (e.g. multiple linear regression), the result includes, by default, an intercept term.
In linear regression with p predictor variables, for example, one result will be a formula like this:

Y = a + b1*X1 + b2*X2 + ... + bp*Xp
What is a? It’s the intercept. What’s the intercept? It is the estimate of Y when all the X are 0. The intercept is vital in one sort of application of regression and almost always unimportant in another. It is vital when we are using regression as a formula to estimate Y from the various X. It is (almost always) unimportant when we are trying to see the strength of the relationship between X and Y. For example, suppose Y is weight (in pounds), X1 is height (in inches) and X2 is a dummy variable that equals 1 if the person is male and 0 otherwise. Then we would get something like this (I am making up these numbers):

Weight = 5 + 2*Height + 5*Male

If we wanted to predict the weight of a man who was 5 feet 10 inches tall (70 inches), we would use the intercept: Weight = 5 + 2*70 + 5*1 = 150 pounds. But if we wanted to see how important height was in predicting weight, the intercept would be irrelevant.
In addition, the intercept is usually not of interest in itself. In the example, it is not really interesting to know that a man who was 0 inches tall would have a predicted weight of 10 pounds.
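To make this concrete, here is a minimal sketch in Python with NumPy. It uses synthetic data (the coefficient values 5, 2, and 5 are illustrative made-up numbers in the spirit of the example, not estimates from real data) and fits the weight model by ordinary least squares. The column of ones in the design matrix is what carries the intercept: the fitted intercept is the predicted weight when Height = 0 and Male = 0, which is exactly why it matters for prediction but is rarely meaningful on its own.

```python
import numpy as np

# Simulate weight = a + b1*height + b2*male plus noise.
# The "true" coefficients are made up for illustration only.
rng = np.random.default_rng(0)
n = 200
height = rng.uniform(58, 78, n)                 # heights in inches
male = rng.integers(0, 2, n).astype(float)      # dummy: 1 = male, 0 = otherwise
a, b1, b2 = 5.0, 2.0, 5.0                       # intercept and slopes (hypothetical)
weight = a + b1 * height + b2 * male + rng.normal(0.0, 1.0, n)

# Design matrix: the leading column of ones is the intercept term.
X = np.column_stack([np.ones(n), height, male])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
intercept = coef[0]

# Prediction for a 70-inch-tall man uses the intercept...
pred = np.array([1.0, 70.0, 1.0]) @ coef
print(round(intercept, 1), round(pred, 1))
```

Note that the intercept is indispensable for computing `pred`, yet the relative importance of height versus sex in the model is read off the slope coefficients, not the intercept.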