So far, regression has been used as a descriptive technique, to measure the relationship between two variables. We now go on to draw inferences from the analysis about what the true regression line might look like. As with correlation, the estimated relationship is in fact a sample regression line, based upon data for 12 countries. The estimated coefficients a and b are random variables, since they would differ from sample to sample. What can be inferred about the true (but unknown) regression equation?
The question is best approached by first writing down the true or population regression equation, in a form similar to the sample regression equation:

Y = α + βX + ε
As usual, Greek letters denote true, or population, values. α and β are thus the population parameters, of which a and b are (point) estimates, obtained by the method of least squares. ε is the population error term. If we could observe the individual error terms ε, then we would be able to obtain exact values of α and β (even from a sample), rather than just estimates.
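The least squares point estimates mentioned above can be computed directly from the standard formulas b = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and a = ȳ − b·x̄. A minimal sketch follows; the data values are purely illustrative (they are not the 12-country data referred to in the text):

```python
def ols(x, y):
    """Return the least squares estimates (a, b) for y = a + b*x."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    # Sums of squares and cross-products about the means
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sxy / sxx          # slope estimate
    a = ybar - b * xbar    # intercept estimate
    return a, b

# Hypothetical illustrative data (not the sample used in the text)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = ols(x, y)
```

The estimates a and b computed this way are the sample counterparts of the unknown parameters α and β.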
Given that a and b are estimates, we can ask about their properties: whether they are unbiased and how precise they are, compared to alternative estimators. Under reasonable assumptions (see Thomas (1993), Chapter 1; Maddala (1992), Chapter 3) it can be shown that the OLS estimates of the coefficients are unbiased. Thus OLS provides useful point estimates of the parameters (the true values α and β). This is one reason for using the least squares method. It can also be shown that, among the class of linear unbiased estimators, the OLS estimators have the smallest variance (the Gauss-Markov theorem), so they are also relatively precise.
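Unbiasedness means that, averaged over many repeated samples, the estimates a and b cluster around the true α and β. This can be illustrated with a small Monte Carlo simulation; the parameter values α = 2.0 and β = 0.5, the sample size of 12, and the standard normal error term are all assumptions chosen for the sketch, not values from the text:

```python
import random

def ols(x, y):
    """Least squares estimates (a, b) for y = a + b*x."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    return ybar - b * xbar, b

# Assumed "true" population parameters for the simulation
alpha, beta = 2.0, 0.5
random.seed(0)
x = [float(i) for i in range(1, 13)]  # 12 observations, like the 12 countries

a_vals, b_vals = [], []
for _ in range(5000):
    # Generate y from the population equation, adding a random error term
    y = [alpha + beta * xi + random.gauss(0.0, 1.0) for xi in x]
    a, b = ols(x, y)
    a_vals.append(a)
    b_vals.append(b)

# The averages of the estimates should lie close to alpha and beta
a_mean = sum(a_vals) / len(a_vals)
b_mean = sum(b_vals) / len(b_vals)
```

Each individual sample gives a different (a, b) pair, but their averages settle near the true values, which is what unbiasedness asserts.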