12.3 The Regression Equation
Data rarely fit a straight line exactly. Usually, you must be satisfied with rough predictions. Typically, you have a set of data whose scatter plot appears to “fit” a straight line. This is called a Line of Best Fit or Least-Squares Line.
Example 1
A random sample of 11 statistics students produced the following data, where x is the third exam score out of 80, and y is the final exam score out of 200. Can you predict the final exam score of a random student if you know the third exam score?
x (third exam score)  y (final exam score) 

65  175 
67  133 
71  185 
71  163 
66  126 
75  198 
67  153 
70  163 
71  159 
69  151 
69  159 
Table showing the scores on the final exam based on scores from the third exam.
Scatter plot showing the scores on the final exam based on scores from the third exam.
Try It
SCUBA divers have maximum dive times they cannot exceed when going to different depths. The data in the table show different depths with the maximum dive times in minutes.
Depth (in feet)  Maximum dive time (in minutes) 

50  80 
60  55 
70  45 
80  35 
90  25 
100  22 
 Can you predict the maximum dive time of a random diver if you know the depth?
 What is the maximum dive time if a diver dives at 110 feet?
Show Answer
 max dive time = 127.24 – 1.1143 * depth
 At 110 feet, a diver could dive for only five minutes.
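As a quick check of the answer, a minimal Python sketch using the regression equation given above:

```python
# A sketch: predicting maximum dive time (in minutes) from the
# regression equation in the answer above:
# max dive time = 127.24 - 1.1143 * depth.
def predicted_dive_time(depth_ft):
    """Predicted maximum dive time in minutes at a given depth in feet."""
    return 127.24 - 1.1143 * depth_ft

print(round(predicted_dive_time(110), 2))  # 4.67, i.e. roughly five minutes
```

The prediction at 110 feet is about 4.67 minutes, which is where the "only five minutes" in the answer comes from.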
The third exam score, x, is the independent variable and the final exam score, y, is the dependent variable. We will plot a regression line that best “fits” the data. If each of you were to fit a line “by eye,” you would draw different lines. We can use what is called a least-squares regression line to obtain the best fit line.
Consider the following diagram. Each point of data is of the form (x, y) and each point of the line of best fit using least-squares linear regression has the form [latex]\displaystyle{({x},\hat{{y}})}[/latex].
The [latex]\displaystyle\hat{{y}}[/latex] is read “y hat” and is the estimated value of y. It is the value of y obtained using the regression line. It is not generally equal to y from data.
The term [latex]\displaystyle{y}_{0}-\hat{y}_{0}={\epsilon}_{0}[/latex] is called the “error” or residual. It is not an error in the sense of a mistake. The absolute value of a residual measures the vertical distance between the actual value of y and the estimated value of y. In other words, it measures the vertical distance between the actual data point and the predicted point on the line.

In the diagram above, [latex]\displaystyle{y}_{0}-\hat{y}_{0}={\epsilon}_{0}[/latex] is the residual for the point shown. Here the point lies above the line and the residual is positive.
ε = the Greek letter epsilon
For each data point, you can calculate the residuals or errors,
[latex]{\epsilon}_{i} = {y}_{i}-\hat{y}_{i}[/latex] for i = 1, 2, 3, …, 11.
Each ε is a vertical distance.
For the example about the third exam scores and the final exam scores for the 11 statistics students, there are 11 data points. Therefore, there are 11 ε values. If you square each ε and add, you get
[latex]\displaystyle{({\epsilon}_{{1}})}^{{2}}+{({\epsilon}_{{2}})}^{{2}}+\ldots+{({\epsilon}_{{11}})}^{{2}}=\sum_{{{i}={1}}}^{{11}}{\epsilon}_{i}^{{2}}[/latex]
This is called the Sum of Squared Errors (SSE).
Using calculus, you can determine the values of a and b that make the SSE a minimum. When you make the SSE a minimum, you have determined the points that are on the line of best fit. It turns out that the line of best fit has the equation:
[latex]\displaystyle\hat{{y}}={a}+{b}{x}[/latex]
where [latex]\displaystyle{a}=\overline{y}-{b}\overline{{x}}[/latex] and [latex]{b}=\frac{{\sum{({x}-\overline{{x}})}{({y}-\overline{{y}})}}}{{\sum{({x}-\overline{{x}})}^{{2}}}}[/latex].
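These formulas can be applied directly to the data from Example 1. A minimal Python sketch:

```python
# Computing the least-squares slope and intercept for the exam data from
# b = sum((x - x_bar)*(y - y_bar)) / sum((x - x_bar)**2) and a = y_bar - b*x_bar.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]            # third exam scores
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]  # final exam scores

n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n

b = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
     / sum((xi - x_bar) ** 2 for xi in x))
a = y_bar - b * x_bar

print(round(a, 2), round(b, 2))  # -173.51 4.83
```

The result matches the best-fit line for this example, ŷ = –173.51 + 4.83x.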
The sample means of the x-values and the y-values are [latex]\displaystyle\overline{{x}}[/latex] and [latex]\overline{{y}}[/latex].
The slope b can be written as [latex]\displaystyle{b}={r}{\left(\frac{{s}_{{y}}}{{s}_{{x}}}\right)}[/latex] where
 s_{y} = the standard deviation of the y-values,
 s_{x} = the standard deviation of the x-values,
 r is the correlation coefficient, which is discussed in the next section.
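As a check, the two forms of the slope agree. A sketch using Python's standard `statistics` module, with r computed from its definitional form Σ(x − x̄)(y − ȳ) / ((n − 1)·s_x·s_y):

```python
import statistics as stats

# Verifying b = r * (sy / sx) for the exam data.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

n = len(x)
sx, sy = stats.stdev(x), stats.stdev(y)      # sample standard deviations
x_bar, y_bar = stats.mean(x), stats.mean(y)
r = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)

b = r * (sy / sx)
print(round(b, 2))  # 4.83
```

This gives the same slope as the Σ(x − x̄)(y − ȳ)/Σ(x − x̄)² formula above.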
Least Squares Criteria for Best Fit
The process of fitting the best-fit line is called linear regression. The idea behind finding the best-fit line is based on the assumption that the data are scattered about a straight line. The criterion for the best-fit line is that the sum of the squared errors (SSE) is minimized, that is, made as small as possible. Any other line you might choose would have a higher SSE than the best-fit line. This best-fit line is called the least-squares regression line.
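This criterion can be illustrated numerically: the best-fit line for the exam data has a smaller SSE than any other line, including (as a sketch) a nearby line with a shifted intercept:

```python
# A small check of the least-squares criterion: the best-fit line for the
# exam data has a smaller SSE than a nearby line with a shifted intercept.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

def sse(a, b):
    """Sum of squared errors of the line y-hat = a + b*x over the data."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

best_sse = sse(-173.51, 4.83)   # the least-squares line from this section
other_sse = sse(-170.0, 4.83)   # an arbitrary competing line
print(best_sse < other_sse)  # True
```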
Note: Computer spreadsheets, statistical software, and many calculators can quickly calculate the best-fit line and create the graphs. The calculations tend to be tedious if done by hand. Instructions to use the TI-83, TI-83+, and TI-84+ calculators to find the best-fit line and create a scatterplot are shown at the end of this section.
Third Exam vs Final Exam Example
The graph of the line of best fit for the third-exam/final-exam example is as follows:
The least-squares regression line (best-fit line) for the third-exam/final-exam example has the equation:
[latex]\displaystyle\hat{{y}}=-{173.51}+{4.83}{x}[/latex]
Remember, it is always important to plot a scatter diagram first. If the scatter plot indicates that there is a linear relationship between the variables, then it is reasonable to use a best fit line to make predictions for y given x within the domain of x-values in the sample data, but not necessarily for x-values outside that domain. You could use the line to predict the final exam score for a student who earned a grade of 73 on the third exam. You should NOT use the line to predict the final exam score for a student who earned a grade of 50 on the third exam, because 50 is not within the domain of the x-values in the sample data, which are between 65 and 75.
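This advice can be sketched as a prediction function that refuses to extrapolate outside the observed x-range:

```python
# A sketch of prediction with a domain check: the line from this example
# is used only for x-values inside the observed range (65 to 75).
X_MIN, X_MAX = 65, 75  # domain of third-exam scores in the sample

def predict_final(third_exam_score):
    if not X_MIN <= third_exam_score <= X_MAX:
        raise ValueError("outside the sample x-domain; do not extrapolate")
    return -173.51 + 4.83 * third_exam_score

print(round(predict_final(73), 2))  # 179.08
```

So a student who earned 73 on the third exam is predicted to score about 179 on the final, while a request for x = 50 is rejected.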
Understanding Slope
The slope of the line, b, describes how changes in the variables are related. It is important to interpret the slope of the line in the context of the situation represented by the data. You should be able to write a sentence interpreting the slope in plain English.
Interpretation of the Slope: The slope of the bestfit line tells us how the dependent variable (y) changes for every one unit increase in the independent (x) variable, on average.
Third Exam vs Final Exam Example: [latex]\displaystyle\hat{{y}}=-{173.51}+{4.83}{x}[/latex] Slope: The slope of the line is b = 4.83. Interpretation: For a one-point increase in the score on the third exam (x), the final exam score (y) increases by 4.83 points, on average.
Using the Linear Regression T Test: LinRegTTest
 In the STAT list editor, enter the X data in list L1 and the Y data in list L2, paired so that the corresponding (x,y) values are next to each other in the lists. (If a particular pair of values is repeated, enter it as many times as it appears in the data.)
 On the STAT TESTS menu, scroll down with the cursor to select the LinRegTTest. (Be careful to select LinRegTTest, as some calculators may also have a different item called LinRegTInt.)
 On the LinRegTTest input screen enter: Xlist: L1 ; Ylist: L2 ; Freq: 1
 On the next line, at the prompt β or ρ, highlight “≠ 0” and press ENTER
 Leave the line for “RegEq:” blank
 Highlight Calculate and press ENTER.
The output screen contains a lot of information. For now we will focus on a few items from the output, and will return later to the other items.
The second line says y = a + bx. Scroll down to find the values a = –173.513 and b = 4.8273; the equation of the best fit line is ŷ = –173.51 + 4.83x. The two items at the bottom are r^{2} = 0.43969 and r = 0.663. For now, just note where to find these values; we will discuss them in the next two sections.
Graphing the Scatterplot and Regression Line
 We are assuming your X data is already entered in list L1 and your Y data is in list L2
 Press 2nd STATPLOT ENTER to use Plot 1
 On the input screen for PLOT 1, highlight On, and press ENTER
 For TYPE: highlight the very first icon which is the scatterplot and press ENTER
 Indicate Xlist: L1 and Ylist: L2
 For Mark: it does not matter which symbol you highlight.
 Press the ZOOM key and then the number 9 (for menu item “ZoomStat”) ; the calculator will fit the window to the data
 To graph the bestfit line, press the “Y=” key and type the equation –173.5 + 4.83X into equation Y1. (The X key is immediately left of the STAT key). Press ZOOM 9 again to graph it.
 Optional: If you want to change the viewing window, press the WINDOW key. Enter your desired window using Xmin, Xmax, Ymin, Ymax
Note
Another way to graph the line after you create a scatter plot is to use LinRegTTest. Make sure you have done the scatter plot. Check it on your screen. Go to LinRegTTest and enter the lists. At RegEq: press VARS and arrow over to Y-VARS. Press 1 for 1:Function. Press 1 for 1:Y1. Then arrow down to Calculate and do the calculation for the line of best fit. Press Y= (you will see the regression equation). Press GRAPH. The line will be drawn.
The Correlation Coefficient, r
Besides looking at the scatter plot and seeing that a line seems reasonable, how can you tell if the line is a good predictor? Use the correlation coefficient as another indicator (besides the scatterplot) of the strength of the relationship between x and y.
The correlation coefficient, r, developed by Karl Pearson in the early 1900s, is numerical and provides a measure of strength and direction of the linear association between the independent variable x and the dependent variable y.
The correlation coefficient is calculated as [latex]{r}=\frac{{n}\sum{({x}{y})}-{(\sum{x})}{(\sum{y})}}{\sqrt{\left[{n}\sum{x}^{2}-{(\sum{x})}^{2}\right]\left[{n}\sum{y}^{2}-{(\sum{y})}^{2}\right]}}[/latex]
where n = the number of data points.
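As a sketch, the computational formula above applied in Python to the third-exam/final-exam data:

```python
from math import sqrt

# The computational formula for r, applied to the exam data from Example 1.
x = [65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69]
y = [175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)
sum_y2 = sum(yi ** 2 for yi in y)

r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
print(round(r, 4))  # 0.6631
```

This matches the value r = 0.6631 reported for this example later in the section.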
If you suspect a linear relationship between x and y, then r can measure how strong the linear relationship is.
What the VALUE of r tells us: The value of r is always between –1 and +1: –1 ≤ r ≤ 1. The size of the correlation r indicates the strength of the linear relationship between x and y. Values of r close to –1 or to +1 indicate a stronger linear relationship between x and y. If r = 0 there is absolutely no linear relationship between x and y (no linear correlation). If r = 1, there is perfect positive correlation. If r = –1, there is perfect negative correlation. In both these cases, all of the original data points lie on a straight line. Of course, in the real world, this will not generally happen.
What the SIGN of r tells us: A positive value of r means that when x increases, y tends to increase and when x decreases, y tends to decrease (positive correlation). A negative value of r means that when x increases, y tends to decrease and when x decreases, y tends to increase (negative correlation). The sign of r is the same as the sign of the slope, b, of the best-fit line.
Note: Strong correlation does not mean that x causes y or y causes x. We say “correlation does not imply causation.”
(a) A scatter plot showing data with a positive correlation. 0 < r < 1
(b) A scatter plot showing data with a negative correlation. –1 < r < 0
(c) A scatter plot showing data with zero correlation. r = 0
The formula for r looks formidable. However, computer spreadsheets, statistical software, and many calculators can quickly calculate r. The correlation coefficient r is the bottom item in the output screens for the LinRegTTest on the TI-83, TI-83+, or TI-84+ calculator (see previous section for instructions).
The Coefficient of Determination, r^{2}
The variable r^{2} is called the coefficient of determination and is the square of the correlation coefficient, but is usually stated as a percent, rather than in decimal form. It has an interpretation in the context of the data:
 r^{2}, when expressed as a percent, represents the percent of variation in the dependent (predicted) variable y that can be explained by variation in the independent (explanatory) variable x using the regression (bestfit) line.
 1 – r^{2}, when expressed as a percentage, represents the percent of variation in y that is NOT explained by variation in x using the regression line. This can be seen as the scattering of the observed data points about the regression line.
Third Exam vs Final Exam Example: The line of best fit is [latex]\displaystyle\hat{{y}}=-{173.51}+{4.83}{x}[/latex] The correlation coefficient is r = 0.6631 The coefficient of determination is r^{2} = 0.6631^{2} = 0.4397 Interpretation of r^{2} in the context of this example: Approximately 44% of the variation (0.4397 is approximately 0.44) in the final-exam grades can be explained by the variation in the grades on the third exam, using the best-fit regression line. Therefore, approximately 56% of the variation (1 – 0.44 = 0.56) in the final exam grades can NOT be explained by the variation in the grades on the third exam, using the best-fit regression line. (This is seen as the scattering of the points about the line.)
Concept Review
A regression line, or a line of best fit, can be drawn on a scatter plot and used to predict outcomes for the x and y variables in a given data set or sample data. There are several ways to find a regression line, but usually the least-squares regression line is used because it minimizes the sum of the squared errors. Residuals, also called “errors,” measure the vertical distance between the actual value of y and the estimated value of y. Minimizing the Sum of Squared Errors determines the line of best fit. Regression lines can be used to predict values within the given set of data, but should not be used to make predictions for values outside the set of data.
The correlation coefficient r measures the strength of the linear association between x and y. The value of r is always between –1 and +1. When r is positive, x and y tend to increase and decrease together. When r is negative, x and y tend to move in opposite directions: as one increases, the other decreases. The coefficient of determination r^{2} is equal to the square of the correlation coefficient. When expressed as a percent, r^{2} represents the percent of variation in the dependent variable y that can be explained by variation in the independent variable x using the regression line.