Simple linear regression
File:Linear regression.svg 
In statistics, simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable. In other words, simple linear regression fits a straight line through the set of n points in such a way that makes the sum of squared residuals of the model (that is, vertical distances between the points of the data set and the fitted line) as small as possible.
The adjective simple refers to the fact that this regression is one of the simplest in statistics. The slope of the fitted line is equal to the correlation between y and x corrected by the ratio of standard deviations of these variables. The intercept of the fitted line is such that the line passes through the center of mass (x̄, ȳ) of the data points.
Other regression methods besides the simple ordinary least squares (OLS) also exist (see linear regression). In particular, when one wants to do regression by eye, one usually tends to draw a slightly steeper line, closer to the one produced by the total least squares method. This occurs because it is more natural for one's mind to consider the orthogonal distances from the observations to the regression line, rather than the vertical ones as the OLS method does.
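The difference between fitting by vertical distances (OLS) and by orthogonal distances (total least squares) can be seen numerically. The sketch below is illustrative only and is not part of the original article: it generates synthetic data and compares the two slopes, obtaining the total-least-squares slope from the leading eigenvector of the sample covariance matrix (which assumes equal error variances in x and y).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=1.5, size=200)   # noisy linear relationship

# Ordinary least squares: minimizes vertical distances
beta_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)

# Total least squares: minimizes orthogonal distances; the fitted direction
# is the leading eigenvector of the 2x2 sample covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(np.vstack([x, y])))
v = eigvecs[:, np.argmax(eigvals)]
beta_tls = v[1] / v[0]

print(beta_ols, beta_tls)   # the TLS slope is typically the steeper of the two
</syntaxhighlight>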
Fitting the regression line
Suppose there are n data points {(x_{i}, y_{i}), i = 1, ..., n}. The function that describes the relationship between x and y is: <math> y_i = \alpha + \beta x_i + \varepsilon_i.</math>
The goal is to find the equation of the straight line
 <math> y = \alpha + \beta x,</math>
which would provide a "best" fit for the data points. Here the "best" will be understood as in the least-squares approach: a line that minimizes the sum of squared residuals of the linear regression model. In other words, α (the y-intercept) and β (the slope) solve the following minimization problem:
 <math>\text{Find }\min_{\alpha,\,\beta} Q(\alpha,\beta), \qquad \text{for } Q(\alpha,\beta) = \sum_{i=1}^n\varepsilon_i^{\,2} = \sum_{i=1}^n (y_i - \alpha - \beta x_i)^2\ </math>
By using either calculus, the geometry of inner product spaces, or simply expanding to get a quadratic expression in α and β, it can be shown that the values of α and β that minimize the objective function Q^{[1]} are
 <math>\begin{align}
\hat\beta &= \frac{ \sum_{i=1}^{n} (x_{i}-\bar{x})(y_{i}-\bar{y}) }{ \sum_{i=1}^{n} (x_{i}-\bar{x})^2 } \\[6pt]
&= \frac{ \sum_{i=1}^{n}{x_{i}y_{i}} - \frac1n \sum_{i=1}^{n}{x_{i}}\sum_{j=1}^{n}{y_{j}}}{ \sum_{i=1}^{n}{x_{i}^2} - \frac1n (\sum_{i=1}^{n}{x_{i}})^2 } \\[6pt]
&= \frac{ \overline{xy} - \bar{x}\bar{y} }{ \overline{x^2} - \bar{x}^2 } \\[6pt]
&= \frac{ \operatorname{Cov}[x,y] }{ \operatorname{Var}[x] } \\[6pt]
&= r_{xy} \frac{s_y}{s_x}, \\[6pt]
\hat\alpha &= \bar{y} - \hat\beta\,\bar{x},
\end{align}</math>
where r_{xy} is the sample correlation coefficient between x and y; s_{x} is the standard deviation of x; and s_{y} is correspondingly the standard deviation of y. A horizontal bar over a quantity indicates the sample average of that quantity. For example:
 <math>\overline{xy} = \tfrac{1}{n} \sum_{i=1}^n x_iy_i.</math>
Substituting the above expressions for <math>\hat\alpha</math> and <math>\hat\beta</math> into
 <math> f = \hat\alpha + \hat\beta x,</math>
yields
 <math>\frac{ f-\bar{y}}{s_y} = r_{xy} \frac{ x-\bar{x}}{s_x} </math>
This shows the role r_{xy} plays in the regression line of standardized data points. It is sometimes useful to calculate r_{xy} from the data independently using this equation:
 <math>r_{xy} = \frac{ \overline{xy} - \bar{x}\bar{y} }{ \sqrt{ (\overline{x^2} - \bar{x}^2)(\overline{y^2} - \bar{y}^2) } } </math>
The coefficient of determination (R squared) is equal to <math>r_{xy}^2</math> when the model is linear with a single independent variable. See sample correlation coefficient for additional details.
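The closed-form expressions above translate directly into a few lines of code. The following is a minimal sketch (with made-up data, not from the article) that computes <math>\hat\beta</math>, <math>\hat\alpha</math>, r_{xy} and the coefficient of determination, and checks the result against numpy.polyfit.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# slope: Cov[x, y] / Var[x]
beta_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
# intercept: the fitted line passes through the centre of mass (x_bar, y_bar)
alpha_hat = y_bar - beta_hat * x_bar

# sample correlation coefficient and coefficient of determination
r_xy = np.sum((x - x_bar) * (y - y_bar)) / np.sqrt(
    np.sum((x - x_bar) ** 2) * np.sum((y - y_bar) ** 2))
r_squared = r_xy ** 2

print(beta_hat, alpha_hat, r_squared)
print(np.polyfit(x, y, 1))   # returns [slope, intercept]; should agree
</syntaxhighlight>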
Linear regression without the intercept term
Sometimes, people consider a simple linear regression model without the intercept term, y = βx. In such a case, the OLS estimator for β simplifies to
 <math>\hat\beta = \frac{ \sum_{i=1}^{n}{x_{i}y_{i}} }{ \sum_{i=1}^{n}{x_{i}^2} } = \frac{\overline{x y}}{\overline{x^2}} </math>
and the sample correlation coefficient becomes
 <math>r_{xy} = \frac{ \overline{xy} }{ \sqrt{ (\overline{x^2}) (\overline{y^2}) } } </math>
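As a quick illustration (a sketch with made-up numbers), the no-intercept estimator reduces to a single ratio of sums:

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.2, 3.9, 6.1, 7.8])

# OLS slope when the line is forced through the origin
beta_hat = np.sum(x * y) / np.sum(x ** 2)
print(beta_hat)   # roughly 2 for these data
</syntaxhighlight>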
Numerical properties
 The line goes through the "center of mass" point (x̄, ȳ).
 The sum of the residuals is equal to zero, if the model includes a constant: <math>\textstyle\sum_{i=1}^n\hat\varepsilon_i=0.</math>
 The linear combination of the residuals, in which the coefficients are the x-values, is equal to zero: <math>\textstyle\sum_{i=1}^nx_i\hat\varepsilon_i=0.</math> Both residual properties are checked numerically in the sketch below.
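The following sketch (illustrative, with made-up data) fits a line using the formulas above and confirms that the residuals sum to zero and are orthogonal to the x-values, up to floating-point round-off.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
residuals = y - (alpha_hat + beta_hat * x)

print(residuals.sum())        # ~0: the residuals sum to zero
print((x * residuals).sum())  # ~0: the residuals are orthogonal to the x-values
</syntaxhighlight>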
Model-based properties
Describing the statistical properties of the simple linear regression estimators requires the use of a statistical model. The following is based on assuming the validity of a model under which the estimates are optimal. It is also possible to evaluate the properties under other assumptions, such as inhomogeneity, but this is discussed elsewhere.
Unbiasedness
The estimators <math>\hat\alpha</math> and <math>\hat\beta</math> are unbiased. This requires that we interpret the estimators as random variables and so we have to assume that, for each value of x, the corresponding value of y is generated as a mean response α + βx plus an additional random variable ε called the error term. This error term has to be equal to zero on average, for each value of x. Under such interpretation, the leastsquares estimators <math>\hat\alpha</math> and <math>\hat\beta</math> will themselves be random variables, and they will unbiasedly estimate the "true values" α and β.
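Unbiasedness can be illustrated by simulation (this is a sketch, not a proof): data are generated repeatedly from y = α + βx + ε with known coefficients and zero-mean errors, and the estimates are averaged over the replications; the averages come out close to the true values.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)
alpha_true, beta_true = 1.5, 0.8
x = np.linspace(0.0, 10.0, 50)            # fixed design points

alphas, betas = [], []
for _ in range(5000):
    eps = rng.normal(scale=2.0, size=x.size)   # zero-mean error term
    y = alpha_true + beta_true * x + eps
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    alphas.append(a)
    betas.append(b)

print(np.mean(alphas), np.mean(betas))    # approximately 1.5 and 0.8
</syntaxhighlight>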
Confidence intervals
The formulas given in the previous section allow one to calculate the point estimates of α and β, that is, the coefficients of the regression line for the given set of data. However, those formulas do not tell us how precise the estimates are, i.e., how much the estimators <math>\hat\alpha</math> and <math>\hat\beta</math> vary from sample to sample for the specified sample size. So-called confidence intervals were devised to give a plausible set of values the estimates might have if one repeated the experiment a very large number of times.
The standard method of constructing confidence intervals for linear regression coefficients relies on the normality assumption, which is justified if either:
 the errors in the regression are normally distributed (the so-called classic regression assumption), or
 the number of observations n is sufficiently large, in which case the estimator is approximately normally distributed.
The latter case is justified by the central limit theorem.
Normality assumption
Under the first assumption above, that of the normality of the error terms, the estimator of the slope coefficient will itself be normally distributed with mean β and variance <math style="height:1.5em">\sigma^2/\sum(x_i-\bar{x})^2,</math> where σ^{2} is the variance of the error terms (see Proofs involving ordinary least squares). At the same time the sum of squared residuals Q is distributed proportionally to χ^{2} with n−2 degrees of freedom, and independently from <math style="height:1.5em">\hat\beta.</math> This allows us to construct a t-statistic
 <math>t = \frac{\hat\beta - \beta}{s_{\hat\beta}}\ \sim\ t_{n-2},</math>
where
 <math> s_{\hat\beta} = \sqrt{ \frac{\tfrac{1}{n-2}\sum_{i=1}^n \hat{\varepsilon}_i^{\,2}} {\sum_{i=1}^n (x_i - \bar{x})^2} }</math>
is the standard error of the estimator <math style="height:1.5em">\hat\beta.</math>
This t-statistic has a Student's t-distribution with n−2 degrees of freedom.
Using it we can construct a confidence interval for β:
 <math> \beta \in \left[\hat\beta - s_{\hat\beta} t^*_{n-2},\ \hat\beta + s_{\hat\beta} t^*_{n-2}\right],</math>
at confidence level (1−γ), where <math>t^*_{n-2}</math> is the (1−γ/2)-th quantile of the t_{n−2} distribution. For example, if γ = 0.05 then the confidence level is 95%.
Similarly, the confidence interval for the intercept coefficient α is given by
 <math>\alpha \in \left[ \hat\alpha - s_{\hat\alpha} t^*_{n-2},\ \hat\alpha + s_{\hat\alpha} t^*_{n-2}\right],</math>
at confidence level (1−γ), where
 <math>s_{\hat\alpha} = s_{\hat\beta}\sqrt{\tfrac{1}{n}\textstyle\sum_{i=1}^n x_i^2} = \sqrt{\tfrac{1}{n(n-2)}\left(\textstyle\sum_{j=1}^n \hat{\varepsilon}_j^{\,2} \right) \frac{\sum_{i=1}^n x_i^2} {\sum_{i=1}^n (x_i - \bar{x})^2} }</math>
The confidence intervals for α and β give us the general idea where these regression coefficients are most likely to be. For example, in the "Okun's law" regression shown at the beginning of the article, the point estimates are
 <math>\hat\alpha=0.859, \qquad \hat\beta=-1.817.</math>
The 95% confidence intervals for these estimates are
 <math>\alpha\in\left[0.76, 0.96\right], \qquad \beta\in\left[-2.06, -1.58 \right].</math>
In order to represent this information graphically, in the form of the confidence bands around the regression line, one has to proceed carefully and account for the joint distribution of the estimators. It can be shown^{[citation needed]} that at confidence level (1−γ) the confidence band has hyperbolic form given by the equation
 <math>\hat{y}_{x=\xi} \in \left[ \hat\alpha + \hat\beta \xi \pm t^*_{n-2} \sqrt{ \left(\frac{1}{n-2} \sum\hat{\varepsilon}_i^{\,2} \right) \cdot \left(\frac{1}{n} + \frac{(\xi-\bar{x})^2}{\sum(x_i-\bar{x})^2}\right)}\right].</math>
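These interval formulas are straightforward to evaluate. The sketch below (illustrative, with made-up data) computes the standard errors <math>s_{\hat\beta}</math> and <math>s_{\hat\alpha}</math>, the 95% confidence intervals, and the half-width of the confidence band at a chosen point ξ, using scipy.stats.t for the quantile t*_{n−2}.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 3.8, 6.2, 7.9, 9.7, 12.4])
n = x.size

beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - (alpha_hat + beta_hat * x)

s2 = np.sum(resid ** 2) / (n - 2)                     # (1/(n-2)) * sum of squared residuals
s_beta = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))    # standard error of beta_hat
s_alpha = s_beta * np.sqrt(np.sum(x ** 2) / n)        # standard error of alpha_hat

t_star = stats.t.ppf(0.975, df=n - 2)                 # (1 - gamma/2) quantile, gamma = 0.05
print("beta: ", beta_hat - t_star * s_beta, beta_hat + t_star * s_beta)
print("alpha:", alpha_hat - t_star * s_alpha, alpha_hat + t_star * s_alpha)

# half-width of the hyperbolic confidence band at a point xi
xi = 3.5
half_width = t_star * np.sqrt(s2 * (1.0 / n + (xi - x.mean()) ** 2
                                    / np.sum((x - x.mean()) ** 2)))
print("band at xi:", alpha_hat + beta_hat * xi, "+/-", half_width)
</syntaxhighlight>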
Asymptotic assumption
The alternative second assumption states that when the number of points in the dataset is "large enough", the law of large numbers and the central limit theorem become applicable, and then the distribution of the estimators is approximately normal. Under this assumption all formulas derived in the previous section remain valid, with the only exception that the quantile t*_{n−2} of Student's t distribution is replaced with the quantile q* of the standard normal distribution. Occasionally the fraction 1/(n−2) is replaced with 1/n. When n is large such a change does not alter the results appreciably.
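The size of this replacement shrinks quickly with n, as a short sketch shows (values chosen for illustration):

<syntaxhighlight lang="python">
from scipy import stats

q_star = stats.norm.ppf(0.975)               # standard normal quantile, about 1.96
for n in (10, 30, 100, 1000):
    t_star = stats.t.ppf(0.975, df=n - 2)    # Student's t quantile with n-2 degrees of freedom
    print(n, round(t_star, 4), round(q_star, 4))
</syntaxhighlight>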
Numerical example
This example concerns the data set from the Ordinary least squares article. This data set gives average masses for women as a function of their height in a sample of American women of age 30–39. Although the OLS article argues that it would be more appropriate to run a quadratic regression for this data, the simple linear regression model is applied here instead.
Height (m), x_{i}: 1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65, 1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83
Mass (kg), y_{i}: 52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29, 63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46
There are n = 15 points in this data set. Hand calculations would be started by finding the following five sums:
 <math>\begin{align}
& S_x = \sum x_i = 24.76, \quad S_y = \sum y_i = 931.17 \\ & S_{xx} = \sum x_i^2 = 41.0532, \quad S_{xy} = \sum x_iy_i = 1548.2453, \quad S_{yy} = \sum y_i^2 = 58498.5439 \end{align}</math>
These quantities would be used to calculate the estimates of the regression coefficients, and their standard errors.
 <math>\begin{align}
\hat\beta &= \frac{nS_{xy}-S_xS_y}{nS_{xx}-S_x^2} = 61.272 \\
\hat\alpha &= \tfrac{1}{n}S_y - \hat\beta \tfrac{1}{n}S_x = -39.062 \\
s_\varepsilon^2 &= \tfrac{1}{n(n-2)} \left( nS_{yy}-S_y^2 - \hat\beta^2(nS_{xx}-S_x^2) \right) = 0.5762 \\
s_\beta^2 &= \frac{n s_\varepsilon^2}{nS_{xx} - S_x^2} = 3.1539 \\
s_\alpha^2 &= s_\beta^2 \tfrac{1}{n} S_{xx} = 8.63185
\end{align}</math>
The 0.975 quantile of Student's tdistribution with 13 degrees of freedom is t^{*}_{13} = 2.1604, and thus the 95% confidence intervals for α and β are
 <math>\begin{align}
& \alpha \in [\,\hat\alpha \mp t^*_{13} s_\alpha \,] = [\,{-45.4},\ {-32.7}\,] \\
& \beta \in [\,\hat\beta \mp t^*_{13} s_\beta \,] = [\, 57.4,\ 65.1 \,]
\end{align}</math>
The product-moment correlation coefficient might also be calculated:
 <math>\hat{r} = \frac{nS_{xy} - S_xS_y}{\sqrt{(nS_{xx}-S_x^2)(nS_{yy}-S_y^2)}} = 0.9945</math>
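The hand calculation can be reproduced in a few lines. The following sketch uses the same data and the same sum-based formulas; it is an illustration rather than part of the original worked example.

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
              1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])           # height (m)
y = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
              63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])    # mass (kg)
n = x.size

Sx, Sy = x.sum(), y.sum()
Sxx, Sxy, Syy = (x * x).sum(), (x * y).sum(), (y * y).sum()

beta_hat = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)          # about 61.272
alpha_hat = Sy / n - beta_hat * Sx / n                        # about -39.062
s_eps2 = (n * Syy - Sy ** 2 - beta_hat ** 2 * (n * Sxx - Sx ** 2)) / (n * (n - 2))
s_beta2 = n * s_eps2 / (n * Sxx - Sx ** 2)                    # about 3.1539
s_alpha2 = s_beta2 * Sxx / n                                  # about 8.632
r_hat = (n * Sxy - Sx * Sy) / np.sqrt((n * Sxx - Sx ** 2) * (n * Syy - Sy ** 2))

print(beta_hat, alpha_hat, s_eps2, s_beta2, s_alpha2, r_hat)
</syntaxhighlight>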
This example also demonstrates that sophisticated calculations will not overcome the use of badly prepared data. The heights were originally given in inches, and have been converted to the nearest centimetre. Since the conversion factor is one inch to 2.54 cm, this is not an exact conversion. The original inches can be recovered by Round(x/0.0254) and then re-converted to metric: if this is done, the results become
 <math>\hat\beta = 61.6746, \qquad \hat\alpha = -39.7468.</math>
Thus a seemingly small variation in the data has a real effect.
Derivation of simple regression estimators
We look for <math>\hat{\alpha},\hat{\beta}</math> that minimize the sum of squared errors, <math>\underset{\hat{\alpha},\hat{\beta}}{\mathrm{min}}\,\mathrm{SSE}\left(\hat{\alpha},\hat{\beta}\right)</math>, which is defined as <math>\mathrm{SSE}\left(\hat{\alpha},\hat{\beta}\right)=\sum_{i=1}^{n}\left(y_{i}-\hat{\alpha}-\hat{\beta}x_{i}\right)^{2}</math>.
To find a minimum, take the partial derivatives with respect to <math>\hat{\alpha}</math> and <math>\hat{\beta}</math>:
 <math>\begin{align}
\frac{\partial \, \mathrm{SSE} \left(\hat{\alpha},\hat{\beta}\right)}{\partial\hat{\alpha}}=-2\sum_{i=1}^{n}\left(y_{i}-\hat{\alpha}-\hat{\beta}x_{i}\right)=0
\end{align}</math>
 <math>\begin{align}
\sum_{i=1}^{n}\left(y_{i}-\hat{\alpha}-\hat{\beta}x_{i}\right)=0
\end{align}</math>
 <math>\begin{align}
\sum_{i=1}^{n}y_{i}=\sum_{i=1}^{n}\hat{\alpha}+\hat{\beta}\sum_{i=1}^{n}x_{i} \end{align}</math>
By multiplying both sides by <math>\frac{1}{n}</math>
 <math>\begin{align}
\frac{1}{n}\sum_{i=1}^{n}y_{i}=\hat{\alpha}\frac{1}{n}\sum_{i=1}^{n}1+\hat{\beta}\frac{1}{n}\sum_{i=1}^{n}x_{i}
\end{align}</math>
we get
 <math>\begin{align}
\bar{y}=\hat{\alpha}+\hat{\beta}\bar{x} \end{align}</math>
Before taking the partial derivative with respect to <math>\hat{\beta}</math>, substitute the previous result for <math>\hat{\alpha}</math>:
 <math>\begin{align}
\underset{\hat{\alpha},\hat{\beta}}{\mathrm{min}}\sum_{i=1}^{n}\left(y_{i}-\left(\bar{y}-\hat{\beta}\bar{x}\right)-\hat{\beta}x_{i}\right)^{2}
\end{align}</math>
 <math>\begin{align}
\underset{\hat{\alpha},\hat{\beta}}{\mathrm{min}}\sum_{i=1}^{n}\left[\left(y_{i}-\bar{y}\right)-\hat{\beta}\left(x_{i}-\bar{x}\right)\right]^{2}
\end{align}</math>
Now, take the derivative with respect to <math>\hat{\beta}</math>:
 <math>\begin{align}
\frac{\partial \, \mathrm{SSE}\left(\hat{\alpha},\hat{\beta}\right)}{\partial\hat{\beta}}=-2\sum_{i=1}^{n}\left[\left(y_{i}-\bar{y}\right)-\hat{\beta}\left(x_{i}-\bar{x}\right)\right]\left(x_{i}-\bar{x}\right)=0
\end{align}</math>
 <math>\begin{align}
\sum_{i=1}^{n}\left(y_{i}-\bar{y}\right)\left(x_{i}-\bar{x}\right)-\hat{\beta}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}=0
\end{align}</math>
 <math>\begin{align}
\hat{\beta}=\frac{\sum_{i=1}^{n}\left(y_{i}-\bar{y}\right)\left(x_{i}-\bar{x}\right)}{\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}}=\frac{\operatorname{Cov}\left(x,y\right)}{\operatorname{Var}\left(x\right)}
\end{align}</math>
Finally, substitute <math>\hat{\beta}</math> to determine <math>\hat{\alpha}</math>:
 <math>\begin{align}
\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}
\end{align}</math>
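As a sanity check on this derivation, the closed-form estimators can be compared with a direct numerical minimization of SSE; the sketch below (with made-up data) does exactly that.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

x = np.array([0.5, 1.3, 2.1, 3.0, 4.2, 5.1])
y = np.array([1.1, 2.4, 3.2, 4.9, 6.3, 7.4])

# closed-form estimators derived above
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# direct numerical minimization of SSE(alpha, beta)
sse = lambda p: np.sum((y - p[0] - p[1] * x) ** 2)
res = minimize(sse, x0=np.zeros(2))

print(alpha_hat, beta_hat)   # closed form
print(res.x)                 # numerical minimizer; should agree to several decimals
</syntaxhighlight>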
See also
 Deming regression — simple linear regression with errors measured non-vertically
 Linear segmented regression
 Proofs involving ordinary least squares — derivation of all formulas used in this article in general multidimensional case
References
 ^ Kenney, J. F. and Keeping, E. S. (1962) "Linear Regression and Correlation." Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252–285.
External links
 Wolfram MathWorld's explanation of Least Squares Fitting, and how to calculate it
 Mathematics of simple regression (Robert Nau, Duke University)
