
Least Squares Fitting

\begin{figure}\begin{center}\BoxedEPSF{LeastSquares.epsf}\end{center}\end{figure}

A mathematical procedure for finding the best fitting curve to a given set of points by minimizing the sum of the squares of the offsets (``the residuals'') of the points from the curve. The sum of the squares of the offsets is used instead of the offset absolute values because this allows the residuals to be treated as a continuous differentiable quantity. However, because squares of the offsets are used, outlying points can have a disproportionate effect on the fit, a property which may or may not be desirable depending on the problem at hand.


\begin{figure}\begin{center}\BoxedEPSF{LeastSquaresOffsets.epsf}\end{center}\end{figure}

In practice, the vertical offsets from a line are almost always minimized instead of the perpendicular offsets. This allows uncertainties of the data points along the $x$- and $y$-axes to be incorporated simply, and also provides a much simpler analytic form for the fitting parameters than would be obtained using a fit based on perpendicular distances. In addition, the fitting technique can be easily generalized from a best-fit line to a best-fit polynomial when sums of vertical distances are used (which is not the case using perpendicular distances). For a reasonable number of noisy data points, the difference between vertical and perpendicular fits is quite small.


The linear least squares fitting technique is the simplest and most commonly applied form of Linear Regression and provides a solution to the problem of finding the best fitting straight line through a set of points. In fact, if the functional relationship between the two quantities being graphed is known to within additive or multiplicative constants, it is common practice to transform the data in such a way that the resulting line is a straight line, say by plotting $T$ vs. $\sqrt{\ell}$ instead of $T$ vs. $\ell$. For this reason, standard forms for Exponential, Logarithmic, and Power laws are often explicitly computed. The formulas for linear least squares fitting were independently derived by Gauß and Legendre.
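To make the $T$ vs. $\sqrt{\ell}$ transformation concrete (interpreting it, purely as an illustrative assumption, as period versus length for a simple pendulum), the relation
\begin{displaymath}
T=2\pi\sqrt{\ell\over g}={2\pi\over\sqrt{g}}\,\sqrt{\ell}
\end{displaymath}
is nonlinear in $\ell$ but linear in $\sqrt{\ell}$, so a least squares fit of $T$ against $\sqrt{\ell}$ gives a straight line through the origin whose slope $b$ yields the estimate $g=4\pi^2/b^2$.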


For Nonlinear Least Squares Fitting to a number of unknown parameters, linear least squares fitting may be applied iteratively to a linearized form of the function until convergence is achieved. Depending on the type of fit and initial parameters chosen, the nonlinear fit may have good or poor convergence properties. If uncertainties (in the most general case, error ellipses) are given for the points, points can be weighted differently in order to give the high-quality points more weight.
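As a rough illustration of this iterative linearization (a minimal sketch only; the model $y=Ae^{Bx}$, the synthetic data, and the fixed iteration count are assumptions made here for illustration and are not part of the entry), each step below linearizes the model about the current parameters and solves an ordinary linear least squares problem for the parameter update:

\begin{verbatim}
# Minimal Gauss-Newton sketch: fit y = A*exp(B*x) by repeatedly
# linearizing the model and solving a linear least squares problem.
# The model, data, and iteration count are illustrative assumptions.
import numpy as np

def gauss_newton_exponential(x, y, A=1.0, B=0.1, iterations=20):
    for _ in range(iterations):
        f = A * np.exp(B * x)                 # current model values
        r = y - f                             # residuals
        # Jacobian of f with respect to (A, B) at the current parameters
        J = np.column_stack((np.exp(B * x), A * x * np.exp(B * x)))
        # Linear least squares for the parameter update
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        A, B = A + delta[0], B + delta[1]
    return A, B

x = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(1.3 * x) + np.random.normal(0.0, 0.05, x.size)
print(gauss_newton_exponential(x, y))         # approximately (2.0, 1.3)
\end{verbatim}

If uncertainties $\sigma_i$ were supplied, each row of the Jacobian and each residual could be divided by $\sigma_i$, which weights the high-quality points more heavily as described above.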


The residuals of the best-fit line for a set of $n$ points using unsquared perpendicular distances $d_i$ of points $(x_i,y_i)$ are given by

\begin{displaymath}
R_\perp\equiv \sum_{i=1}^n d_i.
\end{displaymath} (1)

Since the perpendicular distance from a line $y=a+bx$ to point $i$ is given by
\begin{displaymath}
d_i={\vert y_i-(a+bx_i)\vert\over \sqrt{1+b^2}},
\end{displaymath} (2)

the function to be minimized is
\begin{displaymath}
R_\perp\equiv \sum_{i=1}^n {\vert y_i-(a+bx_i)\vert\over\sqrt{1+b^2}}.
\end{displaymath} (3)


Unfortunately, because the absolute value function does not have continuous derivatives, minimizing $R_\perp$ is not amenable to analytic solution. However, if the sum of the squares of the perpendicular distances

\begin{displaymath}
R_\perp^2\equiv \sum_{i=1}^n {[y_i-(a+bx_i)]^2\over 1+b^2}
\end{displaymath} (4)

is minimized instead, the problem can be solved in closed form. $R_\perp^2$ is a minimum when (suppressing the indices)
\begin{displaymath}
{\partial R_\perp^2 \over \partial a} = {2\over 1+b^2} \sum [y-(a+bx)](-1)=0
\end{displaymath} (5)

and


\begin{displaymath}
{\partial R_\perp^2 \over \partial b} = {2\over 1+b^2} \sum [y-(a+bx)](-x) +\sum {[y-(a+bx)]^2(-1)(2b)\over (1+b^2)^2} = 0.
\end{displaymath} (6)

The former gives
\begin{displaymath}
a={\sum y-b\sum x\over n}=\bar y-b\bar x,
\end{displaymath} (7)

and the latter
\begin{displaymath}
(1+b^2)\sum [y-(a+bx)]x+b\sum [y-(a+bx)]^2=0.
\end{displaymath} (8)


But


\begin{displaymath}[y-(a+bx)]^2 =y^2-2(a+bx)y+(a+bx)^2 = y^2-2ay-2bxy+a^2+2abx+b^2x^2,
\end{displaymath} (9)

so (8) becomes


\begin{displaymath}
(1+b^2)\left({\sum xy-a\sum x-b\sum x^2}\right)+b\left({\sum y^2-2a\sum y-2b\sum xy+a^2\sum 1+2ab\sum x+b^2\sum x^2}\right)=0
\end{displaymath} (10)

\begin{displaymath}
[(1+b^2)(-b)+b(b^2)]\sum x^2+[(1+b^2)-2b^2]\sum xy+b\sum y^2+[-a(1+b^2)+2ab^2]\sum x-2ab\sum y+ba^2\sum 1=0
\end{displaymath} (11)

\begin{displaymath}
-b\sum x^2+(1-b^2)\sum xy+b\sum y^2+a(b^2-1)\sum x-2ab\sum y+ba^2n=0.
\end{displaymath} (12)

Plugging (7) into (12) then gives

\begin{displaymath}
-b\sum x^2+(1-b^2)\sum xy+b\sum y^2+{1\over n}(b^2-1)\left({\sum y-b\sum x}\right)\sum x-{2\over n}\left({\sum y-b\sum x}\right)b\sum y+{1\over n}b\left({\sum y-b\sum x}\right)^2=0
\end{displaymath} (13)
After a fair bit of algebra, the result is

\begin{displaymath}
b^2+{\sum y^2-\sum x^2+{1\over n}\left[{\left({\sum x}\right)^2-\left({\sum y}\right)^2}\right]\over {1\over n}\sum x\sum y-\sum xy}\,b-1=0.
\end{displaymath} (14)

So define
$\displaystyle B$ $\textstyle \equiv$ $\displaystyle {1\over 2}\,{\left[{\sum y^2-{1\over n}\left({\sum y}\right)^2}\right]-\left[{\sum x^2-{1\over n}\left({\sum x}\right)^2}\right]\over {1\over n}\sum x\sum y-\sum xy}$  
  $\textstyle =$ $\displaystyle {1\over 2}\,{\left({\sum y^2-n\bar y^2}\right)-\left({\sum x^2-n\bar x^2}\right)\over n\bar x\bar y-\sum xy},$ (15)

and the Quadratic Formula gives
\begin{displaymath}
b=-B\pm\sqrt{B^2+1},
\end{displaymath} (16)

with $a$ found using (7). Note the rather unwieldy form of the best-fit parameters in the formulation. In addition, minimizing $R_\perp^2$ for a second- or higher-order Polynomial leads to polynomial equations having higher order, so this formulation cannot be extended.
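As a minimal sketch of these closed-form perpendicular-offset (``total least squares'') parameters (the sample data, and the device of testing both roots of (16) to pick the minimizing one, are assumptions made here for illustration), equations (7), (15), and (16) translate into:

\begin{verbatim}
# Perpendicular-offset least squares line, following equations
# (7), (15), and (16).  Data and names are illustrative.
import numpy as np

def perpendicular_fit(x, y):
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum(x**2) - n * xbar**2
    syy = np.sum(y**2) - n * ybar**2
    B = 0.5 * (syy - sxx) / (n * xbar * ybar - np.sum(x * y))     # eq. (15)
    candidates = (-B + np.sqrt(B**2 + 1), -B - np.sqrt(B**2 + 1)) # eq. (16)
    # Keep the root that actually minimizes the perpendicular residual.
    def rperp2(b):
        a = ybar - b * xbar                                       # eq. (7)
        return np.sum((y - (a + b * x))**2) / (1 + b**2)
    b = min(candidates, key=rperp2)
    return ybar - b * xbar, b                                     # (a, b)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])
print(perpendicular_fit(x, y))
\end{verbatim}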


Vertical least squares fitting proceeds by finding the sum of the squares of the vertical deviations $R^2$ of a set of $n$ data points

\begin{displaymath}
R^2 \equiv \sum[y_i-f(x_i, a_1, a_2, \ldots, a_n)]^2
\end{displaymath} (17)

from a function $f$. Note that this procedure does not minimize the actual deviations from the line (which would be measured perpendicular to the given function). In addition, although the unsquared sum of distances might seem a more appropriate quantity to minimize, use of the absolute value results in discontinuous derivatives which cannot be treated analytically. The square deviations from each point are therefore summed, and the resulting residual is then minimized to find the best fit line. This procedure results in outlying points being given disproportionately large weighting.


The condition for $R^2$ to be a minimum is that

\begin{displaymath}
{\partial(R^2)\over\partial a_i} = 0
\end{displaymath} (18)

for $i=1$, ..., $n$. For a linear fit,
\begin{displaymath}
f(a,b)=a+bx,
\end{displaymath} (19)

so
\begin{displaymath}
R^2(a,b) \equiv \sum_{i=1}^n [y_i-(a+bx_i)]^2
\end{displaymath} (20)


\begin{displaymath}
{\partial (R^2)\over\partial a} = -2\sum_{i=1}^n [y_i-(a+bx_i)] = 0
\end{displaymath} (21)


\begin{displaymath}
{\partial (R^2)\over\partial b} = -2\sum_{i=1}^n [y_i-(a+bx_i)]x_i = 0.
\end{displaymath} (22)

These lead to the equations
\begin{displaymath}
na+b\sum x= \sum y
\end{displaymath} (23)


\begin{displaymath}
a\sum x+ b\sum x^2= \sum xy,
\end{displaymath} (24)

where the subscripts have been dropped for conciseness. In Matrix form,
\begin{displaymath}
\left[{\matrix{n & \sum x\cr \sum x& \sum x^2\cr}}\right]\left[{\matrix{a\cr b\cr}}\right]= \left[{\matrix{\sum y\cr \sum xy\cr}}\right],
\end{displaymath} (25)

so
\begin{displaymath}
\left[{\matrix{a\cr b\cr}}\right] =\left[{\matrix{n & \sum x\cr \sum x& \sum x^2\cr}}\right]^{-1}\left[{\matrix{\sum y\cr \sum xy\cr}}\right].
\end{displaymath} (26)


The $2\times 2$ Matrix Inverse is

\begin{displaymath}
\left[{\matrix{a\cr b\cr}}\right] = {1\over n\sum x^2-\left({\sum x}\right)^2}\left[{\matrix{\sum y\sum x^2-\sum x\sum xy\cr n\sum xy-\sum x\sum y\cr}}\right],
\end{displaymath} (27)

so
$\displaystyle a$ $\textstyle =$ $\displaystyle {\sum y\sum x^2-\sum x\sum xy\over n\sum x^2-\left({\sum x}\right)^2}$ (28)
  $\textstyle =$ $\displaystyle {\bar y\sum x^2-\bar x\sum xy\over \sum x^2-n\bar x^2}$ (29)
$\displaystyle b$ $\textstyle =$ $\displaystyle {n\sum xy-\sum x\sum y\over n\sum x^2-\left({\sum x}\right)^2}$ (30)
  $\textstyle =$ $\displaystyle {\sum xy-n\bar x\bar y\over \sum x^2-n\bar x^2}$ (31)

(Kenney and Keeping 1962). These can be rewritten in a simpler form by defining the sums of squares
$\displaystyle {\rm ss}_{xx}$ $\textstyle =$ $\displaystyle \sum_{i=1}^n (x_i-\bar x)^2 = \left({\sum x^2-n\bar x^2}\right)$ (32)
$\displaystyle {\rm ss}_{yy}$ $\textstyle =$ $\displaystyle \sum_{i=1}^n (y_i-\bar y)^2 = \left({\sum y^2-n\bar y^2}\right)$ (33)
$\displaystyle {\rm ss}_{xy}$ $\textstyle =$ $\displaystyle \sum_{i=1}^n (x_i-\bar x)(y_i-\bar y) = \left({\sum xy-n\bar x\bar y}\right),$ (34)

which are also written as
$\displaystyle \sigma_x^2$ $\textstyle =$ $\displaystyle {\rm ss}_{xx}$ (35)
$\displaystyle \sigma_y^2$ $\textstyle =$ $\displaystyle {\rm ss}_{yy}$ (36)
$\displaystyle \mathop{\rm cov}\nolimits (x,y)$ $\textstyle =$ $\displaystyle {\rm ss}_{xy}.$ (37)

Here, $\mathop{\rm cov}\nolimits (x,y)$ is the Covariance and $\sigma_x^2$ and $\sigma_y^2$ are variances. Note that the quantities $\sum xy$ and $\sum x^2$ can also be interpreted as the Dot Products
$\displaystyle \sum x^2$ $\textstyle =$ $\displaystyle {\bf x}\cdot{\bf x}$ (38)
$\displaystyle \sum xy$ $\textstyle =$ $\displaystyle {\bf x}\cdot{\bf y}.$ (39)


In terms of the sums of squares, the Regression Coefficient $b$ is given by

\begin{displaymath}
b = {\mathop{\rm cov}\nolimits (x,y)\over {\sigma_x}^2}={{\rm ss}_{xy}\over{\rm ss}_{xx}},
\end{displaymath} (40)

and $a$ is given in terms of $b$ using (23) as
\begin{displaymath}
a = \bar y-b\bar x.
\end{displaymath} (41)

The overall quality of the fit is then parameterized in terms of a quantity known as the Correlation Coefficient, defined by
\begin{displaymath}
r^2={{{\rm ss}_{xy}}^2\over {\rm ss}_{xx}{\rm ss}_{yy}},
\end{displaymath} (42)

which gives the proportion of ${\rm ss}_{yy}$ which is accounted for by the regression.


The Standard Errors for $a$ and $b$ are

$\displaystyle {\rm SE}(a)$ $\textstyle =$ $\displaystyle s\sqrt{{1\over n}+{\bar x^2\over {\rm ss}_{xx}}}$ (43)
$\displaystyle {\rm SE}(b)$ $\textstyle =$ $\displaystyle {s\over\sqrt{{\rm ss}_{xx}}}.$ (44)

Let $\hat y_i$ be the vertical coordinate of the best-fit line with $x$-coordinate $x_i$, so
\begin{displaymath}
\hat y_i\equiv a+bx_i,
\end{displaymath} (45)

then the error between the actual vertical point $y_i$ and the fitted point is given by
\begin{displaymath}
e_i\equiv y_i-\hat y_i.
\end{displaymath} (46)

Now define $s^2$ as an estimator for the variance in $e_i$,
\begin{displaymath}
s^2=\sum_{i=1}^n {{e_i}^2\over n-2}.
\end{displaymath} (47)

Then $s$ can be given by
\begin{displaymath}
s = \sqrt{{\rm ss}_{yy}-b{\rm ss}_{xy}\over n-2} = \sqrt{{\rm ss}_{yy}-{{\rm ss}_{xy}^2\over {\rm ss}_{xx}}\over n-2}
\end{displaymath} (48)

(Acton 1966, pp. 32-35; Gonick and Smith 1993, pp. 202-204).
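The vertical-offset formulas above translate directly into a short computation. The following is a minimal sketch (the data are invented for illustration) that evaluates $a$, $b$, $r^2$, $s$, and the Standard Errors using equations (32)-(34), (40)-(44), and (48):

\begin{verbatim}
# Vertical least squares line with fit statistics, following
# equations (32)-(34), (40)-(44), and (48).  Data are illustrative.
import numpy as np

def linear_fit(x, y):
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    ss_xx = np.sum((x - xbar)**2)                  # eq. (32)
    ss_yy = np.sum((y - ybar)**2)                  # eq. (33)
    ss_xy = np.sum((x - xbar) * (y - ybar))        # eq. (34)
    b = ss_xy / ss_xx                              # eq. (40)
    a = ybar - b * xbar                            # eq. (41)
    r2 = ss_xy**2 / (ss_xx * ss_yy)                # eq. (42)
    s = np.sqrt((ss_yy - b * ss_xy) / (n - 2))     # eq. (48)
    se_a = s * np.sqrt(1.0 / n + xbar**2 / ss_xx)  # eq. (43)
    se_b = s / np.sqrt(ss_xx)                      # eq. (44)
    return a, b, r2, se_a, se_b

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
print(linear_fit(x, y))      # slope near 2, intercept near 0
\end{verbatim}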


Generalizing from a straight line (i.e., first degree polynomial) to a $k$th degree Polynomial

\begin{displaymath}
y = a_0+a_1x+\ldots +a_kx^k,
\end{displaymath} (49)

the residual is given by
\begin{displaymath}
R^2 \equiv \sum_{i=1}^n [y_i-(a_0+a_1x_i+\ldots +a_k{x_i}^k)]^2.
\end{displaymath} (50)

The Partial Derivatives (again dropping the subscripts for conciseness) are
\begin{displaymath}
{\partial(R^2)\over\partial a_0} = -2 \sum [y-(a_0+a_1x+\ldots +a_kx^k)] = 0
\end{displaymath} (51)


\begin{displaymath}
{\partial(R^2)\over\partial a_1} = -2 \sum [y-(a_0+a_1x+\ldots +a_kx^k)]x = 0
\end{displaymath} (52)


\begin{displaymath}
{\partial(R^2)\over\partial a_k} = -2 \sum [y-(a_0+a_1x+\ldots +a_kx^k)]x^k = 0.
\end{displaymath} (53)

These lead to the equations
\begin{displaymath}
a_0n+a_1\sum x+\ldots +a_k\sum x^k = \sum y
\end{displaymath} (54)


\begin{displaymath}
a_0\sum x+a_1\sum x^2+\ldots +a_k\sum x^{k+1} = \sum xy
\end{displaymath} (55)


\begin{displaymath}
a_0\sum x^k+a_1\sum x^{k+1}+\ldots +a_k\sum x^{2k} = \sum x^ky
\end{displaymath} (56)

or, in Matrix form
\begin{displaymath}
\left[{\matrix{n & \sum x& \cdots & \sum x^k\cr \sum x& \sum x^2& \cdots & \sum x^{k+1}\cr \vdots & \vdots & \ddots & \vdots\cr \sum x^k& \sum x^{k+1}& \cdots & \sum x^{2k}\cr}}\right]\left[{\matrix{a_0\cr a_1\cr \vdots\cr a_k\cr}}\right] = \left[{\matrix{\sum y\cr \sum xy\cr \vdots\cr \sum x^ky\cr}}\right].
\end{displaymath} (57)
This system can also be obtained directly from a Vandermonde Matrix by writing the least squares fit in the form
\begin{displaymath}
\left[{\matrix{1 & x_1 & \cdots & {x_1}^k\cr 1 & x_2 & \cdots & {x_2}^k\cr \vdots & \vdots & \ddots & \vdots\cr 1 & x_n & \cdots & {x_n}^k\cr}}\right]\left[{\matrix{a_0\cr a_1\cr \vdots\cr a_k\cr}}\right] = \left[{\matrix{y_1\cr y_2\cr \vdots\cr y_n\cr}}\right].
\end{displaymath} (58)

Premultiplying both sides by the Transpose of the first Matrix then gives


\begin{displaymath}
\left[{\matrix{1 & 1 & \cdots & 1\cr x_1 & x_2 & \cdots & x_n\cr \vdots & \vdots & \ddots & \vdots\cr {x_1}^k & {x_2}^k & \cdots & {x_n}^k\cr}}\right]\left[{\matrix{1 & x_1 & \cdots & {x_1}^k\cr 1 & x_2 & \cdots & {x_2}^k\cr \vdots & \vdots & \ddots & \vdots\cr 1 & x_n & \cdots & {x_n}^k\cr}}\right]\left[{\matrix{a_0\cr a_1\cr \vdots\cr a_k\cr}}\right] = \left[{\matrix{1 & 1 & \cdots & 1\cr x_1 & x_2 & \cdots & x_n\cr \vdots & \vdots & \ddots & \vdots\cr {x_1}^k & {x_2}^k & \cdots & {x_n}^k\cr}}\right]\left[{\matrix{y_1\cr y_2\cr \vdots\cr y_n\cr}}\right],
\end{displaymath} (59)

so


\begin{displaymath}
\left[{\matrix{n & \sum x& \cdots & \sum x^k\cr \sum x& \sum x^2& \cdots & \sum x^{k+1}\cr \vdots & \vdots & \ddots & \vdots\cr \sum x^k& \sum x^{k+1}& \cdots & \sum x^{2k}\cr}}\right]\left[{\matrix{a_0\cr a_1\cr \vdots\cr a_k\cr}}\right] = \left[{\matrix{\sum y\cr \sum xy\cr \vdots\cr \sum x^ky\cr}}\right].
\end{displaymath} (60)

As before, given $m$ points $(x_i,y_i)$ and fitting with Polynomial Coefficients $a_0$, ..., $a_n$ gives
\begin{displaymath}
\left[{\matrix{y_1\cr y_2\cr \vdots\cr y_m\cr}}\right] = \left[{\matrix{1 & x_1 & \cdots & {x_1}^n\cr 1 & x_2 & \cdots & {x_2}^n\cr \vdots & \vdots & \ddots & \vdots\cr 1 & x_m & \cdots & {x_m}^n\cr}}\right]\left[{\matrix{a_0\cr a_1\cr \vdots\cr a_n\cr}}\right].
\end{displaymath} (61)


In Matrix notation, the equation for a polynomial fit is given by

\begin{displaymath}
{\bf y} = {\hbox{\sf X}}{\bf a}.
\end{displaymath} (62)

This can be solved by premultiplying by the Matrix Transpose ${\hbox{\sf X}}^{\rm T}$,
\begin{displaymath}
{\hbox{\sf X}}^{\rm T}{\bf y}={\hbox{\sf X}}^{\rm T}{\hbox{\sf X}}{\bf a}.
\end{displaymath} (63)

This Matrix Equation can be solved numerically, or ${\hbox{\sf X}}^{\rm T}{\hbox{\sf X}}$ can be inverted directly if it is well conditioned, to yield the solution vector
\begin{displaymath}
{\bf a}=({\hbox{\sf X}}^{\rm T}{\hbox{\sf X}})^{-1}{\hbox{\sf X}}^{\rm T}{\bf y}.
\end{displaymath} (64)

Setting the polynomial degree $n=1$ in the above equations reproduces the linear solution.
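A minimal sketch of the normal-equation solution (62)-(64) for a polynomial fit follows (the degree, the data, and the use of a linear solve in place of the explicit inverse in (64) are choices made here for illustration):

\begin{verbatim}
# Polynomial least squares via the normal equations X^T X a = X^T y,
# as in equations (62)-(64).  Degree and data are illustrative.
import numpy as np

def polynomial_fit(x, y, degree):
    # Vandermonde design matrix with columns 1, x, x^2, ..., x^degree
    X = np.vander(x, degree + 1, increasing=True)
    # Solve X^T X a = X^T y; a linear solve is preferred numerically to
    # forming the explicit inverse shown in equation (64).
    a = np.linalg.solve(X.T @ X, X.T @ y)
    return a                          # coefficients a_0, ..., a_degree

x = np.linspace(-1.0, 1.0, 11)
y = 1.0 - 2.0 * x + 3.0 * x**2 + np.random.normal(0.0, 0.01, x.size)
print(polynomial_fit(x, y, 2))        # roughly [1, -2, 3]
\end{verbatim}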

See also Correlation Coefficient, Interpolation, Least Squares Fitting--Exponential, Least Squares Fitting--Logarithmic, Least Squares Fitting--Power Law, Moore-Penrose Generalized Matrix Inverse, Nonlinear Least Squares Fitting, Regression Coefficient, Spline


References

Acton, F. S. Analysis of Straight-Line Data. New York: Dover, 1966.

Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences. New York: McGraw-Hill, 1969.

Gonick, L. and Smith, W. The Cartoon Guide to Statistics. New York: Harper Perennial, 1993.

Kenney, J. F. and Keeping, E. S. ``Linear Regression, Simple Correlation, and Contingency.'' Ch. 8 in Mathematics of Statistics, Pt. 2, 2nd ed. Princeton, NJ: Van Nostrand, pp. 199-237, 1951.

Kenney, J. F. and Keeping, E. S. ``Linear Regression and Correlation.'' Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, pp. 252-285, 1962.

Lancaster, P. and Salkauskas, K. Curve and Surface Fitting: An Introduction. London: Academic Press, 1986.

Lawson, C. and Hanson, R. Solving Least Squares Problems. Englewood Cliffs, NJ: Prentice-Hall, 1974.

Nash, J. C. Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation, 2nd ed. Bristol, England: Adam Hilger, pp. 21-24, 1990.

Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; and Vetterling, W. T. ``Fitting Data to a Straight Line,'' ``Straight-Line Data with Errors in Both Coordinates,'' and ``General Linear Least Squares.'' §15.2, 15.3, and 15.4 in Numerical Recipes in FORTRAN: The Art of Scientific Computing, 2nd ed. Cambridge, England: Cambridge University Press, pp. 655-675, 1992.

York, D. ``Least-Square Fitting of a Straight Line.'' Canad. J. Phys. 44, 1079-1086, 1966.



