
Least squares estimation is one of the techniques
for performing an *optimal estimation*. In these sorts of problems
we have an adjustable model of the way our system is supposed to
behave and some actual measurements of the system. Because our
measurements are real-world measurements, they will contain some
errors (due to calibration, resolution limitations, or noise).
It is also possible that our model of the way the system
works is not the way it actually works, so there could be some error
in the description as well. Nevertheless, in spite of the presence of
both types of errors, we want to make the best possible estimate.
There are many ways to quantify what we mean by the best possible
estimate. This measure of the quality of our estimate is called a
*cost function*. Frequently the type of application we are
working with will dictate what the cost function should be,
but often it does not.
A least squares cost function works in the following way
(a formula and a short code sketch follow the list).
Given a provisional set of model parameters:

- for each measured data point, calculate what the model would
predict.
- take the difference between the two; this is the error.
- square the errors and sum them over all the data points.
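
In symbols, for measurements $y_i$ taken at points $x_i$ and a
model $f(x; \theta)$ with parameters $\theta$ (generic placeholder
notation, not symbols defined in this article), the procedure above
computes the cost

$$ J(\theta) = \sum_{i=1}^{N} \left( y_i - f(x_i; \theta) \right)^2 $$

and the least squares estimate is the $\theta$ that minimizes $J$.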

Squaring the errors eliminates any effect associated with the
difference between positive errors and negative errors (for some
problems this is *not* a good idea) and turns out to be
particularly mathematically convenient (unlike, say, the sum of
the absolute values of the errors).
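
As a minimal sketch, the cost computation might look like the
following Python function. The straight-line model and all names
here are illustrative assumptions, not something from this article.

```python
def model(x, slope, intercept):
    """A provisional model: here, a simple straight line."""
    return slope * x + intercept

def least_squares_cost(params, xs, ys):
    """Sum of squared errors between measurements and model predictions."""
    slope, intercept = params
    cost = 0.0
    for x, y in zip(xs, ys):
        prediction = model(x, slope, intercept)  # what the model would predict
        error = y - prediction                   # the difference: the error
        cost += error * error                    # square and accumulate
    return cost

# Example: evaluate the cost of a provisional parameter set
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.9, 4.2, 5.8]  # noisy measurements of roughly y = 2x
print(least_squares_cost((2.0, 0.0), xs, ys))
```

An optimizer would then adjust the parameters to drive this cost
down; the parameter set with the smallest cost is the least squares
estimate.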
