\hat{x} = A y   (1)
Our goal is to derive the form of the matrix A so that the expected mean square difference between the estimated field \hat{x} and the actual field x is minimized,
\epsilon = E[ (\hat{x} - x)(\hat{x} - x)^{T} ]   (2)
If we put (1) into (2) and expand, we get
\epsilon = E[ A y y^{T} A^{T} - A y x^{T} - x y^{T} A^{T} + x x^{T} ]   (3)
If we let C_{x} be the autocorrelation of the field ( C_{x} = E[x x^{T}] ), C_{y} be the autocorrelation of the observations ( C_{y} = E[y y^{T}] ), and C_{xy} be the cross correlation between the field and the observations ( C_{xy} = E[x y^{T}] ), then we can write the above as
\epsilon = A C_{y} A^{T} - C_{xy} A^{T} - (C_{xy} A^{T})^{T} + C_{x}   (4)
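As a quick numerical sanity check (not part of the derivation), the step from (3) to (4) can be verified with sample moments: if C_x, C_y, and C_{xy} are computed as averages over the same draws, the two sides agree exactly up to floating point, by linearity of the average. The dimensions and the joint distribution of x and y below are illustrative assumptions.

```python
import numpy as np

# Sanity check of the expansion (3) -> (4) using sample second moments.
# The dimensions and joint distribution of x and y are illustrative.
rng = np.random.default_rng(0)
n_field, n_obs, n_samp = 3, 5, 400

x = rng.standard_normal((n_field, n_samp))              # field samples
y = (rng.standard_normal((n_obs, n_field)) @ x
     + 0.1 * rng.standard_normal((n_obs, n_samp)))      # correlated observations
A = rng.standard_normal((n_field, n_obs))               # arbitrary linear estimator

C_x = x @ x.T / n_samp                                  # E[x x^T]
C_y = y @ y.T / n_samp                                  # E[y y^T]
C_xy = x @ y.T / n_samp                                 # E[x y^T]

e = A @ y - x                                           # per-sample error
lhs = e @ e.T / n_samp                                  # sample form of (3)
rhs = A @ C_y @ A.T - C_xy @ A.T - (C_xy @ A.T).T + C_x # right-hand side of (4)
print(np.allclose(lhs, rhs))
```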
The next step requires the application of the following matrix identity, valid for any symmetric, invertible C (proved in the appendix),
(A - B C^{-1}) C (A - B C^{-1})^{T} - B C^{-1} B^{T} = A C A^{T} - B A^{T} - (B A^{T})^{T}   (5)
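A minimal numerical spot check of identity (5), assuming a symmetric, invertible C; the shapes below are arbitrary choices.

```python
import numpy as np

# Spot check of identity (5): holds for any A, B and symmetric invertible C.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
M = rng.standard_normal((4, 4))
C = M @ M.T + 4.0 * np.eye(4)          # symmetric positive definite, hence invertible
Ci = np.linalg.inv(C)

lhs = (A - B @ Ci) @ C @ (A - B @ Ci).T - B @ Ci @ B.T
rhs = A @ C @ A.T - B @ A.T - (B @ A.T).T
print(np.allclose(lhs, rhs))
```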
Using the A in (4) for the A in (5), with C_{xy} for B and C_{y} for C, we can reduce (4) to
\epsilon = (A - C_{xy} C_{y}^{-1}) C_{y} (A - C_{xy} C_{y}^{-1})^{T} - C_{xy} C_{y}^{-1} C_{xy}^{T} + C_{x}   (6)
The matrix C_{y} is an autocorrelation matrix; therefore both it and C_{y}^{-1} are nonnegative definite (see appendix). Consequently, for any vector z,

z^{T} (A - C_{xy} C_{y}^{-1}) C_{y} (A - C_{xy} C_{y}^{-1})^{T} z \geq 0   (7)

so the first term in (6) is itself nonnegative definite. The expected error is therefore minimized when that term vanishes,

(A - C_{xy} C_{y}^{-1}) C_{y} (A - C_{xy} C_{y}^{-1})^{T} = 0   (8)

which requires

A - C_{xy} C_{y}^{-1} = 0   (9)

A = C_{xy} C_{y}^{-1}   (10)

Substituting (10) into (1) gives the estimate of the field,

\hat{x} = C_{xy} C_{y}^{-1} y   (11)
Further, we can write down the expected error for this estimator as

\epsilon = C_{x} - C_{xy} C_{y}^{-1} C_{xy}^{T}   (12)
Equations (11) and (12) constitute the Gauss-Markov estimator, the linear minimum mean square estimate of a random variable.
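The estimator can be exercised end to end on synthetic data. The linear observation model y = H x + n below is an illustrative assumption, chosen so that the exact moments C_x, C_y, and C_{xy} are known in closed form; the sample error covariance of the estimate (11) should then approach the prediction of (12).

```python
import numpy as np

# End-to-end sketch of the Gauss-Markov estimator (11) and its error (12).
# The observation model y = H x + n is an illustrative assumption.
rng = np.random.default_rng(2)
n_field, n_obs, n_samp = 3, 5, 200_000
noise_var = 0.25

x = rng.standard_normal((n_field, n_samp))            # zero-mean field, C_x = I
H = rng.standard_normal((n_obs, n_field))             # observation operator
y = H @ x + np.sqrt(noise_var) * rng.standard_normal((n_obs, n_samp))

# Exact second moments implied by the model above.
C_x = np.eye(n_field)                                 # E[x x^T]
C_y = H @ H.T + noise_var * np.eye(n_obs)             # E[y y^T]
C_xy = H.T                                            # E[x y^T] = C_x H^T

A = C_xy @ np.linalg.inv(C_y)                         # optimal matrix A = C_xy C_y^{-1}
x_hat = A @ y                                         # estimate, equation (11)

err = x_hat - x
sample_cov = err @ err.T / n_samp                     # observed error covariance
predicted = C_x - C_xy @ np.linalg.inv(C_y) @ C_xy.T  # predicted error, equation (12)
print(np.allclose(sample_cov, predicted, atol=0.02))
```

In practice one would solve the linear system with `np.linalg.solve(C_y, ...)` rather than forming the explicit inverse; the inverse is used here only to mirror the notation of (11) and (12).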