
Data Normalization

Kurt Motekew edited this page Sep 14, 2016 · 1 revision

The measurement model

y = Ax + v

often assumes the observation covariance is equal to the identity matrix:

E[vv'] = I

This is accomplished by normalizing the observations y and partials A with the square root of the true observation covariance P:

S = sqrt(P)

where

P = SS'

Normalization is accomplished by multiplying the observations and partials by the inverse of S:

y = S^-1y

and

A = S^-1A
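A minimal sketch of this whitening step, assuming NumPy and using the lower-triangular Cholesky factor as the matrix square root S (the covariance values and dimensions here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[4.0, 1.0],
              [1.0, 2.0]])        # true observation covariance
A = rng.standard_normal((2, 3))   # measurement partials
y = rng.standard_normal(2)        # observations

S = np.linalg.cholesky(P)         # lower-triangular S with P = SS'
y_n = np.linalg.solve(S, y)       # y <- S^-1 y
A_n = np.linalg.solve(S, A)       # A <- S^-1 A

# The normalized measurement noise covariance is now the identity:
# E[(S^-1 v)(S^-1 v)'] = S^-1 P S^-T = I
print(np.allclose(np.linalg.solve(S, np.linalg.solve(S, P).T), np.eye(2)))
```

Solving the triangular system instead of forming S^-1 explicitly is the usual numeric practice.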

The observation covariance information is now embedded within the observations vector and partials matrix. The resulting estimate then leads to an estimate covariance without the use of a separate weighting matrix. For example, the unnormalized batch weighted least squares estimate covariance is

(A'WA)^-1

The normalized form,

(A'A)^-1

is equivalent

(A'A)^-1 = ((S^-1A)'(S^-1A))^-1 = (A'(SS')^-1A)^-1 = (A'WA)^-1

when

W = P^-1
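This equivalence can be checked numerically. A hypothetical example, assuming a small overdetermined system with four observations and two states:

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.diag([4.0, 1.0, 9.0, 2.0])   # observation covariance
W = np.linalg.inv(P)                # weighting matrix, W = P^-1
A = rng.standard_normal((4, 2))     # measurement partials

S = np.linalg.cholesky(P)
A_n = np.linalg.solve(S, A)         # normalized partials, S^-1 A

cov_weighted = np.linalg.inv(A.T @ W @ A)     # (A'WA)^-1
cov_normalized = np.linalg.inv(A_n.T @ A_n)   # (A'A)^-1, normalized A

print(np.allclose(cov_weighted, cov_normalized))  # True
```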

Not only is this form essential to information filters and weighted least squares solutions involving decomposition methods, but "when data of different types are to be combined in an estimation process, it is good numeric practice to introduce data normalization" [p. 48].

Note that for batch estimators employing differential correction to nonlinear observation models, or for filters computing an update to the a priori estimate, the observation residual (observed - computed) is normalized instead of the observation itself.
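A minimal sketch of residual normalization, assuming a hypothetical nonlinear observation function h(x) (here a toy range measurement) and helper name of my own choosing:

```python
import numpy as np

def normalized_residual(y_obs, x_est, h, P):
    """Return S^-1 (y_obs - h(x_est)), where P = SS' (hypothetical helper)."""
    S = np.linalg.cholesky(P)
    return np.linalg.solve(S, y_obs - h(x_est))

# Toy range measurement from the origin to a 2D state (illustrative only)
h = lambda x: np.array([np.linalg.norm(x)])
P = np.array([[0.25]])                      # observation variance
r = normalized_residual(np.array([5.1]),    # observed range
                        np.array([3.0, 4.0]),  # current state estimate
                        h, P)
print(r)  # raw residual 0.1 scaled by 1/0.5 -> [0.2]
```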
