## Sunday, 7 October 2007

### Matrices

Another little bit of maths-related stuff for all you people who like to think about it. It's from here, and it took me forever to work out what they were talking about.

To start with (and to try out this nice LaTeX tool), here is what we have:

A time dependent vector:

$x(t) = \begin{pmatrix} x_1(t) \\ . \\. \\. \\ x_{3N}(t) \end{pmatrix}$

And we define the expected-value operator $\langle\cdot\rangle$, which means averaging over time.

The question is what is represented by the following two expressions:

$\langle(x(t)-\langle x\rangle)\cdot(x(t)-\langle x\rangle)^T\rangle$

$\langle(x(t)-\langle x\rangle)^T\cdot(x(t)-\langle x\rangle)\rangle$

No, in this case the T's cannot just be ignored as I would usually do.

The first one is a matrix, in the same way that
$\begin{pmatrix} 1 \\ 2 \end{pmatrix} \begin{pmatrix} 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 6 & 8 \end{pmatrix}$
is a matrix.
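The same distinction shows up directly in numpy: a column times a row is an outer product (a matrix), while a row times a column is an inner product (a single number). A minimal sketch, with the numbers from the example above:

```python
import numpy as np

# Column vector times row vector: the outer product, a full matrix.
col = np.array([[1], [2]])   # shape (2, 1)
row = np.array([[3, 4]])     # shape (1, 2)
outer = col @ row            # shape (2, 2): [[3, 4], [6, 8]]

# The other order gives the inner product, a single number.
inner = row @ col            # shape (1, 1): [[11]]
```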

The covariance matrix, in other words:
$C=\bigl(\mathrm{Cov}(x_i(t), x_j(t))\bigr)_{1\leq i,j\leq 3N} = \bigl(\langle(x_i(t)-\langle x_i\rangle)(x_j(t)-\langle x_j\rangle)\rangle\bigr)_{1\leq i,j\leq 3N}$
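To check that the "average of outer products of deviations" really is the covariance matrix, here is a small sketch with random data (the variable names and the 3×500 shape are my own); `np.cov` with `ddof=0` divides by the number of time steps, matching the plain time average:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 500))          # 3 coordinates, 500 time steps
d = x - x.mean(axis=1, keepdims=True)  # x(t) - <x>, deviation from the mean

# <(x - <x>)(x - <x>)^T>: average the outer products over time
C = (d @ d.T) / d.shape[1]

# numpy's built-in covariance agrees
print(np.allclose(C, np.cov(x, ddof=0)))  # True
```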

The second expression is a number: the sum of all the variances, i.e. the trace of the covariance matrix (which is invariant under a similarity transformation).
$\sum_{i=1}^{3N}\langle(x_i(t)-\langle x_i\rangle)^2\rangle = \sum_{i=1}^{3N}\mathrm{Var}(x_i) = \mathrm{tr}(C)$
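Both claims are easy to verify numerically. A sketch, assuming made-up random data (and a hand-picked invertible matrix $P$ for the similarity transformation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1000))         # 3 coordinates, 1000 time steps

C = np.cov(x)                          # 3x3 covariance matrix (ddof=1)
var_sum = x.var(axis=1, ddof=1).sum()  # sum of the per-coordinate variances

print(np.isclose(np.trace(C), var_sum))          # True: sum of variances = tr(C)

# The trace is invariant under a similarity transformation P^-1 C P
P = np.array([[2., 1., 0.], [0., 1., 1.], [1., 0., 1.]])  # any invertible matrix
print(np.isclose(np.trace(np.linalg.inv(P) @ C @ P), np.trace(C)))  # True
```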

As far as implementation goes, I can only stress how nice numpy is. All I had to do was the parsing; numpy quietly turns my 66x19553 matrix into its covariance matrix and diagonalises it.
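The whole pipeline really is just two calls. A minimal sketch with random data standing in for the parsed trajectory (rows as coordinates, columns as time frames, matching the 66x19553 shape above):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(66, 19553))  # stand-in for the parsed 66x19553 trajectory

C = np.cov(data)                     # 66x66 covariance matrix
evals, evecs = np.linalg.eigh(C)     # diagonalise the symmetric matrix

print(C.shape)      # (66, 66)
print(evals.shape)  # (66,)
```

`eigh` is the right diagonaliser here since a covariance matrix is symmetric; it returns real eigenvalues in ascending order.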