## Tuesday 5 May 2009

### exp(K)

Time for some nice extra math, with some proofs and applications. I have two theorems that don't look obvious at first, and which I initially tried to avoid.

The first one (I) is that the matrix exponential of a real antisymmetric matrix is an orthogonal matrix with determinant +1.

The second one (II) is that for any matrix K (for example the antisymmetric one from I) and a matrix A the following holds:

$\exp(-K)\,A\,\exp(K)=\sum_{m=0}^{\infty}\frac{1}{m!}\,[A,K]^{(m)},\qquad [A,K]^{(0)}=A,\quad [A,K]^{(m)}=\left[[A,K]^{(m-1)},K\right]$

Or explicitly, with the commutator $[A,K]=AK-KA$:

$\exp(-K)\,A\,\exp(K)=A+[A,K]+\frac{1}{2!}[[A,K],K]+\frac{1}{3!}[[[A,K],K],K]+\ldots$
This is an expansion in powers of K, and if K is small you can truncate it at some point. The reason why this is useful is that it gives you a direct way to manipulate orthogonal matrices: you don't have to worry about keeping the matrix orthogonal, you just choose arbitrary parameters for K and exp(K) will be orthogonal. II then makes it possible to carry out the basis transformation in an efficient way.
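A quick numerical sanity check of both theorems, assuming NumPy/SciPy; the random test matrices and the little helper `truncated` are just my illustration, not part of the argument:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4

# Theorem I: any parameter choice for K gives an orthogonal exp(K).
P = rng.normal(size=(n, n))
K = P - P.T                                 # antisymmetric by construction
Q = expm(K)
print(np.allclose(Q.T @ Q, np.eye(n)))      # True: exp(K) is orthogonal
print(np.isclose(np.linalg.det(Q), 1.0))    # True: determinant +1

# Theorem II: truncated commutator expansion of exp(-K) A exp(K).
def truncated(A, K, order):
    """A + [A,K] + [[A,K],K]/2! + ... cut after 'order' commutators."""
    term, total = A.copy(), A.copy()
    for m in range(1, order + 1):
        term = (term @ K - K @ term) / m    # builds [A,K]^(m) / m!
        total = total + term
    return total

A = rng.normal(size=(n, n))
Ks = 0.01 * K                               # small K, so truncation is accurate
exact = expm(-Ks) @ A @ expm(Ks)
print(np.max(np.abs(truncated(A, Ks, 3) - exact)))  # small, O(||K||^4)
```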

The first part of I is actually very straightforward if you write it down like this. It uses two facts: the transpose of a limit is the limit of the transposes (so the power series may be transposed term by term), and exp(-K) really is the inverse of exp(K), since K and -K commute and therefore $\exp(K)\exp(-K)=\exp(K-K)=\exp(0)=I$.

$\exp(K)^{T}=\exp(K^T)=\exp(-K)=\exp(K)^{-1}$
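Written out with the power series, the first equality is just term-by-term transposition together with $(K^m)^T=(K^T)^m$:

$\exp(K)^{T}=\left(\sum_{m=0}^{\infty}\frac{K^{m}}{m!}\right)^{T}=\sum_{m=0}^{\infty}\frac{(K^{T})^{m}}{m!}=\exp(K^{T})$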

I will prove the second part with the spectral theorem (maybe you can also do this in a more direct way). If we move into complex space, K is, strictly speaking, anti-Hermitian, and we know that a unitary matrix U exists which diagonalizes K. The eigenvalues of K are purely imaginary, which will become important in a moment.

$K=U^H\mathrm{diag}(t_1,\ldots,t_n)U$

$\mathrm{Re}(t_i)=0$

Since $K^m=U^H\mathrm{diag}(t_1^m,\ldots,t_n^m)U$ (the inner factors $UU^H=I$ cancel in every power), the exponential is given by

$\exp(K)=U^H\mathrm{diag}(e^{t_1},\ldots,e^{t_n})U$

At this point you can also notice that exp(K) is unitary, because it is formed as a product of three unitary matrices: the diagonal factor is unitary since the $t_i$ are purely imaginary and therefore $|e^{t_i}|=1$.
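This can be checked numerically; since iK is Hermitian, `eigh` delivers the unitary matrix from the spectral theorem (again just an illustrative sketch with NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
P = rng.normal(size=(n, n))
K = P - P.T                              # real antisymmetric = anti-Hermitian

ev = np.linalg.eigvals(K)
print(np.allclose(ev.real, 0.0))         # True: eigenvalues purely imaginary

# iK is Hermitian, so eigh yields real eigenvalues w and a unitary V,
# with K = V @ diag(-1j*w) @ V^H  (V plays the role of U^H in the text).
w, V = np.linalg.eigh(1j * K)
rebuilt = V @ np.diag(np.exp(-1j * w)) @ V.conj().T
print(np.allclose(rebuilt, expm(K)))     # True: matches scipy's expm
```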

Can we say anything about the determinant? Yes of course.

$\mathrm{det}\left(\exp(K)\right)=\mathrm{det}\left(U^H\right)\times\mathrm{det}\left(\mathrm{diag}(e^{t_1},\ldots,e^{t_n})\right)\times\mathrm{det}\left(U\right)=$

The determinants of the mutually inverse matrices $U^H$ and $U$ cancel out. The determinant of a diagonal matrix is just the product of the diagonal entries.

$=e^{t_1}\times\ldots\times e^{t_n}=e^{t_1+\ldots+t_n}=$

The sum of the eigenvalues is the trace of K. In a real antisymmetric matrix all diagonal elements have to be zero ($K_{ii}=-K_{ii}$), and therefore the trace vanishes as well.

$=e^{\mathrm{Tr}(K)}=e^{0}=+1$

And that is what we wanted to see.
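By the way, the chain $\det\left(\exp(K)\right)=e^{\mathrm{Tr}(K)}$ holds for any square matrix, not just antisymmetric ones; a quick check (the random test matrix is my own):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))              # arbitrary square matrix
print(np.isclose(np.linalg.det(expm(M)), np.exp(np.trace(M))))  # True

K = M - M.T                              # antisymmetric: zero trace
print(np.isclose(np.linalg.det(expm(K)), 1.0))                  # True: det = +1
```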

The proof shows that exp(.) maps the set of real antisymmetric matrices into the set of real orthogonal matrices with determinant +1 (and, in odd dimensions, -exp(.) accordingly gives matrices with det = -1). In fact it would also be necessary to show that this mapping is surjective, i.e. that every such orthogonal matrix is actually reached. This is apparently true, but I don't think the proof is so simple. Injective it is not, though: already in the real case a full turn changes nothing, $\exp\begin{pmatrix}0&2\pi\\-2\pi&0\end{pmatrix}=\exp(0)=I$. With complex anti-Hermitian matrices the ambiguity is even larger, because there are n extra purely imaginary values in the diagonal, for example:

$\exp\begin{pmatrix}0&\pi\\-\pi&0\end{pmatrix}=\exp\begin{pmatrix}2i\pi&\pi\\-\pi&2i\pi\end{pmatrix}=\begin{pmatrix}-1&0\\0&-1\end{pmatrix}$
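The example is easy to verify numerically:

```python
import numpy as np
from scipy.linalg import expm

pi = np.pi
K_real = np.array([[0.0, pi], [-pi, 0.0]])      # rotation by pi
K_herm = np.array([[2j*pi, pi], [-pi, 2j*pi]])  # 2*pi*i added on the diagonal

print(np.allclose(expm(K_real), -np.eye(2)))    # True
print(np.allclose(expm(K_herm), -np.eye(2)))    # True: same image, different K
```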

Point I was the elegant part. Point II is more of the index-juggling variety, but a nice piece of mathematical induction. I will probably put that up soon, too.