21. Matrix Diagonalisation
21.1. Motivation
Recall that a diagonal matrix is a square matrix with zeros everywhere except the main diagonal. Multiplying a vector by a diagonal matrix simply scales each component by the corresponding diagonal entry:

$$\begin{pmatrix} d_1 & & \\ & \ddots & \\ & & d_n \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} d_1 x_1 \\ \vdots \\ d_n x_n \end{pmatrix}.$$

In fact, for diagonal matrices we immediately have the eigenvalues and eigenvectors! The eigenvalues are the diagonal entries $d_1, \ldots, d_n$, and the eigenvectors are the coordinate vectors $e_1, \ldots, e_n$, since $De_i = d_i e_i$.

Multiplying by a diagonal matrix is easy because any vector is a linear sum of coordinate vectors. The diagonal entries of $D$ tell us exactly how much each coordinate direction is stretched:

$$Dx = D(x_1 e_1 + \cdots + x_n e_n) = d_1 x_1 e_1 + \cdots + d_n x_n e_n.$$

What about general (non-diagonal) square matrices? If an $n \times n$ matrix $A$ has eigenvectors $v_1, \ldots, v_n$ with eigenvalues $\lambda_1, \ldots, \lambda_n$, then the same idea works with eigenvectors in place of coordinate vectors. If

$$x = c_1 v_1 + \cdots + c_n v_n,$$

then

$$Ax = c_1 \lambda_1 v_1 + \cdots + c_n \lambda_n v_n.$$

We just need to find the values $c_1, \ldots, c_n$ which express $x$ as a linear sum of eigenvectors.
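The scaling behaviour is easy to check numerically. Here is a quick sketch in NumPy (the diagonal entries and vector are arbitrary examples, not taken from the text):

```python
import numpy as np

# Multiplying by a diagonal matrix scales each component independently.
D = np.diag([2.0, -1.0, 0.5])   # arbitrary diagonal entries d_1, d_2, d_3
x = np.array([1.0, 4.0, 6.0])

print(D @ x)                    # [ 2. -4.  3.]: each x_i scaled by d_i

# The coordinate vectors are eigenvectors: D e_1 = d_1 e_1.
e1 = np.array([1.0, 0.0, 0.0])
print(D @ e1)                   # [2. 0. 0.] = 2 * e_1
```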
21.2. Diagonalisation
Matrix Diagonalisation
Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$ and corresponding eigenvectors $v_1, \ldots, v_n$. Let

$$P = \begin{pmatrix} v_1 & v_2 & \cdots & v_n \end{pmatrix}$$

be the matrix of eigenvectors and

$$\Lambda = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}$$

the diagonal matrix of eigenvalues. If $P$ is invertible, then

$$A = P \Lambda P^{-1}$$

and

$$\Lambda = P^{-1} A P.$$

Note that we use capital lambda ($\Lambda$) for the diagonal matrix of eigenvalues, to distinguish it from the individual eigenvalues $\lambda_1, \ldots, \lambda_n$.

To see why the above result is true, note that $A = P\Lambda P^{-1}$ is equivalent to

$$AP = P\Lambda.$$

Then the left hand side is

$$AP = \begin{pmatrix} Av_1 & \cdots & Av_n \end{pmatrix} = \begin{pmatrix} \lambda_1 v_1 & \cdots & \lambda_n v_n \end{pmatrix}$$

because $Av_i = \lambda_i v_i$ for each eigenvector $v_i$. Whereas the right hand side is

$$P\Lambda = \begin{pmatrix} v_1 & \cdots & v_n \end{pmatrix}\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} = \begin{pmatrix} \lambda_1 v_1 & \cdots & \lambda_n v_n \end{pmatrix}.$$

Thus we see that $AP = P\Lambda$, and right-multiplying both sides by $P^{-1}$ gives $A = P\Lambda P^{-1}$.
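We can verify the identity numerically. The following is a minimal sketch using NumPy's `np.linalg.eig`, with an arbitrary diagonalisable example matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # arbitrary example matrix

eigenvalues, P = np.linalg.eig(A)   # columns of P are the eigenvectors
Lam = np.diag(eigenvalues)          # Λ: diagonal matrix of eigenvalues

print(np.allclose(A @ P, P @ Lam))                  # True: AP = PΛ
print(np.allclose(A, P @ Lam @ np.linalg.inv(P)))   # True: A = PΛP⁻¹
```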
Example
Diagonalise the matrix $A$ representing a reflection in a line through the origin.
Solution
This matrix represents a reflection in a line through the origin. Vectors along the line are unchanged, so a vector $v_1$ along the line is an eigenvector with eigenvalue $\lambda_1 = 1$; vectors perpendicular to the line are reversed, so a vector $v_2$ perpendicular to the line is an eigenvector with eigenvalue $\lambda_2 = -1$.

Therefore we can write the matrix of eigenvectors

$$P = \begin{pmatrix} v_1 & v_2 \end{pmatrix}$$

and the matrix of eigenvalues

$$\Lambda = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

To complete the diagonalisation we need to calculate $P^{-1}$. Then

$$A = P \Lambda P^{-1}.$$

The matrix of eigenvectors $P$ converts coordinates in the eigenvector basis to standard coordinates. Suppose

$$x = c_1 v_1 + c_2 v_2;$$

then left-multiplying by $P^{-1}$ recovers the coefficients:

$$P^{-1}x = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
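As a concrete instance (an assumption for illustration, since the example works for any line through the origin), take the reflection in the line $y = x$:

```python
import numpy as np

# Assumed concrete case: reflection in the line y = x.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Columns of P: v1 = (1, 1) along the line (λ = 1),
#               v2 = (1, -1) perpendicular to it (λ = -1).
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
Lam = np.diag([1.0, -1.0])

print(np.allclose(A, P @ Lam @ np.linalg.inv(P)))   # True
```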
Note
Diagonalisation is not unique
1. If we write the eigenvalues and eigenvectors in a different order, we get a different matrix $P$ and a different matrix $\Lambda$, but the equation $A = P\Lambda P^{-1}$ still holds. However, it is important that the order of the eigenvalues is the same as the order of the eigenvectors. If you swap the eigenvectors, you must remember to also swap the eigenvalues!
2. Any eigenvector can be multiplied by a constant. For example, replacing $v_1$ by $2v_1$ gives a different matrix $P$ which still diagonalises $A$.
3. The eigenvalues are unique, although different diagonalisations may result in a different order.
Exercise 21.1
Diagonalise the matrix
21.3. About Diagonalisation

Not all matrices can be diagonalised. To diagonalise a matrix, the matrix of eigenvectors $P$ must be invertible.

Recall that if one column of a $2 \times 2$ matrix is a multiple of the other, for example

$$\begin{pmatrix} a & \alpha a \\ b & \alpha b \end{pmatrix},$$

then the matrix is not invertible, since the second column is a multiple of the first (and therefore its determinant $a(\alpha b) - (\alpha a)b$ is zero).

We can extend this idea to $n \times n$ matrices: if one column can be written as a linear sum of the other columns, then the matrix is not invertible. To extend this idea to the general case, we need to introduce the important concept of linear independence.
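Numerically, a dependent column shows up as a zero determinant. A short sketch (the numbers are arbitrary):

```python
import numpy as np

# The second column is 3 times the first, so M is not invertible.
M = np.array([[1.0, 3.0],
              [2.0, 6.0]])

print(np.linalg.det(M))           # 0.0 (up to rounding)
print(np.linalg.matrix_rank(M))   # 1: the columns are linearly dependent
```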
21.4. Linear Independence
A set of vectors is linearly dependent if one vector can be written as a linear sum of the other vectors.
Definition
Let $v_1, v_2, \ldots, v_n$ be a set of vectors.

The vectors are linearly independent if the equation

$$c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$$

has only the trivial solution $c_1 = c_2 = \cdots = c_n = 0$.
Otherwise, we say the vectors are linearly dependent.
Exercise 21.2
Show that a set of vectors is linearly dependent if (and only if) one of the vectors is a linear sum of the others.
If we have $n$ linearly dependent vectors $v_1, \ldots, v_n$ in $\mathbb{R}^n$, then the matrix whose columns are $v_1, \ldots, v_n$ is not invertible. Therefore we can extend the invertible matrix theorem with two more equivalent conditions for invertibility:
Invertible Matrix Theorem (II)
Let $A$ be an $n \times n$ matrix. Then the following statements are equivalent:

1. $A$ is invertible.
2. $A$ has $n$ pivots.
3. The null space of $A$ is $\{0\}$.
4. $Ax = b$ has a unique solution for every $b$.
5. The columns of $A$ are linearly independent.
6. The rows of $A$ are linearly independent.
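Several of these conditions can be checked at once in NumPy. A sketch with an arbitrary $3 \times 3$ example:

```python
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 3.0]])

print(np.linalg.matrix_rank(A) == 3)   # full rank: columns (and rows) independent
print(abs(np.linalg.det(A)) > 1e-12)   # nonzero determinant: invertible

b = np.array([1.0, 2.0, 3.0])
print(np.linalg.solve(A, b))           # the unique solution of Ax = b
```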
21.5. More Eigenvalues
Theorem
Eigenvectors corresponding to distinct eigenvalues are linearly independent.

An $n \times n$ matrix with $n$ distinct eigenvalues therefore has $n$ linearly independent eigenvectors.
To prove this, suppose that two eigenvectors $v_1$ and $v_2$ are linearly dependent, so that

$$v_2 = cv_1 \tag{21.2}$$

for some nonzero constant $c$. Multiply by A:

$$Av_2 = cAv_1 \implies \lambda_2 v_2 = c\lambda_1 v_1,$$

then divide by $\lambda_2$ (assuming $\lambda_2 \neq 0$):

$$v_2 = c\frac{\lambda_1}{\lambda_2}v_1. \tag{21.3}$$

Comparing (21.2) and (21.3) we see that $\lambda_1/\lambda_2 = 1$, and therefore $\lambda_1 = \lambda_2$.
We have shown that two linearly dependent eigenvectors must have identical eigenvalues. We will not show it here, but it is not difficult to extend this to the general case: eigenvectors from distinct eigenvalues are linearly independent.
We can conclude that if an $n \times n$ matrix has $n$ distinct eigenvalues, then the matrix of eigenvectors $P$ is invertible and the matrix can be diagonalised.
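For instance (an arbitrary triangular matrix, chosen so the eigenvalues are visibly distinct), we can check that the eigenvector matrix is invertible:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])            # triangular: eigenvalues 1 and 3, distinct

eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)                     # [1. 3.]
print(abs(np.linalg.det(P)) > 1e-12)   # True: the eigenvectors are independent
```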
Example
Determine the characteristic equation of each of the following matrices and identify which are diagonalisable:
Solution
Exercise 21.3
1. Diagonalise the matrix in the example above.
2. Find a diagonalisable matrix with only one distinct eigenvalue.
Attention
Diagonalisability is not related to invertibility. Non-invertible matrices can be diagonalisable: a matrix with a zero eigenvalue is not invertible, yet may still have a full set of linearly independent eigenvectors.
21.6. Algebraic and Geometric Multiplicity
The eigenvalues of an $n \times n$ matrix are the roots of its characteristic polynomial, which has degree $n$, so there are at most $n$ distinct eigenvalues. If there are fewer than $n$ distinct roots, then at least one eigenvalue is repeated. For example, a characteristic polynomial of the form

$$p(\lambda) = (\lambda - \lambda_1)^2(\lambda - \lambda_2) \tag{21.4}$$

results in two distinct eigenvalues: $\lambda_1$, which is a repeated root, and $\lambda_2$.
Theorem
Let $\lambda$ be an eigenvalue of a square matrix $A$. Then

$$1 \leq \text{geometric multiplicity of } \lambda \leq \text{algebraic multiplicity of } \lambda,$$

where the algebraic multiplicity of $\lambda$ is the number of times $\lambda$ is repeated as a root of the characteristic polynomial, and the geometric multiplicity is the dimension of the eigenspace of $\lambda$ (the number of linearly independent eigenvectors with eigenvalue $\lambda$).

This means that for (21.4) there could be one or two linearly independent eigenvectors in the eigenspace of the repeated eigenvalue $\lambda_1$: if there are two, the matrix can be diagonalised; if there is only one, it cannot.
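The geometric multiplicity can be computed as the dimension of the null space of $A - \lambda I$. A sketch with a hypothetical $2 \times 2$ example that has a repeated eigenvalue:

```python
import numpy as np

# Characteristic polynomial (4 - λ)²: algebraic multiplicity of λ = 4 is 2.
A = np.array([[4.0, 1.0],
              [0.0, 4.0]])

lam = 4.0
# geometric multiplicity = n - rank(A - λI)
geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geo)   # 1: only one independent eigenvector, so A is not diagonalisable
```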
21.7. Matrix Powers

The eigenvector matrix $P$ and eigenvalue matrix $\Lambda$ give an efficient way to compute powers of $A$:

$$A^2 = (P\Lambda P^{-1})(P\Lambda P^{-1}) = P\Lambda(P^{-1}P)\Lambda P^{-1} = P\Lambda^2 P^{-1},$$

$$A^3 = A\,A^2 = (P\Lambda P^{-1})(P\Lambda^2 P^{-1}) = P\Lambda^3 P^{-1},$$

and so on. Because $\Lambda$ is diagonal, its powers are easy to compute: $\Lambda^k$ is simply the diagonal matrix with entries $\lambda_1^k, \ldots, \lambda_n^k$.
Theorem
Let $A = P\Lambda P^{-1}$ be a diagonalisable matrix. Then for any positive integer $k$,

$$A^k = P\Lambda^k P^{-1}.$$
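A numerical check of the theorem, using an arbitrary diagonalisable example:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)
k = 5
# Λ^k: raise each diagonal entry (eigenvalue) to the k-th power.
A_k = P @ np.diag(eigenvalues ** k) @ np.linalg.inv(P)

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True: A^k = PΛ^kP⁻¹
```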
21.8. Complex Eigenvalues

We have seen that from a square matrix $A$ we can construct the characteristic polynomial

$$p(\lambda) = \det(A - \lambda I),$$

where $p(\lambda)$ has degree $n$ for an $n \times n$ matrix. By the fundamental theorem of algebra, $p(\lambda)$ has exactly $n$ roots, counted with multiplicity. The roots, and hence the eigenvalues, are not necessarily real numbers. In this section we consider the case where some of the roots are not real numbers.
21.8.1. Rotations in 2D

Let $A$ be the matrix representing an anticlockwise rotation by $90°$:

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$

The characteristic polynomial is

$$\det(A - \lambda I) = \lambda^2 + 1.$$

The polynomial $\lambda^2 + 1$ has no real roots, but it does have two complex roots, resulting in two complex eigenvalues

$$\lambda_1 = i, \quad \lambda_2 = -i.$$

We also find that the eigenvectors contain the imaginary number $i$. Solving $(A - iI)v = 0$, the first row gives $-iv_x - v_y = 0$, and hence the eigenvector corresponding to the eigenvalue $\lambda_1 = i$ is

$$v_1 = \begin{pmatrix} 1 \\ -i \end{pmatrix}.$$

Likewise the eigenspace corresponding to the eigenvalue $\lambda_2 = -i$ is spanned by

$$v_2 = \begin{pmatrix} 1 \\ i \end{pmatrix}.$$
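NumPy handles the complex case automatically: `np.linalg.eig` returns complex eigenvalues and eigenvectors when they arise.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0, 0.0]])     # anticlockwise rotation by 90°

eigenvalues, P = np.linalg.eig(A)
print(eigenvalues)              # [0.+1.j 0.-1.j]: the eigenvalues ±i
print(P)                        # complex eigenvectors (columns, normalised)
```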
Example
Find the eigenvalues of an anticlockwise rotation by an angle $\theta$.
Solution
The matrix

$$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

represents an anticlockwise rotation by an angle $\theta$. Its characteristic polynomial is

$$\det(R - \lambda I) = (\cos\theta - \lambda)^2 + \sin^2\theta.$$

Setting this to zero and solving for $\lambda$ gives

$$\lambda = \cos\theta \pm i\sin\theta.$$
Exercise 21.4
Find the complex eigenvectors corresponding to the two complex eigenvalues of the rotation matrix $R$ in the example above.
21.9. Trace and Determinant
Calculating eigenvalues is (in general) a difficult problem. However, in some cases we can use some ‘tricks’ to help find them.
Definition
The trace of a matrix is the sum of the diagonal entries. Given an $n \times n$ matrix $A$,

$$\operatorname{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}.$$
Theorem
Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$ (repeated according to multiplicity). Then

$$\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$$

and

$$\det(A) = \lambda_1\lambda_2\cdots\lambda_n.$$

The sum of the eigenvalues is the sum of the diagonal entries of $A$, and the product of the eigenvalues is its determinant.
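A quick numerical check of both identities, with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.isclose(eigenvalues.sum(), np.trace(A)))         # True: tr(A) = Σλᵢ
print(np.isclose(eigenvalues.prod(), np.linalg.det(A)))   # True: det(A) = Πλᵢ
```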
Example
Calculate the determinant of the matrix
Solution
This matrix is clearly not invertible, and so has zero determinant and at least one zero eigenvalue.
In fact there are two independent eigenvectors in the zero eigenspace (check this!). This means that we have $\lambda_1 = \lambda_2 = 0$, and we can use the trace

$$\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \lambda_3$$

to determine that the remaining eigenvalue is $\lambda_3 = \operatorname{tr}(A)$.
Exercise 21.5
Show that the eigenvalues of a triangular matrix are its diagonal entries.
21.10. Solutions to Exercises
Solution to Exercise 21.1
First find the eigenvalues and eigenvectors of $A$ by solving the characteristic equation

$$\det(A - \lambda I) = 0;$$

therefore the two eigenvalues are $\lambda_1$ and $\lambda_2$.

To find the corresponding eigenvectors, calculate the nullspace of $A - \lambda I$ for each eigenvalue. For $\lambda_1$, solve $(A - \lambda_1 I)v = 0$; therefore the eigenvector is $v_1$. For $\lambda_2$, solve $(A - \lambda_2 I)v = 0$; therefore the eigenvector is $v_2$.

The matrix of eigenvectors is

$$P = \begin{pmatrix} v_1 & v_2 \end{pmatrix}$$

and its inverse is $P^{-1}$. The matrix of eigenvalues is

$$\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},$$

and so $A = P\Lambda P^{-1}$.
Solution to Exercise 21.3
1. Eigenvalues are the roots $\lambda_1$ and $\lambda_2$ of the characteristic polynomial of the matrix in the example. Therefore, solving $(A - \lambda_1 I)v = 0$ gives the eigenvector $v_1$, and solving $(A - \lambda_2 I)v = 0$ gives the eigenvector $v_2$. The eigenvalue matrix is $\Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$ and the eigenvector matrix is $P = \begin{pmatrix} v_1 & v_2 \end{pmatrix}$, so that $A = P\Lambda P^{-1}$.
2. A matrix of the form

$$\alpha I = \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix}$$

has characteristic polynomial

$$(\alpha - \lambda)^2$$

and therefore a single repeated eigenvalue $\lambda = \alpha$. But it is diagonalisable (it is diagonalised by the identity matrix, since $\alpha I = I(\alpha I)I^{-1}$). In fact, every nonzero vector $v$ is an eigenvector, since $(\alpha I)v = \alpha v$.