In a discrete dynamical system, we compute the powers of a linear transformation (or matrix). From the matrix viewpoint, what we have done is the following. Suppose
$$A = PDP^{-1}, \qquad D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$

The power of a diagonal matrix is easy to compute (try the case $n = k = 3$ to convince yourself):
$$D^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}.$$

Thus by $P^{-1}P = I$, we get
$$A^k = (PDP^{-1})^k = PDP^{-1}PDP^{-1}\cdots PDP^{-1} = PD^kP^{-1} = P\begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix}P^{-1}.$$
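The diagonalize-then-power recipe above is easy to check numerically. Here is a minimal sketch, assuming NumPy is available; the matrix `A` below is a made-up example, not one from the text:

```python
import numpy as np

# Hypothetical 2x2 example with distinct real eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix P whose columns
# are the corresponding eigenvectors, so A = P @ diag(lam) @ inv(P).
lam, P = np.linalg.eig(A)

def matrix_power_via_diag(P, lam, k):
    # D^k is computed entrywise on the eigenvalues.
    return P @ np.diag(lam**k) @ np.linalg.inv(P)

A5 = matrix_power_via_diag(P, lam, 5)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```

The point of the design is that raising the diagonal factor to the $k$-th power costs $n$ scalar powers, instead of $k-1$ full matrix multiplications.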

We see it is quite easy to compute the powers of a matrix from its diagonalization.

**Example** From an earlier example, we have
$$\begin{pmatrix} 13 & -4 \\ -4 & 7 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 5 & 0 \\ 0 & 15 \end{pmatrix}\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}^{-1},$$
where the columns of $P$ are the eigenvectors $(1,2)^T$ for the eigenvalue $5$ and $(-2,1)^T$ for the eigenvalue $15$.

Thus
$$\begin{pmatrix} 13 & -4 \\ -4 & 7 \end{pmatrix}^k = \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 5^k & 0 \\ 0 & 15^k \end{pmatrix}\begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix}^{-1} = \frac{1}{5}\begin{pmatrix} 5^k + 4\cdot 15^k & 2\cdot 5^k - 2\cdot 15^k \\ 2\cdot 5^k - 2\cdot 15^k & 4\cdot 5^k + 15^k \end{pmatrix} = 5^{k-1}\begin{pmatrix} 1 + 4\cdot 3^k & 2 - 2\cdot 3^k \\ 2 - 2\cdot 3^k & 4 + 3^k \end{pmatrix}.$$
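As a sanity check, the closed form $5^{k-1}\bigl(\begin{smallmatrix}1+4\cdot 3^k & 2-2\cdot 3^k\\ 2-2\cdot 3^k & 4+3^k\end{smallmatrix}\bigr)$ can be compared against direct matrix powers; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[13.0, -4.0],
              [-4.0, 7.0]])

def A_power_closed(k):
    # Closed form A^k = 5^{k-1} [[1+4*3^k, 2-2*3^k], [2-2*3^k, 4+3^k]]
    return 5.0**(k - 1) * np.array([[1 + 4*3.0**k, 2 - 2*3.0**k],
                                    [2 - 2*3.0**k, 4 + 3.0**k]])

# Compare with repeated matrix multiplication for several exponents.
for k in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(A, k), A_power_closed(k))
```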

The computation of the power of a diagonalizable matrix $A = PDP^{-1}$ extends to any polynomial of the matrix. For example, for $p(t) = 1 + 2t + 3t^2$, we have
$$p(A) = I + 2A + 3A^2 = P(I + 2D + 3D^2)P^{-1} = P\begin{pmatrix} 1 + 2\lambda_1 + 3\lambda_1^2 & 0 & \cdots & 0 \\ 0 & 1 + 2\lambda_2 + 3\lambda_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 + 2\lambda_n + 3\lambda_n^2 \end{pmatrix}P^{-1}.$$
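The polynomial case can be sketched numerically the same way, assuming NumPy: apply $p(t) = 1 + 2t + 3t^2$ to the eigenvalues, then compare with evaluating the polynomial on the matrix directly.

```python
import numpy as np

# Hypothetical test matrix (symmetric, hence diagonalizable).
A = np.array([[13.0, -4.0],
              [-4.0, 7.0]])
lam, P = np.linalg.eig(A)

# p(A) via eigenvalues: P diag(p(lambda_i)) P^{-1}.
pA_via_eig = P @ np.diag(1 + 2*lam + 3*lam**2) @ np.linalg.inv(P)

# p(A) evaluated directly on the matrix: I + 2A + 3A^2.
pA_direct = np.eye(2) + 2*A + 3*(A @ A)

assert np.allclose(pA_via_eig, pA_direct)
```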

For the exponential function
$$e^t = 1 + t + \frac{1}{2!}t^2 + \frac{1}{3!}t^3 + \cdots + \frac{1}{k!}t^k + \cdots,$$

we find
$$e^D = I + D + \frac{1}{2!}D^2 + \frac{1}{3!}D^3 + \cdots + \frac{1}{k!}D^k + \cdots$$
to be a diagonal matrix with the exponentials of the eigenvalues
$$e^{\lambda} = 1 + \lambda + \frac{1}{2!}\lambda^2 + \frac{1}{3!}\lambda^3 + \cdots + \frac{1}{k!}\lambda^k + \cdots$$
as the diagonal entries. Then we have
$$e^A = P\left(I + D + \frac{1}{2!}D^2 + \frac{1}{3!}D^3 + \cdots + \frac{1}{k!}D^k + \cdots\right)P^{-1} = Pe^DP^{-1}.$$
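The identity $e^A = Pe^DP^{-1}$ can be checked against a truncated power series; a sketch assuming NumPy:

```python
import numpy as np

# Symmetric, hence diagonalizable, with eigenvalues 5 and 15.
A = np.array([[13.0, -4.0],
              [-4.0, 7.0]])
lam, P = np.linalg.eig(A)

# e^A = P e^D P^{-1}, where e^D = diag(e^{lambda_i}).
expA = P @ np.diag(np.exp(lam)) @ np.linalg.inv(P)

# Truncated power series I + A + A^2/2! + ... + A^59/59!.
S = np.eye(2)
term = np.eye(2)
for k in range(1, 60):
    term = term @ A / k   # term is now A^k / k!
    S += term

assert np.allclose(expA, S)
```

Sixty terms are ample here: the largest eigenvalue is 15, and $15^k/k!$ is negligible long before $k = 60$.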

In general, for a function $f(t)$ and a diagonalizable matrix $A = PDP^{-1}$, we may define
$$f(A) = P\begin{pmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{pmatrix}P^{-1}.$$
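A sketch of the general recipe $f(A) = P\,\mathrm{diag}(f(\lambda_i))\,P^{-1}$, assuming NumPy, with $f(t) = \sqrt{t}$ so the answer can be verified by squaring:

```python
import numpy as np

# Hypothetical test matrix with positive eigenvalues (5 and 15),
# so the real square root f(t) = sqrt(t) is defined on the spectrum.
A = np.array([[13.0, -4.0],
              [-4.0, 7.0]])
lam, P = np.linalg.eig(A)

def f_of_A(f):
    # Apply f to the eigenvalues, then conjugate back by P.
    return P @ np.diag(f(lam)) @ np.linalg.inv(P)

sqrtA = f_of_A(np.sqrt)
# The defining property of a matrix square root.
assert np.allclose(sqrtA @ sqrtA, A)
```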

Strictly speaking, for a similar argument to work, we need $f(t)$ to have a power series expansion at $t = 0$, and we have to worry about convergence. The existence of the power series means that $f(t)$ should be analytic at $0$. The convergence means that the "size" of $A$ (called the norm) should be less than the radius of convergence of the power series.

By a more advanced theory, it is possible to define continuous functions of symmetric matrices (or self-adjoint linear transformations).
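For a symmetric matrix this is especially concrete: `np.linalg.eigh` returns real eigenvalues and an orthogonal eigenvector matrix ($P^{-1} = P^T$), so the recipe applies to a merely continuous $f$ such as $f(t) = |t|$. A sketch assuming NumPy, with a hypothetical matrix chosen to have one negative eigenvalue:

```python
import numpy as np

# Symmetric matrix with eigenvalues 2 and -3, so f(t) = |t| is not
# the identity on the spectrum. f(t) = |t| is continuous but not
# analytic at 0, so the power-series approach would not apply.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])
lam, P = np.linalg.eigh(A)   # for symmetric A: P is orthogonal

absA = P @ np.diag(np.abs(lam)) @ P.T

# |A| squares to A^2, the defining property of the matrix absolute value.
assert np.allclose(absA @ absA, A @ A)
```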