By the equivalence between the composition of linear transformations and the multiplication of matrices, we define a matrix B to be the inverse of a matrix A if
BA = I, and AB = I,
where I denotes the identity matrix. We also denote B = A⁻¹, similar to the notation for inverse transformations.
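The two defining conditions can be checked numerically. The following is a minimal sketch using NumPy; the matrix A is a hypothetical example, not one from the text.

```python
import numpy as np

# A hypothetical invertible 2x2 matrix (det = -2, nonzero).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.linalg.inv(A)  # numerical candidate for A^{-1}

I = np.eye(2)
# Both conditions in the definition hold: BA = I and AB = I.
assert np.allclose(B @ A, I)
assert np.allclose(A @ B, I)
```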
Our study of the inverse matrix is based on the following result. A generalization of the result can be found in two earlier exercises, and the converse of the result in another exercise.

Proposition Suppose AB = I. Then the system Ax = b has a solution x = Bb for any b, and the solution of By = c, if it exists, is unique.
Proof For any b, the following verifies that x = Bb is a solution of Ax = b:
Ax = A(Bb) = (AB)b = Ib = b.
As for the uniqueness of the solution of By = c, let y and y' be two solutions. Then
By = c, By' = c ⇒ ABy = Ac, ABy' = Ac ⇒ y = Ac = y',
where AB = I is used in the last step.
Thus the two conditions in the definition of inverse matrices imply that Ax = b must have a unique solution for any b. In fact, from the proof we also know what the solution is: x = Bb.
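The formula x = Bb from the proof can be verified on a small system. This is a sketch with a hypothetical matrix A and right-hand side b, not taken from the text.

```python
import numpy as np

# Hypothetical example: A is invertible (det = 1).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)
b = np.array([3.0, 2.0])

# The proof's formula: x = Bb solves Ax = b.
x = B @ b
assert np.allclose(A @ x, b)  # x is indeed a solution
```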
Moreover, suppose A is an m×n matrix. By the numerical consequences of the existence of solutions for all b and of the uniqueness of solutions, we get m ≤ n and m ≥ n. Therefore an invertible matrix must be a square matrix.
Translated to linear transformations, we see that there are no invertible linear transformations between Euclidean spaces of different dimensions.
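As a sanity check, NumPy refuses to invert a non-square matrix. The 2×3 matrix below corresponds to the transformation in the next example; the check itself is an illustration, not part of the text.

```python
import numpy as np

# A 2x3 matrix: it maps R^3 to R^2, so it cannot be invertible.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
try:
    np.linalg.inv(A)  # only defined for square matrices
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
assert not invertible
```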
Example We gave the zero transformation as an example of a non-invertible linear transformation. Now by the size consideration, we know that the linear transformation

T(x1, x2, x3) = (x1 + 2x2 + 3x3, 4x1 + 5x2 + 6x3): R^3 → R^2

is not invertible. Correspondingly, the 2×3 matrix

1 2 3
4 5 6

is not invertible.
For the square matrix
because the corresponding system Ax = b does not always have a solution, the matrix is not invertible.
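A square matrix can also fail to be invertible when its system is not always solvable. The matrix below is a hypothetical singular example (its second row is twice the first, so most right-hand sides b give no solution); it stands in for the omitted matrix of the text.

```python
import numpy as np

# Hypothetical singular 2x2 matrix: rows are proportional,
# so Ax = b has no solution unless b2 = 2*b1.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(A)  # raises LinAlgError for a singular matrix
    invertible = True
except np.linalg.LinAlgError:
    invertible = False
assert not invertible
```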