Matrix multiplication is consistent with our notation for the left side of a system of linear equations. Specifically, by thinking of a vertical vector x as an n by 1 matrix, the matrix product
$$Ax = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix}$$
is exactly the left side of the system Ax = b. The remark also shows that the rule for matrix multiplication is equivalent to AB = [Ab1 Ab2 ... Abk], where b1, b2, ..., bk are the columns of B.
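The row-times-vector rule above can be checked numerically. Here is a minimal sketch using plain Python lists; the helper name `matvec` is ours, not from the text.

```python
# Compute Ax row by row: entry i of Ax is a_i1*x1 + a_i2*x2 + ... + a_in*xn.
def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]       # a 2 by 3 matrix
x = [1, 0, -1]        # a vector in R^3
print(matvec(A, x))   # [-2, -2]: 1*1 + 2*0 + 3*(-1) = -2 and 4*1 + 5*0 + 6*(-1) = -2
```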
Now we give a rigorous proof of the rule for matrix multiplication by showing that this is indeed the case.
Proof Let A and B be the matrices of linear transformations T: Rm → Rn and S: Rk → Rm. Then, with the reason for each step indicated on the right, we have
AB = [TS(e1) TS(e2) ... TS(ek)] (definition: AB is the matrix for TS)
= [T(S(e1)) T(S(e2)) ... T(S(ek))] (definition of composition TS)
= [T(b1) T(b2) ... T(bk)] (B is matrix for S: B = [S(e1) S(e2) ... S(ek)])
= [Ab1 Ab2 ... Abk]. (A is matrix for T: T(x) = Ax)
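The conclusion of the proof, that the j-th column of AB is A applied to the j-th column of B, can be sketched in plain Python (the helper names `matvec` and `matmul_by_columns` are ours):

```python
def matvec(A, x):
    # A times a column vector x.
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def matmul_by_columns(A, B):
    # Build AB one column at a time: column j of AB is A * (column j of B).
    k = len(B[0])
    cols = [matvec(A, [row[j] for row in B]) for j in range(k)]
    # Reassemble the columns into rows.
    return [[cols[j][i] for j in range(k)] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul_by_columns(A, B))   # [[2, 1], [4, 3]], agreeing with the usual row-times-column rule
```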
We have seen the associativity (AB)C = A(BC) of matrix multiplication in an earlier example. In general, this is a consequence of the corresponding associativity (TS)R = T(SR) for compositions of transformations (which is clearly true). Moreover, composing with the identity does not change a transformation: id T = T = T id. Since the identity transformation corresponds to the identity matrix I, we conclude that multiplying by the identity matrix does not change a matrix: IA = A = AI.
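Both facts can be verified on a small example; a sketch in plain Python (the helper `matmul` is ours):

```python
def matmul(A, B):
    # Usual row-times-column product of matrices given as lists of rows.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, -1], [1, 0]]
C = [[2, 0], [0, 5]]
I = [[1, 0], [0, 1]]   # the 2 by 2 identity matrix

print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))   # True: (AB)C = A(BC)
print(matmul(I, A) == A == matmul(A, I))                    # True: IA = A = AI
```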
Thus matrix multiplication is similar to multiplication of numbers. However, the following examples show that one should be careful in stretching the analogy.
Example In an earlier example, we have shown
(Reflection in the x-axis)(Rotation by 90 degrees) = (Reflection in the line x + y = 0),
(Rotation by 90 degrees)(Reflection in the x-axis) = (Reflection in the line x = y).
In particular, we see that the compositions TS and ST are different in general. In terms of the matrix product, we have
$$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},$$
which shows AB ≠ BA in general.
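The same computation can be sketched in plain Python, with matrices as lists of rows (the helper `matmul` is ours):

```python
def matmul(A, B):
    # Usual row-times-column product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0], [0, -1]]   # reflection in the x-axis
B = [[0, -1], [1, 0]]   # rotation by 90 degrees
print(matmul(A, B))     # [[0, -1], [-1, 0]]: reflection in the line x + y = 0
print(matmul(B, A))     # [[0, 1], [1, 0]]: reflection in the line x = y
```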
Example Consider two transformations T = Projection to x-axis and S = Projection to y-axis from R2 to itself.
The composition TS is the zero transformation, taking everything to the zero vector 0.
Correspondingly, we have two nonzero matrices with product being zero.
$$\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
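A quick check of this zero product in plain Python (the helper `matmul` is ours):

```python
def matmul(A, B):
    # Usual row-times-column product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[1, 0], [0, 0]]   # projection to the x-axis
S = [[0, 0], [0, 1]]   # projection to the y-axis
print(matmul(T, S))    # [[0, 0], [0, 0]], although T and S are both nonzero
```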
Finally, we denote
$$A^0 = I, \quad A^1 = A, \quad A^2 = AA, \quad A^3 = AAA, \quad \ldots$$
In general, for a positive integer k and a square matrix A, we use A^k to denote AA...A, the product of k copies of A. The same notation is used for a transformation with the same domain and codomain.
Example For a diagonal matrix, taking the k-th power simply raises each diagonal entry to the k-th power:
$$A = \begin{bmatrix} a_1 & & \\ & \ddots & \\ & & a_n \end{bmatrix}, \qquad A^k = \begin{bmatrix} a_1^k & & \\ & \ddots & \\ & & a_n^k \end{bmatrix}.$$
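A sketch of matrix powers by repeated multiplication in plain Python (the helper names `matmul` and `matpow` are ours); for a diagonal matrix the result is diagonal, with each entry raised to the k-th power:

```python
def matmul(A, B):
    # Usual row-times-column product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, k):
    # A^0 is the identity; A^k multiplies k copies of A together.
    n = len(A)
    result = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(k):
        result = matmul(result, A)
    return result

A = [[2, 0],
     [0, 3]]              # diagonal matrix with entries 2 and 3
print(matpow(A, 4))       # [[16, 0], [0, 81]]: the diagonal entries are 2^4 and 3^4
```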