Linear Independence

1. Definition and relation to uniqueness

The concept of span is derived from the existence problem for the vector equation

c1v1 + c2v2 + ... + ckvk = b,

where c1, c2, ..., ck are the unknown variables. Since the left side of the vector equation is linear in the variable c = (c1, c2, ..., ck) (see this exercise), the solution is unique if and only if the corresponding homogeneous equation

c1v1 + c2v2 + ... + ckvk = 0,

has only the trivial solution. This leads to the following definition.

Vectors v1, v2, ..., vk are linearly independent if

c1v1 + c2v2 + ... + ckvk = 0 ⇒ c1 = c2 = ... = ck = 0.

In other words, if a linear combination is zero, then the coefficients must be zero.
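
For vectors in Rn, this condition can be tested computationally: stack v1, v2, ..., vk as the columns of a matrix; the homogeneous equation has only the trivial solution exactly when that matrix has rank k. Below is a minimal sketch of such a check (the helper name are_independent and the sample vectors are only for illustration):

import numpy as np

def are_independent(vectors):
    # Stack the vectors as columns; they are linearly independent exactly
    # when the rank of this matrix equals the number of vectors.
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return np.linalg.matrix_rank(A) == A.shape[1]

print(are_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(are_independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False

Since matrix_rank uses a floating-point tolerance, this is a numerical check rather than a proof; exact row reduction, as in the later examples, avoids that caveat.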

Example The equality

x = (x1, x2, x3) = x1e1 + x2e2 + x3e3

has been used in deriving the fact that the standard basis vectors e1, e2, e3 span R3. The same equality also gives us the following implication

x1e1 + x2e2 + x3e3 = 0 ⇒ x = 0 ⇒ all coordinates x1, x2, x3 of x are 0.

The implication means exactly that e1, e2, e3 are linearly independent. The same reasoning also shows that the standard basis e1, e2, ..., en of Rn is linearly independent.

Example The trivial statement

a0 + a1t + a2t2 + ... + antn = 0 ⇒ a0 = a1 = a2 = ... = an = 0

means exactly that 1, t, t2, ..., tn are linearly independent.

The opposite of linear independence is

Vectors v1, v2, ..., vk are linearly dependent if there are c1, c2, ..., ck, not all zero, such that

c1v1 + c2v2 + ... + ckvk = 0.

By "not all zero", we mean at least one ci ≠ 0. Note that this is different from "all not zero", which would mean all ci ≠ 0.

Example To find out whether the vectors (1, 3, 2) and (1, -1, 1) are linearly independent, we check whether the following implication holds

c1(1, 3, 2) + c2(1, -1, 1) = (0, 0, 0) ⇒ c1 = c2 = 0.

By considering the three coordinates, this is the same as asking whether a homogeneous system of linear equations has only the trivial solution

c1 + c2 = 0, 3c1 - c2 = 0, 2c1 + c2 = 0 ⇒ c1 = c2 = 0.

It is easy to see that the implication indeed holds. Therefore we conclude that (1, 3, 2) and (1, -1, 1) are linearly independent.
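
The same conclusion can be reached by row reducing the coefficient matrix. Here is a small sketch using sympy (exact arithmetic); both columns turning out to be pivot columns confirms independence:

from sympy import Matrix

# Columns are the vectors (1, 3, 2) and (1, -1, 1).
A = Matrix([[1, 1],
            [3, -1],
            [2, 1]])

rref_form, pivots = A.rref()
print(pivots)                  # (0, 1): both columns are pivot columns
print(len(pivots) == A.cols)   # True: the vectors are linearly independent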

On the other hand, since

(-1)(1, 3, 1) + 2(-1, 1, 1) + 1(3, 1, -1) = (0, 0, 0),
(-2)(1, 3, 1) + 0(-1, 1, 1) + 1(2, 6, 2) = (0, 0, 0),

the vectors (1, 3, 1), (-1, 1, 1), (3, 1, -1) are linearly dependent, and the vectors (1, 3, 1), (-1, 1, 1), (2, 6, 2) are also linearly dependent.
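
A dependence relation such as the first one can be recovered from the null space of the column matrix; a sketch with sympy, assuming the three vectors are placed as columns:

from sympy import Matrix

# Columns are (1, 3, 1), (-1, 1, 1), (3, 1, -1).
A = Matrix([[1, -1, 3],
            [3, 1, 1],
            [1, 1, -1]])

# Any nonzero null space vector lists the coefficients of a dependence relation.
print(A.nullspace())   # [Matrix([[-1], [2], [1]])], i.e. (-1)v1 + 2v2 + 1v3 = 0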

Example To find out whether cos t and sin t are linearly independent, we start with the equation

a cos t + b sin t = 0

and try to deduce that (the constants) a = b = 0. By taking t = 0, we have a = 0. Substituting into the equation, we get b sin t = 0. Since sin t is not the zero function, we have b = 0. Therefore the implication

a cos t + b sin t = 0 ⇒ a = b = 0

does hold, and cos t and sin t are linearly independent.
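
One practical (one-directional) numerical check for functions is to sample them: if the matrix of sampled values already has full column rank, the functions must be linearly independent (a rank-deficient sample would be inconclusive). A sketch with arbitrarily chosen sample points:

import numpy as np

# Sample cos t and sin t at a few points and check the rank of the result.
ts = np.array([0.0, 0.5, 1.0, 2.0])
A = np.column_stack([np.cos(ts), np.sin(ts)])
print(np.linalg.matrix_rank(A) == 2)   # True: cos t and sin t are independent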

On the other hand, the equalities

1 cos²t + 1 sin²t + (-1) 1 = 0,
sin 2t - 2 sin t cos t + 0 t = 0,

show that the functions cos²t, sin²t, 1 are linearly dependent, and the functions sin 2t, sin t cos t, t are also linearly dependent.
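
The two identities themselves can be verified symbolically; a small illustrative check with sympy:

from sympy import symbols, sin, cos, simplify

t = symbols('t')

# Each expression simplifies to the zero function, so the coefficients above
# give nontrivial dependence relations.
print(simplify(cos(t)**2 + sin(t)**2 - 1))          # 0
print(simplify(sin(2*t) - 2*sin(t)*cos(t) + 0*t))   # 0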

Example We want to determine whether the polynomials p1 = 1 - t + t3, p2 = 3 - t + 4t2 + 3t3, p3 = 2 - t + 2t2 + 2t3, p4 = t + 4t2 - 2t3, p5 = 1 + 3t2 are linearly independent. Thus we study the equation

c1(1 - t + t3) + c2(3 - t + 4t2 + 3t3) + c3(2 - t + 2t2 + 2t3) + c4(t + 4t2 - 2t3) + c5(1 + 3t2) = 0

and see whether we can deduce c1 = c2 = c3 = c4 = c5 = 0 from the equation.

By comparing the coefficients of 1, t, t2, and t3, the equation is the same as the following system of four homogeneous linear equations.

  c1 + 3c2 + 2c3        +  c5 = 0
- c1 -  c2 -  c3 +  c4       = 0
       4c2 + 2c3 + 4c4 + 3c5 = 0
  c1 + 3c2 + 2c3 - 2c4       = 0

The question of deducing c1 = c2 = c3 = c4 = c5 = 0 is the same as the uniqueness of the solution. By this example, a row echelon form of the coefficient matrix (also see this example)

[ [p1] [p2] [p3] [p4] [p5] ] =
[  1  3  2  0  1 ]
[ -1 -1 -1  1  0 ]
[  0  4  2  4  3 ]
[  1  3  2 -2  0 ]

is

[ 1 3 2  0 1 ]
[ 0 2 1 -1 0 ]
[ 0 0 0  2 1 ]
[ 0 0 0  0 0 ].

Since not all columns are pivot columns, the solution is not unique. Therefore the five polynomials are linearly dependent.
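
The row reduction can be reproduced in sympy, with the columns taken to be the coordinate vectors [p1], ..., [p5]; this is only a computational restatement of the argument above:

from sympy import Matrix

# Columns are the coordinate vectors of p1, ..., p5 with respect to 1, t, t^2, t^3.
A = Matrix([[ 1,  3,  2,  0, 1],
            [-1, -1, -1,  1, 0],
            [ 0,  4,  2,  4, 3],
            [ 1,  3,  2, -2, 0]])

rref_form, pivots = A.rref()
print(pivots)                 # (0, 1, 3): the third and fifth columns are not pivot columns
print(len(pivots) < A.cols)   # True: p1, p2, p3, p4, p5 are linearly dependent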

Similar to the earlier example, what we have really done is to use

p = a + bt + ct2 + dt3 ∈ P3 ↔ [p] = (a, b, c, d) ∈ R4

to translate a linear independence problem about polynomials into a linear independence problem about Euclidean vectors. The latter problem is the same as the uniqueness of the solution of a system of linear equations and can be solved by looking at the pivot columns of the coefficient matrix. For example, the computation of the row echelon form also tells us that

[ [p1] [p2] [p4] ] =
[  1  3  0 ]
[ -1 -1  1 ]
[  0  4  4 ]
[  1  3 -2 ]

has

[ 1 3  0 ]
[ 0 2 -1 ]
[ 0 0  2 ]
[ 0 0  0 ]

as a row echelon form. Since all three columns are pivot columns, we have uniqueness for the solution of the corresponding system of linear equations. Translating back to polynomials, this tells us that p1, p2, p4 are linearly independent.
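
The corresponding check for p1, p2, p4 is the same computation restricted to three columns; a small sympy sketch:

from sympy import Matrix

# Columns are the coordinate vectors [p1], [p2], [p4].
B = Matrix([[ 1,  3,  0],
            [-1, -1,  1],
            [ 0,  4,  4],
            [ 1,  3, -2]])

print(B.rref()[1])          # (0, 1, 2): every column is a pivot column
print(B.rank() == B.cols)   # True: p1, p2, p4 are linearly independent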

