The concept of span is derived from the existence problem for the vector equation

`c`_{1}`v`_{1} + `c`_{2}`v`_{2} + ... + `c`_{k}`v`_{k} = `b`,

where `c`_{1}, `c`_{2}, ..., `c`_{k} are the unknown variables.
Since the left side of the vector equation is linear in the variables, the solution is unique exactly when the homogeneous equation

`c`_{1}`v`_{1} + `c`_{2}`v`_{2} + ... + `c`_{k}`v`_{k} = **0**

has only the trivial solution. This leads to the following definition.

Vectors `v`_{1}, `v`_{2}, ..., `v`_{k} are linearly independent if

`c`_{1}`v`_{1} + `c`_{2}`v`_{2} + ... + `c`_{k}`v`_{k} = **0**
⇒ `c`_{1} = `c`_{2} = ... = `c`_{k} = 0.

In other words, if a linear combination is zero, then the coefficients must be zero.

Example The equality

**x** = (`x`_{1}, `x`_{2}, `x`_{3}) = `x`_{1}**e**_{1} + `x`_{2}**e**_{2} + `x`_{3}**e**_{3}

has been used in deriving the fact that the
standard basis vectors
**e**_{1}, **e**_{2}, **e**_{3} span **R**^{3}.
The same equality also gives us the following implication

`x`_{1}**e**_{1} + `x`_{2}**e**_{2} + `x`_{3}**e**_{3} = **0**
⇒ **x** = (`x`_{1}, `x`_{2}, `x`_{3}) = **0**, i.e. `x`_{1} = `x`_{2} = `x`_{3} = 0.

The implication means exactly that **e**_{1}, **e**_{2}, **e**_{3} are linearly independent. The same reasoning also shows
that the standard basis **e**_{1}, **e**_{2}, ..., **e**_{n} of **R**^{n} is linearly independent. Similarly, the implication

`a`_{0} + `a`_{1}`t` + `a`_{2}`t`^{2} + ... + `a`_{n}`t`^{n} = 0
⇒ `a`_{0} = `a`_{1} = ... = `a`_{n} = 0

means exactly that 1, `t`, `t`^{2}, ..., `t`^{n} are linearly independent.

The opposite of linear independence is linear dependence.

Vectors `v`_{1}, `v`_{2}, ..., `v`_{k} are linearly dependent if there are
`c`_{1}, `c`_{2}, ..., `c`_{k}, not all zero, such that

`c`_{1}`v`_{1} + `c`_{2}`v`_{2} + ... + `c`_{k}`v`_{k} = **0**.

By "not all zero", we mean at least one `c`_{i} ≠ 0. Note that this is different from "all not zero", which would mean all `c`_{i} ≠ 0.

Example To find out whether the vectors (1, 3, 2) and (1, -1, 1) are linearly independent, we check whether the following implication holds

`c`_{1}(1, 3, 2) + `c`_{2}(1, -1, 1) = (0, 0, 0)
⇒ `c`_{1} = `c`_{2} = 0.

By considering the three coordinates, this is the same as asking whether a homogeneous system of linear equations has only the trivial solution

`c`_{1} + `c`_{2} = 0,
3`c`_{1} - `c`_{2} = 0,
2`c`_{1} + `c`_{2} = 0
⇒ `c`_{1} = `c`_{2} = 0.

It is easy to see that the implication indeed holds: adding the first two equations gives 4`c`_{1} = 0, so `c`_{1} = 0, and then `c`_{2} = 0. Therefore we conclude that (1, 3, 2) and (1, -1, 1) are linearly independent.
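The computation above can also be spot-checked by row reduction: the two vectors are independent exactly when the matrix having them as columns has rank 2. Here is a minimal sketch using exact fraction arithmetic; the helper `rank` is ours, not from the text.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (given as a list of rows) and count the pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns are the vectors (1, 3, 2) and (1, -1, 1);
# they are independent exactly when the rank equals 2.
A = [[1, 1], [3, -1], [2, 1]]
print(rank(A))  # 2, so the vectors are linearly independent
```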

On the other hand, since

(-1)(1, 3, 1) + 2(-1, 1, 1) + 1(3, 1, -1) = (0, 0, 0),

(-2)(1, 3, 1) + 0(-1, 1, 1) + 1(2, 6, 2) = (0, 0, 0),

the vectors (1, 3, 1), (-1, 1, 1), (3, 1, -1) are linearly dependent, and the vectors (1, 3, 1), (-1, 1, 1), (2, 6, 2) are also linearly dependent.
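The two dependency relations can be verified coordinate by coordinate; a short sketch (the helper `comb` is our own name for the linear combination):

```python
def comb(coeffs, vectors):
    """Linear combination sum(c * v), computed coordinate by coordinate."""
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors))
                 for i in range(len(vectors[0])))

# (-1)(1, 3, 1) + 2(-1, 1, 1) + 1(3, 1, -1) = (0, 0, 0)
print(comb([-1, 2, 1], [(1, 3, 1), (-1, 1, 1), (3, 1, -1)]))  # (0, 0, 0)
# (-2)(1, 3, 1) + 0(-1, 1, 1) + 1(2, 6, 2) = (0, 0, 0)
print(comb([-2, 0, 1], [(1, 3, 1), (-1, 1, 1), (2, 6, 2)]))   # (0, 0, 0)
```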

Example To find out whether cos`t` and sin`t` are linearly independent, we start with the equation

`a` cos`t` + `b` sin`t` = 0

and try to deduce that (the constants) `a` = `b` = 0. By taking `t` = 0,
we have `a` = 0. Substituting `a` = 0 into the equation, we get `b` sin`t` = 0. Since sin`t` is a nonzero function, we have `b` = 0. Therefore the implication

`a` cos`t` + `b` sin`t` = 0 ⇒ `a` = `b` = 0

does hold, and cos`t` and sin`t` are linearly independent.
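The substitution argument amounts to sampling: if the vectors of values (cos`t`, sin`t`) at chosen sample points are linearly independent in **R**^{2}, then the functions themselves are linearly independent. A sketch under that observation, with `t` = 0 and `t` = π/2 as in the text:

```python
import math

# If a*cos(t) + b*sin(t) is the zero function, it vanishes at every sample
# point.  Sampling at t = 0 and t = pi/2 gives a 2x2 linear system; its
# matrix has nonzero determinant, forcing a = b = 0.
samples = [0.0, math.pi / 2]
M = [[math.cos(t), math.sin(t)] for t in samples]  # rows: [cos t, sin t]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det != 0)  # True: only a = b = 0 solves the sampled system
```

Note that sampling proves independence but cannot prove dependence: dependent samples may come from independent functions.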

On the other hand, the equalities

1 cos^{2}`t` + 1 sin^{2}`t` + (-1) 1 = 0,

1 sin 2`t` + (-2) sin`t` cos`t` + 0 `t` = 0,

show that the functions cos^{2}`t`, sin^{2}`t`, 1 are linearly dependent,
and the functions sin2`t`, sin`t`cos`t`, `t` are also linearly dependent.
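Both relations can be spot-checked numerically: a genuine identity must vanish (up to rounding) at every sample point. The sample points below are chosen arbitrarily; vanishing there is consistent with, though of course does not by itself prove, the identities.

```python
import math

# Evaluate both dependency relations at several sample points.
for t in [0.3, 1.1, 2.5, -0.7]:
    r1 = math.cos(t) ** 2 + math.sin(t) ** 2 - 1       # cos^2 t + sin^2 t - 1
    r2 = math.sin(2 * t) - 2 * math.sin(t) * math.cos(t)  # sin 2t - 2 sin t cos t
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12
print("both relations hold at all sample points")
```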

Example We want to determine whether polynomials
`p`_{1} = 1 - `t` + `t`^{3},
`p`_{2} = 3 - `t` + 4`t`^{2} + 3`t`^{3},
`p`_{3} = 2 - `t` + 2`t`^{2} + 2`t`^{3},
`p`_{4} = `t` + 4`t`^{2} - 2`t`^{3},
`p`_{5} = 1 + 3`t`^{2} are linearly independent.
Thus we study the equation

`c`_{1}(1 - `t` + `t`^{3}) +
`c`_{2}(3 - `t` + 4`t`^{2} + 3`t`^{3}) +
`c`_{3}(2 - `t` + 2`t`^{2} + 2`t`^{3}) +
`c`_{4}(`t` + 4`t`^{2} - 2`t`^{3}) +
`c`_{5}(1 + 3`t`^{2}) = 0

and see whether we can deduce `c`_{1} = `c`_{2} = `c`_{3} = `c`_{4} = `c`_{5} = 0 from the equation.

By comparing the coefficients of 1, `t`, `t`^{2}, and `t`^{3}, the equation is the same as a homogeneous system of four linear equations

`c`_{1} + 3`c`_{2} + 2`c`_{3} + `c`_{5} = 0,
-`c`_{1} - `c`_{2} - `c`_{3} + `c`_{4} = 0,
4`c`_{2} + 2`c`_{3} + 4`c`_{4} + 3`c`_{5} = 0,
`c`_{1} + 3`c`_{2} + 2`c`_{3} - 2`c`_{4} = 0.

The question of deducing `c`_{1} = `c`_{2} = `c`_{3} =
`c`_{4} = `c`_{5} = 0 is the same as the uniqueness of the solution.
As in an earlier example, a row echelon form of the coefficient matrix

( [`p`_{1}] [`p`_{2}] [`p`_{3}] [`p`_{4}] [`p`_{5}] ) =
[  1   3   2   0   1 ]
[ -1  -1  -1   1   0 ]
[  0   4   2   4   3 ]
[  1   3   2  -2   0 ]

is

[ 1  3  2  0  1 ]
[ 0  2  1 -1  0 ]
[ 0  0  0  2  1 ]
[ 0  0  0  0  0 ].

Since not all columns are pivot columns, the solution is not unique. Therefore the five polynomials are linearly dependent.
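The pivot computation can be reproduced programmatically with exact fraction arithmetic; here is a minimal sketch (the helper `pivot_columns` is ours, not from the text):

```python
from fractions import Fraction

def pivot_columns(rows):
    """Forward-eliminate and return the list of pivot column indices."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Coefficient matrix with columns [p1], [p2], [p3], [p4], [p5].
A = [[ 1,  3,  2,  0, 1],
     [-1, -1, -1,  1, 0],
     [ 0,  4,  2,  4, 3],
     [ 1,  3,  2, -2, 0]]
print(pivot_columns(A))  # [0, 1, 3]: columns 3 and 5 are not pivot columns
```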

Similar to the earlier example, what we have really done was to use

`p` = `a` + `bt` + `ct`^{2} + `dt`^{3} ∈ `P`_{3}
↔ [`p`] = (`a`, `b`, `c`, `d`) ∈ **R**^{4}

to translate a linear independence problem about polynomials into a linear independence problem about Euclidean vectors. The latter problem is the same as the uniqueness of the solution of a system of linear equations, and can be solved by looking at the pivot columns of the coefficient matrix. For example, the computation of the row echelon form also tells us that

( [`p`_{1}] [`p`_{2}] [`p`_{4}] ) =
[  1   3   0 ]
[ -1  -1   1 ]
[  0   4   4 ]
[  1   3  -2 ]

has

[ 1  3  0 ]
[ 0  2 -1 ]
[ 0  0  2 ]
[ 0  0  0 ]

as a row echelon form. Since all three columns are pivot columns, we have uniqueness for the solution of the
corresponding system of linear equations. In particular, this translates back to polynomials and tells us that
`p`_{1}, `p`_{2}, `p`_{4} are linearly independent.
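As a final check, the same exact row reduction confirms that the 4×3 matrix with columns [`p`_{1}], [`p`_{2}], [`p`_{4}] has full column rank; the helper `rank` is our own, not from the text.

```python
from fractions import Fraction

def rank(rows):
    """Rank via exact forward elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Columns are the coordinate vectors [p1], [p2], [p4] in R^4.
A = [[ 1,  3,  0],
     [-1, -1,  1],
     [ 0,  4,  4],
     [ 1,  3, -2]]
print(rank(A))  # 3: all columns are pivot columns, so p1, p2, p4 are independent
```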