### Range and Column Space

##### 1. Range

Recall the range of a transformation **T**: `X` → `Y` is the set of all images under the transformation

`range`**T** = {`T`(`x`): `x` ∈ `X`}

= {`b` ∈ `Y`: there exists `x` ∈ `X` such that `T`(`x`) = `b`}

= {`b` ∈ `Y`: `T`(`x`) = `b` has a solution}.
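The last description can be checked numerically: for a matrix transformation, membership in the range is exactly solvability of a linear system. A minimal sketch, assuming numpy and a hypothetical 3 by 2 matrix `A`:

```python
# b ∈ range T exactly when T(x) = b is solvable; here T(x) = A x
# for a hypothetical 3 x 2 matrix A (not from the text).
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def in_range(b, tol=1e-9):
    """Test solvability of A x = b via least squares."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return bool(np.linalg.norm(A @ x - b) < tol)

print(in_range(np.array([2.0, 3.0, 5.0])))  # True: the image of x = (2, 3)
print(in_range(np.array([1.0, 1.0, 0.0])))  # False: x1 = 1, x2 = 1, x1 + x2 = 0 is inconsistent
```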

For a *linear* transformation, we have

The range of a linear transformation `T`: `V` → `W` is a subspace of `W`.

Proof `w`, `w`' ∈ `range`**T** ⇒ `w` = `T`(`v`), `w`' = `T`(`v`') for some `v`, `v`' ∈ `V` ⇒ `w` + `w`' = `T`(`v`) + `T`(`v`') = `T`(`v` + `v`') ∈ `range`**T**,
where the linearity of `T` is used in the second equality. The proof of `w` ∈ `range`**T** ⇒ `cw` ∈ `range`**T** is similar.
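Both closure properties in the proof can be illustrated numerically. A small sketch, assuming numpy and a hypothetical 2 by 2 matrix for `T`:

```python
# Closure of the range under addition and scaling, for a hypothetical
# linear map T(v) = A v (the matrix A is illustrative, not from the text).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = lambda v: A @ v

v, vp = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
w, wp = T(v), T(vp)

# w + w' is again an image, namely T(v + v')
assert np.allclose(w + wp, T(v + vp))
# c*w is again an image, namely T(c*v)
c = 4.0
assert np.allclose(c * w, T(c * v))
```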

Example The subset (see this exercise)

{(`a` + `b`, 3`a` - `b`, 2`a` + `b`): `a`, `b` ∈ **R**} ⊂ **R**^{3}

is a subspace because it is the range of the linear transformation

`T`(`a`, `b`) = (`a` + `b`, 3`a` - `b`, 2`a` + `b`): **R**^{2} → **R**^{3}.
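Concretely, this range is the column space of the matrix of `T`, spanned by (1, 3, 2) and (1, -1, 1). A short numeric check, assuming numpy:

```python
# The range of T(a, b) = (a+b, 3a-b, 2a+b) is the column space of A,
# i.e. the span of the columns (1, 3, 2) and (1, -1, 1).
import numpy as np

A = np.array([[1.0, 1.0],
              [3.0, -1.0],
              [2.0, 1.0]])

print(np.linalg.matrix_rank(A))  # 2: the range is a plane in R^3

# every T(a, b) is the combination a*(1,3,2) + b*(1,-1,1)
a, b = 2.0, -1.0
assert np.allclose(A @ [a, b], [a + b, 3*a - b, 2*a + b])
```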

The subset (see this exercise)

{`a` cos`t` + `b` sin`t`: `a`, `b` ∈ **R**} ⊂ `F`(**R**)

is a subspace because it is the range of the linear transformation

`C`(`a`, `b`) = `a` cos`t` + `b` sin`t`: **R**^{2} → `F`(**R**).

We also recall that a transformation is onto (surjective) if and only if the range is the whole target space. In other words, any element of `Y` is the image of some element of `X`.

Example Consider the derivative transformation

`D`(`f`) = `f`': `C`^{1}(**R**) → `C`(**R**).

By the fundamental theorem of calculus, any continuous function `f` ∈ `C`(**R**) has an antiderivative (for example, given by `F`(`t`) = ∫_{0}^{t}`f`(`s`)`ds`). This implies that `D` is onto. However, if we change the target space from `C`(**R**) to `F`(**R**), then `D` is no longer onto.
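The preimage construction can be tried numerically: integrate `f` to get `F`, then differentiate `F` and compare with `f`. A rough sketch in pure Python (the quadrature and step sizes are illustrative choices):

```python
# Numerical sketch of the fundamental theorem of calculus: for a
# continuous f, F(t) = ∫_0^t f(s) ds satisfies D(F) = F' = f.
import math

f = math.cos  # any continuous function

def F(t, n=10000):
    """Antiderivative of f on [0, t] via the trapezoid rule."""
    h = t / n
    return h * (0.5 * f(0) + sum(f(k * h) for k in range(1, n)) + 0.5 * f(t))

def dF(t, h=1e-6):
    """Central-difference approximation of F'."""
    return (F(t + h) - F(t - h)) / (2 * h)

t = 1.3
print(abs(dF(t) - f(t)) < 1e-4)  # True: D(F) recovers f
```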

Example Consider the linear transformation

`M`(`f`) = (`t`^{2}+1)`f`: `F`(**R**) → `F`(**R**)

of multiplication by the function `t`^{2}+1. For any function `f` ∈ `F`(**R**),
we may construct another function `F`(`t`) = `f`(`t`)/(`t`^{2}+1).
Then we have `M`(`F`) = `f`. Thus the transformation `M` is onto.
Note that the key here is that `t`^{2} + 1 never vanishes, so that it can be "inverted".
If we change the function to `t`^{2} - 1, then `M` is no longer onto.
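The explicit preimage `F` = `f`/(`t`^{2}+1) can be checked directly at sample points. A minimal sketch, with an arbitrary polynomial standing in for `f`:

```python
# Sketch: M(f) = (t^2 + 1) f is onto because t^2 + 1 never vanishes,
# so F(t) = f(t) / (t^2 + 1) is a well-defined preimage of f.
f = lambda t: t**3 - 2*t + 5          # an arbitrary target function
F = lambda t: f(t) / (t**2 + 1)       # its preimage under M
M = lambda g: (lambda t: (t**2 + 1) * g(t))

MF = M(F)
print(all(abs(MF(t) - f(t)) < 1e-12 for t in [-2.0, -0.5, 0.0, 1.0, 3.7]))  # True
```

With `t`^{2} - 1 in place of `t`^{2} + 1, the division breaks down at `t` = ±1, which is exactly why that version of `M` fails to be onto.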

For a similar reason, if **A** is an invertible `m` by `m` matrix, then the
linear transformation

`M`(**X**) = **AX**: `M`(`m`, `n`) → `M`(`m`, `n`)

is onto: any **B** is the image of **X** = **A**^{-1}**B**.
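The matrix case can be verified numerically: solve **AX** = **B** for the preimage. A short sketch, assuming numpy and an illustrative invertible matrix:

```python
# Sketch: if A is an invertible m x m matrix, then M(X) = A X on
# m x n matrices is onto: any B is hit by X = A^{-1} B.
import numpy as np

m, n = 3, 4
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])   # invertible (nonzero determinant)
B = np.arange(12.0).reshape(m, n) # arbitrary target in M(m, n)

X = np.linalg.solve(A, B)         # the preimage: A X = B
print(np.allclose(A @ X, B))      # True
```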