Lecture 3: Informal Example
We briefly review the concept of a vector space. A vector space $S$ has the following key property: if $\mathbf{x}, \mathbf{y} \in S$, then $a\mathbf{x} + b\mathbf{y} \in S$ for any scalars $a$ and $b$. That is, linear combinations of vectors give vectors.
Most of your background with vectors has been for vectors in $\mathbb{R}^n$. But the signals that we deal with are also elements of a vector space, since a linear combination of signals also gives a signal. This is a very important and powerful idea.
Recall that in vector spaces we deal with concepts like the length of a vector, the angle between vectors, and the idea of orthogonal vectors. All of these concepts carry over, by suitable definitions, to vector spaces of signals.
This powerful idea captures most of the significant and interesting notions in signal processing, controls, and communications. This is really the reason why the study of linear algebra is so important.
In this lecture we will learn about geometric representations of signals via signal space (vector) concepts. This straightforward idea is the key to a variety of topics in signals and systems:
 It provides a distance concept useful in many pattern recognition techniques.
 It is used in statistical signal processing for the filtering, smoothing, and prediction of noisy signals.
 It forms the heart and geometric framework for the tremendous advances that have been made in digital communications.
 It is every waveform-based transform you ever wanted (Fourier series, FFT, DCT, wavelets, etc.).
 It is also used in the solution of partial differential equations, etc.
 It relies on our old friend, linearity. One might even say it is the reason that we care so much about linearity in the first place.
We will soon turn our attention to Fourier series, which are a way of analyzing and synthesizing signals.
Vectors will be written in bold font (like the ingredients above). Initially, we can think of a vector as an ordered set of $n$ numbers, written in a column:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.$$
While we have written a vector as an $n$-tuple, that is not what defines a vector. A vector is an element of a vector space, which is to say, it satisfies the linearity property given above.
Scalar multiplication of vectors is in the usual fashion. Matrix multiplication is also taken in the traditional manner.
Let $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$ and $\mathbf{y} = [y_1, y_2, \ldots, y_n]^T$. The inner product of $\mathbf{x}$ and $\mathbf{y}$ is
$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_{i=1}^{n} x_i y_i.$$
The inner product can be expanded using the following rules:
 $\langle \mathbf{x} + \mathbf{y}, \mathbf{z} \rangle = \langle \mathbf{x}, \mathbf{z} \rangle + \langle \mathbf{y}, \mathbf{z} \rangle$.
 For a scalar $a$, $\langle a\mathbf{x}, \mathbf{y} \rangle = a \langle \mathbf{x}, \mathbf{y} \rangle$.
 For real vectors (which is all we will be concerned with for the moment), $\langle \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{y}, \mathbf{x} \rangle$.
The (Euclidean) norm of a vector $\mathbf{x}$ is given by
$$\|\mathbf{x}\| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle} = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}.$$
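These definitions are straightforward to compute directly. A minimal sketch in plain Python (the helper names `inner` and `norm` are ours, not standard library functions):

```python
import math

def inner(x, y):
    # <x, y> = sum of elementwise products (real vectors)
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    # Euclidean norm: ||x|| = sqrt(<x, x>)
    return math.sqrt(inner(x, x))

print(inner([3.0, 4.0], [1.0, 2.0]))  # 3*1 + 4*2 = 11.0
print(norm([3.0, 4.0]))               # sqrt(9 + 16) = 5.0
```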
The projection of a vector $\mathbf{y}$ onto a vector $\mathbf{x}$ is given by
$$\operatorname{proj}_{\mathbf{x}}(\mathbf{y}) = \frac{\langle \mathbf{y}, \mathbf{x} \rangle}{\langle \mathbf{x}, \mathbf{x} \rangle}\, \mathbf{x}.$$
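A quick numerical check of the projection formula (plain Python; the helper names are ours):

```python
def inner(x, y):
    # real inner product: sum of elementwise products
    return sum(a * b for a, b in zip(x, y))

def project(y, x):
    # projection of y onto x: (<y, x> / <x, x>) x
    c = inner(y, x) / inner(x, x)
    return [c * xi for xi in x]

p = project([2.0, 2.0], [1.0, 0.0])
print(p)  # [2.0, 0.0]: only the component of y along x survives
```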
Now suppose that we have a vector $\mathbf{x}$ (an ``ingredient'') and a vector $\mathbf{y}$, and we want to make the best approximation to $\mathbf{y}$ using some amount of our ingredient. Draw a picture. We can write
$$\hat{\mathbf{y}} = c\,\mathbf{x}.$$
Now let's do it the hard way: we want to find the amount $c$ of $\mathbf{x}$ that minimizes the (length of the) error $\mathbf{e} = \mathbf{y} - c\,\mathbf{x}$. The squared length of the error is
$$\|\mathbf{e}\|^2 = \langle \mathbf{y} - c\,\mathbf{x},\, \mathbf{y} - c\,\mathbf{x} \rangle = \|\mathbf{y}\|^2 - 2c\,\langle \mathbf{y}, \mathbf{x} \rangle + c^2 \|\mathbf{x}\|^2.$$
Setting the derivative with respect to $c$ to zero gives
$$c = \frac{\langle \mathbf{y}, \mathbf{x} \rangle}{\|\mathbf{x}\|^2},$$
which is exactly the projection coefficient from above.
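Minimizing the squared error over $c$ gives $c = \langle \mathbf{y}, \mathbf{x} \rangle / \|\mathbf{x}\|^2$. The sketch below (plain Python, our own variable names) computes this optimal $c$ and checks the geometric picture: the leftover error is orthogonal to the ingredient vector.

```python
def inner(x, y):
    # real inner product
    return sum(a * b for a, b in zip(x, y))

y = [2.0, 1.0]
x = [1.0, 1.0]

# minimizing ||y - c x||^2 over c gives c = <y, x> / <x, x>
c = inner(y, x) / inner(x, x)
err = [yi - c * xi for yi, xi in zip(y, x)]

print(c)              # 1.5
print(inner(err, x))  # 0.0: the error is orthogonal to the ingredient
```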
We may actually have more than one ``ingredient'' vector to deal with. Suppose we want to approximate $\mathbf{y}$ with the vectors $\mathbf{x}_1$ and $\mathbf{x}_2$. As before, write
$$\hat{\mathbf{y}} = c_1 \mathbf{x}_1 + c_2 \mathbf{x}_2.$$
Of course, what we can do for two ingredient vectors, we can do for $m$ ingredient vectors (and $m$ may be infinite). We want to approximate $\mathbf{y}$ as
$$\hat{\mathbf{y}} = \sum_{k=1}^{m} c_k \mathbf{x}_k.$$
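Minimizing the squared error leads to the normal equations $\sum_k \langle \mathbf{x}_j, \mathbf{x}_k \rangle c_k = \langle \mathbf{y}, \mathbf{x}_j \rangle$. As a sketch for the two-ingredient case, we can solve the resulting $2 \times 2$ system by hand (plain Python; the function name is ours, and it assumes the two ingredient vectors are linearly independent):

```python
def inner(x, y):
    # real inner product
    return sum(a * b for a, b in zip(x, y))

def approx_two(y, x1, x2):
    # Normal equations: G c = b, with G[j][k] = <x_j, x_k>, b[j] = <y, x_j>.
    g11, g12, g22 = inner(x1, x1), inner(x1, x2), inner(x2, x2)
    b1, b2 = inner(y, x1), inner(y, x2)
    det = g11 * g22 - g12 * g12  # nonzero when x1, x2 are independent
    c1 = (b1 * g22 - b2 * g12) / det
    c2 = (g11 * b2 - g12 * b1) / det
    return c1, c2

# here y lies in the span of x1 and x2, so the approximation is exact
c1, c2 = approx_two([3.0, 2.0, 0.0], [1.0, 0.0, 0.0], [1.0, 1.0, 0.0])
print(c1, c2)  # 1.0 2.0
```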
It would seem that if we take $m$ large enough, we should be able to represent any vector without any error. (Analogy: given enough ingredients, we could make any cake. We might not be able to make everything, but we could make everything in some class of objects.) If this is true, the set of ingredient vectors is said to be complete. A more formal name for the ingredient vectors is basis vectors.
Although we have come up with a way of doing the approximation, there is still a lot of work to solve for the coefficients, since we have to first form a matrix of inner products and then invert it. Something that is commonly done is to choose a set of basis vectors that is orthogonal. That is, if $\mathbf{x}_j$ and $\mathbf{x}_k$ are any pair of distinct basis vectors, then
$$\langle \mathbf{x}_j, \mathbf{x}_k \rangle = 0, \qquad j \neq k.$$
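With orthogonal basis vectors, the matrix of inner products is diagonal, so each coefficient decouples into a simple projection coefficient that can be computed independently, with no matrix inversion. A small sketch (plain Python, our own names):

```python
def inner(x, y):
    # real inner product
    return sum(a * b for a, b in zip(x, y))

# a pair of orthogonal ingredient vectors
x1 = [1.0, 1.0, 0.0]
x2 = [1.0, -1.0, 0.0]
assert inner(x1, x2) == 0.0  # orthogonality check

y = [4.0, 2.0, 7.0]
# each coefficient is just a projection coefficient: c_k = <y, x_k> / <x_k, x_k>
c = [inner(y, xk) / inner(xk, xk) for xk in (x1, x2)]
print(c)  # [3.0, 1.0]
```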