
Lecture 3: Informal Example


We briefly review the concept of a vector space. A vector space $S$ has the following key property: if $\mathbf{x}, \mathbf{y} \in S$, then $a\mathbf{x} + b\mathbf{y} \in S$ for any scalars $a$ and $b$. That is, linear combinations of vectors give vectors.

Most of your background with vectors has been for vectors in $\mathbb{R}^n$. But the signals that we deal with are also elements of a vector space, since linear combinations of signals also give a signal. This is a very important and powerful idea.

Recall that in vector spaces we deal with concepts like the length of a vector, the angle between vectors, and the idea of orthogonal vectors. All of these concepts carry over, by suitable definitions, to vector spaces of signals.

This powerful idea captures most of the significant and interesting notions in signal processing, controls, and communications. This is really the reason why the study of linear algebra is so important.

In this lecture we will learn about geometric representations of signals via signal space (vector) concepts. This straightforward idea is the key to a variety of topics in signals and systems:

1. It provides a distance concept useful in many pattern recognition techniques.
2. It is used in statistical signal processing for the filtering, smoothing, and prediction of noisy signals.
3. It forms the heart of the geometric framework behind the tremendous advances that have been made in digital communications.
4. It underlies every waveform-based transform you ever wanted (Fourier series, FFT, DCT, wavelets, etc.).
5. It is also used in the solution of partial differential equations, etc.
6. It relies on our old friend, linearity. One might even say it is the reason that we care so much about linearity in the first place.

We will soon turn our attention to Fourier series, which are a way of analyzing and synthesizing signals.

Vectors will be written in bold font (like the ingredients above). Initially, we can think of a vector as an ordered set of numbers, written in a column:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}.$$

Often, to conserve writing, this will be written in transposed form, $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$.

While we have written a vector as an $n$-tuple, that is not what defines a vector. A vector is an element of a vector space, which is to say, it satisfies the linearity property given above.

Scalar multiplication of vectors is in the usual fashion. Matrix multiplication is also taken in the traditional manner.

Let

$$\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \qquad \text{and} \qquad \mathbf{y} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}$$

be two vectors. The inner product (known to many of you as the dot product) of the vectors $\mathbf{x}$ and $\mathbf{y}$ is written as

$$\langle \mathbf{x}, \mathbf{y} \rangle = \mathbf{x}^T \mathbf{y} = \sum_{i=1}^{n} x_i y_i.$$

In words: multiply component by component, and add them up. Two vectors are said to be orthogonal or perpendicular if their inner product is zero:

$$\langle \mathbf{x}, \mathbf{y} \rangle = 0.$$

If $\mathbf{x}$ and $\mathbf{y}$ are orthogonal, this is sometimes written $\mathbf{x} \perp \mathbf{y}$.
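
As a quick check of these definitions, here is a minimal NumPy sketch (the vectors are made-up examples, not from the lecture):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, 1.0, -2.0])

# Inner product: multiply component by component and add up.
inner = np.dot(x, y)   # 1*2 + 2*1 + 2*(-2) = 0
print(inner)           # 0.0, so x and y are orthogonal
```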

The inner product can be expanded using the following rules:

1. For a scalar $a$, $\langle a\mathbf{x}, \mathbf{y} \rangle = a \langle \mathbf{x}, \mathbf{y} \rangle$.
2. $\langle \mathbf{x} + \mathbf{y}, \mathbf{z} \rangle = \langle \mathbf{x}, \mathbf{z} \rangle + \langle \mathbf{y}, \mathbf{z} \rangle$.
3. For real vectors (which is all we will be concerned about for the moment), $\langle \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{y}, \mathbf{x} \rangle$.

The (Euclidean) norm of a vector is given by

$$\|\mathbf{x}\| = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle} = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}.$$

The distance between two vectors $\mathbf{x}$ and $\mathbf{y}$ is given by

$$d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\|.$$
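
A short NumPy illustration of the norm and distance (again with vectors invented for the example):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([6.0, 8.0])

# Euclidean norm: square root of the inner product of x with itself.
norm_x = np.sqrt(np.dot(x, x))   # 5.0, same as np.linalg.norm(x)

# Distance between two vectors: the norm of their difference.
dist = np.linalg.norm(x - y)     # 5.0 here, since x - y = [-3, -4]
print(norm_x, dist)
```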

The projection of a vector $\mathbf{x}$ onto a vector $\mathbf{y}$ is given by

$$\operatorname{proj}_{\mathbf{y}}(\mathbf{x}) = \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\langle \mathbf{y}, \mathbf{y} \rangle}\,\mathbf{y}.$$

Geometrically, this is the amount of the vector $\mathbf{x}$ in the direction of $\mathbf{y}$. (Show a picture.) Obviously, if $\mathbf{x}$ and $\mathbf{y}$ are orthogonal, then the projection of $\mathbf{x}$ onto $\mathbf{y}$ is $\mathbf{0}$.
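
A sketch of the projection as a small helper function; the name `project` and the example vectors are my own, assuming the coefficient form given above:

```python
import numpy as np

def project(x, y):
    """Projection of x onto y: (<x, y> / <y, y>) * y."""
    return (np.dot(x, y) / np.dot(y, y)) * y

x = np.array([2.0, 1.0])
y = np.array([1.0, 0.0])
print(project(x, y))                     # [2. 0.]: the part of x along y
print(project(np.array([0.0, 3.0]), y))  # [0. 0.]: orthogonal vectors project to 0
```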

Now suppose that we have a vector $\mathbf{p}$ (an "ingredient") and we have a vector $\mathbf{x}$, and we want to make the best approximation to $\mathbf{x}$ using some amount of our ingredient. (Draw a picture.) We can write

$$\mathbf{x} = c\,\mathbf{p} + \mathbf{e},$$

where $c$ is the amount of $\mathbf{p}$ we want and $\mathbf{e}$ is the error between the thing we want and our approximation of it. To get the best approximation, we want to minimize the length of the error vector. Before we go through and do it the hard way, let us make a geometric observation. The length of the error is minimized when the error vector is orthogonal to our ingredient vector $\mathbf{p}$:

$$\mathbf{e} \perp \mathbf{p}, \qquad \text{or} \qquad \langle \mathbf{x} - c\,\mathbf{p},\, \mathbf{p} \rangle = 0.$$

Giving us

$$c = \frac{\langle \mathbf{x}, \mathbf{p} \rangle}{\langle \mathbf{p}, \mathbf{p} \rangle}.$$

Note that this is simply the projection of $\mathbf{x}$ onto the vector $\mathbf{p}$.

Now let's do it the hard way: we want to find the amount of $\mathbf{p}$ to minimize the (length of the) error. The squared length of the error is

$$\|\mathbf{e}\|^2 = \langle \mathbf{x} - c\,\mathbf{p},\, \mathbf{x} - c\,\mathbf{p} \rangle = \langle \mathbf{x}, \mathbf{x} \rangle - 2c\,\langle \mathbf{x}, \mathbf{p} \rangle + c^2 \langle \mathbf{p}, \mathbf{p} \rangle.$$

To minimize this, take the derivative with respect to the coefficient $c$ and equate to zero:

$$\frac{d}{dc}\|\mathbf{e}\|^2 = -2\langle \mathbf{x}, \mathbf{p} \rangle + 2c\,\langle \mathbf{p}, \mathbf{p} \rangle = 0.$$

Solving for the coefficient,

$$c = \frac{\langle \mathbf{x}, \mathbf{p} \rangle}{\langle \mathbf{p}, \mathbf{p} \rangle}.$$

This is the same one we got before.
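
As a numerical check (with vectors chosen arbitrarily for illustration), a short NumPy sketch confirms that this coefficient leaves an error orthogonal to $\mathbf{p}$:

```python
import numpy as np

x = np.array([3.0, 1.0])
p = np.array([1.0, 1.0])

# Best coefficient from the orthogonality principle: c = <x, p> / <p, p>.
c = np.dot(x, p) / np.dot(p, p)   # 4 / 2 = 2.0

e = x - c * p                     # error vector [1, -1]
print(np.dot(e, p))               # 0.0: the error is orthogonal to p
```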

We may actually have more than one "ingredient" vector to deal with. Suppose we want to approximate $\mathbf{x}$ with the vectors $\mathbf{p}_1$ and $\mathbf{p}_2$. As before, write

$$\mathbf{x} = c_1 \mathbf{p}_1 + c_2 \mathbf{p}_2 + \mathbf{e},$$

where $\mathbf{e}$ is the error in the approximation. Note that we can write this in the following way:

$$\mathbf{x} = \begin{bmatrix} \mathbf{p}_1 & \mathbf{p}_2 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + \mathbf{e},$$

using the usual matrix multiplication. We want to find the coefficients $c_1$ and $c_2$ to minimize the length of the error. We could do it the calculus way, or using our orthogonality idea. We will go for the latter. That the error is orthogonal to the data means that

$$\langle \mathbf{e}, \mathbf{p}_1 \rangle = 0 \qquad \text{and} \qquad \langle \mathbf{e}, \mathbf{p}_2 \rangle = 0$$

(that is, the error is orthogonal to each of the ingredient "data" points). Expanding these out gives

$$\langle \mathbf{x} - c_1 \mathbf{p}_1 - c_2 \mathbf{p}_2,\, \mathbf{p}_1 \rangle = 0,$$
$$\langle \mathbf{x} - c_1 \mathbf{p}_1 - c_2 \mathbf{p}_2,\, \mathbf{p}_2 \rangle = 0.$$

This is two equations in two unknowns that we can write in the form

$$\begin{bmatrix} \langle \mathbf{p}_1, \mathbf{p}_1 \rangle & \langle \mathbf{p}_2, \mathbf{p}_1 \rangle \\ \langle \mathbf{p}_1, \mathbf{p}_2 \rangle & \langle \mathbf{p}_2, \mathbf{p}_2 \rangle \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \langle \mathbf{x}, \mathbf{p}_1 \rangle \\ \langle \mathbf{x}, \mathbf{p}_2 \rangle \end{bmatrix}.$$

If we know $\mathbf{x}$ and the ingredient vectors, we can solve for the coefficients.
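
A NumPy sketch of this two-ingredient solve (the data are chosen for illustration; `np.linalg.solve` handles the 2-by-2 system):

```python
import numpy as np

# Illustrative data: x is not in the span of p1 and p2.
x  = np.array([1.0, 2.0, 4.0])
p1 = np.array([1.0, 0.0, 1.0])
p2 = np.array([0.0, 1.0, 1.0])

# Two equations in two unknowns (the normal equations).
R = np.array([[np.dot(p1, p1), np.dot(p2, p1)],
              [np.dot(p1, p2), np.dot(p2, p2)]])
b = np.array([np.dot(x, p1), np.dot(x, p2)])

c = np.linalg.solve(R, b)             # coefficients c1 and c2
e = x - (c[0] * p1 + c[1] * p2)       # error in the approximation
print(np.dot(e, p1), np.dot(e, p2))   # both ~0: error orthogonal to the data
```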

Of course, what we can do for two ingredient vectors, we can do for $m$ ingredient vectors (and $m$ may be infinite). We want to approximate $\mathbf{x}$ as

$$\mathbf{x} \approx c_1 \mathbf{p}_1 + c_2 \mathbf{p}_2 + \cdots + c_m \mathbf{p}_m.$$

We can find the set of coefficients that minimize the length of the error using the orthogonality principle as before, applied $m$ times. This gives us $m$ equations in the $m$ unknowns $c_1, c_2, \ldots, c_m$, which may be written as

$$\begin{bmatrix} \langle \mathbf{p}_1, \mathbf{p}_1 \rangle & \cdots & \langle \mathbf{p}_m, \mathbf{p}_1 \rangle \\ \vdots & \ddots & \vdots \\ \langle \mathbf{p}_1, \mathbf{p}_m \rangle & \cdots & \langle \mathbf{p}_m, \mathbf{p}_m \rangle \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} = \begin{bmatrix} \langle \mathbf{x}, \mathbf{p}_1 \rangle \\ \vdots \\ \langle \mathbf{x}, \mathbf{p}_m \rangle \end{bmatrix}.$$

This could be readily solved (say, using Matlab).
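
The lecture suggests Matlab; an equivalent NumPy sketch for general $m$ might look like the following (the helper name `best_coefficients` and the random data are assumptions for illustration):

```python
import numpy as np

def best_coefficients(x, P):
    """Least-squares coefficients for the columns of P as ingredients.

    P is n-by-m with the ingredient vectors as columns, so P.T @ P is
    the m-by-m matrix of inner products <p_j, p_k> from the equations above.
    """
    R = P.T @ P
    b = P.T @ x
    return np.linalg.solve(R, b)

# Made-up example: approximate a vector in R^4 with m = 3 ingredients.
rng = np.random.default_rng(0)
P = rng.standard_normal((4, 3))
x = rng.standard_normal(4)

c = best_coefficients(x, P)
e = x - P @ c
print(P.T @ e)   # ~0 in every entry: error orthogonal to each ingredient
```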

It would seem that if we take $m$ large enough, we should be able to represent any vector without any error. (Analogy: given enough ingredients, we could make any cake. We might not be able to make everything, but we could make everything in some class of objects.) If this is true, the set of ingredient vectors is said to be complete. A more formal name for the ingredient vectors is basis vectors.

Although we have come up with a way of doing the approximation, there is still a lot of work to solve for the coefficients, since we have to first build an $m \times m$ matrix of inner products and then invert it. Something that is commonly done is to choose a set of basis vectors that is orthogonal. That is, if $\mathbf{p}_j$ and $\mathbf{p}_k$ are any pair of distinct basis vectors, then

$$\langle \mathbf{p}_j, \mathbf{p}_k \rangle = 0, \qquad j \neq k.$$

Let us return to the case of two basis vectors when the vectors are orthogonal. Then the equation for the coefficients becomes

$$\begin{bmatrix} \langle \mathbf{p}_1, \mathbf{p}_1 \rangle & 0 \\ 0 & \langle \mathbf{p}_2, \mathbf{p}_2 \rangle \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \langle \mathbf{x}, \mathbf{p}_1 \rangle \\ \langle \mathbf{x}, \mathbf{p}_2 \rangle \end{bmatrix},$$

or

$$c_1 \langle \mathbf{p}_1, \mathbf{p}_1 \rangle = \langle \mathbf{x}, \mathbf{p}_1 \rangle, \qquad c_2 \langle \mathbf{p}_2, \mathbf{p}_2 \rangle = \langle \mathbf{x}, \mathbf{p}_2 \rangle,$$

so the coefficients are

$$c_1 = \frac{\langle \mathbf{x}, \mathbf{p}_1 \rangle}{\langle \mathbf{p}_1, \mathbf{p}_1 \rangle}, \qquad c_2 = \frac{\langle \mathbf{x}, \mathbf{p}_2 \rangle}{\langle \mathbf{p}_2, \mathbf{p}_2 \rangle}.$$

So solving for the coefficients in this case is as easy as doing it for the case of a single vector, and each coefficient is simply the projection of $\mathbf{x}$ onto its corresponding basis vector. This generalizes to $m$ basis vectors: if the basis vectors are orthogonal, then the $k$th coefficient is simply

$$c_k = \frac{\langle \mathbf{x}, \mathbf{p}_k \rangle}{\langle \mathbf{p}_k, \mathbf{p}_k \rangle}.$$
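
A final sketch (with an orthogonal pair I chose for illustration) showing that each coefficient reduces to a projection, with no matrix to invert:

```python
import numpy as np

x  = np.array([1.0, 2.0, 3.0])
p1 = np.array([1.0, 1.0, 0.0])   # <p1, p2> = 0: an orthogonal pair
p2 = np.array([1.0, -1.0, 0.0])

# With orthogonal basis vectors, each coefficient is just the
# projection of x onto its basis vector.
c1 = np.dot(x, p1) / np.dot(p1, p1)   # 1.5
c2 = np.dot(x, p2) / np.dot(p2, p2)   # -0.5

print(c1 * p1 + c2 * p2)   # [1. 2. 0.]: best approximation in span{p1, p2}
```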
