# Lecture 3: Function Spaces

One of the neat things about all of this is that we can apply the same techniques to infinite-dimensional spaces, which include spaces of functions. Now our ingredients are not vectors in the usual sense, but functions. Suppose we have a set of ingredient functions $\{\phi_k(t)\}$, $k = 1, \ldots, N$. We want to find a representation of some other function $x(t)$ in terms of these functions:

$$\hat{x}(t) = \sum_{k=1}^{N} a_k \phi_k(t).$$
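A minimal numerical sketch of such a representation, evaluating the sum $\sum_k a_k \phi_k(t)$ on a sample grid. The particular basis (monomials) and the coefficient values are illustrative assumptions, not choices made in the lecture:

```python
import numpy as np

# Represent a function as a linear combination of "ingredient" functions.
# Basis (monomials) and coefficients here are illustrative assumptions.
t = np.linspace(0.0, 1.0, 200)          # sample grid standing in for continuous t
basis = [np.ones_like(t), t, t**2]      # phi_1(t), phi_2(t), phi_3(t)
a = [1.0, -2.0, 3.0]                    # coefficients a_k

# The representation x_hat(t) = sum_k a_k * phi_k(t), evaluated on the grid.
x_hat = sum(ak * phik for ak, phik in zip(a, basis))
```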

For functions, the inner product becomes an integral: $\langle f, g \rangle = \int f(t)\, g(t)\, dt$. Getting back to our representation, we again write $\hat{x}(t) = \sum_{k} a_k \phi_k(t)$.

To find the coefficients, we proceed as we did in the vector case: we want the error between the function and its representation to be as small as possible. This again produces the **orthogonality theorem**: the error is orthogonal to the data. This leads to the following equation for the coefficients:

$$\sum_{k=1}^{N} a_k \langle \phi_k, \phi_j \rangle = \langle x, \phi_j \rangle, \qquad j = 1, \ldots, N.$$
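The coefficient equations can be solved numerically by approximating each inner product $\langle f, g \rangle = \int f(t)\,g(t)\,dt$ with a quadrature rule. A sketch, where the basis (monomials) and the target function $x(t) = e^t$ are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the coefficient equations
#     sum_k a_k <phi_k, phi_j> = <x, phi_j>.
# Basis (monomials) and target function are illustrative assumptions.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def inner(f, g):
    """Approximate <f, g> = integral f(t) g(t) dt via the trapezoidal rule."""
    h = f * g
    return dt * (h.sum() - 0.5 * (h[0] + h[-1]))

basis = [np.ones_like(t), t, t**2]   # phi_1, phi_2, phi_3 (not orthogonal)
x = np.exp(t)                        # function to be represented

# Assemble and solve G a = b, with G[j, k] = <phi_k, phi_j>, b[j] = <x, phi_j>.
G = np.array([[inner(pk, pj) for pk in basis] for pj in basis])
b = np.array([inner(x, pj) for pj in basis])
a = np.linalg.solve(G, b)

# Orthogonality theorem check: the error is orthogonal to every basis function.
x_hat = sum(ak * pk for ak, pk in zip(a, basis))
err = x - x_hat
residuals = [inner(err, pj) for pj in basis]
```

Note that this is exactly the same linear system as in the vector case; only the inner product has changed from a dot product to an integral.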

**It doesn't matter whether you are dealing with vectors or functions: the result is the same.** This means that any geometric insight you obtain from vectors can be applied to functions represented in this way. This is an extremely powerful notion and, in a sense, forms the very heart of digital communications and a good part of signal processing theory.

As before, it is often convenient to deal with orthogonal functions. Suppose
that the set of basis functions that we choose is orthogonal, so that

$$\langle \phi_j, \phi_k \rangle = 0 \quad \text{for } j \neq k.$$

Then the equations for the coefficients decouple, and each coefficient can be computed independently:

$$a_k = \frac{\langle x, \phi_k \rangle}{\langle \phi_k, \phi_k \rangle}.$$

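With an orthogonal basis, no linear system needs to be solved: each coefficient comes from a single inner product. A sketch using the sine functions $\sin(kt)$ on $[0, \pi]$, which happen to be orthogonal there; the basis and the target function are illustrative choices, not from the lecture:

```python
import numpy as np

# With an orthogonal basis, each coefficient is a single inner product:
#     a_k = <x, phi_k> / <phi_k, phi_k>.
# Sine basis on [0, pi] and target function are illustrative choices.
t = np.linspace(0.0, np.pi, 2001)
dt = t[1] - t[0]

def inner(f, g):
    """Approximate <f, g> = integral f(t) g(t) dt via the trapezoidal rule."""
    h = f * g
    return dt * (h.sum() - 0.5 * (h[0] + h[-1]))

basis = [np.sin(k * t) for k in (1, 2, 3)]   # mutually orthogonal on [0, pi]
cross = inner(basis[0], basis[1])            # should be (numerically) zero

x = t * (np.pi - t)                          # target function
a = [inner(x, pk) / inner(pk, pk) for pk in basis]
x_hat = sum(ak * pk for ak, pk in zip(a, basis))
```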
Now we are ready to deal with Fourier series!