
# Programming Assignments


Solving for the unknown filter coefficients requires setting up the estimated autocorrelation and cross-correlation information, then solving a set of linear equations. This approach works provided that the system is stationary, that sufficient data are available to make accurate estimates, and that the computation does not take too long. (In real-time circumstances, the computation time might exceed the available computational resources.)

These demands can be overcome in part by the use of adaptive filters: filters that adjust themselves to minimize some mean-squared error criterion. Here we provide a very brief introduction to the topic by introducing what are known as LMS adaptive filters. (LMS stands for least mean squares.)

Let $x[n]$ be the input to an FIR filter, with output

$$y[n] = \sum_{k=0}^{m-1} w_k x[n-k].$$

The coefficients $w_0, w_1, \ldots, w_{m-1}$ are the FIR filter coefficients, also referred to as filter weights. This filter operation can be represented as

$$y[n] = \mathbf{w}^T \mathbf{x}[n],$$

where

$$\mathbf{w} = [w_0, w_1, \ldots, w_{m-1}]^T$$

is the vector of filter coefficients and

$$\mathbf{x}[n] = [x[n], x[n-1], \ldots, x[n-m+1]]^T$$

is the vector of filter inputs/memory.
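As a concrete illustration, the inner-product form of the FIR filter can be sketched in a few lines; the 3-tap weights and inputs below are arbitrary values chosen for the example:

```python
import numpy as np

# Hypothetical 3-tap FIR filter: the output is the inner product of the
# weight vector with the vector of the current and past m-1 inputs.
w = np.array([0.5, 0.3, 0.2])       # filter coefficients (weights)
x_vec = np.array([1.0, 2.0, 3.0])   # [x[n], x[n-1], x[n-2]]

y = w @ x_vec                       # y[n] = w^T x[n]
print(y)                            # 0.5*1 + 0.3*2 + 0.2*3 = 1.7
```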

Suppose that some desired signal $d[n]$ is available, and we wish to filter the input signal so that the filter output matches the desired signal as closely as possible. That is, we form the error

$$e[n] = d[n] - y[n] = d[n] - \mathbf{w}^T \mathbf{x}[n],$$

and choose $\mathbf{w}$ to minimize $e[n]$. More precisely, we desire to minimize the mean-squared error in $e[n]$:

$$J(\mathbf{w}) = E[e[n]^2].$$

Exercise 5
Show that $J(\mathbf{w})$ is minimized when

$$\mathbf{w} = R^{-1}\mathbf{p},$$

where $R = E[\mathbf{x}[n]\mathbf{x}[n]^T]$ and $\mathbf{p} = E[d[n]\mathbf{x}[n]]$.
By this point, this form of the optimal solution should be familiar.
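The optimal-weight solution from the exercise can be checked numerically by estimating R and p from data. In this sketch, the 3-tap system `w_true`, the signal length, and the white-noise input are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
w_true = np.array([0.4, -0.2, 0.1])  # "unknown" system to identify (assumed)
N = 20000
x = rng.standard_normal(N)           # white-noise input
d = np.convolve(x, w_true)[:N]       # desired signal: output of w_true

# Build the input vectors x[n] = [x[n], x[n-1], x[n-2]] and estimate
# R = E[x x^T] and p = E[d x] by sample averages.
X = np.column_stack([np.concatenate([np.zeros(k), x[:N-k]]) for k in range(m)])
R = X.T @ X / N
p = X.T @ d / N

w_opt = np.linalg.solve(R, p)        # MMSE solution w = R^{-1} p
print(np.round(w_opt, 2))            # recovers w_true (d is noiseless here)
```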

So far, the filter is not adaptive. We now make the filter adaptive as follows. We form an error measure as a function of the filter coefficients:

$$J(\mathbf{w}) = E[(d[n] - \mathbf{w}^T\mathbf{x}[n])^2].$$

Rather than minimizing $J$ all in one step, as before, we compute the gradient of $J$ with respect to $\mathbf{w}$, and update the current $\mathbf{w}$ by moving in the direction of the negative gradient. That is, we "slide downhill" on the surface of $J$. The idea of sliding downhill is conveyed in the following figure:
This figure shows the contours of a function of two variables. At each point in the plane, the contours are orthogonal to the direction of the gradient, with the gradient pointing in the direction of greatest increase. Thus, starting at an initial point and moving a small distance in the direction of the negative gradient at that point decreases the function value. Then starting at the new point and moving in the direction of the negative gradient there decreases the function value again. A series of such steps eventually reaches a local minimum. An update rule such as this is referred to as steepest descent.

We denote the gradient of $J$ with respect to $\mathbf{w}$ as $\nabla_{\mathbf{w}} J$. Based on this, an update rule can be written as

$$\mathbf{w}[n+1] = \mathbf{w}[n] - \mu \nabla_{\mathbf{w}} J(\mathbf{w}[n]).$$

That is, the filter weights at the next time around, $\mathbf{w}[n+1]$, are obtained by moving from the current weights, $\mathbf{w}[n]$, in the direction of the negative gradient, evaluated at the current weights. The quantity $\mu$ is a "step size," indicating how far the move should be.

Exercise 6
Show that

$$\nabla_{\mathbf{w}} J = \nabla_{\mathbf{w}} E[e[n]^2]$$

can be written as

$$\nabla_{\mathbf{w}} J = -2E[e[n]\mathbf{x}[n]].$$

Hence the weight update rule is

$$\mathbf{w}[n+1] = \mathbf{w}[n] + 2\mu E[e[n]\mathbf{x}[n]].$$
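This update rule can be tried on a small quadratic error surface. The matrix R, vector p, and step size below are made-up values for illustration; since $E[e[n]\mathbf{x}[n]] = \mathbf{p} - R\mathbf{w}$ is computed exactly here, this is true steepest descent rather than LMS:

```python
import numpy as np

# Steepest descent on the exact MSE surface, assuming R and p are known.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])           # autocorrelation matrix (assumed)
p = np.array([0.9, 0.3])             # cross-correlation vector (assumed)
mu = 0.1                             # step size

w = np.zeros(2)                      # start at the origin
for _ in range(200):
    # w[n+1] = w[n] + 2*mu*E[e[n] x[n]], with E[e x] = p - R w
    w = w + 2 * mu * (p - R @ w)

print(np.round(w, 3))                # converges to R^{-1} p = [1.0, -0.2]
```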

The last exercise describes how to update the coefficients of an adaptive filter so that the filter output becomes as close as possible to the desired signal (given enough time). Eventually, the solution converges to the exact MMSE solution $\mathbf{w} = R^{-1}\mathbf{p}$.

However, there is a practical problem with the filter to this point. It requires computing $E[e[n]\mathbf{x}[n]]$. That is, an expected value must be computed, which requires (theoretically) some kind of probability information, or some kind of ensemble to average over. This is problematic in practice. In the LMS filter, the stochastic gradient approximation is used:

Assume that, for every instance (draw) of a random variable, that instance is equal to the mean value of the random variable.
This seems somewhat reasonable. By the Chebyshev inequality, we would expect a randomly drawn value to be close to the mean value. On the other hand, it is not exact: we don't expect any toss of a die (taking values in the range 1 to 6) to have the value of the mean, which is 3.5!

Under the stochastic gradient approximation, we therefore assume that

$$E[e[n]\mathbf{x}[n]] \approx e[n]\mathbf{x}[n].$$

While we don't get precise equality at every time step, over a number of runs, or over a number of time steps, we will be right on average. Under this approximation, the gradient descent does not run strictly downhill in the steepest direction, but it does proceed downhill on average.

Under this approximation, the filter weight update rule is written

$$\mathbf{w}[n+1] = \mathbf{w}[n] + 2\mu e[n]\mathbf{x}[n].$$

Putting all the pieces together, we obtain the following algorithm for the LMS adaptive filter:

Starting from some initial filter coefficients $\mathbf{w}[0]$, and $n = 0$:

1. Given inputs $x[n]$ and $d[n]$, form the input vector $\mathbf{x}[n]$.
2. $y[n] = \mathbf{w}[n]^T \mathbf{x}[n]$ (compute filter output).
3. $e[n] = d[n] - y[n]$ (compute error between output and desired).
4. $\mathbf{w}[n+1] = \mathbf{w}[n] + 2\mu e[n]\mathbf{x}[n]$ (update filter coefficients).
5. $n \leftarrow n + 1$ (increment iteration number).
6. Repeat from step 1.
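The steps above translate directly into code. This is a minimal sketch, assuming a system-identification setup in which the desired signal is produced by an unknown 3-tap filter `w_true` (invented for the example):

```python
import numpy as np

def lms(x, d, m, mu):
    """LMS adaptive filter: returns final weights and per-sample error."""
    w = np.zeros(m)                     # initial filter coefficients w[0]
    e = np.zeros(len(x))
    for n in range(len(x)):
        # Form x[n] = [x[n], x[n-1], ..., x[n-m+1]] (zeros before time 0).
        x_vec = np.array([x[n-k] if n-k >= 0 else 0.0 for k in range(m)])
        y = w @ x_vec                   # compute filter output
        e[n] = d[n] - y                 # compute error between output and desired
        w = w + 2 * mu * e[n] * x_vec   # update filter coefficients
    return w, e

# Identify an unknown 3-tap system (assumed for illustration).
rng = np.random.default_rng(1)
w_true = np.array([0.4, -0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[:len(x)]     # noiseless desired signal

w, e = lms(x, d, m=3, mu=0.01)
print(np.round(w, 2))                   # close to w_true after convergence
```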

The adaptive filter configuration is often portrayed as in the figure here, where the block of filter weights $\mathbf{w}$ represents the adaptive filter.

The dashed line feeding back through the filter box is intended to suggest variability, as in a variable resistor.

The adaptive filter can be used as an adaptive predictor; that is, a predictor that learns the taps it needs to predict its input signal.

In this configuration, the input to the filter is the delayed signal $x[n-1]$. The output of the filter is interpreted as

$$\hat{x}[n] = \sum_{k=1}^{m} w_k x[n-k],$$

that is, the prediction of $x[n]$ based on previous information. The desired signal is $x[n]$: the filter adapts itself until $\hat{x}[n]$ is as close as possible to $x[n]$, using the information in $x[n-1], x[n-2], \ldots, x[n-m]$.
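A minimal sketch of the adaptive predictor, assuming a single-tap predictor and a first-order autoregressive input; both choices are illustrative (a real predictor would typically use more taps):

```python
import numpy as np

rng = np.random.default_rng(2)
a = 0.9                              # AR(1) coefficient (assumed)
N = 20000
x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n-1] + rng.standard_normal()

w = 0.0                              # single predictor tap
mu = 0.002                           # step size
for n in range(1, N):
    x_hat = w * x[n-1]               # prediction of x[n] from x[n-1]
    e = x[n] - x_hat                 # prediction error d[n] - y[n]
    w = w + 2 * mu * e * x[n-1]      # LMS weight update

print(round(w, 2))                   # settles near a = 0.9
```

For an AR(1) input, the best one-step predictor is $\hat{x}[n] = a\,x[n-1]$, so the single tap should learn a value close to the AR coefficient, with small fluctuations due to the stochastic gradient.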

Thus, the predictor in the figure in section 2.2.1 can be replaced with an adaptive filter.