
# Programming Assignments


## System Model

For brevity, we will employ an operator notation frequently used in the literature. If

$$H(z) = \sum_{k} h[k] z^{-k},$$

we will use the notation

$$y[t] = H(z)\, u[t]$$

as a shorthand for the filtering (convolution) operation

$$y[t] = \sum_{k} h[k]\, u[t-k].$$

A known discrete-time input signal $u[t]$ is applied to a system which is assumed to be FIR, having transfer function

$$B(z) = \sum_{k=0}^{m} b_k z^{-k},$$

where we will also assume that the order $m$ of the filter is known. The filtered signal is corrupted by an additive noise signal $v[t]$ which is assumed to be autoregressive (AR):

$$v[t] = H(z)\, w[t],$$

where $w[t]$ is a zero-mean, stationary, ergodic, white-noise signal with variance $\sigma_w^2$, and $H(z)$ has the all-pole form

$$H(z) = \frac{1}{A(z)} = \frac{1}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_p z^{-p}}. \qquad (1)$$

The measured output signal is thus

$$y[t] = B(z)\, u[t] + v[t].$$

The system identification problem to be addressed here is this: given the input/output measurements $\{u[t], y[t]\}$, estimate $B(z)$ and $A(z)$.
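As a concrete reference point, the measurement model can be simulated directly. The sketch below assumes NumPy/SciPy; the coefficient values and the variable names (`u`, `w`, `v`, `y`) are illustrative assumptions, not given in the text.

```python
# Sketch of the measurement model y[t] = B(z) u[t] + v[t], v[t] = (1/A(z)) w[t].
# All coefficient values here are assumed examples.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 10000

b = np.array([1.0, 0.5, -0.25])   # FIR system B(z), order m = 2 (example)
a = np.array([1.0, -0.8, 0.2])    # AR polynomial A(z), order p = 2 (example)
sigma_w = 0.3                     # standard deviation of the white noise w[t]

u = rng.standard_normal(N)              # known input signal u[t]
w = sigma_w * rng.standard_normal(N)    # zero-mean white noise w[t]
v = lfilter([1.0], a, w)                # AR noise: v[t] = (1/A(z)) w[t]
y = lfilter(b, [1.0], u) + v            # measured output y[t]
```

Here `lfilter([1.0], a, w)` realizes the all-pole filter $1/A(z)$, and `lfilter(b, [1.0], u)` realizes the FIR filter $B(z)$.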

## The AR Model and Its Prediction

The model for the noise can be written another way. The IIR filter $H(z) = 1/A(z)$ could be written (say, using long division) as

$$H(z) = \sum_{k=0}^{\infty} h[k] z^{-k},$$

with $h[0] = 1$. (That is why we call it IIR -- it has an infinite number of coefficients in its impulse response.) So

$$v[t] = \sum_{k=0}^{\infty} h[k]\, w[t-k] = w[t] + \sum_{k=1}^{\infty} h[k]\, w[t-k].$$

For the development below, we will find it convenient to develop a predictor for $v[t]$. Given the sequence of measurements $v[s]$ for $s \le t-1$, what is the best estimate of $v[t]$? We will denote this estimate by $\hat v[t]$. If by "best" we mean best in the minimum mean-squared error sense, we want to find an estimator which minimizes

$$E\!\left[(v[t] - \hat v[t])^2\right].$$

We know that this will be the conditional mean:

$$\hat v[t] = E\!\left[v[t] \mid v[t-1], v[t-2], \ldots\right].$$

Note that if we know all the data $\{v[s],\, s \le t-1\}$, we equivalently know all the data $\{w[s],\, s \le t-1\}$. Thus $\hat v[t]$ can equivalently be written

$$\hat v[t] = E\!\left[v[t] \mid w[t-1], w[t-2], \ldots\right].$$

From the form

$$v[t] = w[t] + \sum_{k=1}^{\infty} h[k]\, w[t-k]$$

we see immediately that this conditional expectation is

$$\hat v[t] = \sum_{k=1}^{\infty} h[k]\, w[t-k],$$

since $w[t]$ has zero mean. This can be written in our operator notation as

$$\hat v[t] = \left(H(z) - 1\right) w[t].$$

Since $w[t] = H^{-1}(z)\, v[t]$, we can write

$$\hat v[t] = \left(H(z) - 1\right) H^{-1}(z)\, v[t] = \left(1 - H^{-1}(z)\right) v[t]. \qquad (2)$$

Let the inverse transfer function be written in the form

$$H^{-1}(z) = \sum_{k=0}^{\infty} g[k] z^{-k}, \qquad (3)$$

where $g[0] = 1$. (Note: It should be clear that $g[k] \neq 1/h[k]$ in general.) In the case that $H(z)$ is in fact an all-pole filter as in (1), we have

$$H^{-1}(z) = A(z) = 1 + a_1 z^{-1} + \cdots + a_p z^{-p},$$

so that $g[k] = a_k$ for $k = 1, \ldots, p$ and $g[k] = 0$ for $k > p$. Substituting (3) into (2) we obtain in the general case

$$\hat v[t] = -\sum_{k=1}^{\infty} g[k]\, v[t-k].$$

In the case that $H(z)$ is an all-pole filter, we obtain

$$\hat v[t] = -\sum_{k=1}^{p} a_k\, v[t-k].$$

That is: the best predictor of $v[t]$ given previous measurements of $v$ has the form of an FIR filter, as shown in the block diagram here:
From a system identification perspective, we would want to choose the coefficients $a_k$ in this filter so that the prediction error $v[t] - \hat v[t]$ is as small as possible. That is, we would choose the $a_k$ so that

$$E\!\left[(v[t] - \hat v[t])^2\right] = E\!\left[\left(v[t] + \sum_{k=1}^{p} a_k\, v[t-k]\right)^{\!2}\right]$$

is as small as possible. We can write this in a vector form as follows. Let

$$\mathbf{a} = [a_1, a_2, \ldots, a_p]^T$$

and

$$\mathbf{v}[t-1] = [v[t-1], v[t-2], \ldots, v[t-p]]^T.$$

Then we want to minimize

$$E\!\left[\left(v[t] + \mathbf{a}^T \mathbf{v}[t-1]\right)^{2}\right]. \qquad (4)$$

Exercise 1:
Show that minimizing the expression in ( 4 ) leads to the Wiener-Hopf normal equations

$$R\, \mathbf{a} = -\mathbf{r}, \qquad (5)$$

where $R = E\!\left[\mathbf{v}[t-1]\, \mathbf{v}^T[t-1]\right]$ and $\mathbf{r} = E\!\left[v[t]\, \mathbf{v}[t-1]\right]$.

Also show that the minimum mean-squared prediction error is

$$E\!\left[(v[t] - \hat v[t])^2\right]_{\min} = r_v[0] + \mathbf{r}^T \mathbf{a},$$

where $\mathbf{a}$ is the solution to ( 5 ) and $r_v[0] = E[v^2[t]]$. This can be used as an estimate of the variance of the noise $w[t]$, $\sigma_w^2$.
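The result of Exercise 1 can be checked numerically by replacing the expectations with sample averages: estimate the autocorrelation of a simulated AR signal, solve ( 5 ), and compare with the true coefficients. A sketch under assumed example coefficients:

```python
# Sketch: normal-equation (Yule-Walker) estimate of the AR coefficients a_k
# and the noise variance, from v[t] alone. Example values are assumptions.
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(1)
N, p = 200_000, 2
a_true = np.array([1.0, -0.8, 0.2])       # A(z) = 1 - 0.8 z^-1 + 0.2 z^-2 (example)
w = 0.5 * rng.standard_normal(N)          # white noise, variance 0.25
v = lfilter([1.0], a_true, w)             # AR noise v[t]

# Sample autocorrelations r_v[k], k = 0..p (expectations -> sample averages)
r = np.array([np.mean(v[k:] * v[: N - k]) for k in range(p + 1)])

R = toeplitz(r[:p])                       # autocorrelation matrix R
a_hat = solve(R, -r[1:])                  # normal equations (5): R a = -r
sigma2_hat = r[0] + r[1:] @ a_hat         # minimum MSE = estimate of var(w)
```

With enough data, `a_hat` approaches $[a_1, a_2] = [-0.8, 0.2]$ and `sigma2_hat` approaches $\sigma_w^2 = 0.25$.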

## Prediction of y and System Identification

An important step in the overall system identification process is a one-step-ahead predictor for $y[t]$. Suppose that $y[s]$ is known for $s \le t-1$. What is the best estimate (prediction) that can be made for $y[t]$ using all of this information? We will denote this predictor as $\hat y[t]$. Since the input $u[t]$ is known, this prediction must be based on the best prediction of the noise $v[t]$ given the information up to time $t-1$, which we have denoted as $\hat v[t]$:

$$\hat y[t] = B(z)\, u[t] + \hat v[t].$$

We have seen above that $\hat v[t]$ can be written as $(1 - A(z))\, v[t]$, where $v[t] = y[t] - B(z)\, u[t]$. We thus have

$$\hat y[t] = B(z)\, u[t] + \left(1 - A(z)\right)\left(y[t] - B(z)\, u[t]\right),$$

or

$$\hat y[t] = \left(1 - A(z)\right) y[t] + A(z) B(z)\, u[t].$$

The prediction error is

$$e[t] = y[t] - \hat y[t].$$

Exercise 2
Show that the prediction error can be written as

$$e[t] = A(z)\left(y[t] - B(z)\, u[t]\right) = A(z)\, v[t] = w[t].$$

The result of this exercise is very important: the prediction errors for the optimal predictor form a white-noise sequence. The signal that cannot be predicted from past observations represents new information. For this reason, $e[t]$ is called the innovation at time $t$.
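This conclusion is easy to verify numerically: applying $A(z)$ to the AR noise recovers the white driving sequence exactly. A minimal sketch with assumed example coefficients:

```python
# Sketch: the optimal one-step prediction error e[t] = A(z) v[t] equals w[t].
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)
a = np.array([1.0, -0.8, 0.2])    # example A(z)
w = rng.standard_normal(5000)     # white innovation sequence
v = lfilter([1.0], a, w)          # AR noise: v[t] = (1/A(z)) w[t]
e = lfilter(a, [1.0], v)          # prediction error: e[t] = A(z) v[t]
# e matches w sample for sample (up to floating-point error): e is white.
```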

Let us take the prediction error $e[t] = y[t] - \hat y[t]$ and write it in two ways. We have

$$e[t] = A(z)\left(y[t] - B(z)\, u[t]\right). \qquad (6)$$

We can also write

$$e[t] = A(z)\, y[t] - B(z)\left(A(z)\, u[t]\right). \qquad (7)$$

Let us consider what we can learn from each of these.

### Representation 1

Suppose that $B(z)$ were known. Then in ( 6 ) we could form the signal

$$v[t] = y[t] - B(z)\, u[t].$$

With the input $u[t]$ known and the output $y[t]$ known, $v[t]$ can be computed. Now rewrite ( 6 ) as

$$e[t] = A(z)\, v[t] = v[t] + \sum_{k=1}^{p} a_k\, v[t-k]. \qquad (8)$$

The identification problem at this point is then: determine the coefficients $a_1, \ldots, a_p$ of the linear predictor that minimize $E[e^2[t]]$. The idea is suggested by the following figure.

Exercise 3
Let

$$\mathbf{v}[t-1] = [v[t-1], v[t-2], \ldots, v[t-p]]^T.$$

Show that the predictor which minimizes $E[e^2[t]]$ in ( 8 ) has coefficients determined by

$$R_{vv}\, \mathbf{a} = -\mathbf{r}_v, \qquad (9)$$

where

$$R_{vv} = E\!\left[\mathbf{v}[t-1]\, \mathbf{v}^T[t-1]\right]$$

is the autocorrelation matrix of $v[t]$, and where

$$\mathbf{r}_v = E\!\left[v[t]\, \mathbf{v}[t-1]\right]$$

is the cross-correlation vector between $v[t]$ and $\mathbf{v}[t-1]$, where

$$\mathbf{a} = [a_1, a_2, \ldots, a_p]^T.$$

### Representation 2

Now let us look at the representation in ( 7 ). Suppose that $A(z)$ were known, and let

$$\tilde y[t] = A(z)\, y[t], \qquad \tilde u[t] = A(z)\, u[t].$$

Then ( 7 ) can be written as

$$e[t] = \tilde y[t] - B(z)\, \tilde u[t] = \tilde y[t] - \sum_{k=0}^{m} b_k\, \tilde u[t-k]. \qquad (10)$$

A system identification problem can be stated: find $B(z)$ to minimize

$$E\!\left[e^2[t]\right].$$

The idea is suggested by the following figure.

Exercise 4
Let

$$\mathbf{b} = [b_0, b_1, \ldots, b_m]^T$$

be the vector of coefficients in $B(z)$ and let

$$\tilde{\mathbf{u}}[t] = [\tilde u[t], \tilde u[t-1], \ldots, \tilde u[t-m]]^T.$$

Show that the filter which minimizes $E[e^2[t]]$, with $e[t]$ as in ( 10 ), has coefficients determined by

$$R_{\tilde u \tilde u}\, \mathbf{b} = \mathbf{r}_{\tilde y \tilde u}, \qquad (11)$$

where

$$R_{\tilde u \tilde u} = E\!\left[\tilde{\mathbf{u}}[t]\, \tilde{\mathbf{u}}^T[t]\right]$$

is the autocorrelation matrix of $\tilde u[t]$ and where

$$\mathbf{r}_{\tilde y \tilde u} = E\!\left[\tilde y[t]\, \tilde{\mathbf{u}}[t]\right]$$

is the cross-correlation vector between $\tilde y[t]$ and $\tilde{\mathbf{u}}[t]$.
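This estimator can be sketched as follows: with $A(z)$ taken as known, prefilter both signals by $A(z)$ and solve ( 11 ) with expectations replaced by sample averages. All names and coefficient values below are illustrative assumptions:

```python
# Sketch: estimating B(z) when A(z) is known (representation 2, equation (11)).
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve

rng = np.random.default_rng(3)
N, m = 100_000, 2
b_true = np.array([1.0, 0.5, -0.25])      # example B(z)
a_true = np.array([1.0, -0.8, 0.2])       # example A(z), assumed known here
u = rng.standard_normal(N)
y = lfilter(b_true, [1.0], u) + lfilter([1.0], a_true, 0.3 * rng.standard_normal(N))

uf = lfilter(a_true, [1.0], u)            # prefiltered input  A(z) u[t]
yf = lfilter(a_true, [1.0], y)            # prefiltered output A(z) y[t]

# Regressor matrix with columns uf[t], uf[t-1], ..., uf[t-m]
U = np.column_stack([np.roll(uf, k) for k in range(m + 1)])
U[:m] = 0.0                               # discard samples wrapped by np.roll

R = (U.T @ U) / N                         # sample autocorrelation matrix of uf
r = (U.T @ yf) / N                        # sample cross-correlation with yf
b_hat = solve(R, r)                       # normal equations (11)
```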

### Putting the pieces together

The above sections suggest that:

• Knowing $B(z)$, we can estimate $A(z)$, and
• Knowing $A(z)$, we can estimate $B(z)$.

However, to begin with we don't know either $A(z)$ or $B(z)$. Nevertheless, all is not lost. We start with a guess of $B(z)$, then iterate as follows:

Pick an initial $B_0(z)$.

1. Solve for an updated $A(z)$ using ( 9 ).
2. Solve for an updated $B(z)$ using ( 11 ).
3. Repeat from step 1 until convergence.

A suggested $B_0(z)$ is something like

$$B_0(z) = 0.1\, z^{-\lfloor m/2 \rfloor}$$

(put a 0.1 in the middle of the impulse response).
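Putting it all together, the loop above can be sketched end to end. This is an illustrative implementation under assumed example coefficients (with a simple least-squares FIR fitter), not a prescribed solution to the assignment:

```python
# Sketch of the full iteration: alternate between (9) (update A given B)
# and (11) (update B given A). Coefficient values are assumed examples.
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(4)
N, m, p = 100_000, 2, 2
b_true = np.array([1.0, 0.5, -0.25])
a_true = np.array([1.0, -0.8, 0.2])
u = rng.standard_normal(N)
y = lfilter(b_true, [1.0], u) + lfilter([1.0], a_true, 0.3 * rng.standard_normal(N))

def fit_fir(x, d, order):
    """Least-squares FIR fit: minimize the sample average of (d - H(z) x)^2."""
    X = np.column_stack([np.roll(x, k) for k in range(order + 1)])
    X[:order] = 0.0                       # discard samples wrapped by np.roll
    return solve(X.T @ X, X.T @ d)

b = np.zeros(m + 1)
b[m // 2] = 0.1                           # initial guess B_0(z): 0.1 mid-impulse
for _ in range(10):
    # Step 1: update A(z) via (9), using v[t] = y[t] - B(z) u[t]
    v = y - lfilter(b, [1.0], u)
    r = np.array([np.mean(v[k:] * v[: N - k]) for k in range(p + 1)])
    a = np.concatenate(([1.0], solve(toeplitz(r[:p]), -r[1:])))
    # Step 2: update B(z) via (11), using the A(z)-prefiltered signals
    b = fit_fir(lfilter(a, [1.0], u), lfilter(a, [1.0], y), m)
```

In practice, a convergence test on successive coefficient estimates would replace the fixed iteration count.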
Copyright 2008, by the Contributing Authors. admin. (2006, June 13). Programming Assignments. Retrieved January 07, 2011, from Free Online Course Materials — USU OpenCourseWare Web site: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Stochastic_Processes/sysid_2.htm. This work is licensed under a Creative Commons License.