System Identification

In this chapter we introduce the system identification framework. First, a bird’s-eye view of the main actors in system identification is given. Next, some properties of estimators (unbiasedness, consistency, and efficiency) are discussed, followed by a series of exercises that treat the different system identification aspects in more detail. Typical questions that are addressed are: How to design an experiment? What are the important aspects of a model? How to tune the complexity of the model? Should we always use a weighted least squares cost function? By the end of this chapter, the reader will be ready to apply data-driven modeling to the identification of dynamic systems.

The following hands-on exercises guide the reader through the system identification process:



Encounter the Main Actors in System Identification
This Hands-On exercise provides a bird’s-eye perspective on the data-driven modeling approach. An introduction to the main actors (the data, the model, the cost function, and the validation) is given. Although the discussion is based on a very simple example (a first-order system), we draw general conclusions that are further elaborated in the other Hands-On illustrations in this chapter.

What you will learn:
– An integrated picture of the data-driven modeling process is provided.
– The role and importance of the main actors: system identification is a process where all the main actors come together.
– A good experiment reduces the impact of the disturbing noise on the results.
– How to represent the system through non-parametric or parametric model structures.
– The choice of the cost function determines the properties of the estimates.
– How to select the model structure and tune the model complexity.
– The plant and the noise model can be identified simultaneously.
– How to provide confidence regions on the estimated parameters.
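As a foretaste, the sketch below (an illustrative MATLAB fragment with made-up values, not taken from the live script) runs through all four actors for a first-order system: noisy data are generated, a model structure is postulated, a least squares cost is minimized, and the resulting estimate is validated against the true value.

    % Illustrative sketch: identify the pole of y(t) = a*y(t-1) + u(t) + e(t)
    rng(1);                          % reproducible noise realization
    N  = 1000;                       % record length
    a0 = 0.9;                        % true system parameter
    u  = randn(N,1);                 % white excitation (the experiment/data)
    e  = 0.1*randn(N,1);             % disturbing noise
    y  = zeros(N,1);
    for t = 2:N
        y(t) = a0*y(t-1) + u(t) + e(t);          % simulate the system
    end
    % Model: y(t) = a*y(t-1) + u(t); cost: least squares on the equation error
    aHat = (y(1:N-1)'*(y(2:N) - u(2:N))) / (y(1:N-1)'*y(1:N-1));
    disp([a0 aHat])                  % validation: estimate vs. true value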

Download the MATLAB® live script Main Actors to run the session.
A PDF version of the file can be downloaded here.



Location Properties: Unbiasedness and Consistency
In this Hands-On, the location properties of estimators are studied. Estimates depend on data that are affected by random disturbances and are therefore also stochastic variables. We cannot expect that the parameter values estimated from a finite-length experiment equal the true value; we should relax our expectations. Here we consider two options: unbiased estimators (the expected value of the estimate equals the true value) and consistent estimators (the estimate converges in probability to the true value). In this Hands-On, both properties are introduced, discussed in more detail, and illustrated on some examples.
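As a small numerical illustration (made-up values, not taken from the live script), the Monte Carlo sketch below checks both properties for the sample mean of normally distributed data: averaged over many repeated experiments, the estimate stays close to the true value (unbiasedness), and its spread shrinks as the record length grows (consistency).

    % Illustrative sketch: unbiasedness and consistency of the sample mean
    rng(1);
    mu = 2; sigma = 1;               % true mean and standard deviation
    R  = 10000;                      % number of repeated experiments
    for N = [10 100 1000]            % increasing record length
        est = mean(mu + sigma*randn(N,R));       % sample mean per experiment
        fprintf('N = %4d: mean of estimates = %.3f, std = %.3f\n', ...
                N, mean(est), std(est));
    end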

Download the MATLAB® live script Location Properties to run the session.
A PDF version of the file can be downloaded here.


Dispersion Properties: The Covariance Matrix
In this exercise, we study the covariance matrix of the estimates. Estimates depend on data that are affected by random disturbances and are therefore also stochastic variables. The stochastic behavior of an estimate is fully characterized by its pdf (probability density function). In practice, the mean (location characteristic) and the covariance matrix (dispersion characteristic) are often used instead because these are much easier to retrieve. The location characteristic of the estimates was studied in the previous exercise (see Location Properties: Unbiasedness and Consistency); here we study the covariance matrix of the estimates and how it can be obtained during the estimation procedure. So, at the end of the estimation process, an estimate of the parameters along with their covariance matrix is available to the user.

What you will learn:
– Exact expression for the covariance matrix for models that are linear-in-the-parameters.
– Approximate expression for the covariance matrix for general models.
– Relation between the covariance matrix and the cost function.
– Dependency of the covariance matrix on the experiment.
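To make the first item concrete: for a model that is linear in the parameters, y = K*theta + e with white noise of variance sigma^2, the covariance of the least squares estimate is exactly sigma^2*(K'*K)^(-1). The sketch below (illustrative values only, not taken from the live script) verifies this expression by Monte Carlo simulation.

    % Illustrative sketch: covariance of the least squares estimate
    rng(1);
    N = 100; sigma = 0.5;
    u = randn(N,1);
    K = [ones(N,1) u];               % regression matrix (linear in theta)
    theta0 = [1; 2];                 % true parameter values
    covExact = sigma^2*inv(K'*K);    % exact covariance expression
    R = 20000; est = zeros(2,R);     % Monte Carlo verification
    for r = 1:R
        y = K*theta0 + sigma*randn(N,1);
        est(:,r) = K\y;              % least squares estimate
    end
    disp(covExact); disp(cov(est'))  % the two matrices should be close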

Download the MATLAB® live script Dispersion Properties to run the session.
A PDF version of the file can be downloaded here.



Likelihood Function 
In this section, the likelihood function is introduced. It is a very useful tool for embedding system identification in a systematic statistical framework, which serves as a guideline for making a good selection among the many user choices, especially when matching the model to the data.

What you will learn: 
– Introduction to the likelihood function.
– Role of the likelihood function in system identification.
– How to obtain the likelihood function.
– Illustration on the estimation of the mean of a normal distribution.
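As a preview of the last item, the sketch below (illustrative values, not taken from the live script) scans the negative log-likelihood for the mean of normally distributed data with known variance and confirms that its minimizer coincides with the sample mean.

    % Illustrative sketch: log-likelihood of the mean of Gaussian data
    rng(1);
    mu0 = 2; sigma = 1; N = 50;
    y  = mu0 + sigma*randn(N,1);     % observations
    mu = linspace(0,4,401);          % candidate values for the mean
    negLogL = zeros(size(mu));
    for k = 1:numel(mu)
        negLogL(k) = sum((y - mu(k)).^2)/(2*sigma^2);   % up to a constant
    end
    [~,iMin] = min(negLogL);
    fprintf('ML estimate %.3f, sample mean %.3f\n', mu(iMin), mean(y))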

Download the MATLAB® live script Likelihood Function to run the session.
A PDF version of the file can be downloaded here.




Fisher Information and Cramér-Rao Lower Bound 
How much information is hidden in the experimental data? Because these data are disturbed by noise, an uncertainty will always remain on the estimates that are calculated from them. For a finite-length experiment with noisy observations, the covariance matrix of the parameter estimates cannot be made arbitrarily small. The Fisher information matrix (how much information is there in the experiment?) and the Cramér-Rao lower bound (what is the lower bound on the covariance matrix of the estimated parameters?) are introduced to quantify these statements.

What you will learn: 
– The Fisher information matrix and Cramér-Rao lower bound are defined. 
– Simple examples are given to show how to calculate and interpret these matrices. 
– The concept of efficient estimators is introduced.
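For example, for N independent observations of a Gaussian variable with known standard deviation sigma, the Fisher information on the mean is N/sigma^2, so the Cramér-Rao lower bound equals sigma^2/N; the sample mean attains this bound and is therefore efficient. The sketch below (illustrative values only, not taken from the live script) verifies this numerically.

    % Illustrative sketch: Cramér-Rao lower bound for the mean of Gaussian data
    rng(1);
    mu0 = 2; sigma = 1; N = 100;
    crlb = sigma^2/N;                % inverse of the Fisher information N/sigma^2
    R = 20000;                       % number of repeated experiments
    est = mean(mu0 + sigma*randn(N,R));   % sample mean in each experiment
    fprintf('CRLB = %.5f, observed variance = %.5f\n', crlb, var(est))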

Download the MATLAB® live script Fisher Information and Cramér-Rao Lower Bound to run the session.
A PDF version of the file can be downloaded here.



Matching the Model to the Data: Choice of the Error Signal and Cost Function
In this Hands-On we learn how to match the model to the data. Two important choices must be made:
– What error signal will be used to describe the difference between the modeled and the experimental data?
– What cost function will be used to turn the error signal into a distance between the model and the data that can be minimized with respect to the model parameters?
Both aspects are studied and illustrated in detail in this Hands-On. Using the maximum likelihood framework, we offer a systematic approach to select the error signal and the cost function. We also provide more insight into why some choices are to be preferred over others, depending on the specific situation. This will guide the reader towards good solutions in new situations.

What you will learn:
– The maximum likelihood framework is presented as a guideline for the selection of the error signal and the cost function.
– A set of simple examples illustrates the flexibility and the power of the framework.
– A summary of the most important properties of the maximum likelihood estimator is given.
– Adding prior information to the data can improve the properties of the maximum likelihood estimator for small data sets.
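As one concrete instance (an illustrative sketch with made-up values, not taken from the live script): when the noise standard deviation differs from sample to sample and is known, the maximum likelihood cost for a model that is linear in the parameters is a least squares cost weighted by the inverse noise variances.

    % Illustrative sketch: weighted least squares as the ML choice
    rng(1);
    N = 200; u = randn(N,1);
    K = [ones(N,1) u]; theta0 = [1; 2];
    sigma = 0.1 + rand(N,1);         % known, sample-dependent noise std
    y = K*theta0 + sigma.*randn(N,1);
    W = diag(1./sigma.^2);           % ML weighting: inverse noise variance
    thetaWLS = (K'*W*K)\(K'*W*y);    % weighted least squares estimate
    thetaLS  = K\y;                  % unweighted estimate, for comparison
    disp([theta0 thetaWLS thetaLS])  % WLS is typically closer to theta0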

Download the MATLAB® live script Matching the Model to the Data to run the session.
A PDF version of the file can be downloaded here.



Tuning the Model Complexity
How to select the complexity of a model? That question is addressed in this Hands-On.
Increasing the model complexity reduces the structural model errors (more aspects of the system’s behavior are covered by the model) at the cost of an increased variability (more parameters must be estimated from the same amount of information in the experimental data). A bias-variance trade-off should be made to balance the impact of both effects on the model quality. We should check whether an increased model complexity leads to a significant improvement of the model quality.

What you will learn:
– What is the price for increasing the model complexity?
– How to tune the model complexity when the plant and noise model are estimated?
– How to tune the model complexity when a prior noise model is available?
– Use the auto- and cross-correlation test to check the model quality.
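The sketch below gives a minimal illustration of the bias-variance trade-off (made-up system and noise level, not taken from the live script): polynomial models of increasing order are fitted to noisy data, and the model quality is checked on a fresh validation record. The validation error first drops (shrinking bias) and then rises again (growing variance).

    % Illustrative sketch: bias-variance trade-off in the model order
    rng(1);
    N  = 50; x = linspace(-1,1,N)';
    y0 = sin(3*x);                   % true (nonlinear) system
    yEst = y0 + 0.1*randn(N,1);      % estimation data
    yVal = y0 + 0.1*randn(N,1);      % fresh validation data
    for order = 1:10
        p   = polyfit(x, yEst, order);       % fit model of growing complexity
        err = yVal - polyval(p, x);          % error on the validation data
        fprintf('order %2d: validation rms = %.4f\n', order, sqrt(mean(err.^2)));
    end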

Download the MATLAB® live script Tuning the Model Complexity to run the session.
A PDF version of the file can be downloaded here.



Identification of Linear-in-the-Parameters Models
In this Hands-On we show that the (weighted) least squares estimation of models that are linear in the parameters has an analytical solution. No nonlinear optimization procedures are needed to find the estimates. This makes this simple but versatile approach a very attractive first-step method in system identification.

What you will learn:
– The (weighted) least squares identification problem is formulated (data, model, estimates).
– The statistical properties of the estimates are studied (bias, covariance, distribution).
– The best choice for the weighting matrix is discussed.
– The numerical stability of the analytical solution is studied.
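For y = K*theta + e, the analytical least squares solution is theta_hat = (K'*K)^(-1)*K'*y. The sketch below (illustrative values, not taken from the live script) computes it both via the textbook normal equations and via MATLAB's backslash operator, which relies on a numerically more stable QR factorization.

    % Illustrative sketch: analytical least squares solution
    rng(1);
    N = 100; u = randn(N,1);
    K = [ones(N,1) u u.^2];          % model linear in the parameters
    theta0 = [1; -2; 0.5];           % true parameter values
    y = K*theta0 + 0.1*randn(N,1);
    thetaNormal = inv(K'*K)*(K'*y);  % normal equations (can be ill-conditioned)
    thetaQR     = K\y;               % QR-based solution, numerically preferable
    disp([theta0 thetaNormal thetaQR])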

Download the MATLAB® live script Identification of Linear-in-the-Parameters Models to run the session.
A PDF version of the file can be downloaded here.



Recursive Identification – Illustration on the Sample Mean
In this Hands-On we illustrate the idea of recursive identification on the calculation of the sample mean. Instead of waiting until all measurements are available (batch estimation), the parameter estimates are updated each time a new sample becomes available, using a recursive implementation.
In this Hands-On, we only illustrate the idea on the example of the sample mean. The reader is referred to the literature for a further discussion.

What you will learn:
– Recursive least-squares estimation of the sample mean.
– Discussion of the main actors in the algorithm.
– Discussion of the role of the gain factor in the recursive algorithm.
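A minimal sketch of the idea (illustrative, not taken from the live script): the sample mean after n samples equals the previous estimate plus a gain 1/n times the difference between the new sample and that estimate, so no past data need to be stored.

    % Illustrative sketch: recursive computation of the sample mean
    rng(1);
    x = 2 + randn(1000,1);           % measurements arriving one by one
    mHat = 0;                        % initial estimate
    for n = 1:numel(x)
        gain = 1/n;                          % decreasing gain factor
        mHat = mHat + gain*(x(n) - mHat);    % update with the new sample
    end
    fprintf('recursive %.4f, batch %.4f\n', mHat, mean(x))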

Download the MATLAB® live script Recursive Estimation to run the session.
A PDF version of the file can be downloaded here.