Layman’s Summary

**Why do we need mathematical models?**

The prime goal of science is to help us understand, predict, and control the surrounding world. The scientific community makes intensive use of mathematical models to address this challenge in a systematic and quantified way. The models can be used to i) predict the future evolution of a system (for example, a weather forecast); ii) simulate what would happen under new conditions without needing to make time-consuming, expensive, or even dangerous experiments (for example, the simulation of a new airplane); and iii) gain a deeper physical insight into nature (for example, Newton's law of gravitation to describe the planets orbiting the sun).

Mathematical models are intensively used in traditional industrial and emerging high-technology applications coming, among others, from the mechanical, electrical, electronic, telecommunication, and automotive fields; biomechanical and biomedical applications can also take full advantage of such models. Highly structured models provide designers with (intuitive) insight that can guide them towards better solutions for tomorrow's products. Scientists in other disciplines also formalize their ideas using mathematical models. For instance, to address the challenges of global climate change, it is of utmost importance to understand all processes and interactions that affect the climate. Again, this boils down to the development of good mathematical descriptions.

Since the quality of the prediction, the simulation and the theoretical understanding depends directly on the quality of the model, we need good methods to build such models.

**Mathematical modeling: a generic activity**

Data-driven modeling is a very generic activity that is deployed in widely scattered application fields. The modeling effort is not the final goal; it is a tool for the optimal extraction of information from experimental data. Good tools provide access to information that is hidden in the experimental data and that would otherwise be lost or unreachable.

This wide but hidden use of data-driven modeling makes it very difficult for the general public to get a good feeling for it. Data-driven modeling cannot be confined to a single application, not even to a single application field (for example, telecommunication). The data-driven modeling framework provides the painter with good paintbrushes, facilitating the creation of a masterpiece. We are not the artist, but we offer full scope to her/his creative talent. Good data-modeling tools open new possibilities for science and engineering, but we still need good scientists and engineers in every field to arrive at new insights and groundbreaking applications.

**The main actors in data driven modeling**

Good models are a major tool used by engineers and scientists, but how can they be obtained? Because a mathematical model is often closely linked to the physical problem being studied, specialists in the field can clearly provide valuable input. They know what aspects are important and what processes can be neglected, eventually resulting in a physical model. But that is not enough to arrive at a usable model. The parameters in the physical model (also called a white box model) should be tuned such that it matches reality as well as possible. Because physical modeling is often time-consuming, alternative black box modeling approaches were developed to reduce the need for expensive physical knowledge. In black box modeling, a flexible model structure is tuned to the data, without making extensive use of physical insight. And this brings us to the essence of every data-driven modeling activity, which can be organized along the following basic steps and questions:

– *Experiment design*: Experimental data are collected. What are the best experiments to perform?

– *Model selection*: A general model structure is proposed. How do we select from the various options?

– *Matching model and data*: The model and the experimental data should be matched as closely as possible. How do we select the criteria for measuring the quality of the match?

– *Validation*: What is the quality of the final model? Is it also a valid model that can explain new, unseen data?

Although the answers to these questions are closely linked to the specific goal and application field, it turns out that the tools used to address them are very universal. System identification is the theory that systematically and comprehensively addresses how to model dynamical systems from experimental data.
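The four basic steps above can be sketched in a few lines of code. The sketch below is a hypothetical toy example, not part of the website's material: the "system" is an assumed straight line, the model structure is chosen to match it, the match is made with a least-squares criterion, and the model is checked on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Experiment design: choose the inputs to apply to the (toy) system.
u = np.linspace(-1.0, 1.0, 50)
y_true = 2.0 * u + 0.5                               # assumed "true" system
y = y_true + 0.1 * rng.standard_normal(u.size)       # measurements disturbed by noise

# 2. Model selection: propose a simple structure, here y = a*u + b.
# 3. Matching model and data: least-squares criterion for the match.
A = np.column_stack([u, np.ones_like(u)])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
a_hat, b_hat = theta

# 4. Validation: check the model on fresh, unseen data.
u_new = rng.uniform(-1.0, 1.0, 20)
y_new = 2.0 * u_new + 0.5 + 0.1 * rng.standard_normal(u_new.size)
rms_error = np.sqrt(np.mean((a_hat * u_new + b_hat - y_new) ** 2))

print(a_hat, b_hat, rms_error)
```

The estimated slope and offset land close to the assumed true values, and the validation error stays at the level of the measurement noise, which is what a non-invalidated model should show.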

**Collecting experimental data**

In most applications, the mathematical model should reflect a part of the surrounding reality. We call this a system.

To model this system, we should collect information about it. Sometimes we can only observe what happens, but often we can actively ask questions: an astronomer can only look at the sky (wait, watch, and see), but an audio engineer can perform experiments to design a new loudspeaker (ask questions). Because experiments are often time-consuming and expensive, it is important to design them so that they provide maximum information at minimal cost. The result of these experiments is not perfect: measurements are disturbed by noise. Noise disturbances are unexplained variations that disturb our view of the world, just as 'amplifier noise' or 'traffic noise' can blur a small audio signal. Due to this noise, we get an imperfect answer to our question. Extreme noise sensitivity can lead bad methods to completely wrong results, without any warning to the user of this failure. She/he remains unaware of the pitfalls of the model. This is a most dangerous situation.
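A classical illustration of extracting information from noisy measurements, sketched here as a hypothetical example (the true value and noise level are assumptions for the demo): repeating an experiment N times and averaging reduces the noise standard deviation by a factor sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 1.0       # the quantity we try to measure (assumed known for the demo)
noise_std = 0.5        # standard deviation of the measurement noise
n_repeats = 100        # number of repeated experiments

# Each measurement is the true value plus random noise.
measurements = true_value + noise_std * rng.standard_normal(n_repeats)

# Averaging N noisy repeats reduces the noise std by a factor sqrt(N).
estimate = measurements.mean()
predicted_std = noise_std / np.sqrt(n_repeats)   # 0.5 / 10 = 0.05

print(estimate, predicted_std)
```

This is also why experiment design matters: measuring longer, or with better-chosen inputs, buys accuracy, but at the cost of more measurement time.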

**Selecting a model structure**

A model is the mathematical translation of the scientist's and engineer's knowledge. Building a model is closely linked to the application field. The knowledge required to build a climate model of the earth is completely different from the know-how needed to describe an electrical engine.

But it is also possible to create a more general approach. To do so, we introduce the class of linear dynamic systems.

We first briefly explain both terms.

– *Linear systems* respond to a combined experiment *x + y* with the sum of the responses to experiment x and to experiment y, or more formally *f(x + y) = f(x) + f(y)* and *f(αx) = αf(x)*. The Belgian tax system is not linear: doubling the gross income does not result in a doubled net income. Although most systems are nonlinear, we can often approximate them sufficiently well by a linear model.

– *Dynamic systems* remember the past. Their response at time t depends not only on the present input but also on the past input values *u(τ), τ ≤ t*. If we hit a metal beam, it will oscillate for a few seconds because it 'remembers' that it was hit by a hammer, even after the hammer has been removed.
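Both properties can be checked numerically. The toy system below is an assumption made for this sketch (a three-sample moving average, not a model from the website): it is linear, because superposition holds exactly, and dynamic, because a pulse applied at t = 0 still shows up in the output at t = 1 and t = 2.

```python
import numpy as np

rng = np.random.default_rng(2)

def system(u):
    """Toy linear dynamic system: the output averages the current and the
    two previous input samples, so it 'remembers' the recent past."""
    u = np.asarray(u, dtype=float)
    y = np.zeros_like(u)
    for t in range(len(u)):
        y[t] = (u[t]
                + (u[t - 1] if t >= 1 else 0.0)
                + (u[t - 2] if t >= 2 else 0.0)) / 3.0
    return y

x = rng.standard_normal(10)
y = rng.standard_normal(10)

# Linearity: f(x + y) == f(x) + f(y)  and  f(a*x) == a*f(x)
print(np.allclose(system(x + y), system(x) + system(y)))   # True
print(np.allclose(system(3.0 * x), 3.0 * system(x)))       # True

# Dynamics: a single pulse at t = 0 still affects the output at t = 1 and t = 2,
# like the beam that keeps oscillating after the hammer is gone.
pulse = np.zeros(5)
pulse[0] = 1.0
print(system(pulse))   # nonzero at t = 0, 1, 2 only
```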

Linear dynamic systems can be used in a very wide range of applications. They are very popular in mechanical, electrical, electronic, chemical, and other branches of engineering. Econometricians also make intensive use of these models. However, the linearity assumption is too restrictive in many applications, and for that reason it should be relaxed to also include nonlinear effects that do not follow the linearity property explained above. In general, such systems no longer obey superposition: *f(x+y) ≠ f(x)+f(y)*. Many groups are searching for methods that can also model this nonlinear behavior. The major issue is to find mathematical descriptions that are as universal as those for the linear case. Although significant progress has been made, we still face many unresolved problems. Nonlinear system identification is one of the major challenges that we currently face.
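The failure of superposition is easy to demonstrate with the simplest possible nonlinear system, a squarer (a hypothetical example chosen for this sketch, in the spirit of the tax-system analogy above):

```python
# A static nonlinear system: y = u**2 (hypothetical example).
def f(u):
    return u ** 2

x, y = 1.0, 2.0
print(f(x + y))        # 9.0
print(f(x) + f(y))     # 5.0  ->  f(x + y) != f(x) + f(y): superposition fails
```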

**Matching the model with the data**

Once the data are available and the model structure is selected, the next major issue is how to match the two. If there were no measurement noise, if the models were perfect (no structural model errors), and if all inputs to the system were known, it would be a simple task. But in practice, we obtain imperfect measurements, we use models that are too simple, and unknown inputs excite the system. Replacing them with perfect measurements and models is not a solution, because that is an impossible task, for both technical and financial reasons. For that reason, we should learn to deal with imperfect measurements and models, and with incomplete knowledge of the inputs acting on the system. How can we minimize the impact of these effects on the final model quality?

In the previous century, a general statistical framework was developed to deal with noisy data, and it is still at the basis of current thinking. The major difference with that era is that we now have much greater computing power available for turning these theoretical ideas into practice. Following this approach, we end up with more than just a model: we also obtain an estimate of the model's reliability. It is easy to forecast the temperature one year in advance, but it becomes impossible if this must be done with tight uncertainty bounds. Yet the prediction has no value at all without these bounds. Knowing the uncertainty bounds is as important as knowing the value itself.
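A minimal sketch of this idea, under assumptions made for the demo (a one-parameter model y = a·u with known noise level): the least-squares estimate comes with an uncertainty bound, so we report a value *and* its reliability.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical experiment: measure y = a*u + noise, with assumed true a = 2.
u = np.linspace(0.0, 1.0, 200)
sigma = 0.2
y = 2.0 * u + sigma * rng.standard_normal(u.size)

# Least-squares estimate of the slope a.
a_hat = np.dot(u, y) / np.dot(u, u)

# Standard statistical theory also gives the estimate's uncertainty:
# std(a_hat) = sigma / sqrt(sum(u**2)) for this simple one-parameter model.
a_std = sigma / np.sqrt(np.dot(u, u))

print(f"a = {a_hat:.3f} +/- {2 * a_std:.3f} (approx. 95% bound)")
```

The point of the printout is exactly the message of the text: the number a_hat alone is of limited value; the bound tells us how far we may trust it.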

*Obtaining a mathematical model with reliable uncertainty bounds is the challenge to be addressed in data-driven modeling.*

**Validation**

Model validation addresses the questions 'Does the model solve our problem?' and 'Is the model in conflict with either the data or prior knowledge?'. It seems obvious that we should aim for validated models. However, that is an impossible task: we can only guarantee that the model is not invalidated by the available data. On the basis of this observation, we decide that it is safe to use the model in our application, but the reader should be aware that there is no full guarantee that this is true!

Because an invalidated model has been shown to be unable to explain all the information present in the data, there is still headroom to improve it. Although it seems obvious to search for a better model that passes the validation tests, this is often not done, because the cost of improving the model might be too high or the errors of the invalidated model might be below the user's threshold. A too simple model can still be a very good tool, as long as the user keeps in mind that structural model errors are present and understands the impact of these errors on the model's use.
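A hypothetical numerical illustration of an invalidated model (the quadratic system and the noise level are assumptions for this sketch): fitting a deliberately too simple straight line leaves residuals well above the measurement-noise level, which is exactly the structural-model-error signature a validation test looks for.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical system with a quadratic characteristic, measured with noise.
u = np.linspace(-1.0, 1.0, 100)
sigma = 0.05
y = u + 0.5 * u ** 2 + sigma * rng.standard_normal(u.size)

# Deliberately too simple model: a straight line y = a*u + b.
A = np.column_stack([u, np.ones_like(u)])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
residual_rms = np.sqrt(np.mean((A @ theta - y) ** 2))

# Validation check: residuals far above the noise level invalidate the model,
# signalling structural model errors (here, the missing quadratic term).
print(residual_rms, sigma)   # residual_rms clearly exceeds sigma
```

As the text notes, such an invalidated line can still be a useful tool, provided the user knows that errors of this size are built into it.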

**What can you learn on this website?**

The goal of this website is twofold. First, nonlinear system identification is introduced to a wide audience, guiding practicing engineers and newcomers in the field to a sound solution of their data-driven modeling problems for nonlinear dynamic systems. In addition, the website provides a broad perspective on the topic for researchers who are already familiar with linear system identification theory, showing the similarities and differences between linear and nonlinear problems. The focus is on the basic philosophy, giving an intuitive understanding of the problems and the solutions, and providing a guided tour of the wide range of user choices in (non)linear system identification.

To reach these goals, we will make use of slides, supported by video presentations and short texts. Links are provided for readers who want to learn more or refresh their background knowledge. The existing literature is referred to for detailed mathematical explanations and formal proofs. The information is structured along four main lines:

– System Identification,

– Identification of linear dynamic systems,

– Identification of linear systems in the presence of nonlinear distortions,

– Identification of nonlinear systems.

For each of these topics, the theoretical aspects are explained in a series of slides and video presentations. In addition, a series of exercises is available to provide hands-on experience and to make the abstract theory more accessible. As a first step, Matlab® files with the solutions are provided.

*System Identification* explains the basic concepts of data-driven modeling. The statistical framework provides a systematic approach to the extraction of information from data, providing a deeper insight into the stochastic behavior of the estimators.

*Identification of linear dynamic systems* shows how the general tools from the previous section can be used to identify a discrete- or continuous-time model for a linear dynamic system. Besides the model of the plant, a noise model and uncertainty bounds are also estimated.

*Identification of linear systems in the presence of nonlinear distortions* gives a first introduction to the behavior of nonlinear systems. This insight is used to explain how the linear identification framework is affected by nonlinear distortions, allowing the reader to make a well-informed decision whether to proceed with a linear modeling framework or to switch to a more involved nonlinear identification approach.

*Identification of nonlinear systems* guides the reader through the many choices to be made in nonlinear system identification.

**Long-term perspective**

The development of this website is a long-term project. Starting from the current basis, we intend to gradually expand and update the information in the coming years. We decided to make the website publicly accessible during this period, even though it is far from finished. It is our strong belief that even this partial information can be very useful for many of our users. Moreover, the feedback that we get from these early experiences provides very valuable input for the further development of our project.