Model predictive control for systems described by stochastic differential equations

Morten Hagdrup*

*Corresponding author for this work

Research output: Book/Report › Ph.D. thesis


Abstract

This thesis focuses on the theoretical underpinnings of model predictive control (MPC) for linear stochastic systems. The plant model comprises a deterministic part and a stochastic part, each modeled by a linear time-invariant (LTI) system and parametrized through its respective transfer function (TRF). Only single-input single-output (SISO) systems are considered. We show how this set-up gives rise to a linear stochastic state-space model in continuous time comprising a linear stochastic differential equation (SDE). This is done by rigorous application of the distribution theory of Laurent Schwartz. We use the convention that continuous-time white noise is what results from differentiating the sample paths of Brownian motion in the sense of distributions. The derivation leads directly to the notion of Wiener integrals and shows why the linear SDE framework does not conflict with deterministic system theory allowing distribution-valued input.

The external behaviour of an LTI SISO system L is characterized by its impulse response h. Using distribution theory we show how, in the case of continuous-time white noise input, the output of L is described by a Wiener integral in terms of h. The derivations require that h be of locally finite variation. For deterministic linear systems the formula Y(s) = H(s)U(s) relates the Laplace transforms U(s) and Y(s) of the input and output, respectively, for a causal system at rest prior to the onset of excitation. Assuming that the Laplace transform H of h exists and that h is locally of finite variation, we prove that the formula retains its validity also when the input is the distributional derivative of a sample path of Brownian motion.

Consider a SISO LTI system L with square-integrable impulse response h driven by continuous-time white noise. The finite-dimensional probability distributions of the output converge with time to a family of distributions which in turn define a stationary process. We show that there exists a version of this process with almost surely (a.s.) continuous sample paths, as long as h is globally of bounded variation. This is a special case of a general theorem on Gaussian processes, the proof of which is quite involved; here we offer a simpler proof for the relevant special case by exploiting a result from Fourier analysis.

We apply MPC to a linear stochastic system L in continuous time. Assuming equidistant sampling and zero-order-hold (ZOH) input, an equivalent discrete-time linear stochastic model is established. We derive sufficient conditions on the TRF ensuring that the resulting Kalman filter is stabilizing, in particular that the relevant discrete-time algebraic Riccati equation (DARE) has a unique positive definite stabilizing solution. Implementations of MPC in discrete time often feature an optimal control problem (OCP) with a cost function comprising a term which is quadratic in the input rate of change. We propose a continuous-time analogue of this OCP for an LTI system in state-space form, and we derive sufficient conditions for the minimizer of the continuous-time OCP to provide feedback that ensures nominal stability. Using MPC we treat the problem of minimizing the mean-square tracking error of the output with respect to a reference trajectory. Assuming that the reference trajectory is constant throughout each sampling interval, we provide a transcription of this problem to an OCP in discrete time.
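As a concrete illustration of the ZOH discretization and stationary Kalman filter referred to above, here is a minimal Python sketch, not taken from the thesis: the matrices A, B, C, G, the sampling interval Ts and the measurement-noise covariance Rv are placeholder assumptions, and the recipe is the standard textbook one (block matrix exponential for the deterministic part, Van Loan's trick for the process-noise covariance, and a DARE for the stationary filter gain).

```python
import numpy as np
from scipy.linalg import expm, solve_discrete_are

# Continuous-time linear stochastic state-space model (placeholder values):
#   dx = (A x + B u) dt + G dw,   y_k = C x(t_k) + v_k
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
G = np.array([[0.0], [0.5]])        # diffusion (process-noise) gain
C = np.array([[1.0, 0.0]])
Ts = 0.1                            # equidistant sampling interval
Rv = np.array([[0.01]])             # measurement-noise covariance

n = A.shape[0]

# ZOH discretization of the deterministic part via one matrix exponential:
# expm([[A, B], [0, 0]] * Ts) = [[Ad, Bd], [0, I]]
M = np.zeros((n + B.shape[1], n + B.shape[1]))
M[:n, :n] = A
M[:n, n:] = B
Phi = expm(M * Ts)
Ad, Bd = Phi[:n, :n], Phi[:n, n:]

# Discrete-time process-noise covariance Qd = int_0^Ts e^{As} G G' e^{A's} ds,
# computed with Van Loan's block-exponential trick: Qd = F22' @ F12.
V = np.zeros((2 * n, 2 * n))
V[:n, :n] = -A
V[:n, n:] = G @ G.T
V[n:, n:] = A.T
E = expm(V * Ts)
Qd = E[n:, n:].T @ E[:n, n:]

# Stationary Kalman filter: solve the DARE for the predicted-state covariance
# and form the corresponding filter gain.
P = solve_discrete_are(Ad.T, C.T, Qd, Rv)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)

print("Ad =\n", Ad, "\nKalman gain K =\n", K)
```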
This discrete-time OCP is in a form which permits the use of a Riccati-iteration solver whose complexity is approximately linear in the prediction horizon. Using work by C. Van Loan, formulae are derived allowing for the efficient calculation of the parameters of the discrete-time OCP. We consider discretizations of the proposed continuous-time OCP. No constraints are imposed on the state vector, but both the input and the input rate of change are subject to constraints. For ZOH discretization we establish the convergence of the minimizers (u_n) of the discretized problems to the continuous-time solution u* ∈ H¹ as the (uniform) discretization step tends to zero. The convergence takes place in L²-norm.
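To make the complexity claim concrete, the sketch below runs the standard backward Riccati recursion for a generic unconstrained finite-horizon LQ problem; the function riccati_sweep and the matrices Ad, Bd, Q, R, Pf are illustrative placeholders, not the constrained input-rate-penalized OCP of the thesis. Each backward step involves only fixed-size matrix operations, so the total cost grows linearly with the prediction horizon N.

```python
import numpy as np

def riccati_sweep(Ad, Bd, Q, R, Pf, N):
    """Backward Riccati recursion for the finite-horizon LQ problem
        min  sum_{k=0}^{N-1} (x_k' Q x_k + u_k' R u_k) + x_N' Pf x_N
        s.t. x_{k+1} = Ad x_k + Bd u_k.
    Returns the time-varying feedback gains K_k (u_k = -K_k x_k) and the
    cost-to-go matrix at stage 0.  Work per step is independent of N,
    so the sweep costs O(N)."""
    P = Pf
    gains = []
    for _ in range(N):
        S = R + Bd.T @ P @ Bd
        K = np.linalg.solve(S, Bd.T @ P @ Ad)
        P = Q + Ad.T @ P @ (Ad - Bd @ K)
        gains.append(K)
    gains.reverse()               # gains[k] now corresponds to stage k
    return gains, P

# Example with placeholder matrices for a double-integrator-like plant:
Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.01]])
gains, P0 = riccati_sweep(Ad, Bd, Q, R, Pf=Q, N=50)
print("first-stage gain K_0 =", gains[0])
```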
Original language: English
Place of publication: Kgs. Lyngby
Publisher: Technical University of Denmark
Number of pages: 248
Publication status: Published - 2019
Series: DTU Compute PHD-2018
Volume: 491
ISSN: 0909-3192
