Functions: Predictive Controllers

All the predictive controllers in this module rely on a state estimator to compute the predictions. The default estimator for LinMPC is a SteadyKalmanFilter, and for NonLinMPC with nonlinear models, an UnscentedKalmanFilter. For simpler and more classical designs, an InternalModel structure is also available; it assumes by default that the current model mismatch estimate is constant in the future (the same approach as dynamic matrix control, DMC).
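
As a hedged sketch of these estimator choices (the model below is an illustrative assumption, not taken from the package examples), the same LinModel can be paired with the default SteadyKalmanFilter or with an explicit InternalModel estimator for a DMC-like design:

using ModelPredictiveControl, ControlSystemsBase

model   = LinModel(tf(3, [30, 1]), 4)    # illustrative first-order SISO model, Ts = 4 s
mpc_kf  = LinMPC(model)                  # default estimator: SteadyKalmanFilter
mpc_imc = LinMPC(InternalModel(model))   # InternalModel estimator (DMC-like design)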

Info

The nomenclature uses capital letters for time series (and matrices) and hats for the predictions (and estimations, for state estimators).

To be precise, at the $k$th control period, the vectors that encompass the future measured disturbances $\mathbf{d̂}$, model outputs $\mathbf{ŷ}$ and setpoints $\mathbf{r̂_y}$ over the prediction horizon $H_p$ are defined as:

\[ \mathbf{D̂} = \begin{bmatrix} \mathbf{d̂}(k+1) \\ \mathbf{d̂}(k+2) \\ \vdots \\ \mathbf{d̂}(k+H_p) \end{bmatrix} \: , \quad \mathbf{Ŷ} = \begin{bmatrix} \mathbf{ŷ}(k+1) \\ \mathbf{ŷ}(k+2) \\ \vdots \\ \mathbf{ŷ}(k+H_p) \end{bmatrix} \quad \text{and} \quad \mathbf{R̂_y} = \begin{bmatrix} \mathbf{r̂_y}(k+1) \\ \mathbf{r̂_y}(k+2) \\ \vdots \\ \mathbf{r̂_y}(k+H_p) \end{bmatrix}\]

The vectors for the manipulated input $\mathbf{u}$ are shifted by one time step:

\[ \mathbf{U} = \begin{bmatrix} \mathbf{u}(k+0) \\ \mathbf{u}(k+1) \\ \vdots \\ \mathbf{u}(k+H_p-1) \end{bmatrix} \quad \text{and} \quad \mathbf{R̂_u} = \begin{bmatrix} \mathbf{r̂_u}(k+0) \\ \mathbf{r̂_u}(k+1) \\ \vdots \\ \mathbf{r̂_u}(k+H_p-1) \end{bmatrix}\]

Defining the manipulated input increment as $\mathbf{Δu}(k+j) = \mathbf{u}(k+j) - \mathbf{u}(k+j-1)$, the control horizon $H_c$ enforces that $\mathbf{Δu}(k+j) = \mathbf{0}$ for $j ≥ H_c$. For this reason, the vector that collects them is truncated up to $k+H_c-1$:

\[ \mathbf{ΔU} = \begin{bmatrix} \mathbf{Δu}(k+0) \\ \mathbf{Δu}(k+1) \\ \vdots \\ \mathbf{Δu}(k+H_c-1) \end{bmatrix}\]
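
As a small illustration of this blocking, the following sketch (a hypothetical helper, not part of the package API) expands a SISO increment vector over $H_c$ into the corresponding inputs over $H_p$, holding the input constant once the increments are forced to zero:

# hypothetical helper, for illustration only
function inputs_from_increments(u_prev::Real, ΔU::AbstractVector, Hp::Int)
    u_over_Hc = u_prev .+ cumsum(ΔU)                       # u(k), u(k+1), ..., u(k+Hc-1)
    Hc = length(ΔU)
    return vcat(u_over_Hc, fill(u_over_Hc[end], Hp - Hc))  # input held constant for j ≥ Hc
end

inputs_from_increments(0.0, [1.0, 0.5], 5)   # returns [1.0, 1.5, 1.5, 1.5, 1.5]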

PredictiveController

ModelPredictiveControl.PredictiveController (Type)

Abstract supertype of all predictive controllers.


(mpc::PredictiveController)(ry, d=[]; kwargs...) -> u

Functor that makes a PredictiveController object callable as an alias for moveinput!.

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1, direct=false);

julia> u = mpc([5]); round.(u, digits=3)
1-element Vector{Float64}:
 1.0

LinMPC

ModelPredictiveControl.LinMPC (Type)
LinMPC(model::LinModel; <keyword arguments>)

Construct a linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned} \min_{\mathbf{ΔU}, ϵ} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\ + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 \end{aligned}\]

in which the weight matrices are repeated $H_p$ or $H_c$ times by default:

\[\begin{aligned} \mathbf{M}_{H_p} &= \text{diag}\mathbf{(M,M,...,M)} \\ \mathbf{N}_{H_c} &= \text{diag}\mathbf{(N,N,...,N)} \\ \mathbf{L}_{H_p} &= \text{diag}\mathbf{(L,L,...,L)} \end{aligned}\]

Time-varying and non-diagonal weights are also supported. Modify the last block in $\mathbf{M}_{H_p}$ to specify a terminal weight. The vector $\mathbf{ΔU}$ collects the input increments $\mathbf{Δu}(k+j) = \mathbf{u}(k+j) - \mathbf{u}(k+j-1)$ from $j=0$ to $H_c-1$, the vector $\mathbf{Ŷ}$, the output predictions $\mathbf{ŷ}(k+j)$ from $j=1$ to $H_p$, and the vector $\mathbf{U}$, the manipulated inputs $\mathbf{u}(k+j)$ from $j=0$ to $H_p-1$. The slack variable $ϵ$ relaxes the constraints, as described in the setconstraint! documentation. See Extended Help for a detailed nomenclature.
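
For instance, a hedged sketch of a terminal output weight, obtained by overriding the last block of $\mathbf{M}_{H_p}$ (the model and numerical values below are illustrative assumptions):

using ModelPredictiveControl, ControlSystemsBase, LinearAlgebra

model = LinModel(tf(3, [30, 1]), 4)    # illustrative SISO model, ny = 1
Hp, Hc = 10, 2
M_Hp = diagm(repeat([1.0], Hp))        # default: Mwt = [1.0] repeated Hp times
M_Hp[end, end] = 100.0                 # heavier weight on the terminal prediction ŷ(k+Hp)
mpc = LinMPC(model; Hp, Hc, M_Hp)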

This method uses the default state estimator, a SteadyKalmanFilter with default arguments.

Arguments

  • model::LinModel : model used for controller predictions and state estimations.

  • Hp=10+nk: prediction horizon $H_p$, where nk is the number of delays in model.

  • Hc=2 : control horizon $H_c$.

  • Mwt=fill(1.0,model.ny) : main diagonal of $\mathbf{M}$ weight matrix (vector).

  • Nwt=fill(0.1,model.nu) : main diagonal of $\mathbf{N}$ weight matrix (vector).

  • Lwt=fill(0.0,model.nu) : main diagonal of $\mathbf{L}$ weight matrix (vector).

  • M_Hp=diagm(repeat(Mwt,Hp)) : positive semidefinite symmetric matrix $\mathbf{M}_{H_p}$.

  • N_Hc=diagm(repeat(Nwt,Hc)) : positive semidefinite symmetric matrix $\mathbf{N}_{H_c}$.

  • L_Hp=diagm(repeat(Lwt,Hp)) : positive semidefinite symmetric matrix $\mathbf{L}_{H_p}$.

  • Cwt=1e5 : slack variable weight $C$ (scalar), use Cwt=Inf for hard constraints only.

  • optim=JuMP.Model(OSQP.MathOptInterfaceOSQP.Optimizer) : quadratic optimizer used in the predictive controller, provided as a JuMP.Model (defaults to the OSQP optimizer).

  • additional keyword arguments are passed to SteadyKalmanFilter constructor.

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);

julia> mpc = LinMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
LinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFilter estimator and:
 30 prediction steps Hp
  1 control steps Hc
  1 slack variable ϵ (control constraints)
  1 manipulated inputs u (0 integrating states)
  4 estimated states x̂
  2 measured outputs ym (2 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

Extended Help

Manipulated input setpoints $\mathbf{r_u}$ are not common, but they can be interesting for over-actuated systems, when nu > ny (e.g. to prioritize solutions with lower economic costs). The default Lwt value disables this feature.
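
A hedged sketch of this feature on an illustrative over-actuated model (one output, two inputs); the model, weights and setpoints below are assumptions chosen for demonstration:

using ModelPredictiveControl, ControlSystemsBase

model = LinModel([tf(3, [30, 1]) tf(2, [10, 1])], 4)       # ny = 1, nu = 2 (over-actuated)
mpc = LinMPC(model, Lwt=[0.1, 0.1], Hp=20)                 # nonzero Lwt enables u setpoint tracking
preparestate!(mpc, [0.0])
u = moveinput!(mpc, [1.0]; R̂u=repeat([0.5, 0.0], mpc.Hp))  # prefer u₁ ≈ 0.5 and u₂ ≈ 0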

The objective function follows this nomenclature:

VARIABLE            | DESCRIPTION                                             | SIZE
--------------------|---------------------------------------------------------|---------------
$H_p$               | prediction horizon (integer)                            | ()
$H_c$               | control horizon (integer)                               | ()
$\mathbf{ΔU}$       | manipulated input increments over $H_c$                 | (nu*Hc,)
$\mathbf{Ŷ}$        | predicted outputs over $H_p$                            | (ny*Hp,)
$\mathbf{U}$        | manipulated inputs over $H_p$                           | (nu*Hp,)
$\mathbf{R̂_y}$      | predicted output setpoints over $H_p$                   | (ny*Hp,)
$\mathbf{R̂_u}$      | predicted manipulated input setpoints over $H_p$        | (nu*Hp,)
$\mathbf{M}_{H_p}$  | output setpoint tracking weights over $H_p$             | (ny*Hp, ny*Hp)
$\mathbf{N}_{H_c}$  | manipulated input increment weights over $H_c$          | (nu*Hc, nu*Hc)
$\mathbf{L}_{H_p}$  | manipulated input setpoint tracking weights over $H_p$  | (nu*Hp, nu*Hp)
$C$                 | slack variable weight                                   | ()
$ϵ$                 | slack variable for constraint softening                 | ()
LinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct LinMPC.

estim.model must be a LinModel; otherwise, a NonLinMPC is required.

Examples

julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);

julia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
LinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, KalmanFilter estimator and:
 30 prediction steps Hp
  1 control steps Hc
  1 slack variable ϵ (control constraints)
  1 manipulated inputs u (0 integrating states)
  3 estimated states x̂
  1 measured outputs ym (1 integrating states)
  1 unmeasured outputs yu
  0 measured disturbances d

ExplicitMPC

ModelPredictiveControl.ExplicitMPC (Type)
ExplicitMPC(model::LinModel; <keyword arguments>)

Construct an explicit linear predictive controller based on LinModel model.

The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned} \min_{\mathbf{ΔU}} \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\ + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} \end{aligned}\]

See LinMPC for the variable definitions. This controller does not support constraints, but its computational cost is extremely low (an array division), making it suitable for applications that require very small sample times. The keyword arguments are identical to LinMPC, except for Cwt and optim, which are not supported.

This method uses the default state estimator, a SteadyKalmanFilter with default arguments.
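
As a hedged sketch (the model and sample time below are illustrative assumptions), the unconstrained controller is built and called like any other PredictiveController:

using ModelPredictiveControl, ControlSystemsBase

fast_model = LinModel(tf(5, [2, 1]), 0.001)   # illustrative model with Ts = 1 ms
empc = ExplicitMPC(fast_model, Hp=50, Hc=5)
preparestate!(empc, [0.0])
u = moveinput!(empc, [1.0])                   # no iterative optimizer call, see above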

Examples

julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);

julia> mpc = ExplicitMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
ExplicitMPC controller with a sample time Ts = 4.0 s, SteadyKalmanFilter estimator and:
 30 prediction steps Hp
  1 control steps Hc
  1 manipulated inputs u (0 integrating states)
  4 estimated states x̂
  2 measured outputs ym (2 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d
ExplicitMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct ExplicitMPC.

estim.model must be a LinModel; otherwise, a NonLinMPC is required.


NonLinMPC

ModelPredictiveControl.NonLinMPC (Type)
NonLinMPC(model::SimModel; <keyword arguments>)

Construct a nonlinear predictive controller based on SimModel model.

Both NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time $k$:

\[\begin{aligned} \min_{\mathbf{ΔU}, ϵ}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\& + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 + E J_E(\mathbf{U}_E, \mathbf{Ŷ}_E, \mathbf{D̂}_E, \mathbf{p}) \end{aligned}\]

See LinMPC for the variable definitions. The custom economic function $J_E$ can penalize solutions with high economic costs. Setting all the weights to 0 except $E$ creates a pure economic model predictive controller (EMPC). The arguments of $J_E$ include the manipulated inputs, the predicted outputs and the measured disturbances from $k$ to $k+H_p$ inclusively:

\[ \mathbf{U}_E = \begin{bmatrix} \mathbf{U} \\ \mathbf{u}(k+H_p-1) \end{bmatrix} , \quad \mathbf{Ŷ}_E = \begin{bmatrix} \mathbf{ŷ}(k) \\ \mathbf{Ŷ} \end{bmatrix} , \quad \mathbf{D̂}_E = \begin{bmatrix} \mathbf{d}(k) \\ \mathbf{D̂} \end{bmatrix}\]

since $H_c ≤ H_p$ implies that $\mathbf{Δu}(k+H_p) = \mathbf{0}$ or, equivalently, $\mathbf{u}(k+H_p) = \mathbf{u}(k+H_p-1)$. The vector $\mathbf{D̂}$ includes the predicted measured disturbances over $H_p$. The argument $\mathbf{p}$ is a custom parameter object of any type; use a mutable one (e.g. a vector) if you want to modify it later.

Tip

Replace any of the 4 arguments with _ if not needed (see JE default value below).
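
A hedged sketch of a custom economic term (the toy dynamics, price parameter and weights below are assumptions, not taken from the package documentation):

using ModelPredictiveControl

f(x, u, _, _) = 0.5x + u                      # same toy dynamics as the examples below
h(x, _, _)    = 2x
model = NonLinModel(f, h, 10.0, 1, 1, 1, solver=nothing)
JE(UE, _, _, p) = p.price * sum(abs2, UE)     # penalize total actuation effort
empc = NonLinMPC(model, Hp=20, Ewt=1.0, JE=JE, p=(price=0.1,))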

This method uses the default state estimator: a SteadyKalmanFilter with default arguments if model is a LinModel, or an UnscentedKalmanFilter with default arguments otherwise.

Warning

See Extended Help if you get an error like: MethodError: no method matching Float64(::ForwardDiff.Dual).

Arguments

  • model::SimModel : model used for controller predictions and state estimations.
  • Hp=nothing: prediction horizon $H_p$, must be specified for NonLinModel.
  • Hc=2 : control horizon $H_c$.
  • Mwt=fill(1.0,model.ny) : main diagonal of $\mathbf{M}$ weight matrix (vector).
  • Nwt=fill(0.1,model.nu) : main diagonal of $\mathbf{N}$ weight matrix (vector).
  • Lwt=fill(0.0,model.nu) : main diagonal of $\mathbf{L}$ weight matrix (vector).
  • M_Hp=diagm(repeat(Mwt,Hp)) : positive semidefinite symmetric matrix $\mathbf{M}_{H_p}$.
  • N_Hc=diagm(repeat(Nwt,Hc)) : positive semidefinite symmetric matrix $\mathbf{N}_{H_c}$.
  • L_Hp=diagm(repeat(Lwt,Hp)) : positive semidefinite symmetric matrix $\mathbf{L}_{H_p}$.
  • Cwt=1e5 : slack variable weight $C$ (scalar), use Cwt=Inf for hard constraints only.
  • Ewt=0.0 : economic costs weight $E$ (scalar).
  • JE=(_,_,_,_)->0.0 : economic function $J_E(\mathbf{U}_E, \mathbf{Ŷ}_E, \mathbf{D̂}_E, \mathbf{p})$.
  • p=model.p : $J_E$ function parameter $\mathbf{p}$ (any type).
  • optim=JuMP.Model(Ipopt.Optimizer) : nonlinear optimizer used in the predictive controller, provided as a JuMP.Model (defaults to the Ipopt optimizer).
  • additional keyword arguments are passed to UnscentedKalmanFilter constructor (or SteadyKalmanFilter, for LinModel).

Examples

julia> model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing);

julia> mpc = NonLinMPC(model, Hp=20, Hc=1, Cwt=1e6)
NonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:
 20 prediction steps Hp
  1 control steps Hc
  1 slack variable ϵ (control constraints)
  1 manipulated inputs u (0 integrating states)
  2 estimated states x̂
  1 measured outputs ym (1 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

Extended Help

NonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization, especially for the constraint handling, and is not available in any other package, to my knowledge.
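
A hedged sketch of this hybrid use case (the linear model and economic term are illustrative assumptions):

using ModelPredictiveControl, ControlSystemsBase

linmodel = LinModel(tf(3, [30, 1]), 4)        # linear predictions, nonlinear objective
nmpc = NonLinMPC(linmodel, Hp=30, Ewt=0.5, JE=(UE,_,_,_)->sum(abs2, UE))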

The optimization relies on JuMP automatic differentiation (AD) to compute the objective and constraint derivatives. Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel state-space functions must be compatible with this feature. See Automatic differentiation for common mistakes when writing these functions.
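
As a hedged illustration of the usual pitfall (not an excerpt from the package documentation), pre-allocating a Float64 buffer inside the state-space function triggers the MethodError mentioned above, whereas generic code stays differentiable:

# breaks AD: the buffer forces the result to eltype Float64
f_bad(x, u, _, _) = (xnext = zeros(1); xnext[1] = 0.5x[1] + u[1]; xnext)

# AD-friendly: generic code that also accepts ForwardDiff.Dual numbers
f_good(x, u, _, _) = 0.5x + u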

Note that if Cwt≠Inf, the attribute nlp_scaling_max_gradient of Ipopt is set to 10/Cwt (if not already set), to scale the small values of $ϵ$.
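
To override this behaviour, here is a hedged sketch that passes a pre-configured Ipopt optimizer through the optim keyword (the attribute value and tuning are arbitrary illustrations, and model is the NonLinModel from the examples above):

using JuMP, Ipopt

optim = JuMP.Model(Ipopt.Optimizer)
set_attribute(optim, "nlp_scaling_max_gradient", 100.0)   # set the scaling yourself instead of 10/Cwt
nmpc2 = NonLinMPC(model, Hp=20, Cwt=1e4, optim=optim)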

NonLinMPC(estim::StateEstimator; <keyword arguments>)

Use custom state estimator estim to construct NonLinMPC.

Examples

julia> model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing);

julia> estim = UnscentedKalmanFilter(model, σQint_ym=[0.05]);

julia> mpc = NonLinMPC(estim, Hp=20, Hc=1, Cwt=1e6)
NonLinMPC controller with a sample time Ts = 10.0 s, Ipopt optimizer, UnscentedKalmanFilter estimator and:
 20 prediction steps Hp
  1 control steps Hc
  1 slack variable ϵ (control constraints)
  1 manipulated inputs u (0 integrating states)
  2 estimated states x̂
  1 measured outputs ym (1 integrating states)
  0 unmeasured outputs yu
  0 measured disturbances d

Move Manipulated Input u

ModelPredictiveControl.moveinput! (Function)
moveinput!(mpc::PredictiveController, ry=mpc.estim.model.yop, d=[]; <keyword args>) -> u

Compute the optimal manipulated input value u for the current control period.

Solve the optimization problem of the mpc PredictiveController and return the result $\mathbf{u}(k)$. Following the receding horizon principle, the algorithm discards the optimal future manipulated inputs $\mathbf{u}(k+1), \mathbf{u}(k+2), ...$ Note that the method mutates mpc internal data, but it does not modify the mpc.estim states. Call preparestate!(mpc, ym, d) before moveinput!, and updatestate!(mpc, u, ym, d) after, to update the mpc state estimates.
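
A hedged sketch of this calling sequence in a basic closed loop (the plant model, horizon and setpoint are illustrative assumptions, with the plant simulated by the same LinModel):

using ModelPredictiveControl, ControlSystemsBase

plant = LinModel(tf(5, [2, 1]), 3)     # also serves as the simulated plant here
mpc = LinMPC(plant, Hp=50)
for k in 0:40
    ym = evaloutput(plant)             # measure the plant outputs
    preparestate!(mpc, ym)             # update the estimates with y_m(k)
    u = moveinput!(mpc, [5.0])         # compute u(k) for the current control period
    updatestate!(mpc, u, ym)           # update the controller state estimates
    updatestate!(plant, u)             # simulate the plant one step forward
end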

Calling a PredictiveController object calls this method.

See also LinMPC, ExplicitMPC, NonLinMPC.

Arguments

Info

The keyword arguments Dhat, Rhaty and Rhatu are non-Unicode alternatives to D̂, R̂y and R̂u.

  • mpc::PredictiveController : solve optimization problem of mpc.
  • ry=mpc.estim.model.yop : current output setpoints $\mathbf{r_y}(k)$.
  • d=[] : current measured disturbances $\mathbf{d}(k)$.
  • D̂=repeat(d, mpc.Hp) or Dhat : predicted measured disturbances $\mathbf{D̂}$, constant in the future by default or $\mathbf{d̂}(k+j)=\mathbf{d}(k)$ for $j=1$ to $H_p$.
  • R̂y=repeat(ry, mpc.Hp) or Rhaty : predicted output setpoints $\mathbf{R̂_y}$, constant in the future by default or $\mathbf{r̂_y}(k+j)=\mathbf{r_y}(k)$ for $j=1$ to $H_p$.
  • R̂u=mpc.Uop or Rhatu : predicted manipulated input setpoints, constant in the future by default or $\mathbf{r̂_u}(k+j)=\mathbf{u_{op}}$ for $j=0$ to $H_p-1$.

Examples

julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);

julia> preparestate!(mpc, [0]); ry = [5];

julia> u = moveinput!(mpc, ry); round.(u, digits=3)
1-element Vector{Float64}:
 1.0