Functions: Predictive Controllers
All the predictive controllers in this module rely on a state estimator to compute the predictions. The default estimator for LinMPC is a SteadyKalmanFilter, and for NonLinMPC with nonlinear models, an UnscentedKalmanFilter. For simpler and more classical designs, an InternalModel structure is also available; it assumes by default that the current model mismatch estimate stays constant in the future (same approach as dynamic matrix control, DMC).
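For instance, swapping the default estimator for an InternalModel only requires passing the estimator object to the controller constructor. A minimal sketch (the first-order plant and horizons below are arbitrary choices, not package defaults):
using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(5, [2, 1]), 3)   # stable first-order plant, Ts = 3 s
estim = InternalModel(model)         # DMC-like internal model structure
mpc   = LinMPC(estim, Hp=10, Hc=2)   # same keyword arguments as with the default estimator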
The nomenclature uses boldface for vectors and matrices, capital boldface letters for vectors representing time series (and also for matrices), and hats for the predictions (and also for the observer estimates).
To be precise, at the $k$th control period, the vectors that encompass the future measured disturbances $\mathbf{d̂}$, model outputs $\mathbf{ŷ}$ and setpoints $\mathbf{r̂_y}$ over the prediction horizon $H_p$ are defined as:
\[ \mathbf{D̂} = \begin{bmatrix} \mathbf{d̂}(k+1) \\ \mathbf{d̂}(k+2) \\ \vdots \\ \mathbf{d̂}(k+H_p) \end{bmatrix} \: , \quad \mathbf{Ŷ} = \begin{bmatrix} \mathbf{ŷ}(k+1) \\ \mathbf{ŷ}(k+2) \\ \vdots \\ \mathbf{ŷ}(k+H_p) \end{bmatrix} \quad \text{and} \quad \mathbf{R̂_y} = \begin{bmatrix} \mathbf{r̂_y}(k+1) \\ \mathbf{r̂_y}(k+2) \\ \vdots \\ \mathbf{r̂_y}(k+H_p) \end{bmatrix}\]
in which $\mathbf{D̂}$, $\mathbf{Ŷ}$ and $\mathbf{R̂_y}$ are vectors of nd*Hp, ny*Hp and ny*Hp elements, respectively. The vectors for the manipulated input $\mathbf{u}$ are shifted by one time step:
\[ \mathbf{U} = \begin{bmatrix} \mathbf{u}(k+0) \\ \mathbf{u}(k+1) \\ \vdots \\ \mathbf{u}(k+H_p-1) \end{bmatrix} \quad \text{and} \quad \mathbf{R̂_u} = \begin{bmatrix} \mathbf{r̂_u}(k+0) \\ \mathbf{r̂_u}(k+1) \\ \vdots \\ \mathbf{r̂_u}(k+H_p-1) \end{bmatrix}\]
in which $\mathbf{U}$ and $\mathbf{R̂_u}$ are both vectors of nu*Hp elements. Defining the manipulated input increment as $\mathbf{Δu}(k+j) = \mathbf{u}(k+j) - \mathbf{u}(k+j-1)$, the control horizon $H_c$ enforces that $\mathbf{Δu}(k+j) = \mathbf{0}$ for $j ≥ H_c$. For this reason, the vector that collects them is truncated up to $k+H_c-1$ (without any custom move blocking):
\[ \mathbf{ΔU} = \begin{bmatrix} \mathbf{Δu}(k+0) \\ \mathbf{Δu}(k+1) \\ \vdots \\ \mathbf{Δu}(k+H_c-1) \end{bmatrix}\]
in which $\mathbf{ΔU}$ is a vector of nu*Hc elements.
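As an illustration, the default previews in moveinput! simply hold the current values constant over the horizon, which amounts to stacking the same vector $H_p$ times; a short sketch with arbitrary dimensions:
Hp = 10             # prediction horizon
ry = [5.0, 0.0]     # current output setpoint r_y(k), ny = 2
d  = [1.0]          # current measured disturbance d(k), nd = 1
R̂y = repeat(ry, Hp) # R̂_y: ny*Hp elements, r̂_y(k+j) = r_y(k) for j = 1 to Hp
D̂  = repeat(d, Hp)  # D̂:   nd*Hp elements, d̂(k+j) = d(k) for j = 1 to Hp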
PredictiveController
ModelPredictiveControl.PredictiveController — Type
Abstract supertype of all predictive controllers.
(mpc::PredictiveController)(ry, d=[]; kwargs...) -> u
Functor allowing callable PredictiveController object as an alias for moveinput!.
Examples
julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1, direct=false);
julia> u = mpc([5]); round.(u, digits=3)
1-element Vector{Float64}:
1.0
LinMPC
ModelPredictiveControl.LinMPC — Type
LinMPC(model::LinModel; <keyword arguments>)
Construct a linear predictive controller based on LinModel model.
The controller minimizes the following objective function at each discrete time $k$:
\[\begin{aligned} \min_{\mathbf{Z}, ϵ}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\ & + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 \end{aligned}\]
subject to setconstraint! bounds, and in which the weight matrices are repeated $H_p$ or $H_c$ times by default:
\[\begin{aligned} \mathbf{M}_{H_p} &= \text{diag}\mathbf{(M,M,...,M)} \\ \mathbf{N}_{H_c} &= \text{diag}\mathbf{(N,N,...,N)} \\ \mathbf{L}_{H_p} &= \text{diag}\mathbf{(L,L,...,L)} \end{aligned}\]
Time-varying and non-diagonal weights are also supported. Modify the last block in $\mathbf{M}_{H_p}$ to specify a terminal weight. The content of the decision vector $\mathbf{Z}$ depends on the chosen TranscriptionMethod (defaults to SingleShooting, hence $\mathbf{Z = ΔU}$). The vector $\mathbf{ΔU}$ collects the input increments $\mathbf{Δu}(k+j) = \mathbf{u}(k+j) - \mathbf{u}(k+j-1)$ from $j=0$ to $H_c-1$; $\mathbf{Ŷ}$, the output predictions $\mathbf{ŷ}(k+j)$ from $j=1$ to $H_p$; and $\mathbf{U}$, the manipulated inputs $\mathbf{u}(k+j)$ from $j=0$ to $H_p-1$. The slack variable $ϵ$ relaxes the constraints, as described in setconstraint! documentation. See Extended Help for a detailed nomenclature.
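For example, a terminal weight can be imposed by overriding the default M_Hp; a sketch in which the tenfold factor on the last block is an arbitrary choice:
using ModelPredictiveControl, ControlSystemsBase, LinearAlgebra
model = LinModel(tf(3, [30, 1]), 4)
Hp, Hc = 10, 2
Mwt  = [1.0]                                 # base output weight (ny = 1)
M_Hp = Diagonal([repeat(Mwt, Hp-1); 10*Mwt]) # last block (terminal weight) is 10 times larger
mpc  = LinMPC(model, Hp=Hp, Hc=Hc, M_Hp=M_Hp)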
This method uses the default state estimator, a SteadyKalmanFilter with default arguments. This controller allocates memory at each time step for the optimization.
Arguments
- model::LinModel: model used for controller predictions and state estimations.
- Hp::Int=10+nk: prediction horizon $H_p$, nk is the number of delays in model.
- Hc::Union{Int, Vector{Int}}=2: control horizon $H_c$, custom move blocking pattern is specified with a vector of integers (see move_blocking for details).
- Mwt=fill(1.0,model.ny): main diagonal of $\mathbf{M}$ weight matrix (vector).
- Nwt=fill(0.1,model.nu): main diagonal of $\mathbf{N}$ weight matrix (vector).
- Lwt=fill(0.0,model.nu): main diagonal of $\mathbf{L}$ weight matrix (vector).
- M_Hp=Diagonal(repeat(Mwt,Hp)): positive semidefinite symmetric matrix $\mathbf{M}_{H_p}$.
- N_Hc=Diagonal(repeat(Nwt,Hc)): positive semidefinite symmetric matrix $\mathbf{N}_{H_c}$.
- L_Hp=Diagonal(repeat(Lwt,Hp)): positive semidefinite symmetric matrix $\mathbf{L}_{H_p}$.
- Cwt=1e5: slack variable weight $C$ (scalar), use Cwt=Inf for hard constraints only.
- transcription=SingleShooting(): SingleShooting or MultipleShooting.
- optim=JuMP.Model(OSQP.MathOptInterfaceOSQP.Optimizer): quadratic optimizer used in the predictive controller, provided as a JuMP.Model object (default to OSQP optimizer).
- additional keyword arguments are passed to the SteadyKalmanFilter constructor.
Examples
julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);
julia> mpc = LinMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
LinMPC controller with a sample time Ts = 4.0 s:
├ estimator: SteadyKalmanFilter
├ model: LinModel
├ optimizer: OSQP
├ transcription: SingleShooting
└ dimensions:
├ 30 prediction steps Hp
├ 1 control steps Hc
├ 1 slack variable ϵ (control constraints)
├ 1 manipulated inputs u (0 integrating states)
├ 4 estimated states x̂
├ 2 measured outputs ym (2 integrating states)
├ 0 unmeasured outputs yu
└ 0 measured disturbances d
Extended Help
Manipulated input setpoints $\mathbf{r_u}$ are not common, but they can be interesting for over-actuated systems, when nu > ny (e.g. to prioritize solutions with lower economic costs). The default Lwt value implies that this feature is disabled by default.
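As a sketch, enabling this feature only requires a nonzero Lwt and, optionally, an R̂u preview at runtime (the weights and the input setpoint of 2 below are arbitrary illustration values):
using ModelPredictiveControl, ControlSystemsBase
model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4)
mpc   = LinMPC(model, Mwt=[1, 1], Nwt=[0.1], Lwt=[0.5], Hp=30, Hc=2)  # nonzero Lwt activates u setpoint tracking
preparestate!(mpc, [0, 0])
u = moveinput!(mpc, [5, -1], R̂u=repeat([2.0], mpc.Hp))                # hold r_u = 2 over the horizon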
The objective function follows this nomenclature:
| VARIABLE | DESCRIPTION | SIZE |
|---|---|---|
| $H_p$ | prediction horizon (integer) | () |
| $H_c$ | control horizon (integer) | () |
| $\mathbf{Z}$ | decision variable vector (excluding $ϵ$) | var. |
| $\mathbf{ΔU}$ | manipulated input increments over $H_c$ | (nu*Hc,) |
| $\mathbf{D̂}$ | predicted measured disturbances over $H_p$ | (nd*Hp,) |
| $\mathbf{Ŷ}$ | predicted outputs over $H_p$ | (ny*Hp,) |
| $\mathbf{X̂}$ | predicted states over $H_p$ | (nx̂*Hp,) |
| $\mathbf{U}$ | manipulated inputs over $H_p$ | (nu*Hp,) |
| $\mathbf{R̂_y}$ | predicted output setpoints over $H_p$ | (ny*Hp,) |
| $\mathbf{R̂_u}$ | predicted manipulated input setpoints over $H_p$ | (nu*Hp,) |
| $\mathbf{M}_{H_p}$ | output setpoint tracking weights over $H_p$ | (ny*Hp, ny*Hp) |
| $\mathbf{N}_{H_c}$ | manipulated input increment weights over $H_c$ | (nu*Hc, nu*Hc) |
| $\mathbf{L}_{H_p}$ | manipulated input setpoint tracking weights over $H_p$ | (nu*Hp, nu*Hp) |
| $C$ | slack variable weight | () |
| $ϵ$ | slack variable for constraint softening | () |
LinMPC(estim::StateEstimator; <keyword arguments>)
Use custom state estimator estim to construct LinMPC.
estim.model must be a LinModel. Else, a NonLinMPC is required. See ManualEstimator for linear controllers with nonlinear state estimation.
Examples
julia> estim = KalmanFilter(LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4), i_ym=[2]);
julia> mpc = LinMPC(estim, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
LinMPC controller with a sample time Ts = 4.0 s:
├ estimator: KalmanFilter
├ model: LinModel
├ optimizer: OSQP
├ transcription: SingleShooting
└ dimensions:
├ 30 prediction steps Hp
├ 1 control steps Hc
├ 1 slack variable ϵ (control constraints)
├ 1 manipulated inputs u (0 integrating states)
├ 3 estimated states x̂
├ 1 measured outputs ym (1 integrating states)
├ 1 unmeasured outputs yu
└ 0 measured disturbances d
ExplicitMPC
ModelPredictiveControl.ExplicitMPC — Type
ExplicitMPC(model::LinModel; <keyword arguments>)
Construct an explicit linear predictive controller based on LinModel model.
The controller minimizes the following objective function at each discrete time $k$:
\[\begin{aligned} \min_{\mathbf{ΔU}}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\ & + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} \end{aligned}\]
See LinMPC for the variable definitions. This controller does not support constraints, but the computational costs are extremely low (array division), making it suitable for applications that require small sample times. The keyword arguments are identical to LinMPC, except for Cwt, transcription and optim, which are not supported. It uses a SingleShooting transcription method and is allocation-free.
This method uses the default state estimator, a SteadyKalmanFilter with default arguments.
Examples
julia> model = LinModel([tf(3, [30, 1]); tf(-2, [5, 1])], 4);
julia> mpc = ExplicitMPC(model, Mwt=[0, 1], Nwt=[0.5], Hp=30, Hc=1)
ExplicitMPC controller with a sample time Ts = 4.0 s:
├ estimator: SteadyKalmanFilter
├ model: LinModel
└ dimensions:
├ 1 manipulated inputs u (0 integrating states)
├ 4 estimated states x̂
├ 2 measured outputs ym (2 integrating states)
├ 0 unmeasured outputs yu
└ 0 measured disturbances d
ExplicitMPC(estim::StateEstimator; <keyword arguments>)
Use custom state estimator estim to construct ExplicitMPC.
estim.model must be a LinModel. Else, a NonLinMPC is required.
NonLinMPC
ModelPredictiveControl.NonLinMPC — Type
NonLinMPC(model::SimModel; <keyword arguments>)
Construct a nonlinear predictive controller based on SimModel model.
Both NonLinModel and LinModel are supported (see Extended Help). The controller minimizes the following objective function at each discrete time $k$:
\[\begin{aligned} \min_{\mathbf{Z}, ϵ}\ & \mathbf{(R̂_y - Ŷ)}' \mathbf{M}_{H_p} \mathbf{(R̂_y - Ŷ)} + \mathbf{(ΔU)}' \mathbf{N}_{H_c} \mathbf{(ΔU)} \\& + \mathbf{(R̂_u - U)}' \mathbf{L}_{H_p} \mathbf{(R̂_u - U)} + C ϵ^2 + E J_E(\mathbf{U_e}, \mathbf{Ŷ_e}, \mathbf{D̂_e}, \mathbf{p}) \end{aligned}\]
subject to setconstraint! bounds, and the custom inequality constraints:
\[\mathbf{g_c}(\mathbf{U_e}, \mathbf{Ŷ_e}, \mathbf{D̂_e}, \mathbf{p}, ϵ) ≤ \mathbf{0}\]
with the decision variables $\mathbf{Z}$ and slack $ϵ$. By default, a SingleShooting transcription method is used, hence $\mathbf{Z=ΔU}$. The economic function $J_E$ can penalize solutions with high economic costs. Setting all the weights to 0 except $E$ creates a pure economic model predictive controller (EMPC). In fact, $J_E$ can be any nonlinear function used as a custom objective, even one without an economic interpretation. The arguments of $J_E$ and $\mathbf{g_c}$ include the manipulated inputs, predicted outputs and measured disturbances, extended from $k$ to $k+H_p$ (inclusively, see Extended Help for more details):
\[ \mathbf{U_e} = \begin{bmatrix} \mathbf{U} \\ \mathbf{u}(k+H_p-1) \end{bmatrix} , \quad \mathbf{Ŷ_e} = \begin{bmatrix} \mathbf{ŷ}(k) \\ \mathbf{Ŷ} \end{bmatrix} , \quad \mathbf{D̂_e} = \begin{bmatrix} \mathbf{d}(k) \\ \mathbf{D̂} \end{bmatrix}\]
The argument $\mathbf{p}$ is a custom parameter object of any type; use a mutable one (e.g. a vector) if you want to modify it later. See LinMPC Extended Help for the definition of the other variables.
Replace any of the arguments of $J_E$ and $\mathbf{g_c}$ functions with _ if not needed (see e.g. the default value of JE below).
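As a hedged illustration, a simple economic term that penalizes total input usage could look as follows (the toy model, weight values and input bounds are arbitrary, not package defaults):
using ModelPredictiveControl
model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing)
JE = (Ue, _, _, p) -> p*sum(Ue)   # custom cost on the extended input vector; Ŷe and D̂e unused
empc = NonLinMPC(model, Hp=20, Mwt=[0.0], Ewt=1.0, JE=JE, p=0.1)
setconstraint!(empc, umin=[-1], umax=[1])   # bound u so the economic optimum is well-posed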
This method uses the default state estimator:
- if model is a LinModel, a SteadyKalmanFilter with default arguments;
- else, an UnscentedKalmanFilter with default arguments.
This controller allocates memory at each time step for the optimization.
See Extended Help if you get an error like: MethodError: no method matching Float64(::ForwardDiff.Dual).
Arguments
- model::SimModel: model used for controller predictions and state estimations.
- Hp::Int=10+nk: prediction horizon $H_p$, nk is the number of delays if model is a LinModel (must be specified otherwise).
- Hc::Union{Int, Vector{Int}}=2: control horizon $H_c$, custom move blocking pattern is specified with a vector of integers (see move_blocking for details).
- Mwt=fill(1.0,model.ny): main diagonal of $\mathbf{M}$ weight matrix (vector).
- Nwt=fill(0.1,model.nu): main diagonal of $\mathbf{N}$ weight matrix (vector).
- Lwt=fill(0.0,model.nu): main diagonal of $\mathbf{L}$ weight matrix (vector).
- M_Hp=Diagonal(repeat(Mwt,Hp)): positive semidefinite symmetric matrix $\mathbf{M}_{H_p}$.
- N_Hc=Diagonal(repeat(Nwt,Hc)): positive semidefinite symmetric matrix $\mathbf{N}_{H_c}$.
- L_Hp=Diagonal(repeat(Lwt,Hp)): positive semidefinite symmetric matrix $\mathbf{L}_{H_p}$.
- Cwt=1e5: slack variable weight $C$ (scalar), use Cwt=Inf for hard constraints only.
- Ewt=0.0: economic costs weight $E$ (scalar).
- JE=(_,_,_,_)->0.0: economic or custom cost function $J_E(\mathbf{U_e}, \mathbf{Ŷ_e}, \mathbf{D̂_e}, \mathbf{p})$.
- gc=(_,_,_,_,_,_)->nothing or gc!: custom inequality constraint function $\mathbf{g_c}(\mathbf{U_e}, \mathbf{Ŷ_e}, \mathbf{D̂_e}, \mathbf{p}, ϵ)$, mutating or not (details in Extended Help).
- nc=0: number of custom inequality constraints.
- p=model.p: $J_E$ and $\mathbf{g_c}$ functions parameter $\mathbf{p}$ (any type).
- transcription=SingleShooting(): a TranscriptionMethod for the optimization.
- optim=JuMP.Model(Ipopt.Optimizer): nonlinear optimizer used in the predictive controller, provided as a JuMP.Model object (default to Ipopt optimizer).
- gradient=AutoForwardDiff(): an AbstractADType backend for the gradient of the objective function, see DifferentiationInterface doc.
- jacobian=default_jacobian(transcription): an AbstractADType backend for the Jacobian of the nonlinear constraints, see gradient above for the options (default in Extended Help).
- oracle=JuMP.solver_name(optim)=="Ipopt": use the efficient VectorNonlinearOracle for the nonlinear constraints (not supported by most optimizers for now).
- additional keyword arguments are passed to the UnscentedKalmanFilter constructor (or SteadyKalmanFilter, for LinModel).
Examples
julia> model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing);
julia> mpc = NonLinMPC(model, Hp=20, Hc=10, transcription=MultipleShooting())
NonLinMPC controller with a sample time Ts = 10.0 s:
├ estimator: UnscentedKalmanFilter
├ model: NonLinModel
├ optimizer: Ipopt
├ transcription: MultipleShooting
├ gradient: AutoForwardDiff
├ jacobian: AutoSparse (AutoForwardDiff, TracerSparsityDetector, GreedyColoringAlgorithm)
└ dimensions:
├ 20 prediction steps Hp
├ 10 control steps Hc
├ 1 slack variable ϵ (control constraints)
├ 1 manipulated inputs u (0 integrating states)
├ 2 estimated states x̂
├ 1 measured outputs ym (1 integrating states)
├ 0 unmeasured outputs yu
└ 0 measured disturbances d
Extended Help
NonLinMPC controllers based on LinModel compute the predictions with matrix algebra instead of a for loop. This feature can accelerate the optimization, especially for the constraint handling, and is not available in any other package, to my knowledge.
The economic cost $J_E$ and custom constraint $\mathbf{g_c}$ functions receive the extended vectors $\mathbf{U_e}$ (nu*Hp+nu elements), $\mathbf{Ŷ_e}$ (ny+ny*Hp elements) and $\mathbf{D̂_e}$ (nd+nd*Hp elements) as arguments. They all include the values from $k$ to $k + H_p$ (inclusively). The custom constraint also receives the slack $ϵ$ (scalar), which is always zero if Cwt=Inf.
More precisely, the last two time steps in $\mathbf{U_e}$ are forced to be equal, i.e. $\mathbf{u}(k+H_p) = \mathbf{u}(k+H_p-1)$, since $H_c ≤ H_p$ implies that $\mathbf{Δu}(k+H_p) = \mathbf{0}$. The vectors $\mathbf{ŷ}(k)$ and $\mathbf{d}(k)$ are the current state estimator output and measured disturbance, respectively, and $\mathbf{Ŷ}$ and $\mathbf{D̂}$, their respective predictions from $k+1$ to $k+H_p$. If LHS represents the result of the left-hand side in the inequality $\mathbf{g_c}(\mathbf{U_e}, \mathbf{Ŷ_e}, \mathbf{D̂_e}, \mathbf{p}, ϵ) ≤ \mathbf{0}$, the function gc can be implemented in two possible ways:
- Non-mutating function (out-of-place): define it as gc(Ue, Ŷe, D̂e, p, ϵ) -> LHS. This syntax is simple and intuitive but it allocates more memory.
- Mutating function (in-place): define it as gc!(LHS, Ue, Ŷe, D̂e, p, ϵ) -> nothing. This syntax reduces the allocations and potentially the computational burden as well.
The keyword argument nc is the number of elements in LHS, and gc! is an alias for the gc argument (both gc and gc! accept non-mutating and mutating functions).
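A minimal sketch of the mutating form, assuming a single custom constraint that caps the terminal output prediction (the toy model and the limit stored in p are arbitrary):
using ModelPredictiveControl
model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing)
function gc!(LHS, Ue, Ŷe, D̂e, p, ϵ)
    LHS[1] = Ŷe[end] - p - ϵ   # enforce ŷ(k+Hp) ≤ p, softened by the slack ϵ
    return nothing
end
nmpc = NonLinMPC(model, Hp=20, gc=gc!, nc=1, p=50.0)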
By default, the optimization relies on dense ForwardDiff automatic differentiation (AD) to compute the objective and constraint derivatives. One exception: if transcription is not a SingleShooting, the jacobian argument defaults to this sparse backend:
AutoSparse(
AutoForwardDiff();
sparsity_detector = TracerSparsityDetector(),
coloring_algorithm = GreedyColoringAlgorithm()
)
Optimizers generally benefit from exact derivatives like AD. However, the NonLinModel state-space functions must be compatible with this feature. See JuMP documentation for common mistakes when writing these functions.
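The same backend (or any other AbstractADType) can also be passed explicitly through the jacobian keyword; a sketch assuming ADTypes, SparseConnectivityTracer and SparseMatrixColorings are installed and loaded:
using ModelPredictiveControl
using ADTypes, SparseConnectivityTracer, SparseMatrixColorings
sparse_ad = AutoSparse(
    AutoForwardDiff();
    sparsity_detector  = TracerSparsityDetector(),
    coloring_algorithm = GreedyColoringAlgorithm()
)
model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing)
nmpc  = NonLinMPC(model, Hp=20, transcription=MultipleShooting(), jacobian=sparse_ad)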
Note that if Cwt≠Inf, the attribute nlp_scaling_max_gradient of Ipopt is set to 10/Cwt (if not already set), to scale the small values of $ϵ$.
NonLinMPC(estim::StateEstimator; <keyword arguments>)
Use custom state estimator estim to construct NonLinMPC.
Examples
julia> model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing);
julia> estim = UnscentedKalmanFilter(model, σQint_ym=[0.05]);
julia> mpc = NonLinMPC(estim, Hp=20, Cwt=1e6)
NonLinMPC controller with a sample time Ts = 10.0 s:
├ estimator: UnscentedKalmanFilter
├ model: NonLinModel
├ optimizer: Ipopt
├ transcription: SingleShooting
├ gradient: AutoForwardDiff
├ jacobian: AutoForwardDiff
└ dimensions:
├ 20 prediction steps Hp
├ 2 control steps Hc
├ 1 slack variable ϵ (control constraints)
├ 1 manipulated inputs u (0 integrating states)
├ 2 estimated states x̂
├ 1 measured outputs ym (1 integrating states)
├ 0 unmeasured outputs yu
└ 0 measured disturbances d
Move Manipulated Input u
ModelPredictiveControl.moveinput! — Function
moveinput!(mpc::PredictiveController, ry=mpc.estim.model.yop, d=[]; <keyword args>) -> u
Compute the optimal manipulated input value u for the current control period.
Solve the optimization problem of mpc PredictiveController and return the result $\mathbf{u}(k)$. Following the receding horizon principle, the algorithm discards the optimal future manipulated inputs $\mathbf{u}(k+1), \mathbf{u}(k+2), ...$ Note that the method mutates mpc internal data (for instance, it stores u - mpc.estim.model.uop at mpc.lastu0) but it does not modify mpc.estim states. Call preparestate!(mpc, ym, d) before moveinput!, and updatestate!(mpc, u, ym, d) after, to update mpc state estimates. Setpoint and measured disturbance previews can be implemented with the R̂y, R̂u and D̂ keyword arguments.
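A bare-bones closed-loop sketch of that call sequence, assuming the plant is simulated by a copy of the model itself (no mismatch) and with arbitrary tuning:
using ModelPredictiveControl, ControlSystemsBase
model = LinModel(tf(5, [2, 1]), 3)
plant = LinModel(tf(5, [2, 1]), 3)  # plant simulator, here identical to the controller model
mpc   = LinMPC(model, Hp=10, Hc=2)
ry = [5.0]
for k = 1:25
    ym = evaloutput(plant)          # measure the plant outputs
    preparestate!(mpc, ym)          # update the estimator with the new measurement
    u = moveinput!(mpc, ry)         # solve the optimization and get u(k)
    updatestate!(mpc, u, ym)        # propagate the controller state estimate to k+1
    updatestate!(plant, u)          # simulate the plant one step ahead
end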
Calling a PredictiveController object calls this method.
See also LinMPC, ExplicitMPC, NonLinMPC.
Arguments
- mpc::PredictiveController: solve optimization problem of mpc.
- ry=mpc.estim.model.yop: current output setpoints $\mathbf{r_y}(k)$.
- d=[]: current measured disturbances $\mathbf{d}(k)$.
- D̂=repeat(d, mpc.Hp) or Dhat: predicted measured disturbances $\mathbf{D̂}$, constant in the future by default or $\mathbf{d̂}(k+j)=\mathbf{d}(k)$ for $j=1$ to $H_p$.
- R̂y=repeat(ry, mpc.Hp) or Rhaty: predicted output setpoints $\mathbf{R̂_y}$, constant in the future by default or $\mathbf{r̂_y}(k+j)=\mathbf{r_y}(k)$ for $j=1$ to $H_p$.
- R̂u=mpc.Uop or Rhatu: predicted manipulated input setpoints $\mathbf{R̂_u}$, constant in the future by default or $\mathbf{r̂_u}(k+j)=\mathbf{u_{op}}$ for $j=0$ to $H_p-1$.
Examples
julia> mpc = LinMPC(LinModel(tf(5, [2, 1]), 3), Nwt=[0], Hp=1000, Hc=1);
julia> preparestate!(mpc, [0]); ry = [5];
julia> u = moveinput!(mpc, ry); round.(u, digits=3)
1-element Vector{Float64}:
1.0
Direct Transcription Methods
TranscriptionMethod
ModelPredictiveControl.TranscriptionMethod — Type
Abstract supertype of all transcription methods of PredictiveController.
The module currently supports SingleShooting, MultipleShooting and TrapezoidalCollocation transcription methods.
SingleShooting
ModelPredictiveControl.SingleShooting — Type
SingleShooting()
Construct a direct single shooting TranscriptionMethod.
The decision variable in the optimization problem is (excluding the slack $ϵ$ and without any custom move blocking):
\[\mathbf{Z} = \mathbf{ΔU} = \begin{bmatrix} \mathbf{Δu}(k+0) \\ \mathbf{Δu}(k+1) \\ \vdots \\ \mathbf{Δu}(k+H_c-1) \end{bmatrix}\]
This method computes the predictions by calling the augmented discrete-time model recursively over the prediction horizon $H_p$ in the objective function, or by updating the linear coefficients of the quadratic optimization for LinModel. It is generally more efficient for a small control horizon $H_c$ and for stable, mildly nonlinear plant models/constraints.
MultipleShooting
ModelPredictiveControl.MultipleShooting — Type
MultipleShooting(; f_threads=false, h_threads=false)
Construct a direct multiple shooting TranscriptionMethod.
The decision variable is (excluding $ϵ$):
\[\mathbf{Z} = \begin{bmatrix} \mathbf{ΔU} \\ \mathbf{X̂_0} \end{bmatrix}\]
thus it also includes the predicted states, expressed as deviation vectors from the operating point $\mathbf{x̂_{op}}$ (see augment_model):
\[\mathbf{X̂_0} = \mathbf{X̂ - X̂_{op}} = \begin{bmatrix} \mathbf{x̂}_i(k+1) - \mathbf{x̂_{op}} \\ \mathbf{x̂}_i(k+2) - \mathbf{x̂_{op}} \\ \vdots \\ \mathbf{x̂}_i(k+H_p) - \mathbf{x̂_{op}} \end{bmatrix}\]
where $\mathbf{x̂}_i(k+j)$ is the state prediction for time $k+j$, estimated by the observer at time $i=k$ or $i=k-1$ depending on its direct flag. Note that $\mathbf{X̂_0 = X̂}$ if the operating point is zero, which is typically the case in practice for NonLinModel.
This transcription computes the predictions by calling the augmented discrete-time model in the equality constraint function recursively over $H_p$, or by updating the linear equality constraint vector for LinModel. It is generally more efficient for a large control horizon $H_c$ and for unstable or highly nonlinear models/constraints. Multithreading with the f_threads or h_threads keyword arguments can be advantageous if $\mathbf{f}$ or $\mathbf{h}$ in the NonLinModel is expensive to evaluate, respectively.
Sparse optimizers like OSQP or Ipopt and sparse Jacobian computations are recommended for this transcription method.
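For example, a sketch enabling multithreaded evaluation of the state function f (the toy model below is obviously cheap, so the keyword is shown for syntax only):
using ModelPredictiveControl
model = NonLinModel((x,u,_,_)->0.5x+u, (x,_,_)->2x, 10.0, 1, 1, 1, solver=nothing)
nmpc  = NonLinMPC(model, Hp=50, Hc=25, transcription=MultipleShooting(f_threads=true))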
TrapezoidalCollocation
ModelPredictiveControl.TrapezoidalCollocation — Type
TrapezoidalCollocation(h::Int=0; f_threads=false, h_threads=false)
Construct an implicit trapezoidal TranscriptionMethod with h-th order hold.
This is the simplest collocation method. It supports continuous-time NonLinModels only. The decision variables are the same as for MultipleShooting, hence similar computational costs. See the same docstring for descriptions of f_threads and h_threads keywords. The h argument is 0 or 1, for piecewise constant or linear manipulated inputs $\mathbf{u}$ (h=1 is slightly less expensive). Note that the various DiffSolver here assume zero-order hold, so h=1 will induce a plant-model mismatch if the plant is simulated with these solvers.
This transcription computes the predictions by calling the continuous-time model in the equality constraint function and by using the implicit trapezoidal rule. It can handle moderately stiff systems and is A-stable. Note that the built-in StateEstimator will still use the solver provided at the construction of the NonLinModel to estimate the plant states, not the trapezoidal rule (see supersample option of RungeKutta for stiff systems). See Extended Help for more details.
Sparse optimizers like Ipopt and sparse Jacobian computations are recommended for this transcription method.
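A sketch with a continuous-time model and a first-order hold on the inputs (the first-order dynamics and horizon are arbitrary):
using ModelPredictiveControl
f(x, u, _, _) = -0.2x .+ u               # continuous-time dynamics ẋ = f(x, u)
hfunc(x, _, _) = x                       # output equation y = h(x)
model = NonLinModel(f, hfunc, 2.0, 1, 1, 1)  # default DiffSolver discretizes the continuous-time f
nmpc  = NonLinMPC(model, Hp=20, transcription=TrapezoidalCollocation(1))  # h=1: piecewise linear u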
Extended Help
Note that the stochastic model of the unmeasured disturbances is strictly discrete-time, as described in ModelPredictiveControl.init_estimstoch. Collocation methods require continuous-time dynamics. Because of this, the stochastic states are transcribed separately using a MultipleShooting method. See con_nonlinprogeq! for more details.