Functions: PredictiveController Internals
The prediction methodology of this module is mainly based on the textbook of Maciejowski [1].
Controller Construction
ModelPredictiveControl.move_blocking — Function

    move_blocking(Hp::Int, Hc::Vector{Int}) -> nb

Get the move blocking vector nb from the Hc argument, and modify it to match Hp.
This feature is also known as manipulated variable blocking. The argument Hc is interpreted as the move blocking vector nb. It specifies the length of each step (or "block") in the $\mathbf{ΔU}$ vector, to customize the pattern (in time steps, thus strictly positive integers):
\[ \mathbf{n_b} = \begin{bmatrix} n_1 & n_2 & \cdots & n_{H_c} \end{bmatrix}'\]
The vector that includes all the manipulated input increments $\mathbf{Δu}$ is then defined as:
\[\mathbf{ΔU} = \begin{bmatrix} \mathbf{Δu}(k + 0) \\[0.1em] \mathbf{Δu}(k + ∑_{i=1}^1 n_i) \\[0.1em] \mathbf{Δu}(k + ∑_{i=1}^2 n_i) \\[0.1em] \vdots \\[0.1em] \mathbf{Δu}(k + ∑_{i=1}^{H_c-1} n_i) \end{bmatrix}\]
The provided nb vector is modified to ensure sum(nb) == Hp:
- If sum(nb) < Hp, a new element is pushed to nb with the value Hp - sum(nb).
- If sum(nb) > Hp, the intervals are truncated until sum(nb) == Hp. For example, if Hp = 10 and nb = [1, 2, 3, 6, 7], then nb is truncated to [1, 2, 3, 4].
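A minimal sketch of these normalization rules, assuming a hypothetical helper name (this is not the package's internal implementation):

```julia
# Sketch of the nb normalization rules above (hypothetical helper).
function normalize_nb(Hp::Int, nb::Vector{Int})
    nb = copy(nb)
    if sum(nb) < Hp
        push!(nb, Hp - sum(nb))      # pad with one extra block
    elseif sum(nb) > Hp
        i = findfirst(i -> sum(nb[1:i]) ≥ Hp, eachindex(nb))
        nb = nb[1:i]
        nb[end] -= sum(nb) - Hp      # truncate the last kept block
        nb = nb[nb .> 0]             # drop a block reduced to zero length
    end
    return nb
end

normalize_nb(10, [1, 2, 3, 6, 7])    # returns [1, 2, 3, 4]
```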
    move_blocking(Hp::Int, Hc::Int) -> nb

Construct a move blocking vector nb that matches the provided Hp and Hc integers.

The vector is filled with 1s, except for the last element which is Hp - Hc + 1.
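For instance, with Hp = 10 and Hc = 3, the resulting vector is:

```julia
move_blocking(10, 3)   # returns [1, 1, 8], and sum([1, 1, 8]) == 10 == Hp
```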
ModelPredictiveControl.init_ZtoΔU — Function

    init_ZtoΔU(estim::StateEstimator, transcription::TranscriptionMethod, Hp, Hc) -> PΔu

Init the conversion matrix PΔu from the decision variables to the input increments over $H_c$.
The conversion from the decision variables $\mathbf{Z}$ to $\mathbf{ΔU}$, the input increments over $H_c$, is computed by:
\[\mathbf{ΔU} = \mathbf{P_{Δu}} \mathbf{Z}\]
in which $\mathbf{P_{Δu}}$ is defined in the Extended Help section.
Extended Help
Following the decision variable definition of the TranscriptionMethod, the conversion matrix $\mathbf{P_{Δu}}$ is given by:
- $\mathbf{P_{Δu}} = \mathbf{I}$ if transcription is a SingleShooting
- $\mathbf{P_{Δu}} = [\begin{smallmatrix}\mathbf{I} & \mathbf{0} \end{smallmatrix}]$ if transcription is a MultipleShooting
The matrix is stored as a SparseMatrixCSC to support both cases efficiently.
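Both cases can be sketched with SparseArrays (hypothetical helper, not the package's constructor); nZ is the decision vector length, which exceeds nu*Hc for MultipleShooting since $\mathbf{Z}$ then also contains the predicted states:

```julia
using SparseArrays, LinearAlgebra

# Sketch of both PΔu cases (hypothetical helper). nΔU = nu*Hc.
sketch_PΔu(nΔU, nZ) = [sparse(1.0I, nΔU, nΔU) spzeros(nΔU, nZ - nΔU)]

sketch_PΔu(4, 4)   # SingleShooting:   PΔu == I
sketch_PΔu(4, 10)  # MultipleShooting: PΔu == [I 0]
```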
ModelPredictiveControl.init_ZtoU — Function

    init_ZtoU(estim, transcription, Hp, Hc, nb) -> Pu, Tu

Init the conversion matrices from the decision variables to the inputs over $H_p$.
The conversion from the decision variables $\mathbf{Z}$ to $\mathbf{U}$, the manipulated inputs over $H_p$, is computed by:
\[\mathbf{U} = \mathbf{P_u} \mathbf{Z} + \mathbf{T_u} \mathbf{u}(k-1)\]
The $\mathbf{P_u}$ and $\mathbf{T_u}$ matrices are defined in the Extended Help section.
Extended Help
With $n_i$, the $i$th element of the $\mathbf{n_b}$ vector defined in move_blocking documentation, we introduce the $\mathbf{Q}(n_i)$ matrix of size (nu*ni, nu):
\[\mathbf{Q}(n_i) = \begin{bmatrix} \mathbf{I} \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \end{bmatrix} \]
The $\mathbf{U}$ vector and the conversion matrices are defined as:
\[\mathbf{U} = \begin{bmatrix} \mathbf{u}(k + 0) \\ \mathbf{u}(k + 1) \\ \vdots \\ \mathbf{u}(k + H_p - 1) \end{bmatrix} , \quad \mathbf{P_u^†} = \begin{bmatrix} \mathbf{Q}(n_1) & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Q}(n_2) & \mathbf{Q}(n_2) & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Q}(n_{H_c}) & \mathbf{Q}(n_{H_c}) & \cdots & \mathbf{Q}(n_{H_c}) \end{bmatrix} , \quad \mathbf{T_u} = \begin{bmatrix} \mathbf{I} \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \end{bmatrix}\]
and, depending on the transcription method, we have:
- $\mathbf{P_u} = \mathbf{P_u^†}$ if transcription is a SingleShooting
- $\mathbf{P_u} = [\begin{smallmatrix}\mathbf{P_u^†} & \mathbf{0} \end{smallmatrix}]$ if transcription is a MultipleShooting
The conversion matrices are stored as SparseMatrixCSC since benchmarks showed that this is generally more performant than dense matrices, even for small nu, Hp and Hc values. Using a Bool element type or a BitMatrix is also slower.
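The block lower-triangular structure of $\mathbf{P_u^†}$ can be sketched as follows (hypothetical helper for the SingleShooting case, not the package's constructor):

```julia
using SparseArrays, LinearAlgebra

# Sketch of P_u† (hypothetical helper). Block row i stacks Q(nb[i]) in
# block columns 1 to i, matching the matrix definition above.
function sketch_Pu(nu::Int, Hp::Int, nb::Vector{Int})
    Hc = length(nb)
    Q(nᵢ) = reduce(vcat, fill(sparse(1.0I, nu, nu), nᵢ))  # nᵢ stacked identities
    Pu = spzeros(nu*Hp, nu*Hc)
    for j in 1:Hc                    # block column j
        row = nu*sum(nb[1:j-1])      # rows above block row j
        for i in j:Hc                # block rows i ≥ j contain Q(nb[i])
            Pu[row+1:row+nu*nb[i], nu*(j-1)+1:nu*j] = Q(nb[i])
            row += nu*nb[i]
        end
    end
    return Pu
end

sketch_Pu(1, 4, [1, 1, 2])  # 4×3 lower-triangular pattern, last block held 2 steps
```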
ModelPredictiveControl.init_predmat — Function

    init_predmat(
        model::LinModel, estim, transcription::SingleShooting, Hp, Hc
    ) -> E, G, J, K, V, B, ex̂, gx̂, jx̂, kx̂, vx̂, bx̂

Construct the prediction matrices for LinModel and SingleShooting.
The model predictions are evaluated from the deviation vectors (see setop!), the decision variable $\mathbf{Z}$ (see TranscriptionMethod), and:
\[\begin{aligned} \mathbf{Ŷ_0} &= \mathbf{E Z} + \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ &= \mathbf{E Z} + \mathbf{F} \end{aligned}\]
in which $\mathbf{x̂_0}(k) = \mathbf{x̂}_i(k) - \mathbf{x̂_{op}}$, with $i = k$ if estim.direct==true, otherwise $i = k - 1$. The predicted outputs $\mathbf{Ŷ_0}$ and measured disturbances $\mathbf{D̂_0}$ respectively include the $\mathbf{ŷ_0}(k+j)$ and $\mathbf{d̂_0}(k+j)$ values with $j=1$ to $H_p$, and the input increments $\mathbf{ΔU}$ include the $\mathbf{Δu}(k+j)$ values with $j=0$ to $H_c-1$. The vector $\mathbf{B}$ contains the contribution of the non-zero state $\mathbf{x̂_{op}}$ and state update $\mathbf{f̂_{op}}$ operating points (for linearization at a non-equilibrium point, see linearize). The stochastic predictions $\mathbf{Ŷ_s=0}$ if estim is not an InternalModel, see init_stochpred. The method also computes similar matrices for the predicted terminal state at $k+H_p$:
\[\begin{aligned} \mathbf{x̂_0}(k+H_p) &= \mathbf{e_x̂ Z} + \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u_0}(k-1) + \mathbf{b_x̂} \\ &= \mathbf{e_x̂ Z} + \mathbf{f_x̂} \end{aligned}\]
The matrices $\mathbf{E, G, J, K, V, B, e_x̂, g_x̂, j_x̂, k_x̂, v_x̂, b_x̂}$ are defined in the Extended Help section. The $\mathbf{F}$ and $\mathbf{f_x̂}$ vectors are recalculated at each control period $k$, see initpred! and linconstraint!.
Extended Help
Using the augmented matrices $\mathbf{Â, B̂_u, Ĉ, B̂_d, D̂_d}$ in estim (see augment_model), and the function $\mathbf{W}(j) = ∑_{i=0}^j \mathbf{Â}^i$, the prediction matrices are computed by:
\[\begin{aligned} \mathbf{E} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} & \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} & \mathbf{Ĉ W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{Ĉ W}(H_p-H_c)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{G} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} \\ \mathbf{Ĉ}\mathbf{Â}^{1} \mathbf{B̂_d} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-1} \mathbf{B̂_d} \end{bmatrix} \\ \mathbf{J} &= \begin{bmatrix} \mathbf{D̂_d} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} & \mathbf{D̂_d} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Ĉ}\mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{D̂_d} \end{bmatrix} \\ \mathbf{K} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{1} \\ \mathbf{Ĉ}\mathbf{Â}^{2} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p} \end{bmatrix} \\ \mathbf{V} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} \\ \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{B} &= \begin{bmatrix} \mathbf{Ĉ W}(0) \\ \mathbf{Ĉ W}(1) \\ \vdots \\ \mathbf{Ĉ W}(H_p-1) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
For the terminal constraints, the matrices are computed with:
\[\begin{aligned} \mathbf{e_x̂} &= \begin{bmatrix} \mathbf{W}(H_p-1)\mathbf{B̂_u} & \mathbf{W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{W}(H_p-H_c)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{g_x̂} &= \mathbf{Â}^{H_p-1} \mathbf{B̂_d} \\ \mathbf{j_x̂} &= \begin{bmatrix} \mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{0} \end{bmatrix} \\ \mathbf{k_x̂} &= \mathbf{Â}^{H_p} \\ \mathbf{v_x̂} &= \mathbf{W}(H_p-1)\mathbf{B̂_u} \\ \mathbf{b_x̂} &= \mathbf{W}(H_p-1) \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
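A toy-sized sketch of these formulas, with hypothetical matrices and no move blocking (Hc = Hp), so each block of $\mathbf{E}$ is simply $\mathbf{Ĉ W}(j-c)\mathbf{B̂_u}$ for block row $j$ and block column $c$:

```julia
using LinearAlgebra

# Toy augmented model (hypothetical values) and a prediction matrix sketch.
Â  = [0.9 0.1; 0.0 0.8]
B̂u = reshape([0.5, 1.0], 2, 1)
Ĉ  = [1.0 0.0]
Hp = 3; Hc = Hp                          # no move blocking
W(j) = sum(Â^i for i in 0:j)             # W(j) = ∑ᵢ₌₀ʲ Âⁱ
K = reduce(vcat, (Ĉ*Â^j for j in 1:Hp))  # rows Ĉ Â¹ … Ĉ Â^Hp
V = reduce(vcat, (Ĉ*W(j-1)*B̂u for j in 1:Hp))
E = reduce(vcat, (reduce(hcat, (j ≥ c ? Ĉ*W(j-c)*B̂u : zero(Ĉ*B̂u)
                                for c in 1:Hc)) for j in 1:Hp))
```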
    init_predmat(
        model::LinModel, estim, transcription::MultipleShooting, Hp, Hc
    ) -> E, G, J, K, V, B, ex̂, gx̂, jx̂, kx̂, vx̂, bx̂

Construct the prediction matrices for LinModel and MultipleShooting.
They are defined in the Extended Help section.
Extended Help
They are all appropriately sized zero matrices $\mathbf{0}$, except for:
\[\begin{aligned} \mathbf{E} &= [\begin{smallmatrix}\mathbf{0} & \mathbf{E^†} \end{smallmatrix}] \\ \mathbf{E^†} &= \text{diag}\mathbf{(Ĉ,Ĉ,...,Ĉ)} \\ \mathbf{J} &= \text{diag}\mathbf{(D̂_d,D̂_d,...,D̂_d)} \\ \mathbf{e_x̂} &= [\begin{smallmatrix}\mathbf{0} & \mathbf{I}\end{smallmatrix}] \end{aligned}\]
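The block diagonal structure can be formed compactly with SparseArrays (a sketch, not the package's code):

```julia
using SparseArrays

Ĉ  = sparse([1.0 0.0])               # hypothetical 1×2 output matrix
Hp = 3
Ediag = blockdiag(fill(Ĉ, Hp)...)    # diag(Ĉ, Ĉ, …, Ĉ), one block per step
```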
    init_predmat(model::NonLinModel, estim, transcription::SingleShooting, Hp, Hc)

Return empty matrices for SingleShooting of NonLinModel.
    init_predmat(model::NonLinModel, estim, transcription::TranscriptionMethod, Hp, Hc)

Return the terminal state matrices for NonLinModel and any other TranscriptionMethod.
The output prediction matrices are all empty matrices. The terminal state matrices are given in the Extended Help section.
Extended Help
ModelPredictiveControl.init_defectmat — Function

    init_defectmat(model::LinModel, estim, transcription::MultipleShooting, Hp, Hc)

Init the matrices for computing the defects over the predicted states.
An equation similar to the prediction matrices (see init_predmat) computes the defects over the predicted states:
\[\begin{aligned} \mathbf{Ŝ} &= \mathbf{E_ŝ Z} + \mathbf{G_ŝ d_0}(k) + \mathbf{J_ŝ D̂_0} + \mathbf{K_ŝ x̂_0}(k) + \mathbf{V_ŝ u_0}(k-1) + \mathbf{B_ŝ} \\ &= \mathbf{E_ŝ Z} + \mathbf{F_ŝ} \end{aligned}\]
They are forced to be $\mathbf{Ŝ = 0}$ using the optimization equality constraints. The matrices $\mathbf{E_ŝ, G_ŝ, J_ŝ, K_ŝ, V_ŝ, B_ŝ}$ are defined in the Extended Help section.
Extended Help
The matrices are computed by:
\[\begin{aligned} \mathbf{E_ŝ} &= \begin{bmatrix} \mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} & -\mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{B̂_u} & \mathbf{B̂_u} & \cdots & \mathbf{0} & \mathbf{Â} & -\mathbf{I} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \mathbf{B̂_u} & \mathbf{B̂_u} & \cdots & \mathbf{B̂_u} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{Â} & -\mathbf{I} \end{bmatrix} \\ \mathbf{G_ŝ} &= \begin{bmatrix} \mathbf{B̂_d} \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} \\ \mathbf{J_ŝ} &= \begin{bmatrix} \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{B̂_d} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{B̂_d} & \mathbf{0} \end{bmatrix} \\ \mathbf{K_ŝ} &= \begin{bmatrix} \mathbf{Â} \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} \\ \mathbf{V_ŝ} &= \begin{bmatrix} \mathbf{B̂_u} \\ \mathbf{B̂_u} \\ \vdots \\ \mathbf{B̂_u} \end{bmatrix} \\ \mathbf{B_ŝ} &= \begin{bmatrix} \mathbf{f̂_{op} - x̂_{op}} \\ \mathbf{f̂_{op} - x̂_{op}} \\ \vdots \\ \mathbf{f̂_{op} - x̂_{op}} \end{bmatrix} \end{aligned}\]
    init_defectmat(model::SimModel, estim, transcription::TranscriptionMethod, Hp, Hc)

Return empty matrices for all other cases (N/A).
ModelPredictiveControl.relaxU — Function

    relaxU(Pu, C_umin, C_umax, nϵ) -> A_Umin, A_Umax, P̃u

Augment the manipulated input constraints with the slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{P̃_u}$, similar to the one described at init_ZtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{U_{min} + T_u u}(k-1) \\ + \mathbf{U_{max} - T_u u}(k-1) \end{bmatrix}\]
in which the $\mathbf{U_{min}}$ and $\mathbf{U_{max}}$ vectors respectively contain $\mathbf{u_{min}}$ and $\mathbf{u_{max}}$ repeated $H_p$ times.
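Since $\mathbf{U = P_u Z + T_u u}(k-1)$ and softening relaxes the bounds to $\mathbf{U_{min} - C_{u_{min}}} ϵ ≤ \mathbf{U} ≤ \mathbf{U_{max} + C_{u_{max}}} ϵ$, one consistent construction is the following sketch (assuming nϵ == 1 and softened input constraints; not necessarily the package's exact code):

```julia
# Sketch: rearranging Umin - C_umin*ϵ ≤ Pu*Z + Tu*u(k-1) ≤ Umax + C_umax*ϵ
# into A_Umin*Z̃ ≤ -(Umin - Tu*u) and A_Umax*Z̃ ≤ +(Umax - Tu*u).
relaxU_sketch(Pu, C_umin, C_umax) = (
    [-Pu -C_umin],                # A_Umin
    [ Pu -C_umax],                # A_Umax
    [ Pu  zeros(size(Pu, 1))],    # P̃u: the slack does not affect U itself
)
```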
ModelPredictiveControl.relaxΔU — Function

    relaxΔU(PΔu, C_Δumin, C_Δumax, ΔUmin, ΔUmax, nϵ) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, P̃Δu

Augment the input increment constraints with the slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{P̃_{Δu}}$, similar to the one described at init_ZtoΔU, but extracting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}] = \mathbf{P̃_{Δu} Z̃}$. Knowing that $0 ≤ ϵ ≤ ∞$, it also returns the augmented bounds $\mathbf{ΔŨ_{min}} = [\begin{smallmatrix} \mathbf{ΔU_{min}} \\ 0 \end{smallmatrix}]$ and $\mathbf{ΔŨ_{max}} = [\begin{smallmatrix} \mathbf{ΔU_{max}} \\ ∞ \end{smallmatrix}]$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ \mathbf{A_{ΔŨ_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} \end{bmatrix}\]
Note that strictly speaking, the lower bound on the slack variable ϵ is a decision variable bound, which is more precise than a linear inequality constraint. However, it is more convenient to treat it as a linear inequality constraint since the optimizer OSQP.jl does not support pure bounds on the decision variables.
ModelPredictiveControl.relaxŶ — Function

    relaxŶ(E, C_ymin, C_ymax, nϵ) -> A_Ymin, A_Ymax, Ẽ

Augment the linear output prediction constraints with the slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ Z̃ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Y_{min}}} \\ \mathbf{A_{Y_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(Y_{min} - Y_{op}) + F} \\ + \mathbf{(Y_{max} - Y_{op}) - F} \end{bmatrix}\]
in which the $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.
ModelPredictiveControl.relaxterminal — Function

    relaxterminal(ex̂, c_x̂min, c_x̂max, nϵ) -> A_x̂min, A_x̂max, ẽx̂

Augment the terminal state constraints with the slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ Z̃ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̂_{min}}} \\ \mathbf{A_{x̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(x̂_{min} - x̂_{op}) + f_x̂} \\ + \mathbf{(x̂_{max} - x̂_{op}) - f_x̂} \end{bmatrix}\]
ModelPredictiveControl.init_quadprog — Function

    init_quadprog(
        model::LinModel, transcription::TranscriptionMethod, weights::ControllerWeights,
        Ẽ, P̃Δu, P̃u; warn_cond=1e6
    ) -> H̃

Init the quadratic programming Hessian H̃ for MPC.
The matrix appears in the general quadratic form:
\[ J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + r \]
The Hessian matrix is constant if the model and weights are linear and time invariant (LTI):
\[ \mathbf{H̃} = 2 ( \mathbf{Ẽ'} \mathbf{M}_{H_p} \mathbf{Ẽ} + \mathbf{P̃_{Δu}'} \mathbf{Ñ}_{H_c} \mathbf{P̃_{Δu}} + \mathbf{P̃_{u}'} \mathbf{L}_{H_p} \mathbf{P̃_{u}} )\]
in which the $\mathbf{Ẽ}$, $\mathbf{P̃_{Δu}}$ and $\mathbf{P̃_{u}}$ matrices are defined in the relaxŶ, relaxΔU and relaxU documentation, respectively. The vector $\mathbf{q̃}$ and scalar $r$ are recalculated at each control period $k$, see initpred!. The scalar $r$ does not impact the position of the minimum, so it is not needed for the optimization itself, but it is required to evaluate the minimal $J$ value. A @warn is displayed if cond(H̃) > warn_cond and transcription is a SingleShooting (use warn_cond=Inf for no warning).
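A direct sketch of this Hessian formula (hypothetical helper name; the package builds the $\mathbf{M}_{H_p}$, $\mathbf{Ñ}_{H_c}$ and $\mathbf{L}_{H_p}$ weight matrices from the ControllerWeights):

```julia
using LinearAlgebra

# Sketch of the constant Hessian above, with a conditioning check
# (assumes dense inputs; hypothetical helper name).
function hessian_sketch(Ẽ, P̃Δu, P̃u, M_Hp, Ñ_Hc, L_Hp; warn_cond=1e6)
    H̃ = 2*(Ẽ'*M_Hp*Ẽ + P̃Δu'*Ñ_Hc*P̃Δu + P̃u'*L_Hp*P̃u)
    cond(H̃) > warn_cond && @warn "H̃ is ill-conditioned" cond(H̃)
    return H̃
end
```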
Return an empty matrix if model is not a LinModel.
ModelPredictiveControl.init_stochpred — Function

    init_stochpred(estim::InternalModel, Hp) -> Ks, Ps

Init the stochastic prediction matrices for InternalModel.
The Ks and Ps matrices are defined as:
\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]
The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].

Return empty matrices if estim is not an InternalModel.
ModelPredictiveControl.init_matconstraint_mpc — Function

    init_matconstraint_mpc(
        model::LinModel, transcription::TranscriptionMethod, nc::Int,
        i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ymin, i_Ymax, i_x̂min, i_x̂max,
        args...
    ) -> i_b, i_g, A, Aeq, neq

Init i_b, i_g, neq, and the A and Aeq matrices for all the MPC constraints.
The linear and nonlinear constraints are respectively defined as:
\[\begin{aligned} \mathbf{A Z̃ } &≤ \mathbf{b} \\ \mathbf{A_{eq} Z̃} &= \mathbf{b_{eq}} \\ \mathbf{g(Z̃)} &≤ \mathbf{0} \\ \mathbf{g_{eq}(Z̃)} &= \mathbf{0} \\ \end{aligned}\]
The argument nc is the number of custom nonlinear inequality constraints in $\mathbf{g_c}$. i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$. The method also returns the $\mathbf{A, A_{eq}}$ matrices and neq if args is provided. In such a case, args needs to contain all the inequality and equality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max, A_ŝ. The integer neq is the number of nonlinear equality constraints in $\mathbf{g_{eq}}$.
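For instance, the i_b bookkeeping can be illustrated with toy values, where only the finite rows of $\mathbf{b}$ are kept in the constraint matrix:

```julia
# Toy example of the i_b mask: ±Inf bounds are dropped from A Z̃ ≤ b.
b   = [-0.5, Inf, 1.0, Inf]    # stacked RHS, Inf means "no constraint"
i_b = isfinite.(b)             # BitVector: true for the rows to keep
A   = [1.0 0.0; -1.0 0.0; 0.0 1.0; 0.0 -1.0]
A[i_b, :]                      # keeps rows 1 and 3 only
```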
Init i_b without output & terminal constraints if NonLinModel and SingleShooting.
Init i_b without output constraints if NonLinModel and other TranscriptionMethod.
ModelPredictiveControl.get_nonlinobj_op — Method

    get_nonlinobj_op(mpc::NonLinMPC, optim::JuMP.GenericModel{JNT}) -> J_op

Return the nonlinear operator for the objective of mpc NonLinMPC.

It is based on the splatting syntax. This method is really intricate because:

- These functions are used inside the nonlinear optimization, so they must be type-stable and as efficient as possible. All the function outputs and derivatives are cached and updated in-place if required, to use the efficient value_and_gradient!.
- The splatting syntax for objective functions implies the use of Vararg{T,N} (see the performance tip) and memoization to avoid redundant computations. This is already complex, but it's even worse knowing that the automatic differentiation tools do not support splatting.
- The signature of the gradient and hessian functions is not the same for univariate (nZ̃ == 1) and multivariate (nZ̃ > 1) operators in JuMP. Both must be defined.
ModelPredictiveControl.get_nonlincon_oracle — Method

    get_nonlincon_oracle(mpc::NonLinMPC, optim) -> g_oracle, geq_oracle

Return the nonlinear constraint oracles for the NonLinMPC mpc.

Return g_oracle and geq_oracle, the inequality and equality VectorNonlinearOracle for the two respective constraints. Note that g_oracle only includes the non-Inf inequality constraints, thus it must be re-constructed if they change. This method is really intricate because the oracles are used inside the nonlinear optimization, so they must be type-stable and as efficient as possible. All the function outputs and derivatives are cached and updated in-place if required, to use the efficient value_and_jacobian!.
Update Quadratic Optimization
ModelPredictiveControl.initpred! — Method

    initpred!(mpc::PredictiveController, model::LinModel, d, lastu, D̂, R̂y, R̂u) -> nothing

Init the linear model prediction matrices F, q̃, r and the current estimated output ŷ.
See init_predmat and init_quadprog for the definition of the matrices. They are computed with these equations using in-place operations:
\[\begin{aligned} \mathbf{F} &= \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ \mathbf{C_y} &= \mathbf{F} + \mathbf{Y_{op}} - \mathbf{R̂_y} \\ \mathbf{C_u} &= \mathbf{T_u}\mathbf{u}(k-1) - \mathbf{R̂_u} \\ \mathbf{q̃} &= 2[ (\mathbf{M}_{H_p} \mathbf{Ẽ})' \mathbf{C_y} + (\mathbf{L}_{H_p} \mathbf{P̃_u})' \mathbf{C_u} ] \\ r &= \mathbf{C_y'} \mathbf{M}_{H_p} \mathbf{C_y} + \mathbf{C_u'} \mathbf{L}_{H_p} \mathbf{C_u} \end{aligned}\]
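A sketch of the in-place $\mathbf{F}$ update with the 5-argument mul! (hypothetical function and buffer names):

```julia
using LinearAlgebra

# Sketch of the in-place F update above (hypothetical names).
function update_F!(F, G, J, K, V, B, Ŷs, d0, D̂0, x̂0, u0)
    F .= B .+ Ŷs            # constant terms first
    mul!(F, G, d0, 1, 1)    # F += G*d0(k)
    mul!(F, J, D̂0, 1, 1)    # F += J*D̂0
    mul!(F, K, x̂0, 1, 1)    # F += K*x̂0(k)
    mul!(F, V, u0, 1, 1)    # F += V*u0(k-1)
    return F
end
```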
ModelPredictiveControl.linconstraint! — Method

    linconstraint!(mpc::PredictiveController, model::LinModel)

Set the b vector for the linear model inequality constraints ($\mathbf{A Z̃ ≤ b}$).
Also init $\mathbf{f_x̂} = \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u_0}(k-1) + \mathbf{b_x̂}$ vector for the terminal constraints, see init_predmat.
ModelPredictiveControl.linconstrainteq! — Function

    linconstrainteq!(
        mpc::PredictiveController, model::LinModel, transcription::MultipleShooting
    )

Set the beq vector for the linear model equality constraints ($\mathbf{A_{eq} Z̃ = b_{eq}}$).
Also init $\mathbf{F_ŝ} = \mathbf{G_ŝ d_0}(k) + \mathbf{J_ŝ D̂_0} + \mathbf{K_ŝ x̂_0}(k) + \mathbf{V_ŝ u_0}(k-1) + \mathbf{B_ŝ}$, see init_defectmat.
Solve Optimization Problem
ModelPredictiveControl.optim_objective! — Method

    optim_objective!(mpc::PredictiveController) -> Z̃

Optimize the objective function of mpc PredictiveController and return the solution Z̃.

It first warm-starts the solver with set_warmstart!. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value. A failed optimization also prints getinfo results in the debug log if activated.
ModelPredictiveControl.set_warmstart! — Function

    set_warmstart!(mpc::PredictiveController, ::SingleShooting, Z̃var) -> Z̃s

Set and return the warm-start value of Z̃var for SingleShooting transcription.
If supported by mpc.optim, it warm-starts the solver at:
\[\mathbf{Z̃_s} = \begin{bmatrix} \mathbf{Δu}(k+0|k-1) \\ \mathbf{Δu}(k+1|k-1) \\ \vdots \\ \mathbf{Δu}(k+H_c-2|k-1) \\ \mathbf{0} \\ ϵ(k-1) \end{bmatrix}\]
where $\mathbf{Δu}(k+j|k-1)$ is the input increment for time $k+j$ computed at the last control period $k-1$, and $ϵ(k-1)$, the slack variable of the last control period.
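A sketch of this one-period shift (hypothetical helper; assumes nϵ == 1 so the previous slack is appended):

```julia
# Sketch of the SingleShooting warm start: shift the last ΔU solution by
# one period, pad with zeros, and append the last slack value.
function warmstart_sketch(ΔUlast::Vector, nu::Int, ϵlast::Real)
    return [ΔUlast[nu+1:end]; zeros(nu); ϵlast]  # drops Δu(k-1|k-1)
end
```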
    set_warmstart!(mpc::PredictiveController, ::TranscriptionMethod, Z̃var) -> Z̃s

Set and return the warm-start value of Z̃var for any other TranscriptionMethod.
It warm-starts the solver at:
\[\mathbf{Z̃_s} = \begin{bmatrix} \mathbf{Δu}(k+0|k-1) \\ \mathbf{Δu}(k+1|k-1) \\ \vdots \\ \mathbf{Δu}(k+H_c-2|k-1) \\ \mathbf{0} \\ \mathbf{x̂_0}(k+1|k-1) \\ \mathbf{x̂_0}(k+2|k-1) \\ \vdots \\ \mathbf{x̂_0}(k+H_p-1|k-1) \\ \mathbf{x̂_0}(k+H_p-1|k-1) \\ ϵ(k-1) \end{bmatrix}\]
where $\mathbf{x̂_0}(k+j|k-1)$ is the predicted state for time $k+j$ computed at the last control period $k-1$, expressed as a deviation from the operating point $\mathbf{x̂_{op}}$.
ModelPredictiveControl.predict! — Function

    predict!(
        Ŷ0, x̂0end, _ , _ , _ ,
        mpc::PredictiveController, model::LinModel, transcription::TranscriptionMethod,
        _ , Z̃
    ) -> Ŷ0, x̂0end

Compute the predictions Ŷ0 and terminal states x̂0end if model is a LinModel.

The method mutates the Ŷ0 and x̂0end vector arguments. The x̂0end vector is used for the terminal constraints applied on $\mathbf{x̂_0}(k+H_p)$. The computations are identical for any TranscriptionMethod if the model is linear:
\[\begin{aligned} \mathbf{Ŷ_0} &= \mathbf{Ẽ Z̃} + \mathbf{F} \\ \mathbf{x̂_0}(k+H_p) &= \mathbf{ẽ_x̂ Z̃} + \mathbf{f_x̂} \end{aligned}\]
    predict!(
        Ŷ0, x̂0end, X̂0, Û0, K0,
        mpc::PredictiveController, model::NonLinModel, transcription::SingleShooting,
        U0, _
    ) -> Ŷ0, x̂0end

Compute the vectors if model is a NonLinModel, for SingleShooting.

The method mutates the Ŷ0, x̂0end, X̂0, Û0 and K0 arguments. The augmented model functions f̂! and ĥ! are called recursively in a for loop:
\[\begin{aligned} \mathbf{x̂_0}(k+1) &= \mathbf{f̂}\Big(\mathbf{x̂_0}(k), \mathbf{u_0}(k), \mathbf{d̂_0}(k) \Big) \\ \mathbf{ŷ_0}(k) &= \mathbf{ĥ}\Big(\mathbf{x̂_0}(k), \mathbf{d̂_0}(k) \Big) \end{aligned}\]
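A functional sketch of this rollout (hypothetical out-of-place API; the package instead mutates the preallocated X̂0, Û0 and K0 buffers with f̂! and ĥ!). Here u0seq[j] holds $\mathbf{u_0}(k+j-1)$ and d̂0seq[j] holds $\mathbf{d̂_0}(k+j-1)$:

```julia
# Sketch of the SingleShooting recursion above (hypothetical API).
function rollout(x̂0, u0seq, d̂0seq, f̂, ĥ, Hp)
    Ŷ0 = similar(x̂0, 0)
    for j in 1:Hp
        x̂0 = f̂(x̂0, u0seq[j], d̂0seq[j])   # x̂0(k+j) from x̂0(k+j-1)
        Ŷ0 = [Ŷ0; ĥ(x̂0, d̂0seq[j+1])]     # ŷ0(k+j) uses d̂0(k+j)
    end
    return Ŷ0, x̂0
end

# Tiny usage with a toy scalar model:
f̂(x, u, d) = 0.9x .+ u .+ d
ĥ(x, d)    = x .+ d
Ŷ0, x̂0end = rollout([0.0], fill([0.1], 3), fill([0.0], 4), f̂, ĥ, 3)
```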
    predict!(
        Ŷ0, x̂0end, _ , _ , _ ,
        mpc::PredictiveController, model::NonLinModel, transcription::TranscriptionMethod,
        _ , Z̃
    ) -> Ŷ0, x̂0end

Compute the vectors if model is a NonLinModel, for any other TranscriptionMethod.
The method mutates Ŷ0 and x̂0end arguments. The augmented output function ĥ! is called multiple times in a for loop:
\[\mathbf{ŷ_0}(k) = \mathbf{ĥ}\Big(\mathbf{x̂_0}(k), \mathbf{d̂_0}(k) \Big)\]
in which $\mathbf{x̂_0}$ is the augmented state extracted from the decision variable Z̃.
Compute the predictions but not the terminal states if mpc is an ExplicitMPC.
ModelPredictiveControl.con_nonlinprog! — Function

    con_nonlinprog!(
        g, mpc::PredictiveController, model::LinModel, ::TranscriptionMethod, _ , _ , gc, ϵ
    ) -> g

Nonlinear constraints when model is a LinModel.

The method mutates the g vector in argument and returns it. Only the custom constraints gc are included in the g vector.
    con_nonlinprog!(
        g, mpc::PredictiveController, model::NonLinModel, ::TranscriptionMethod, x̂0end, Ŷ0, gc, ϵ
    ) -> g

Nonlinear constraints when model is a NonLinModel with a non-SingleShooting transcription.

The method mutates the g vector in argument and returns it. The output predictions and the custom constraints are included in the g vector.
    con_nonlinprog!(
        g, mpc::PredictiveController, model::NonLinModel, ::SingleShooting, x̂0end, Ŷ0, gc, ϵ
    ) -> g

Nonlinear constraints when model is a NonLinModel with SingleShooting.

The method mutates the g vector in argument and returns it. The output predictions, the terminal state and the custom constraints are included in the g vector.
ModelPredictiveControl.con_nonlinprogeq! — Function

    con_nonlinprogeq!(
        geq, X̂0, Û0, K0,
        mpc::PredictiveController, model::NonLinModel, transcription::MultipleShooting,
        U0, Z̃
    )

Nonlinear equality constraints for NonLinModel and MultipleShooting.

The method mutates the geq, X̂0, Û0 and K0 vectors in argument. The nonlinear equality constraints geq only include the augmented state defects, computed with:
\[\mathbf{ŝ}(k+1) = \mathbf{f̂}\Big(\mathbf{x̂_0}(k), \mathbf{u_0}(k), \mathbf{d̂_0}(k)\Big) - \mathbf{x̂_0}(k+1)\]
in which the augmented states $\mathbf{x̂_0}$ are extracted from the decision variables Z̃, and $\mathbf{f̂}$ is the augmented state function defined in f̂!.
    con_nonlinprogeq!(
        geq, X̂0, Û0, K0,
        mpc::PredictiveController, model::NonLinModel, transcription::TrapezoidalCollocation,
        U0, Z̃
    )

Nonlinear equality constraints for NonLinModel and TrapezoidalCollocation.

The method mutates the geq, X̂0, Û0 and K0 vectors in argument. The nonlinear equality constraints geq only include the state defects. The deterministic and stochastic states are handled separately since collocation methods require continuous-time state-space models, and the stochastic model of the unmeasured disturbances is discrete-time. The deterministic and stochastic defects are respectively computed with:
\[\begin{aligned} \mathbf{s_d}(k+1) &= \mathbf{x_0}(k) - \mathbf{x_0}(k+1) + 0.5 T_s (\mathbf{k}_1 + \mathbf{k}_2) \\ \mathbf{s_s}(k+1) &= \mathbf{A_s x_s}(k) - \mathbf{x_s}(k+1) \end{aligned}\]
in which $\mathbf{x_0}$ and $\mathbf{x_s}$ are the deterministic and stochastic states extracted from the decision variables Z̃. The $\mathbf{k}$ coefficients are evaluated from the continuous-time function model.f! and:
\[\begin{aligned} \mathbf{k}_1 &= \mathbf{f}\Big(\mathbf{x_0}(k), \mathbf{û_0}(k), \mathbf{d̂_0}(k) \Big) \\ \mathbf{k}_2 &= \mathbf{f}\Big(\mathbf{x_0}(k+1), \mathbf{û_0}(k+h), \mathbf{d̂_0}(k+1)\Big) \end{aligned}\]
in which $h$ is the hold order (transcription.h) and the disturbed input is:
\[\mathbf{û_0}(k) = \mathbf{u_0}(k) + \mathbf{C_{s_u} x_s}(k)\]
The $\mathbf{A_s, C_{s_u}}$ matrices are defined in the init_estimstoch documentation.
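A sketch of one deterministic trapezoidal defect (hypothetical names; assumes an out-of-place form of the continuous-time model.f!, with Ts the sample time):

```julia
# Sketch of one trapezoidal defect above (hypothetical names).
function trapezoidal_defect(f, x0, x0next, û0, û0h, d̂0, d̂0next, Ts)
    k1 = f(x0,     û0,  d̂0)       # slope at the interval start
    k2 = f(x0next, û0h, d̂0next)   # slope at the interval end, hold order h
    return x0 - x0next + 0.5*Ts*(k1 + k2)
end
```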
No equality constraints for the other cases, e.g. SingleShooting: the method returns geq unchanged.
ModelPredictiveControl.getinput! — Function

    getinput!(mpc::PredictiveController, Z̃) -> u

Get the current manipulated input u from the solution Z̃, store it and return it.
The first manipulated input $\mathbf{u}(k)$ is extracted from the decision vector $\mathbf{Z̃}$ and applied on the plant (from the receding horizon principle). It also stores u - mpc.estim.model.uop at mpc.lastu0.
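A sketch of this extraction (hypothetical field names, following the stored u - uop convention described above):

```julia
# Sketch of the receding-horizon input extraction (hypothetical names).
function getinput_sketch(lastu0, uop, Z̃, nu)
    Δu = Z̃[1:nu]               # first input increment Δu(k)
    u  = uop .+ lastu0 .+ Δu   # u(k) = u(k-1) + Δu(k), with u(k-1) = uop + lastu0
    lastu0 .= u .- uop         # store u - uop for the next control period
    return u
end
```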
- [1] Maciejowski, J. 2002, "Predictive Control: with Constraints", 1st ed., Prentice Hall, ISBN 978-0201398236.
- [2] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, https://doi.org/10.1049/ip-cta:20000443, ISSN 1350-2379.