Functions: PredictiveController Internals
The prediction methodology of this module is mainly based on the textbook of Maciejowski [1].
Controller Construction
ModelPredictiveControl.init_ZtoΔU
— Function
init_ZtoΔU(estim::StateEstimator, transcription::TranscriptionMethod, Hp, Hc) -> PΔu
Init decision variables to input increments over $H_c$ conversion matrix PΔu.
The conversion from the decision variables $\mathbf{Z}$ to $\mathbf{ΔU}$, the input increments over $H_c$, is computed by:
\[\mathbf{ΔU} = \mathbf{P_{Δu}} \mathbf{Z}\]
in which $\mathbf{P_{Δu}}$ is defined in the Extended Help section.
Extended Help
Following the decision variable definition of the TranscriptionMethod, the conversion matrix $\mathbf{P_{Δu}}$ is:
- $\mathbf{P_{Δu}} = \mathbf{I}$ if transcription is a SingleShooting
- $\mathbf{P_{Δu}} = [\begin{smallmatrix}\mathbf{I} & \mathbf{0} \end{smallmatrix}]$ if transcription is a MultipleShooting
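As a minimal sketch of this structure (plain Python lists, not the package implementation; the argument names `nu`, `nx`, and the `multiple_shooting` flag are assumptions for illustration):

```python
# Hedged sketch of P_Δu: nu = number of inputs, nx = number of augmented
# states, Hp/Hc = prediction/control horizons (names assumed here).

def eye(n):
    """Identity matrix as nested lists."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def zeros(m, n):
    return [[0.0] * n for _ in range(m)]

def init_ZtoDeltaU(nu, nx, Hp, Hc, multiple_shooting=False):
    I = eye(nu * Hc)
    if multiple_shooting:
        # Z = [ΔU; X̂0] also stacks Hp predicted states, so P_Δu = [I 0]
        return [row + z for row, z in zip(I, zeros(nu * Hc, nx * Hp))]
    # SingleShooting: Z = ΔU, so P_Δu = I
    return I
```

For MultipleShooting, the zero block simply discards the state part of $\mathbf{Z}$ when extracting $\mathbf{ΔU}$.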
ModelPredictiveControl.init_ZtoU
— Function
init_ZtoU(estim, transcription, Hp, Hc) -> Pu, Tu
Init decision variables to inputs over $H_p$ conversion matrices.
The conversion from the decision variables $\mathbf{Z}$ to $\mathbf{U}$, the manipulated inputs over $H_p$, is computed by:
\[\mathbf{U} = \mathbf{P_u} \mathbf{Z} + \mathbf{T_u} \mathbf{u}(k-1)\]
The $\mathbf{P_u}$ and $\mathbf{T_u}$ matrices are defined in the Extended Help section.
Extended Help
The $\mathbf{U}$ vector and the conversion matrices are defined as:
\[\mathbf{U} = \begin{bmatrix} \mathbf{u}(k + 0) \\ \mathbf{u}(k + 1) \\ \vdots \\ \mathbf{u}(k + H_c - 1) \\ \vdots \\ \mathbf{u}(k + H_p - 1) \end{bmatrix} , \quad \mathbf{P_u^†} = \begin{bmatrix} \mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{I} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{I} \end{bmatrix} , \quad \mathbf{T_u} = \begin{bmatrix} \mathbf{I} \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \end{bmatrix}\]
and, depending on the transcription method, we have:
- $\mathbf{P_u} = \mathbf{P_u^†}$ if transcription is a SingleShooting
- $\mathbf{P_u} = [\begin{smallmatrix}\mathbf{P_u^†} & \mathbf{0} \end{smallmatrix}]$ if transcription is a MultipleShooting
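The block structure of $\mathbf{P_u^†}$ and $\mathbf{T_u}$ can be sketched as follows (plain Python lists; the names are assumptions for illustration, not the package API):

```python
# Hedged sketch of P_u† and T_u (nu, Hp, Hc are assumed names).

def init_ZtoU(nu, Hp, Hc):
    """Build P_u† (lower block-triangular of identities, columns clamped at
    Hc) and T_u (Hp stacked identities), as nested lists."""
    Pu = [[0.0] * (nu * Hc) for _ in range(nu * Hp)]
    for i in range(Hp):                  # block row i: u(k+i) = u(k-1) + Σ Δu
        for j in range(min(i + 1, Hc)):  # increments up to min(i, Hc-1)
            for d in range(nu):
                Pu[i * nu + d][j * nu + d] = 1.0
    Tu = [[1.0 if (r % nu) == c else 0.0 for c in range(nu)]
          for r in range(nu * Hp)]
    return Pu, Tu
```

Past block row $H_c - 1$, the rows repeat, reflecting that the input is held constant after the control horizon.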
ModelPredictiveControl.init_predmat
— Function
init_predmat(
    model::LinModel, estim, transcription::SingleShooting, Hp, Hc
) -> E, G, J, K, V, B, ex̂, gx̂, jx̂, kx̂, vx̂, bx̂
Construct the prediction matrices for LinModel and SingleShooting.
The model predictions are evaluated from the deviation vectors (see setop!), the decision variable $\mathbf{Z}$ (see TranscriptionMethod), and:
\[\begin{aligned} \mathbf{Ŷ_0} &= \mathbf{E Z} + \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ &= \mathbf{E Z} + \mathbf{F} \end{aligned}\]
in which $\mathbf{x̂_0}(k) = \mathbf{x̂}_i(k) - \mathbf{x̂_{op}}$, with $i = k$ if estim.direct==true, otherwise $i = k - 1$. The predicted outputs $\mathbf{Ŷ_0}$ and measured disturbances $\mathbf{D̂_0}$ respectively include the $\mathbf{ŷ_0}(k+j)$ and $\mathbf{d̂_0}(k+j)$ values for $j=1$ to $H_p$, and the input increments $\mathbf{ΔU}$ include the $\mathbf{Δu}(k+j)$ values for $j=0$ to $H_c-1$. The vector $\mathbf{B}$ contains the contribution of non-zero state $\mathbf{x̂_{op}}$ and state update $\mathbf{f̂_{op}}$ operating points (for linearization at a non-equilibrium point, see linearize). The stochastic predictions $\mathbf{Ŷ_s=0}$ if estim is not an InternalModel, see init_stochpred.
. The method also computes similar matrices for the predicted terminal state at $k+H_p$:
\[\begin{aligned} \mathbf{x̂_0}(k+H_p) &= \mathbf{e_x̂ Z} + \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u_0}(k-1) + \mathbf{b_x̂} \\ &= \mathbf{e_x̂ Z} + \mathbf{f_x̂} \end{aligned}\]
The matrices $\mathbf{E, G, J, K, V, B, e_x̂, g_x̂, j_x̂, k_x̂, v_x̂, b_x̂}$ are defined in the Extended Help section. The $\mathbf{F}$ and $\mathbf{f_x̂}$ vectors are recalculated at each control period $k$, see initpred! and linconstraint!.
Extended Help
Using the augmented matrices $\mathbf{Â, B̂_u, Ĉ, B̂_d, D̂_d}$ in estim (see augment_model), and the function $\mathbf{W}(j) = ∑_{i=0}^j \mathbf{Â}^i$, the prediction matrices are computed by:
\[\begin{aligned} \mathbf{E} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} & \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} & \mathbf{Ĉ W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{Ĉ W}(H_p-H_c+1)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{G} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} \\ \mathbf{Ĉ}\mathbf{Â}^{1} \mathbf{B̂_d} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-1} \mathbf{B̂_d} \end{bmatrix} \\ \mathbf{J} &= \begin{bmatrix} \mathbf{D̂_d} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} & \mathbf{D̂_d} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Ĉ}\mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{D̂_d} \end{bmatrix} \\ \mathbf{K} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{1} \\ \mathbf{Ĉ}\mathbf{Â}^{2} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p} \end{bmatrix} \\ \mathbf{V} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} \\ \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{B} &= \begin{bmatrix} \mathbf{Ĉ W}(0) \\ \mathbf{Ĉ W}(1) \\ \vdots \\ \mathbf{Ĉ W}(H_p-1) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
For the terminal constraints, the matrices are computed with:
\[\begin{aligned} \mathbf{e_x̂} &= \begin{bmatrix} \mathbf{W}(H_p-1)\mathbf{B̂_u} & \mathbf{W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{W}(H_p-H_c+1)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{g_x̂} &= \mathbf{Â}^{H_p-1} \mathbf{B̂_d} \\ \mathbf{j_x̂} &= \begin{bmatrix} \mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{0} \end{bmatrix} \\ \mathbf{k_x̂} &= \mathbf{Â}^{H_p} \\ \mathbf{v_x̂} &= \mathbf{W}(H_p-1)\mathbf{B̂_u} \\ \mathbf{b_x̂} &= \mathbf{W}(H_p-1) \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
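To make the recursion concrete, here is a minimal scalar sketch (assuming toy scalars `a`, `c`, `bu` standing in for $\mathbf{Â}$, $\mathbf{Ĉ}$, $\mathbf{B̂_u}$; not the package implementation) of the $\mathbf{W}(j)$, $\mathbf{K}$ and $\mathbf{V}$ definitions above:

```python
# Scalar sketch (nx̂ = ny = nu = 1) of the K and V prediction matrices;
# a, c, bu are assumed toy values, not the package API.

def W(a, j):
    """W(j) = Σ_{i=0}^{j} a^i."""
    return sum(a ** i for i in range(j + 1))

def predmat_K(a, c, Hp):
    """K = [c·a^1; c·a^2; …; c·a^Hp]."""
    return [c * a ** j for j in range(1, Hp + 1)]

def predmat_V(a, c, bu, Hp):
    """V = [c·W(0)·bu; c·W(1)·bu; …; c·W(Hp-1)·bu]."""
    return [c * W(a, j) * bu for j in range(Hp)]
```

The cumulative sum $\mathbf{W}(j)$ appears because $\mathbf{u}(k-1)$ and each input increment are held over all subsequent steps of the horizon.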
init_predmat(
model::LinModel, estim, transcription::MultipleShooting, Hp, Hc
) -> E, G, J, K, V, B, ex̂, gx̂, jx̂, kx̂, vx̂, bx̂
Construct the prediction matrices for LinModel and MultipleShooting.
They are defined in the Extended Help section.
Extended Help
They are all appropriately sized zero matrices $\mathbf{0}$, except for:
\[\begin{aligned} \mathbf{E} &= [\begin{smallmatrix}\mathbf{0} & \mathbf{E^†} \end{smallmatrix}] \\ \mathbf{E^†} &= \text{diag}\mathbf{(Ĉ,Ĉ,...,Ĉ)} \\ \mathbf{J} &= \text{diag}\mathbf{(D̂_d,D̂_d,...,D̂_d)} \\ \mathbf{e_x̂} &= [\begin{smallmatrix}\mathbf{0} & \mathbf{I}\end{smallmatrix}] \end{aligned}\]
init_predmat(model::SimModel, estim, transcription::MultipleShooting, Hp, Hc)
Return the terminal state matrices for SimModel and MultipleShooting.
The output prediction matrices are all empty matrices. The terminal state matrices are given in the Extended Help section.
Extended Help
The terminal state matrices are all appropriately sized zero matrices $\mathbf{0}$, except for $\mathbf{e_x̂} = [\begin{smallmatrix}\mathbf{0} & \mathbf{I}\end{smallmatrix}]$.
init_predmat(model::SimModel, estim, transcription::TranscriptionMethod, Hp, Hc)
Return empty matrices for all other cases.
ModelPredictiveControl.init_defectmat
— Function
init_defectmat(model::LinModel, estim, transcription::MultipleShooting, Hp, Hc)
Init the matrices for computing the defects over the predicted states.
An equation similar to the prediction matrices (see init_predmat) computes the defects over the predicted states:
\[\begin{aligned} \mathbf{Ŝ} &= \mathbf{E_ŝ Z} + \mathbf{G_ŝ d_0}(k) + \mathbf{J_ŝ D̂_0} + \mathbf{K_ŝ x̂_0}(k) + \mathbf{V_ŝ u_0}(k-1) + \mathbf{B_ŝ} \\ &= \mathbf{E_ŝ Z} + \mathbf{F_ŝ} \end{aligned}\]
They are forced to be $\mathbf{Ŝ = 0}$ using the optimization equality constraints. The matrices $\mathbf{E_ŝ, G_ŝ, J_ŝ, K_ŝ, V_ŝ, B_ŝ}$ are defined in the Extended Help section.
Extended Help
The matrices are computed by:
\[\begin{aligned} \mathbf{E_ŝ} &= \begin{bmatrix} \mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} & -\mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{B̂_u} & \mathbf{B̂_u} & \cdots & \mathbf{0} & \mathbf{Â} & -\mathbf{I} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \mathbf{B̂_u} & \mathbf{B̂_u} & \cdots & \mathbf{B̂_u} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{Â} & -\mathbf{I} \end{bmatrix} \\ \mathbf{G_ŝ} &= \begin{bmatrix} \mathbf{B̂_d} \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} \\ \mathbf{J_ŝ} &= \begin{bmatrix} \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \mathbf{B̂_d} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{B̂_d} & \mathbf{0} \end{bmatrix} \\ \mathbf{K_ŝ} &= \begin{bmatrix} \mathbf{Â} \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{bmatrix} \\ \mathbf{V_ŝ} &= \begin{bmatrix} \mathbf{B̂_u} \\ \mathbf{B̂_u} \\ \vdots \\ \mathbf{B̂_u} \end{bmatrix} \\ \mathbf{B_ŝ} &= \begin{bmatrix} \mathbf{f̂_{op} - x̂_{op}} \\ \mathbf{f̂_{op} - x̂_{op}} \\ \vdots \\ \mathbf{f̂_{op} - x̂_{op}} \end{bmatrix} \end{aligned}\]
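The key property is that the defects vanish exactly when the decision-variable state trajectory is dynamically consistent. A scalar sketch of this check (assuming toy names `a`, `bu` for $\mathbf{Â}$, $\mathbf{B̂_u}$; not the package implementation):

```python
# Scalar sketch of the defect equation Ŝ = 0 enforced by multiple shooting;
# a, bu play the role of Â and B̂_u (toy names, not the package API).

def defects(a, bu, x0, x_traj, u_traj):
    """s_j = a·x_{j-1} + bu·u_{j-1} - x_j for j = 1..Hp, stacked in a list."""
    xs = [x0] + x_traj             # prepend the current state estimate
    return [a * xs[j] + bu * u_traj[j] - xs[j + 1]
            for j in range(len(x_traj))]
```

Forcing all entries of this vector to zero via equality constraints is what couples the otherwise independent state guesses of the transcription.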
init_defectmat(model::SimModel, estim, transcription::TranscriptionMethod, Hp, Hc)
Return empty matrices for all other cases (not applicable).
ModelPredictiveControl.relaxU
— Function
relaxU(Pu, C_umin, C_umax, nϵ) -> A_Umin, A_Umax, P̃u
Augment manipulated inputs constraints with slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{P̃_u}$, similar to the one described at init_ZtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{U_{min} + T_u u}(k-1) \\ + \mathbf{U_{max} - T_u u}(k-1) \end{bmatrix}\]
in which the $\mathbf{U_{min}}$ and $\mathbf{U_{max}}$ vectors respectively contain $\mathbf{u_{min}}$ and $\mathbf{u_{max}}$ repeated $H_p$ times.
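A hedged sketch of this augmentation, with the sign convention inferred from the inequality above (plain Python lists; not the package source):

```python
# Hedged sketch of relaxU: P̃u gains a zero column for ϵ, and the softening
# weights c_umin/c_umax enter the last column of A with a negative sign.

def relaxU(Pu, c_umin, c_umax):
    Ptu = [row + [0.0] for row in Pu]   # P̃u = [Pu 0]: ϵ does not affect U
    # U ≥ Umin - ϵ·c_umin  →  -Pu·Z - c_umin·ϵ ≤ -Umin + Tu·u(k-1)
    A_Umin = [[-x for x in row] + [-c] for row, c in zip(Pu, c_umin)]
    # U ≤ Umax + ϵ·c_umax  →  +Pu·Z - c_umax·ϵ ≤ +Umax - Tu·u(k-1)
    A_Umax = [row[:] + [-c] for row, c in zip(Pu, c_umax)]
    return A_Umin, A_Umax, Ptu
```

A larger softening weight in `c_umin`/`c_umax` lets the same slack value relax that bound more, which is how the relative softness of each constraint is tuned.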
ModelPredictiveControl.relaxΔU
— Function
relaxΔU(PΔu, C_Δumin, C_Δumax, ΔUmin, ΔUmax, nϵ) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax, P̃Δu
Augment input increments constraints with slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{P̃_{Δu}}$, similar to the one described at init_ZtoΔU, but extracting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}] = \mathbf{P̃_{Δu} Z̃}$. Also, knowing that $0 ≤ ϵ ≤ ∞$, it returns the augmented bounds $\mathbf{ΔŨ_{min}} = [\begin{smallmatrix} \mathbf{ΔU_{min}} \\ 0 \end{smallmatrix}]$ and $\mathbf{ΔŨ_{max}} = [\begin{smallmatrix} \mathbf{ΔU_{max}} \\ ∞ \end{smallmatrix}]$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ \mathbf{A_{ΔŨ_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} \end{bmatrix}\]
Note that strictly speaking, the lower bound on the slack variable ϵ is a decision variable bound, which is more precise than a linear inequality constraint. However, it is more convenient to treat it as a linear inequality constraint since the optimizer OSQP.jl does not support pure bounds on the decision variables.
ModelPredictiveControl.relaxŶ
— Function
relaxŶ(E, C_ymin, C_ymax, nϵ) -> A_Ymin, A_Ymax, Ẽ
Augment linear output prediction constraints with slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ Z̃ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Y_{min}}} \\ \mathbf{A_{Y_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(Y_{min} - Y_{op}) + F} \\ + \mathbf{(Y_{max} - Y_{op}) - F} \end{bmatrix}\]
in which the $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.
ModelPredictiveControl.relaxterminal
— Function
relaxterminal(ex̂, c_x̂min, c_x̂max, nϵ) -> A_x̂min, A_x̂max, ẽx̂
Augment terminal state constraints with slack variable ϵ for softening.
Denoting the decision variables augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} \mathbf{Z} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ Z̃ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̂_{min}}} \\ \mathbf{A_{x̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(x̂_{min} - x̂_{op}) + f_x̂} \\ + \mathbf{(x̂_{max} - x̂_{op}) - f_x̂} \end{bmatrix}\]
ModelPredictiveControl.init_quadprog
— Function
init_quadprog(model::LinModel, weights::ControllerWeights, Ẽ, P̃Δu, P̃u) -> H̃
Init the quadratic programming Hessian H̃ for MPC.
The matrix appears in the general quadratic form:
\[ J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + r \]
The Hessian matrix is constant if the model and weights are linear and time invariant (LTI):
\[ \mathbf{H̃} = 2 ( \mathbf{Ẽ'} \mathbf{M}_{H_p} \mathbf{Ẽ} + \mathbf{P̃_{Δu}'} \mathbf{Ñ}_{H_c} \mathbf{P̃_{Δu}} + \mathbf{P̃_{u}'} \mathbf{L}_{H_p} \mathbf{P̃_{u}} )\]
in which the $\mathbf{Ẽ}$, $\mathbf{P̃_{Δu}}$ and $\mathbf{P̃_{u}}$ matrices are defined in the relaxŶ, relaxΔU and relaxU documentation, respectively. The vector $\mathbf{q̃}$ and scalar $r$ need recalculation at each control period $k$, see initpred!. The constant $r$ does not impact the position of the minimum; it is thus not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.
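A tiny dense sketch of this Hessian assembly (plain Python lists; the function name and the already-expanded weight matrices `M`, `N`, `L` are assumptions for illustration, not the package API):

```python
# Dense sketch of H̃ = 2(Ẽ'MẼ + P̃Δu'ÑP̃Δu + P̃u'LP̃u); the weight matrices
# are assumed already expanded over the horizons.

def matmul(A, B):
    """Naive dense matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def init_quadprog(E, PdU, Pu, M, N, L):
    def quad(P, Wm):                       # P' · Wm · P
        Pt = [list(col) for col in zip(*P)]
        return matmul(matmul(Pt, Wm), P)
    terms = [quad(E, M), quad(PdU, N), quad(Pu, L)]
    n = len(terms[0])
    return [[2.0 * sum(T[i][j] for T in terms) for j in range(n)]
            for i in range(n)]
```

Since each term is a quadratic form with symmetric weights, $\mathbf{H̃}$ is symmetric, which quadratic programming solvers generally exploit.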
Return an empty matrix if model is not a LinModel.
ModelPredictiveControl.init_stochpred
— Function
init_stochpred(estim::InternalModel, Hp) -> Ks, Ps
Init the stochastic prediction matrices for InternalModel.
The Ks and Ps matrices are defined as:
\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]
The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured ones $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].
Return empty matrices if estim is not an InternalModel.
ModelPredictiveControl.init_matconstraint_mpc
— Function
init_matconstraint_mpc(
model::LinModel, transcription::TranscriptionMethod, nc::Int,
i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ymin, i_Ymax, i_x̂min, i_x̂max,
args...
) -> i_b, i_g, A, Aeq, neq
Init i_b, i_g, neq, and the A and Aeq matrices for all the MPC constraints.
The linear and nonlinear constraints are respectively defined as:
\[\begin{aligned} \mathbf{A Z̃ } &≤ \mathbf{b} \\ \mathbf{A_{eq} Z̃} &= \mathbf{b_{eq}} \\ \mathbf{g(Z̃)} &≤ \mathbf{0} \\ \mathbf{g_{eq}(Z̃)} &= \mathbf{0} \\ \end{aligned}\]
The argument nc is the number of custom nonlinear inequality constraints in $\mathbf{g_c}$. i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$. The method also returns the $\mathbf{A, A_{eq}}$ matrices and neq if args is provided. In such a case, args needs to contain all the inequality and equality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max, A_ŝ. The integer neq is the number of nonlinear equality constraints in $\mathbf{g_{eq}}$.
Init i_b without the output constraints if NonLinModel and not SingleShooting.
Init i_b without the output and terminal constraints if NonLinModel and SingleShooting.
ModelPredictiveControl.init_nonlincon!
— Function
init_nonlincon!(
mpc::PredictiveController, model::LinModel, transcription::TranscriptionMethod,
gfuncs, ∇gfuncs!,
geqfuncs, ∇geqfuncs!
)
Init nonlinear constraints for LinModel for all TranscriptionMethod.
The only nonlinear constraints are the custom inequality constraints gc.
init_nonlincon!(
mpc::PredictiveController, model::NonLinModel, transcription::MultipleShooting,
gfuncs, ∇gfuncs!,
geqfuncs, ∇geqfuncs!
)
Init nonlinear constraints for NonLinModel and MultipleShooting.
The nonlinear constraints are the output prediction Ŷ bounds, the custom inequality constraints gc and all the nonlinear equality constraints geq.
init_nonlincon!(
mpc::PredictiveController, model::NonLinModel, ::SingleShooting,
gfuncs, ∇gfuncs!,
geqfuncs, ∇geqfuncs!
)
Init nonlinear constraints for NonLinModel and SingleShooting.
The nonlinear constraints are the custom inequality constraints gc, the output prediction Ŷ bounds and the terminal state x̂end bounds.
Update Quadratic Optimization
ModelPredictiveControl.initpred!
— Method
initpred!(mpc::PredictiveController, model::LinModel, d, D̂, R̂y, R̂u) -> nothing
Init the linear model prediction matrices F, q̃, r and the current estimated output ŷ.
See init_predmat and init_quadprog for the definition of the matrices. They are computed with the following equations, using in-place operations:
\[\begin{aligned} \mathbf{F} &= \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ \mathbf{C_y} &= \mathbf{F} + \mathbf{Y_{op}} - \mathbf{R̂_y} \\ \mathbf{C_u} &= \mathbf{T_u}\mathbf{u}(k-1) - \mathbf{R̂_u} \\ \mathbf{q̃} &= 2[ (\mathbf{M}_{H_p} \mathbf{Ẽ})' \mathbf{C_y} + (\mathbf{L}_{H_p} \mathbf{P̃_u})' \mathbf{C_u} ] \\ r &= \mathbf{C_y'} \mathbf{M}_{H_p} \mathbf{C_y} + \mathbf{C_u'} \mathbf{L}_{H_p} \mathbf{C_u} \end{aligned}\]
ModelPredictiveControl.linconstraint!
— Method
linconstraint!(mpc::PredictiveController, model::LinModel)
Set the b vector for the linear model inequality constraints ($\mathbf{A Z̃ ≤ b}$).
Also init the $\mathbf{f_x̂} = \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u_0}(k-1) + \mathbf{b_x̂}$ vector for the terminal constraints, see init_predmat.
ModelPredictiveControl.linconstrainteq!
— Function
linconstrainteq!(
mpc::PredictiveController, model::LinModel, transcription::MultipleShooting
)
Set the beq vector for the linear model equality constraints ($\mathbf{A_{eq} Z̃ = b_{eq}}$).
Also init $\mathbf{F_ŝ} = \mathbf{G_ŝ d_0}(k) + \mathbf{J_ŝ D̂_0} + \mathbf{K_ŝ x̂_0}(k) + \mathbf{V_ŝ u_0}(k-1) + \mathbf{B_ŝ}$, see init_defectmat.
Solve Optimization Problem
ModelPredictiveControl.optim_objective!
— Method
optim_objective!(mpc::PredictiveController) -> Z̃
Optimize the objective function of the mpc PredictiveController and return the solution Z̃.
It first warm-starts the solver with set_warmstart!. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value; it also prints getinfo results in the debug log, if activated.
ModelPredictiveControl.set_warmstart!
— Function
set_warmstart!(mpc::PredictiveController, transcription::SingleShooting, Z̃var) -> Z̃0
Set and return the warm start value of Z̃var for SingleShooting transcription.
If supported by mpc.optim, it warm-starts the solver at:
\[\mathbf{ΔŨ} = \begin{bmatrix} \mathbf{Δu}(k+0|k-1) \\ \mathbf{Δu}(k+1|k-1) \\ \vdots \\ \mathbf{Δu}(k+H_c-2|k-1) \\ \mathbf{0} \\ ϵ(k-1) \end{bmatrix}\]
where $\mathbf{Δu}(k+j|k-1)$ is the input increment for time $k+j$ computed at the last control period $k-1$, and $ϵ(k-1)$, the slack variable of the last control period.
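The shift pattern above can be sketched with plain Python lists (the names are assumptions for illustration, not the package API):

```python
# Sketch of the SingleShooting warm start: shift last period's ΔU one block
# left, pad with a zero block, and append the previous slack value.

def warmstart_single_shooting(dU_prev, nu, eps_prev):
    shifted = dU_prev[nu:] + [0.0] * nu   # drop Δu(k-1|k-1), append 0
    return shifted + [eps_prev]           # Z̃0 = [ΔU_shifted; ϵ(k-1)]
```

Shifting the previous solution is a common warm-start heuristic in receding horizon control: if the last plan was near optimal, its tail is usually a good initial guess for the new problem.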
set_warmstart!(mpc::PredictiveController, transcription::MultipleShooting, Z̃var) -> Z̃0
Set and return the warm start value of Z̃var for MultipleShooting transcription.
It warm-starts the solver at:
\[\mathbf{Z̃} = \begin{bmatrix} \mathbf{Δu}(k+0|k-1) \\ \mathbf{Δu}(k+1|k-1) \\ \vdots \\ \mathbf{Δu}(k+H_c-2|k-1) \\ \mathbf{0} \\ \mathbf{x̂_0}(k+1|k-1) \\ \mathbf{x̂_0}(k+2|k-1) \\ \vdots \\ \mathbf{x̂_0}(k+H_p-1|k-1) \\ \mathbf{x̂_0}(k+H_p-1|k-1) \\ ϵ(k-1) \end{bmatrix}\]
where $\mathbf{x̂_0}(k+j|k-1)$ is the predicted state for time $k+j$ computed at the last control period $k-1$, expressed as a deviation from the operating point $\mathbf{x̂_{op}}$.
ModelPredictiveControl.getinput
— Function
getinput(mpc::PredictiveController, Z̃) -> u
Get the current manipulated input u from a PredictiveController solution Z̃.
The first manipulated input $\mathbf{u}(k)$ is extracted from the decision vector $\mathbf{Z̃}$ and applied to the plant, following the receding horizon principle.
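As a minimal sketch of this extraction (plain Python lists; names assumed for illustration, not the package API): only the first $n_u$ entries of $\mathbf{Z̃}$, the increment $\mathbf{Δu}(k)$, are used.

```python
# Receding horizon sketch: u(k) = u(k-1) + Δu(k), where Δu(k) is the first
# nu entries of the solution Z̃ (names assumed here).

def getinput(u_prev, Ztilde, nu):
    return [u_prev[i] + Ztilde[i] for i in range(nu)]
```

The rest of the solution is discarded; at the next period the whole optimization is solved again with fresh measurements.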
- [1] Maciejowski, J. 2000, "Predictive Control: with Constraints", 1st ed., Prentice Hall, ISBN 978-0201398236.
- [2] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, https://doi.org/10.1049/ip-cta:20000443, ISSN 1350-2379.