Functions: PredictiveController Internals
The prediction methodology of this module is mainly based on the textbook of Maciejowski [1].
Controller Construction
ModelPredictiveControl.init_ΔUtoU
— Function: init_ΔUtoU(model, Hp, Hc) -> S, T
Init the manipulated input increments to inputs conversion matrices.
The conversion from the input increments $\mathbf{ΔU}$ to the manipulated inputs over $H_p$ is calculated by:
\[\mathbf{U} = \mathbf{S} \mathbf{ΔU} + \mathbf{T} \mathbf{u}(k-1)\]
The $\mathbf{S}$ and $\mathbf{T}$ matrices are defined in the Extended Help section.
Extended Help
The $\mathbf{U}$ vector and the two conversion matrices are defined as:
\[\mathbf{U} = \begin{bmatrix} \mathbf{u}(k + 0) \\ \mathbf{u}(k + 1) \\ \vdots \\ \mathbf{u}(k + H_c - 1) \\ \vdots \\ \mathbf{u}(k + H_p - 1) \end{bmatrix} , \quad \mathbf{S} = \begin{bmatrix} \mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{I} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{I} & \mathbf{I} & \cdots & \mathbf{I} \end{bmatrix} , \quad \mathbf{T} = \begin{bmatrix} \mathbf{I} \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \\ \vdots \\ \mathbf{I} \end{bmatrix}\]
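As a quick numerical sketch (with hypothetical dimensions, not the package's internal code), the two matrices can be built with a Kronecker product and verified on a small example:

```julia
using LinearAlgebra

# Toy dimensions: nu inputs, prediction horizon Hp, control horizon Hc
nu, Hp, Hc = 2, 4, 2
# S: lower block-triangular matrix of identity blocks, truncated to Hc block columns
S = kron(tril(ones(Hp, Hp)), Matrix(1.0I, nu, nu))[:, 1:nu*Hc]
# T: Hp stacked identity blocks
T = repeat(Matrix(1.0I, nu, nu), Hp)
# U = S*ΔU + T*u(k-1): the input is held constant after the last increment Δu(k+Hc-1)
ΔU     = [1.0, 0.0, 0.5, -1.0]        # [Δu(k); Δu(k+1)], stacked
u_prev = [10.0, 20.0]
U = S*ΔU + T*u_prev
```

Note how every block row of U after the control horizon repeats $\mathbf{u}(k+H_c-1)$, since no increments are applied past $H_c$.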
ModelPredictiveControl.init_predmat
— Function: init_predmat(estim, model::LinModel, Hp, Hc) -> E, G, J, K, V, ex̂, gx̂, jx̂, kx̂, vx̂
Construct the prediction matrices for LinModel model.
The model predictions are evaluated from the deviation vectors (see setop!) and:
\[\begin{aligned} \mathbf{Ŷ_0} &= \mathbf{E ΔU} + \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ &= \mathbf{E ΔU} + \mathbf{F} \end{aligned}\]
in which $\mathbf{x̂_0}(k) = \mathbf{x̂}_i(k) - \mathbf{x̂_{op}}$, with $i = k$ if estim.direct==true, otherwise $i = k - 1$. The predicted outputs $\mathbf{Ŷ_0}$ and measured disturbances $\mathbf{D̂_0}$ respectively include the $\mathbf{ŷ_0}(k+j)$ and $\mathbf{d̂_0}(k+j)$ values with $j=1$ to $H_p$, and the input increments $\mathbf{ΔU}$, the $\mathbf{Δu}(k+j)$ values from $j=0$ to $H_c-1$. The vector $\mathbf{B}$ contains the contribution of non-zero state $\mathbf{x̂_{op}}$ and state update $\mathbf{f̂_{op}}$ operating points (for linearization at a non-equilibrium point, see linearize). The stochastic predictions $\mathbf{Ŷ_s=0}$ if estim is not an InternalModel, see init_stochpred. The method also computes similar matrices for the predicted terminal state at $k+H_p$:
\[\begin{aligned} \mathbf{x̂_0}(k+H_p) &= \mathbf{e_x̂ ΔU} + \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u_0}(k-1) + \mathbf{b_x̂} \\ &= \mathbf{e_x̂ ΔU} + \mathbf{f_x̂} \end{aligned}\]
The matrices $\mathbf{E, G, J, K, V, B, e_x̂, g_x̂, j_x̂, k_x̂, v_x̂, b_x̂}$ are defined in the Extended Help section. The $\mathbf{F}$ and $\mathbf{f_x̂}$ vectors are recalculated at each control period $k$, see initpred! and linconstraint!.
Extended Help
Using the augmented matrices $\mathbf{Â, B̂_u, Ĉ, B̂_d, D̂_d}$ in estim (see augment_model), and the function $\mathbf{W}(j) = ∑_{i=0}^j \mathbf{Â}^i$, the prediction matrices are computed by:
\[\begin{aligned} \mathbf{E} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} & \mathbf{Ĉ W}(0)\mathbf{B̂_u} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} & \mathbf{Ĉ W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{Ĉ W}(H_p-H_c)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{G} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} \\ \mathbf{Ĉ}\mathbf{Â}^{1} \mathbf{B̂_d} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-1} \mathbf{B̂_d} \end{bmatrix} \\ \mathbf{J} &= \begin{bmatrix} \mathbf{D̂_d} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ}\mathbf{Â}^{0} \mathbf{B̂_d} & \mathbf{D̂_d} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Ĉ}\mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{D̂_d} \end{bmatrix} \\ \mathbf{K} &= \begin{bmatrix} \mathbf{Ĉ}\mathbf{Â}^{1} \\ \mathbf{Ĉ}\mathbf{Â}^{2} \\ \vdots \\ \mathbf{Ĉ}\mathbf{Â}^{H_p} \end{bmatrix} \\ \mathbf{V} &= \begin{bmatrix} \mathbf{Ĉ W}(0)\mathbf{B̂_u} \\ \mathbf{Ĉ W}(1)\mathbf{B̂_u} \\ \vdots \\ \mathbf{Ĉ W}(H_p-1)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{B} &= \begin{bmatrix} \mathbf{Ĉ W}(0) \\ \mathbf{Ĉ W}(1) \\ \vdots \\ \mathbf{Ĉ W}(H_p-1) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
For the terminal constraints, the matrices are computed with:
\[\begin{aligned} \mathbf{e_x̂} &= \begin{bmatrix} \mathbf{W}(H_p-1)\mathbf{B̂_u} & \mathbf{W}(H_p-2)\mathbf{B̂_u} & \cdots & \mathbf{W}(H_p-H_c)\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{g_x̂} &= \mathbf{Â}^{H_p-1} \mathbf{B̂_d} \\ \mathbf{j_x̂} &= \begin{bmatrix} \mathbf{Â}^{H_p-2} \mathbf{B̂_d} & \mathbf{Â}^{H_p-3} \mathbf{B̂_d} & \cdots & \mathbf{0} \end{bmatrix} \\ \mathbf{k_x̂} &= \mathbf{Â}^{H_p} \\ \mathbf{v_x̂} &= \mathbf{W}(H_p-1)\mathbf{B̂_u} \\ \mathbf{b_x̂} &= \mathbf{W}(H_p-1) \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
Return empty matrices if model is not a LinModel.
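The recursions above can be checked numerically. The following is a minimal sketch on a hypothetical scalar model (all numbers are illustrative, not from the package), cross-validating the $\mathbf{E}$, $\mathbf{K}$ and $\mathbf{V}$ matrices against a step-by-step simulation:

```julia
using LinearAlgebra

# Hypothetical scalar augmented model (1 state, 1 input, 1 output), toy numbers:
Â, B̂u, Ĉ = fill(0.5, 1, 1), fill(1.0, 1, 1), fill(2.0, 1, 1)
Hp, Hc = 3, 2
W(j) = sum(Â^i for i in 0:j)                        # W(j) = ∑_{i=0}^j Â^i
E = reduce(vcat, [reduce(hcat, [i ≥ j ? Ĉ*W(i-j)*B̂u : zeros(1, 1) for j in 1:Hc]) for i in 1:Hp])
K = reduce(vcat, [Ĉ*Â^j for j in 1:Hp])
V = reduce(vcat, [Ĉ*W(j-1)*B̂u for j in 1:Hp])

# Step-by-step simulation of the same predictions, for cross-checking:
function simulate(Â, B̂u, Ĉ, ΔU, x̂0, u0, Hp, Hc)
    x, u, Ŷ = copy(x̂0), copy(u0), Float64[]
    for j in 1:Hp
        j ≤ Hc && (u = u + ΔU[j:j])                 # input held constant after Hc
        x = Â*x + B̂u*u
        append!(Ŷ, Ĉ*x)
    end
    return Ŷ
end
ΔU, x̂0, u0 = [1.0, -0.5], [0.2], [0.0]
Ŷ = E*ΔU + K*x̂0 + V*u0       # prediction equation, no disturbances or operating points
```

The matrix-form predictions and the simulated trajectory agree, which is a quick sanity check on the $\mathbf{W}(j)$ convolution structure.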
ModelPredictiveControl.relaxU
— Function: relaxU(model, nϵ, C_umin, C_umax, S) -> A_Umin, A_Umax, S̃
Augment manipulated inputs constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented conversion matrix $\mathbf{S̃}$, similar to the one described at init_ΔUtoU. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{U_{min}}} \\ \mathbf{A_{U_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{U_{min} + T} \mathbf{u}(k-1) \\ + \mathbf{U_{max} - T} \mathbf{u}(k-1) \end{bmatrix}\]
in which the $\mathbf{U_{min}}$ and $\mathbf{U_{max}}$ vectors respectively contain $\mathbf{u_{min}}$ and $\mathbf{u_{max}}$ repeated $H_p$ times.
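One way such an augmentation could look, as a hedged sketch (toy dimensions, and the $\mathbf{A}$-matrix layout below is derived from the inequality above, not copied from the package source):

```julia
using LinearAlgebra

# Toy dimensions and softening weights C_umin = C_umax = 1:
nu, Hp, Hc = 1, 3, 2
S = kron(tril(ones(Hp, Hp)), Matrix(1.0I, nu, nu))[:, 1:nu*Hc]
C_umin = C_umax = ones(nu*Hp)
A_Umin = [-S  -C_umin]        # encodes U_min - ϵ*C_umin ≤ S*ΔU + T*u(k-1)
A_Umax = [ S  -C_umax]        # encodes S*ΔU + T*u(k-1) ≤ U_max + ϵ*C_umax
S̃ = [S zeros(nu*Hp)]          # augmented conversion matrix: ϵ has no direct effect on U
# With ΔU = [2, 0] and u(k-1) = 0, U = [2, 2, 2] violates U_max = 1 by 1,
# but the softened constraint is satisfied with the slack ϵ = 1:
feasible = all(A_Umax*[2.0, 0.0, 1.0] .≤ ones(3))
```

The slack column carries a negative sign on both bounds, so increasing $ϵ$ widens the feasible region in both directions.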
ModelPredictiveControl.relaxΔU
— Function: relaxΔU(model, nϵ, C, C_Δumin, C_Δumax, ΔUmin, ΔUmax) -> A_ΔŨmin, A_ΔŨmax, ΔŨmin, ΔŨmax
Augment input increments constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the augmented constraints $\mathbf{ΔŨ_{min}}$ and $\mathbf{ΔŨ_{max}}$ and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{ΔŨ_{min}}} \\ \mathbf{A_{ΔŨ_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{ΔŨ_{min}} \\ + \mathbf{ΔŨ_{max}} \end{bmatrix}\]
ModelPredictiveControl.relaxŶ
— Function: relaxŶ(::LinModel, nϵ, C_ymin, C_ymax, E) -> A_Ymin, A_Ymax, Ẽ
Augment linear output prediction constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the linear model prediction equation $\mathbf{Ŷ_0 = Ẽ ΔŨ + F}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Y_{min}}} \\ \mathbf{A_{Y_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(Y_{min} - Y_{op}) + F} \\ + \mathbf{(Y_{max} - Y_{op}) - F} \end{bmatrix}\]
in which the $\mathbf{Y_{min}, Y_{max}}$ and $\mathbf{Y_{op}}$ vectors respectively contain $\mathbf{y_{min}, y_{max}}$ and $\mathbf{y_{op}}$ repeated $H_p$ times.
Return empty matrices if model is not a LinModel.
ModelPredictiveControl.relaxterminal
— Function: relaxterminal(::LinModel, nϵ, c_x̂min, c_x̂max, ex̂) -> A_x̂min, A_x̂max, ẽx̂
Augment terminal state constraints with slack variable ϵ for softening.
Denoting the input increments augmented with the slack variable $\mathbf{ΔŨ} = [\begin{smallmatrix} \mathbf{ΔU} \\ ϵ \end{smallmatrix}]$, it returns the $\mathbf{ẽ_{x̂}}$ matrix that appears in the terminal state equation $\mathbf{x̂_0}(k + H_p) = \mathbf{ẽ_x̂ ΔŨ + f_x̂}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̂_{min}}} \\ \mathbf{A_{x̂_{max}}} \end{bmatrix} \mathbf{ΔŨ} ≤ \begin{bmatrix} - \mathbf{(x̂_{min} - x̂_{op}) + f_x̂} \\ + \mathbf{(x̂_{max} - x̂_{op}) - f_x̂} \end{bmatrix}\]
Return empty matrices if model is not a LinModel.
ModelPredictiveControl.init_quadprog
— Function: init_quadprog(model::LinModel, weights::ControllerWeights, Ẽ, S) -> H̃
Init the quadratic programming Hessian H̃ for MPC.
The matrix appears in the general quadratic form:
\[ J = \min_{\mathbf{ΔŨ}} \frac{1}{2}\mathbf{(ΔŨ)'H̃(ΔŨ)} + \mathbf{q̃'(ΔŨ)} + r \]
The Hessian matrix is constant if the model and weights are linear and time invariant (LTI):
\[ \mathbf{H̃} = 2 ( \mathbf{Ẽ}'\mathbf{M}_{H_p}\mathbf{Ẽ} + \mathbf{Ñ}_{H_c} + \mathbf{S̃}'\mathbf{L}_{H_p}\mathbf{S̃} )\]
The vector $\mathbf{q̃}$ and scalar $r$ need recalculation at each control period $k$, see initpred!. The $r$ constant does not impact the minimum position; it is thus not needed for the optimization itself, but it is required to evaluate the minimal $J$ value.
Return an empty matrix if model is not a LinModel.
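A minimal numerical sketch of the Hessian formula, assuming hypothetical LTI weights and the toy $\mathbf{Ẽ}$ matrix from the scalar example above (names mirror the equation, not the package internals):

```julia
using LinearAlgebra

Hp, Hc = 3, 2
Ẽ    = [2.0 0.0; 3.0 2.0; 3.5 3.0]     # toy prediction matrix (nu = 1, no slack)
S̃    = tril(ones(Hp, Hp))[:, 1:Hc]     # toy conversion matrix
M_Hp = Diagonal(ones(Hp))              # output setpoint tracking weights
Ñ_Hc = Diagonal(fill(0.1, Hc))         # input increment weights
L_Hp = Diagonal(zeros(Hp))             # input setpoint tracking weights (disabled)
H̃ = 2*(Ẽ'*M_Hp*Ẽ + Ñ_Hc + S̃'*L_Hp*S̃)
```

With non-negative output weights and strictly positive increment weights, the Hessian is positive definite, so the quadratic program is strictly convex.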
ModelPredictiveControl.init_stochpred
— Function: init_stochpred(estim::InternalModel, Hp) -> Ks, Ps
Init the stochastic prediction matrices for InternalModel.
The Ks and Ps matrices are defined as:
\[ \mathbf{Ŷ_s} = \mathbf{K_s x̂_s}(k) + \mathbf{P_s ŷ_s}(k)\]
The current stochastic outputs $\mathbf{ŷ_s}(k)$ comprise the measured outputs $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and the unmeasured $\mathbf{ŷ_s^u}(k) = \mathbf{0}$. See [2].
Return empty matrices if estim is not an InternalModel.
ModelPredictiveControl.init_matconstraint_mpc
— Function: init_matconstraint_mpc(
model::LinModel, nc::Int,
i_Umin, i_Umax, i_ΔŨmin, i_ΔŨmax, i_Ymin, i_Ymax, i_x̂min, i_x̂max,
args...
) -> i_b, i_g, A
Init the i_b, i_g and A matrices for the linear and nonlinear inequality constraints.
The linear and nonlinear inequality constraints are respectively defined as:
\[\begin{aligned} \mathbf{A ΔŨ } &≤ \mathbf{b} \\ \mathbf{g(ΔŨ)} &≤ \mathbf{0} \end{aligned}\]
The argument nc is the number of custom nonlinear constraints in $\mathbf{g_c}$. i_b is a BitVector including the indices of $\mathbf{b}$ that are finite numbers. i_g is a similar vector but for the indices of $\mathbf{g}$. The method also returns the $\mathbf{A}$ matrix if args is provided. In such a case, args needs to contain all the inequality constraint matrices: A_Umin, A_Umax, A_ΔŨmin, A_ΔŨmax, A_Ymin, A_Ymax, A_x̂min, A_x̂max.
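As a sketch of the index selection (toy values; only rows with finite bounds need to enter the linear constraint set, since infinite bounds are always satisfied):

```julia
# A hypothetical bound vector mixing finite and infinite entries:
b = [-Inf, 1.0, Inf, -0.5]
i_b = .!isinf.(b)             # BitVector: true where the bound is a finite number
A = ones(4, 2)                # toy constraint matrix with matching rows
A_finite, b_finite = A[i_b, :], b[i_b]
```

Passing only the finite rows to the solver keeps the constraint matrices small and avoids Inf values in the optimization problem.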
Init i_b and A without the output and terminal constraints if model is not a LinModel.
Update Quadratic Optimization
ModelPredictiveControl.initpred!
— Method: initpred!(mpc::PredictiveController, model::LinModel, d, D̂, R̂y, R̂u) -> nothing
Init the linear model prediction matrices F, q̃, r and the current estimated output ŷ.
See init_predmat and init_quadprog for the definition of the matrices. They are computed with these equations, using in-place operations:
\[\begin{aligned} \mathbf{F} &= \mathbf{G d_0}(k) + \mathbf{J D̂_0} + \mathbf{K x̂_0}(k) + \mathbf{V u_0}(k-1) + \mathbf{B} + \mathbf{Ŷ_s} \\ \mathbf{C_y} &= \mathbf{F} + \mathbf{Y_{op}} - \mathbf{R̂_y} \\ \mathbf{C_u} &= \mathbf{T}\mathbf{u}(k-1) - \mathbf{R̂_u} \\ \mathbf{q̃} &= 2[(\mathbf{M}_{H_p} \mathbf{Ẽ})' \mathbf{C_y} + (\mathbf{L}_{H_p} \mathbf{S̃})' \mathbf{C_u}] \\ r &= \mathbf{C_y}' \mathbf{M}_{H_p} \mathbf{C_y} + \mathbf{C_u}' \mathbf{L}_{H_p} \mathbf{C_u} \end{aligned}\]
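A hedged numerical sketch of the $\mathbf{q̃}$ and $r$ updates, reusing the toy matrices from the earlier examples (the value of $\mathbf{F}$ is taken as given here; in the real update it would come from the prediction matrices and the current state):

```julia
using LinearAlgebra

Ẽ    = [2.0 0.0; 3.0 2.0; 3.5 3.0]
S̃    = tril(ones(3, 3))[:, 1:2]
M_Hp = Diagonal(ones(3)); L_Hp = Diagonal(zeros(3))
F    = [0.5, 0.4, 0.3]                # stands in for G*d0 + J*D̂0 + K*x̂0 + V*u0 + B + Ŷs
Y_op, R̂y = zeros(3), ones(3)
T, u_prev, R̂u = ones(3, 1), [0.0], zeros(3)
C_y = F + Y_op - R̂y
C_u = T*u_prev - R̂u
q̃ = 2*((M_Hp*Ẽ)'*C_y + (L_Hp*S̃)'*C_u)
r = C_y'*M_Hp*C_y + C_u'*L_Hp*C_u
```

Together with the constant Hessian $\mathbf{H̃}$ from init_quadprog, these two quantities fully specify the quadratic objective at period $k$.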
ModelPredictiveControl.linconstraint!
— Method: linconstraint!(mpc::PredictiveController, model::LinModel)
Set the b vector for the linear model inequality constraints ($\mathbf{A ΔŨ ≤ b}$).
Also init the $\mathbf{f_x̂} = \mathbf{g_x̂ d}(k) + \mathbf{j_x̂ D̂} + \mathbf{k_x̂ x̂_0}(k) + \mathbf{v_x̂ u}(k-1) + \mathbf{b_x̂}$ vector for the terminal constraints, see init_predmat.
Solve Optimization Problem
ModelPredictiveControl.optim_objective!
— Method: optim_objective!(mpc::PredictiveController) -> ΔŨ
Optimize the objective function of the mpc PredictiveController and return the solution ΔŨ.
If supported by mpc.optim, it warm-starts the solver at:
\[\mathbf{ΔŨ} = \begin{bmatrix} \mathbf{Δu}_{k-1}(k+0) \\ \mathbf{Δu}_{k-1}(k+1) \\ \vdots \\ \mathbf{Δu}_{k-1}(k+H_c-2) \\ \mathbf{0} \\ ϵ_{k-1} \end{bmatrix}\]
where $\mathbf{Δu}_{k-1}(k+j)$ is the input increment for time $k+j$ computed at the last control period $k-1$. It then calls JuMP.optimize!(mpc.optim) and extracts the solution. A failed optimization prints an @error log in the REPL and returns the warm-start value.
ModelPredictiveControl.getinput
— Function: getinput(mpc::PredictiveController, ΔŨ) -> u
Get the current manipulated input u from a PredictiveController solution ΔŨ.
The first manipulated input $\mathbf{u}(k)$ is extracted from the input increment vector $\mathbf{ΔŨ}$ and applied to the plant (following the receding horizon principle).
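In other words, only the first $n_u$ entries of the solution matter for the plant at period $k$; the rest are discarded and recomputed next period. A sketch (getinput_sketch is a hypothetical helper, not the package function):

```julia
# u(k) = u(k-1) + Δu(k), taking only the first nu entries of the solution:
getinput_sketch(u_prev, ΔŨ, nu) = u_prev + ΔŨ[1:nu]
# 2-input plant, solution over Hc = 2 with a trailing slack ϵ:
u = getinput_sketch([10.0, 20.0], [0.5, -1.0, 0.1, 0.0, 0.2], 2)  # → [10.5, 19.0]
```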
- [1] Maciejowski, J. 2002, "Predictive Control: with Constraints", 1st ed., Prentice Hall, ISBN 978-0201398236.
- [2] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, ISSN 1350-2379, https://doi.org/10.1049/ip-cta:20000443.