Functions: StateEstimator Internals
Augmented Model
ModelPredictiveControl.f̂! — Function

f̂!(x̂next0, û0, estim::StateEstimator, model::SimModel, x̂0, u0, d0) -> nothing
Mutating state function $\mathbf{f̂}$ of the augmented model.
By introducing an augmented state vector $\mathbf{x̂_0}$ like in `augment_model`, the function computes the next state of the augmented model, defined as:
\[\begin{aligned} \mathbf{x̂_0}(k+1) &= \mathbf{f̂}\Big(\mathbf{x̂_0}(k), \mathbf{u_0}(k), \mathbf{d_0}(k)\Big) \\ \mathbf{ŷ_0}(k) &= \mathbf{ĥ}\Big(\mathbf{x̂_0}(k), \mathbf{d_0}(k)\Big) \end{aligned}\]
where $\mathbf{x̂_0}(k+1)$ is stored in the `x̂next0` argument. The method mutates `x̂next0` and `û0` in place; the latter stores the input vector of the augmented model, $\mathbf{u_0 + ŷ_{s_u}}$. The model parameter vector `model.p` is not included in the function signature for conciseness.
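In the linear case, this update reduces to matrix products. Below is a minimal, self-contained sketch of such an in-place update, with hypothetical matrices (not the package internals):

```julia
using LinearAlgebra

# Hypothetical augmented matrices: one model state plus one output integrator.
Â  = [0.9 0.1; 0.0 1.0]
B̂u = reshape([0.5, 0.0], 2, 1)
B̂d = reshape([0.1, 0.0], 2, 1)
x̂0, u0, d0 = [0.0, 0.0], [1.0], [0.2]
x̂next0 = similar(x̂0)

# In-place update x̂0(k+1) = Â x̂0 + B̂u u0 + B̂d d0, in the spirit of f̂!:
mul!(x̂next0, Â, x̂0)
mul!(x̂next0, B̂u, u0, 1, 1)   # x̂next0 .+= B̂u*u0 (5-arg mul! accumulates)
mul!(x̂next0, B̂d, d0, 1, 1)   # x̂next0 .+= B̂d*d0
```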
f̂!(x̂next0, _ , estim::StateEstimator, model::LinModel, x̂0, u0, d0) -> nothing
Use the augmented model matrices if `model` is a `LinModel`.
f̂!(x̂next0, _ , estim::InternalModel, model::NonLinModel, x̂0, u0, d0)
State function $\mathbf{f̂}$ of `InternalModel` for `NonLinModel`.
It calls `model.f!(x̂next0, x̂0, u0, d0, model.p)` since this estimator does not augment the states.
ModelPredictiveControl.ĥ! — Function

ĥ!(ŷ0, estim::StateEstimator, model::SimModel, x̂0, d0) -> nothing
Mutating output function $\mathbf{ĥ}$ of the augmented model, see `f̂!`.
ĥ!(ŷ0, estim::StateEstimator, model::LinModel, x̂0, d0) -> nothing
Use the augmented model matrices if `model` is a `LinModel`.
ĥ!(ŷ0, estim::InternalModel, model::NonLinModel, x̂0, d0)
Output function $\mathbf{ĥ}$ of `InternalModel`; it calls `model.h!`.
Estimator Construction
ModelPredictiveControl.init_estimstoch — Function

init_estimstoch(model, i_ym, nint_u, nint_ym) -> As, Cs_u, Cs_y, nxs, nint_u, nint_ym
Init stochastic model matrices from integrator specifications for state estimation.
The arguments `nint_u` and `nint_ym` specify how many integrators are added to each manipulated input and measured output, respectively. The function returns the state-space matrices `As`, `Cs_u` and `Cs_y` of the stochastic model:
\[\begin{aligned} \mathbf{x_{s}}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\ \mathbf{y_{s_{u}}}(k) &= \mathbf{C_{s_{u}} x_s}(k) \\ \mathbf{y_{s_{y}}}(k) &= \mathbf{C_{s_{y}} x_s}(k) \end{aligned}\]
where $\mathbf{e}(k)$ is an unknown zero-mean white noise and $\mathbf{A_s} = \mathrm{diag}(\mathbf{A_{s_{u}}, A_{s_{y}}})$. The estimation does not use $\mathbf{B_s}$, so it is ignored. The function `init_integrators` builds the state-space matrices.
ModelPredictiveControl.init_integrators — Function

init_integrators(nint, ny, varname::String) -> A, C, nint
Calc the `A, C` state-space matrices from the integrator specifications `nint`.
This function is used to initialize the stochastic part of the augmented model for the design of state estimators. The vector `nint` specifies how many integrators (in series) should be incorporated for each output. The argument should have `ny` elements, except for `nint=0`, which is an alias for no integrator at all. The specific case of one integrator per output results in `A = I` and `C = I`. The estimation does not use the `B` matrix, so it is ignored. This function is called twice (a sketch follows the list below):
- for the unmeasured disturbances at manipulated inputs $\mathbf{u}$
- for the unmeasured disturbances at measured outputs $\mathbf{y^m}$
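A minimal sketch of building such integrator chains, under assumed conventions (each chain is a lower-bidiagonal block of ones and the output taps the last integrator of its chain; the package's exact layout may differ):

```julia
function demo_integrators(nint::Vector{Int})
    ny, ns = length(nint), sum(nint)
    A, C = zeros(ns, ns), zeros(ny, ns)
    i = 0
    for (j, n) in enumerate(nint)
        for k in 1:n
            A[i+k, i+k] = 1.0                # integrator: xs(k+1) = xs(k) + input
            k > 1 && (A[i+k, i+k-1] = 1.0)   # chained to the previous integrator
        end
        C[j, i+n] = 1.0                      # output j reads the last state of its chain
        i += n
    end
    return A, C
end

A, C = demo_integrators([1, 1])   # one integrator per output: A == I and C == I
```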
ModelPredictiveControl.augment_model — Function

augment_model(
    model::LinModel, As, Cs_u, Cs_y; verify_obsv=true
) -> Â, B̂u, Ĉ, B̂d, D̂d, x̂op, f̂op
Augment the `LinModel` state-space matrices with the stochastic ones `As`, `Cs_u`, `Cs_y`.
If $\mathbf{x_0}$ are the `model.x0` states and $\mathbf{x_s}$, the states defined at `init_estimstoch`, we define an augmented state vector $\mathbf{x̂} = [ \begin{smallmatrix} \mathbf{x_0} \\ \mathbf{x_s} \end{smallmatrix} ]$. The method returns the augmented matrices `Â`, `B̂u`, `Ĉ`, `B̂d` and `D̂d`:
\[\begin{aligned} \mathbf{x̂_0}(k+1) &= \mathbf{Â x̂_0}(k) + \mathbf{B̂_u u_0}(k) + \mathbf{B̂_d d_0}(k) \\ \mathbf{ŷ_0}(k) &= \mathbf{Ĉ x̂_0}(k) + \mathbf{D̂_d d_0}(k) \end{aligned}\]
An error is thrown if the augmented model is not observable and `verify_obsv == true`. The augmented operating points `x̂op` and `f̂op` are simply the $\mathbf{x_{op}}$ and $\mathbf{f_{op}}$ vectors appended with zeros (see `setop!`). See the Extended Help for a detailed definition of the augmented matrices.
Extended Help
Using the `As`, `Cs_u` and `Cs_y` matrices of the stochastic model provided in argument, and the `model.A`, `model.Bu`, `model.Bd`, `model.C` and `model.Dd` matrices, the state-space matrices of the augmented model are defined as follows:
\[\begin{aligned} \mathbf{Â} &= \begin{bmatrix} \mathbf{A} & \mathbf{B_u C_{s_u}} \\ \mathbf{0} & \mathbf{A_s} \end{bmatrix} \\ \mathbf{B̂_u} &= \begin{bmatrix} \mathbf{B_u} \\ \mathbf{0} \end{bmatrix} \\ \mathbf{Ĉ} &= \begin{bmatrix} \mathbf{C} & \mathbf{C_{s_y}} \end{bmatrix} \\ \mathbf{B̂_d} &= \begin{bmatrix} \mathbf{B_d} \\ \mathbf{0} \end{bmatrix} \\ \mathbf{D̂_d} &= \mathbf{D_d} \end{aligned}\]
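These block definitions translate directly to code; a sketch assembling the augmented matrices from hypothetical 1-state deterministic and stochastic parts:

```julia
# Deterministic model matrices (hypothetical values):
A, Bu, Bd, C, Dd = [0.8;;], [0.4;;], [0.1;;], [1.0;;], [0.0;;]
# Stochastic model matrices, e.g. a single output integrator:
As, Cs_u, Cs_y = [1.0;;], [0.0;;], [1.0;;]

Â  = [A            Bu*Cs_u
      zeros(1, 1)  As     ]
B̂u = [Bu; zeros(1, 1)]
Ĉ  = [C  Cs_y]
B̂d = [Bd; zeros(1, 1)]
D̂d = Dd
```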
augment_model(
    model::SimModel, As, Cs_u, Cs_y; verify_obsv=false
) -> Â, B̂u, Ĉ, B̂d, D̂d, x̂op, f̂op
Return empty matrices, and the `x̂op` & `f̂op` vectors, if `model` is not a `LinModel`.
ModelPredictiveControl.init_ukf — Function

init_ukf(model, nx̂, α, β, κ) -> nσ, γ, m̂, Ŝ
Compute the `UnscentedKalmanFilter` constants from $α, β$ and $κ$.
With $n_\mathbf{x̂}$ elements in the state vector $\mathbf{x̂}$ and $n_σ = 2 n_\mathbf{x̂} + 1$ sigma points, the scaling factor applied on standard deviation matrices $\sqrt{\mathbf{P̂}}$ is:
\[ γ = α \sqrt{ n_\mathbf{x̂} + κ }\]
The weight vector $(n_σ × 1)$ for the mean and the weight matrix $(n_σ × n_σ)$ for the covariance are respectively:
\[\begin{aligned} \mathbf{m̂} &= \begin{bmatrix} 1 - \tfrac{n_\mathbf{x̂}}{γ^2} & \tfrac{1}{2γ^2} & \tfrac{1}{2γ^2} & \cdots & \tfrac{1}{2γ^2} \end{bmatrix}' \\ \mathbf{Ŝ} &= \mathrm{diag}\big( 2 - α^2 + β - \tfrac{n_\mathbf{x̂}}{γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \tfrac{1}{2γ^2} \:,\; \cdots \:,\; \tfrac{1}{2γ^2} \big) \end{aligned}\]
See `update_estimate!(::UnscentedKalmanFilter)` for other details.
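A direct transcription of these constants (a sketch; the argument and return conventions are assumed):

```julia
using LinearAlgebra

function demo_init_ukf(nx̂, α, β, κ)
    nσ = 2nx̂ + 1                 # number of sigma points
    γ  = α*sqrt(nx̂ + κ)          # scaling factor on √P̂
    m̂  = [1 - nx̂/γ^2; fill(0.5/γ^2, nσ - 1)]
    Ŝ  = Diagonal([2 - α^2 + β - nx̂/γ^2; fill(0.5/γ^2, nσ - 1)])
    return nσ, γ, m̂, Ŝ
end

nσ, γ, m̂, Ŝ = demo_init_ukf(3, 1e-3, 2.0, 0.0)
```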
ModelPredictiveControl.init_internalmodel — Function

init_internalmodel(As, Bs, Cs, Ds) -> Âs, B̂s
Calc the stochastic model update matrices `Âs` and `B̂s` for the `InternalModel` estimator.
`As`, `Bs`, `Cs` and `Ds` are the stochastic model matrices:
\[\begin{aligned} \mathbf{x_s}(k+1) &= \mathbf{A_s x_s}(k) + \mathbf{B_s e}(k) \\ \mathbf{y_s}(k) &= \mathbf{C_s x_s}(k) + \mathbf{D_s e}(k) \end{aligned}\]
where $\mathbf{e}(k)$ is a conceptual and unknown zero-mean white noise. Its optimal estimation is $\mathbf{ê = 0}$, the expected value. Thus, the `Âs` and `B̂s` matrices that optimally update the stochastic estimate $\mathbf{x̂_s}$ are:
\[\begin{aligned} \mathbf{x̂_s}(k+1) &= \mathbf{(A_s - B_s D_s^{-1} C_s) x̂_s}(k) + \mathbf{(B_s D_s^{-1}) ŷ_s}(k) \\ &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) \end{aligned}\]
with the current stochastic output estimate $\mathbf{ŷ_s}(k)$, composed of the measured $\mathbf{ŷ_s^m}(k) = \mathbf{y^m}(k) - \mathbf{ŷ_d^m}(k)$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs. See [1].
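The update matrices follow directly from these equations; a sketch with a scalar stochastic model (which requires an invertible $\mathbf{D_s}$):

```julia
# Hypothetical scalar stochastic model matrices:
As, Bs, Cs, Ds = [0.5;;], [1.0;;], [1.0;;], [1.0;;]

B̂s = Bs / Ds        # Bs*Ds⁻¹
Âs = As - B̂s*Cs     # As - Bs*Ds⁻¹*Cs
```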
ModelPredictiveControl.init_predmat_mhe — Function

init_predmat_mhe(
    model::LinModel, He, i_ym, Â, B̂u, Ĉm, B̂d, D̂dm, x̂op, f̂op, p
) -> E, G, J, B, ex̄, Ex̂, Gx̂, Jx̂, Bx̂
Construct the `MovingHorizonEstimator` prediction matrices for the `LinModel` `model`.
We first introduce the deviation vector of the estimated state at arrival, $\mathbf{x̂_0}(k-N_k+p) = \mathbf{x̂}_k(k-N_k+p) - \mathbf{x̂_{op}}$ (see `setop!`), and the vector $\mathbf{Z} = [\begin{smallmatrix} \mathbf{x̂_0}(k-N_k+p) \\ \mathbf{Ŵ} \end{smallmatrix}]$ with the decision variables. Setting the constant $p=0$ produces an estimator in the current form, while the prediction form is obtained with $p=1$. The estimated sensor noises from time $k-N_k+1$ to $k$ are computed by:
\[\begin{aligned} \mathbf{V̂} = \mathbf{Y_0^m - Ŷ_0^m} &= \mathbf{E Z + G U_0 + J D_0 + Y_0^m + B} \\ &= \mathbf{E Z + F} \end{aligned}\]
in which $\mathbf{U_0}$ and $\mathbf{Y_0^m}$ respectively include the deviation values of the manipulated inputs $\mathbf{u_0}(k-j+p)$ from $j=N_k$ to $1$ and the measured outputs $\mathbf{y_0^m}(k-j+1)$ from $j=N_k$ to $1$. The vector $\mathbf{D_0}$ comprises one additional measured disturbance if $p=0$, that is, it includes the deviation vectors $\mathbf{d_0}(k-j+1)$ from $j=N_k+1-p$ to $1$. The constant $\mathbf{B}$ is the contribution of non-zero state $\mathbf{x̂_{op}}$ and state update $\mathbf{f̂_{op}}$ operating points (for linearization, see `augment_model` and `linearize`). The method also returns the matrices for the estimation error at arrival:
\[ \mathbf{x̄} = \mathbf{x̂_0^†}(k-N_k+p) - \mathbf{x̂_0}(k-N_k+p) = \mathbf{e_x̄ Z + f_x̄}\]
in which $\mathbf{e_x̄} = [\begin{smallmatrix} -\mathbf{I} & \mathbf{0} & \cdots & \mathbf{0} \end{smallmatrix}]$, and $\mathbf{f_x̄} = \mathbf{x̂_0^†}(k-N_k+p)$. The latter is the deviation vector of the state at arrival, estimated at time $k-N_k$, i.e. $\mathbf{x̂_0^†}(k-N_k+p) = \mathbf{x̂}_{k-N_k}(k-N_k+p) - \mathbf{x̂_{op}}$. Lastly, the estimates $\mathbf{x̂_0}(k-j+p)$ from $j=N_k-1$ to $0$, also in deviation form, are computed with:
\[\begin{aligned} \mathbf{X̂_0} &= \mathbf{E_x̂ Z + G_x̂ U_0 + J_x̂ D_0 + B_x̂} \\ &= \mathbf{E_x̂ Z + F_x̂} \end{aligned}\]
The matrices $\mathbf{E, G, J, B, E_x̂, G_x̂, J_x̂, B_x̂}$ are defined in the Extended Help section. The vectors $\mathbf{F, F_x̂, f_x̄}$ are recalculated at each discrete time step, see `initpred!(::MovingHorizonEstimator, ::LinModel)` and `linconstraint!(::MovingHorizonEstimator, ::LinModel)`.
Extended Help
Using the augmented process model matrices $\mathbf{Â, B̂_u, Ĉ^m, B̂_d, D̂_d^m}$, and the function $\mathbf{S}(j) = ∑_{i=0}^j \mathbf{Â}^i$, the prediction matrices for the sensor noises depend on the constant $p$. For $p=0$, the matrices are computed by (notice the minus signs after the equalities):
\[\begin{aligned} \mathbf{E} &= - \begin{bmatrix} \mathbf{Ĉ^m}\mathbf{Â}^{1} & \mathbf{Ĉ^m}\mathbf{Â}^{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{2} & \mathbf{Ĉ^m}\mathbf{Â}^{1} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-1} & \cdots & \mathbf{Ĉ^m}\mathbf{Â}^{0} \end{bmatrix} \\ \mathbf{G} &= - \begin{bmatrix} \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{1}\mathbf{B̂_u} & \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_u} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e-1}\mathbf{B̂_u} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-2}\mathbf{B̂_u} & \cdots & \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{J} &= - \begin{bmatrix} \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_d} & \mathbf{D̂_d^m} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{1}\mathbf{B̂_d} & \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_d} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e-1}\mathbf{B̂_d} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-2}\mathbf{B̂_d} & \cdots & \mathbf{D̂_d^m} \end{bmatrix} \\ \mathbf{B} &= - \begin{bmatrix} \mathbf{Ĉ^m S}(0) \\ \mathbf{Ĉ^m S}(1) \\ \vdots \\ \mathbf{Ĉ^m S}(H_e-1) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
or, for $p=1$, the matrices are given by:
\[\begin{aligned} \mathbf{E} &= - \begin{bmatrix} \mathbf{Ĉ^m}\mathbf{Â}^{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{1} & \mathbf{Ĉ^m}\mathbf{Â}^{0} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e-1} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-2} & \cdots & \mathbf{0} \end{bmatrix} \\ \mathbf{G} &= - \begin{bmatrix} \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e-2}\mathbf{B̂_u} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-3}\mathbf{B̂_u} & \cdots & \mathbf{0} \end{bmatrix} \\ \mathbf{J} &= - \begin{bmatrix} \mathbf{D̂_d^m} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Ĉ^m}\mathbf{Â}^{0}\mathbf{B̂_d} & \mathbf{D̂_d^m} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Ĉ^m}\mathbf{Â}^{H_e-2}\mathbf{B̂_d} & \mathbf{Ĉ^m}\mathbf{Â}^{H_e-3}\mathbf{B̂_d} & \cdots & \mathbf{D̂_d^m} \end{bmatrix} \\ \mathbf{B} &= - \begin{bmatrix} \mathbf{0} \\ \mathbf{Ĉ^m S}(0) \\ \vdots \\ \mathbf{Ĉ^m S}(H_e-2) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
The matrices for the estimated states are computed by:
\[\begin{aligned} \mathbf{E_x̂} &= \begin{bmatrix} \mathbf{Â}^{1} & \mathbf{Â}^{0} & \cdots & \mathbf{0} \\ \mathbf{Â}^{2} & \mathbf{Â}^{1} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Â}^{H_e} & \mathbf{Â}^{H_e-1} & \cdots & \mathbf{Â}^{0} \end{bmatrix} \\ \mathbf{G_x̂} &= \begin{bmatrix} \mathbf{Â}^{0}\mathbf{B̂_u} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Â}^{1}\mathbf{B̂_u} & \mathbf{Â}^{0}\mathbf{B̂_u} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Â}^{H_e-1}\mathbf{B̂_u} & \mathbf{Â}^{H_e-2}\mathbf{B̂_u} & \cdots & \mathbf{Â}^{0}\mathbf{B̂_u} \end{bmatrix} \\ \mathbf{J_x̂^†} &= \begin{bmatrix} \mathbf{Â}^{0}\mathbf{B̂_d} & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{Â}^{1}\mathbf{B̂_d} & \mathbf{Â}^{0}\mathbf{B̂_d} & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{Â}^{H_e-1}\mathbf{B̂_d} & \mathbf{Â}^{H_e-2}\mathbf{B̂_d} & \cdots & \mathbf{Â}^{0}\mathbf{B̂_d} \end{bmatrix} \ , \quad \mathbf{J_x̂} = \begin{cases} [\begin{smallmatrix} \mathbf{J_x̂^†} & \mathbf{0} \end{smallmatrix}] & p=0 \\ \mathbf{J_x̂^†} & p=1 \end{cases} \\ \mathbf{B_x̂} &= \begin{bmatrix} \mathbf{S}(0) \\ \mathbf{S}(1) \\ \vdots \\ \mathbf{S}(H_e-1) \end{bmatrix} \mathbf{\big(f̂_{op} - x̂_{op}\big)} \end{aligned}\]
All these matrices are truncated when $N_k < H_e$ (at the beginning).
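The cumulative-power function $\mathbf{S}(j) = ∑_{i=0}^j \mathbf{Â}^i$ used in the $\mathbf{B}$ and $\mathbf{B_x̂}$ vectors can be evaluated naively (a sketch, not the package's implementation):

```julia
# S(j) = ∑_{i=0}^{j} Â^i, evaluated term by term:
S(Â, j) = sum(Â^i for i in 0:j)

Â = [0.9 0.0; 0.1 1.0]
S(Â, 2)   # Â⁰ + Â¹ + Â²
```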
Return empty matrices if `model` is not a `LinModel`, except for `ex̄`.
ModelPredictiveControl.relaxarrival — Function

relaxarrival(
    model::SimModel, nϵ, c_x̂min, c_x̂max, x̂min, x̂max, ex̄
) -> A_x̃min, A_x̃max, x̃min, x̃max, ẽx̄
Augment arrival state constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{ẽ_x̄}$ matrix that appears in the estimation error at arrival equation $\mathbf{x̄} = \mathbf{ẽ_x̄ Z̃ + f_x̄}$. It also returns the augmented constraints $\mathbf{x̃_{min}}$ and $\mathbf{x̃_{max}}$, and the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{x̃_{min}}} \\ \mathbf{A_{x̃_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(x̃_{min} - x̃_{op})} \\ + \mathbf{(x̃_{max} - x̃_{op})} \end{bmatrix}\]
in which $\mathbf{x̃_{min}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{min}} \end{smallmatrix}]$, $\mathbf{x̃_{max}} = [\begin{smallmatrix} ∞ \\ \mathbf{x̂_{max}} \end{smallmatrix}]$ and $\mathbf{x̃_{op}} = [\begin{smallmatrix} 0 \\ \mathbf{x̂_{op}} \end{smallmatrix}]$.
ModelPredictiveControl.relaxX̂ — Function

relaxX̂(model::SimModel, nϵ, C_x̂min, C_x̂max, Ex̂) -> A_X̂min, A_X̂max, Ẽx̂
Augment estimated state constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ_x̂}$ matrix that appears in the estimated states equation $\mathbf{X̂} = \mathbf{Ẽ_x̂ Z̃ + F_x̂}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{X̂_{min}}} \\ \mathbf{A_{X̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{(X̂_{min} - X̂_{op}) + F_x̂} \\ + \mathbf{(X̂_{max} - X̂_{op}) - F_x̂} \end{bmatrix}\]
in which the $\mathbf{X̂_{min}, X̂_{max}}$ and $\mathbf{X̂_{op}}$ vectors respectively contain $\mathbf{x̂_{min}, x̂_{max}}$ and $\mathbf{x̂_{op}}$ repeated $H_e$ times.
Return empty matrices if `model` is not a `LinModel`.
ModelPredictiveControl.relaxŴ — Function

relaxŴ(model::SimModel, nϵ, C_ŵmin, C_ŵmax, nx̂) -> A_Ŵmin, A_Ŵmax
Augment estimated process noise constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{Ŵ_{min}}} \\ \mathbf{A_{Ŵ_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{Ŵ_{min}} \\ + \mathbf{Ŵ_{max}} \end{bmatrix}\]
ModelPredictiveControl.relaxV̂ — Function

relaxV̂(model::SimModel, nϵ, C_v̂min, C_v̂max, E) -> A_V̂min, A_V̂max, Ẽ
Augment estimated sensor noise constraints with slack variable ϵ for softening the MHE.
Denoting the MHE decision variable augmented with the slack variable $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$, it returns the $\mathbf{Ẽ}$ matrix that appears in the estimated sensor noise equation $\mathbf{V̂} = \mathbf{Ẽ Z̃ + F}$. It also returns the $\mathbf{A}$ matrices for the inequality constraints:
\[\begin{bmatrix} \mathbf{A_{V̂_{min}}} \\ \mathbf{A_{V̂_{max}}} \end{bmatrix} \mathbf{Z̃} ≤ \begin{bmatrix} - \mathbf{V̂_{min} + F} \\ + \mathbf{V̂_{max} - F} \end{bmatrix}\]
Return empty matrices if `model` is not a `LinModel`.
ModelPredictiveControl.init_matconstraint_mhe — Function

init_matconstraint_mhe(model::LinModel,
    i_x̃min, i_x̃max, i_X̂min, i_X̂max, i_Ŵmin, i_Ŵmax, i_V̂min, i_V̂max, args...
) -> i_b, i_g, A
Init the `i_b`, `i_g` and `A` matrices for the MHE linear inequality constraints.
The linear and nonlinear inequality constraints are respectively defined as:
\[\begin{aligned} \mathbf{A Z̃ } &≤ \mathbf{b} \\ \mathbf{g(Z̃)} &≤ \mathbf{0} \end{aligned}\]
`i_b` is a `BitVector` including the indices of $\mathbf{b}$ that are finite numbers. `i_g` is a similar vector, but for the indices of $\mathbf{g}$ (empty if `model` is a `LinModel`). The method also returns the $\mathbf{A}$ matrix if `args` is provided. In such a case, `args` needs to contain all the inequality constraint matrices: `A_x̃min, A_x̃max, A_X̂min, A_X̂max, A_Ŵmin, A_Ŵmax, A_V̂min, A_V̂max`.
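For instance, the finite-bound indices can be computed with a broadcast (a sketch of the idea, not the package code):

```julia
# Keep only the rows of A Z̃ ≤ b with finite bounds:
b   = [-Inf, 2.0, Inf, 0.5]
i_b = isfinite.(b)        # BitVector: [false, true, false, true]
```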
Init `i_b, A` without state and sensor noise constraints if `model` is not a `LinModel`.
Update Quadratic Optimization
ModelPredictiveControl.initpred! — Method

initpred!(estim::MovingHorizonEstimator, model::LinModel) -> nothing
Init the quadratic optimization matrices `F, fx̄, H̃, q̃, r` for the `MovingHorizonEstimator`.
See `init_predmat_mhe` for the definition of the vectors $\mathbf{F, f_x̄}$. It also inits the `estim.optim` objective function, expressed as the quadratic general form:
\[ J = \min_{\mathbf{Z̃}} \frac{1}{2}\mathbf{Z̃' H̃ Z̃} + \mathbf{q̃' Z̃} + r \]
in which $\mathbf{Z̃} = [\begin{smallmatrix} ϵ \\ \mathbf{Z} \end{smallmatrix}]$. Note that the constant $r$ does not affect the optimization but is required to evaluate the objective minimum $J$. The Hessian $\mathbf{H̃}$ of the quadratic general form is not constant here because of the time-varying $\mathbf{P̄}$ covariance. The computed variables are:
\[\begin{aligned} \mathbf{F} &= \mathbf{G U_0} + \mathbf{J D_0} + \mathbf{Y_0^m} + \mathbf{B} \\ \mathbf{f_x̄} &= \mathbf{x̂_0^†}(k-N_k+1) \\ \mathbf{F_Z̃} &= [\begin{smallmatrix}\mathbf{f_x̄} \\ \mathbf{F} \end{smallmatrix}] \\ \mathbf{Ẽ_Z̃} &= [\begin{smallmatrix}\mathbf{ẽ_x̄} \\ \mathbf{Ẽ} \end{smallmatrix}] \\ \mathbf{M}_{N_k} &= \mathrm{diag}(\mathbf{P̄}^{-1}, \mathbf{R̂}_{N_k}^{-1}) \\ \mathbf{Ñ}_{N_k} &= \mathrm{diag}(C, \mathbf{0}, \mathbf{Q̂}_{N_k}^{-1}) \\ \mathbf{H̃} &= 2(\mathbf{Ẽ_Z̃}' \mathbf{M}_{N_k} \mathbf{Ẽ_Z̃} + \mathbf{Ñ}_{N_k}) \\ \mathbf{q̃} &= 2(\mathbf{M}_{N_k} \mathbf{Ẽ_Z̃})' \mathbf{F_Z̃} \\ r &= \mathbf{F_Z̃}' \mathbf{M}_{N_k} \mathbf{F_Z̃} \end{aligned}\]
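The last three lines assemble the quadratic program directly; a sketch with small hypothetical stand-in matrices:

```julia
using LinearAlgebra

Ẽ_Z̃ = [1.0 0.0; 0.5 1.0]       # stands in for [ẽx̄; Ẽ]
F_Z̃ = [0.2, -0.1]               # stands in for [fx̄; F]
M   = Diagonal([2.0, 1.0])      # stands in for diag(P̄⁻¹, R̂⁻¹)
Ñ   = Diagonal([0.0, 0.5])      # stands in for diag(C, 0, Q̂⁻¹)

H̃ = 2*(Ẽ_Z̃'*M*Ẽ_Z̃ + Ñ)     # Hessian of the quadratic objective
q̃ = 2*(M*Ẽ_Z̃)'*F_Z̃           # linear term
r = F_Z̃'*M*F_Z̃                # constant term (for the objective value only)
```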
ModelPredictiveControl.linconstraint! — Method

linconstraint!(estim::MovingHorizonEstimator, model::LinModel)
Set the `b` vector for the linear model inequality constraints ($\mathbf{A Z̃ ≤ b}$) of the MHE.
Also init the $\mathbf{F_x̂ = G_x̂ U_0 + J_x̂ D_0 + B_x̂}$ vector for the state constraints, see `init_predmat_mhe`.
Solve Optimization Problem
ModelPredictiveControl.optim_objective! — Method

optim_objective!(estim::MovingHorizonEstimator) -> Z̃
Optimize the objective of the `estim` `MovingHorizonEstimator` and return the solution `Z̃`.
If supported by `estim.optim`, it warm-starts the solver at:
\[\mathbf{Z̃} = \begin{bmatrix} ϵ_{k-1} \\ \mathbf{x̂}_{k-1}(k-N_k+p) \\ \mathbf{ŵ}_{k-1}(k-N_k+p+0) \\ \mathbf{ŵ}_{k-1}(k-N_k+p+1) \\ \vdots \\ \mathbf{ŵ}_{k-1}(k-p-2) \\ \mathbf{0} \\ \end{bmatrix}\]
where $\mathbf{ŵ}_{k-1}(k-j)$ is the estimated process noise for time $k-j$ computed at the last time step $k-1$. It then calls `JuMP.optimize!(estim.optim)` and extracts the solution. A failed optimization prints an `@error` log in the REPL and returns the warm-start value.
Remove Operating Points
ModelPredictiveControl.remove_op! — Function

remove_op!(estim::StateEstimator, ym, d, u=nothing) -> y0m, d0, u0
Remove the operating points on the measured outputs `ym`, disturbances `d` and inputs `u` (if provided).
If `u` is provided, also store the current inputs without operating points, `u0`, in `estim.lastu0`. This field is used for `PredictiveController` computations.
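Removing operating points amounts to a plain subtraction; a minimal sketch with hypothetical operating-point values:

```julia
# Hypothetical operating points (in the package these live in the model/estimator):
yop, dop, uop = [50.0], [20.0], [10.0]
ym, d, u = [51.2], [20.0], [10.5]

y0m, d0, u0 = ym - yop, d - dop, u - uop   # deviation vectors
```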
Init Estimate
ModelPredictiveControl.init_estimate! — Function

init_estimate!(estim::StateEstimator, model::LinModel, y0m, d0, u0)
Init the `estim.x̂0` estimate with the steady-state solution if `model` is a `LinModel`.
Using the `u0`, `y0m` and `d0` arguments (deviation values, see `setop!`), the steady-state problem combined with the equality constraint $\mathbf{ŷ_0^m} = \mathbf{y_0^m}$ yields the following system to solve:
\[\begin{bmatrix} \mathbf{I} - \mathbf{Â} \\ \mathbf{Ĉ^m} \end{bmatrix} \mathbf{x̂_0} = \begin{bmatrix} \mathbf{B̂_u u_0 + B̂_d d_0 + f̂_{op} - x̂_{op}} \\ \mathbf{y_0^m - D̂_d^m d_0} \end{bmatrix}\]
in which $\mathbf{Ĉ^m, D̂_d^m}$ are the rows of `estim.Ĉ, estim.D̂d` that correspond to the measured outputs $\mathbf{y^m}$.
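This stacked system has more equations than unknowns and can be solved in the least-squares sense; a minimal sketch with hypothetical 1-state matrices:

```julia
using LinearAlgebra

Â, B̂u, B̂d   = [0.9;;], [0.5;;], [0.0;;]
Ĉm, D̂dm     = [1.0;;], [0.0;;]
x̂op, f̂op    = [0.0], [0.0]
u0, d0, y0m = [1.0], [0.0], [2.0]

M = [I - Â; Ĉm]                                      # stacked left-hand side
b = [B̂u*u0 + B̂d*d0 + f̂op - x̂op; y0m - D̂dm*d0]   # stacked right-hand side
x̂0 = M \ b                                           # least-squares solution
```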
init_estimate!(estim::StateEstimator, model::SimModel, _ , _ , _ )
Leave the `estim.x̂0` estimate unchanged if `model` is not a `LinModel`.
init_estimate!(estim::InternalModel, model::LinModel, y0m, d0, u0)
Init the `estim.x̂0`/`x̂d`/`x̂s` estimates at steady-state for `InternalModel`.
The deterministic estimates `estim.x̂d` start at steady-state, using the `u0` and `d0` arguments:
\[ \mathbf{x̂_d} = \mathbf{(I - A)^{-1} (B_u u_0 + B_d d_0 + f_{op} - x_{op})}\]
Based on the `y0m` argument and the current stochastic output estimate $\mathbf{ŷ_s}$, composed of the measured $\mathbf{ŷ_s^m} = \mathbf{y_0^m} - \mathbf{ŷ_{d0}^m}$ and unmeasured $\mathbf{ŷ_s^u = 0}$ outputs, the stochastic estimates also start at steady-state:
\[ \mathbf{x̂_s} = \mathbf{(I - Â_s)^{-1} B̂_s ŷ_s}\]
This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See `init_internalmodel` for details.
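Both steady-state solves written out for a scalar model (a sketch with hypothetical values):

```julia
using LinearAlgebra

A, Bu, Bd  = [0.8;;], [0.4;;], [0.0;;]
xop, fop   = [0.0], [0.0]
Âs, B̂s     = [0.5;;], [1.0;;]
u0, d0, ŷs = [1.0], [0.0], [0.1]

x̂d = (I - A)  \ (Bu*u0 + Bd*d0 + fop - xop)   # deterministic steady state
x̂s = (I - Âs) \ (B̂s*ŷs)                       # stochastic steady state
```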
Correct Estimate
ModelPredictiveControl.correct_estimate! — Function

correct_estimate!(estim::SteadyKalmanFilter, y0m, d0)
Correct `estim.x̂0` with the measured outputs `y0m` and disturbances `d0` for the current time step.
It computes the corrected state estimate $\mathbf{x̂}_{k}(k)$. See the docstring of `update_estimate!(::SteadyKalmanFilter, ::Any, ::Any)` for the equations.
correct_estimate!(estim::KalmanFilter, y0m, d0)
Correct `estim.x̂0` and `estim.P̂` using the time-varying `KalmanFilter`.
It computes the corrected state estimate $\mathbf{x̂}_{k}(k)$ and estimation covariance $\mathbf{P̂}_{k}(k)$.
correct_estimate!(estim::UnscentedKalmanFilter, y0m, d0)
Do the same but for the `UnscentedKalmanFilter`.
correct_estimate!(estim::ExtendedKalmanFilter, y0m, d0)
Do the same but for the `ExtendedKalmanFilter`.
correct_estimate!(estim::Luenberger, y0m, d0, _ )
Identical to `correct_estimate!(::SteadyKalmanFilter)` but using `Luenberger`.
correct_estimate!(estim::MovingHorizonEstimator, y0m, d0)
Do the same but for `MovingHorizonEstimator` objects.
correct_estimate!(estim::InternalModel, y0m, d0)
Compute the current stochastic output estimate `ŷs` for `InternalModel`.
Update Estimate
All these methods assume that the `u0`, `y0m` and `d0` arguments are deviation vectors from their respective operating points (see `setop!`). The associated equations in the documentation drop the $\mathbf{0}$ subscript to simplify the notation. Strictly speaking, the manipulated inputs, measured outputs, measured disturbances and estimated states should be denoted with $\mathbf{u_0, y_0^m, d_0}$ and $\mathbf{x̂_0}$, respectively.
ModelPredictiveControl.update_estimate! — Function

update_estimate!(estim::SteadyKalmanFilter, y0m, d0, u0)
Update the `estim.x̂0` estimate with the current inputs `u0`, measured outputs `y0m` and disturbances `d0`.
If `estim.direct == false`, the `SteadyKalmanFilter` first corrects the state estimate with the precomputed Kalman gain $\mathbf{K̂}$. Afterward, it predicts the next state with the augmented process model. The correction step is skipped if `direct == true`, since it is already done by the user through the `preparestate!` function (which calls `correct_estimate!`). The correction and prediction step equations are provided below.
Correction Step
\[\mathbf{x̂}_k(k) = \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}[\mathbf{y^m}(k) - \mathbf{Ĉ^m x̂}_{k-1}(k) - \mathbf{D̂_d^m d}(k) ]\]
Prediction Step
\[\mathbf{x̂}_{k}(k+1) = \mathbf{Â x̂}_{k}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \]
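Both steps written out for a hypothetical 1-state augmented model (a sketch; the precomputed gain `K̂` and all matrix values are illustrative, not the package internals):

```julia
Â, B̂u, B̂d   = [0.9;;], [0.5;;], [0.0;;]
Ĉm, D̂dm, K̂ = [1.0;;], [0.0;;], [0.4;;]
x̂, ym, u, d = [0.0], [1.0], [0.2], [0.0]

x̂ = x̂ + K̂*(ym - Ĉm*x̂ - D̂dm*d)   # correction with the steady gain
x̂ = Â*x̂ + B̂u*u + B̂d*d             # prediction with the augmented model
```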
update_estimate!(estim::KalmanFilter, y0m, d0, u0)
Update the `KalmanFilter` state `estim.x̂0` and estimation error covariance `estim.P̂`.
It implements the classical time-varying Kalman filter based on the process model described in `SteadyKalmanFilter`. If `estim.direct == false`, it first corrects the estimate before predicting the next state. The correction step is skipped if `estim.direct == true`, since it is already done by the user. The correction and prediction step equations are provided below; see [2] for details.
Correction Step
\[\begin{aligned} \mathbf{M̂}(k) &= \mathbf{Ĉ^m P̂}_{k-1}(k)\mathbf{Ĉ^m}' + \mathbf{R̂} \\ \mathbf{K̂}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĉ^m}'\mathbf{M̂^{-1}}(k) \\ \mathbf{ŷ^m}(k) &= \mathbf{Ĉ^m x̂}_{k-1}(k) + \mathbf{D̂_d^m d}(k) \\ \mathbf{x̂}_{k}(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k)[\mathbf{y^m}(k) - \mathbf{ŷ^m}(k)] \\ \mathbf{P̂}_{k}(k) &= [\mathbf{I - K̂}(k)\mathbf{Ĉ^m}]\mathbf{P̂}_{k-1}(k) \end{aligned}\]
Prediction Step
\[\begin{aligned} \mathbf{x̂}_{k}(k+1) &= \mathbf{Â x̂}_{k}(k) + \mathbf{B̂_u u}(k) + \mathbf{B̂_d d}(k) \\ \mathbf{P̂}_{k}(k+1) &= \mathbf{Â P̂}_{k}(k)\mathbf{Â}' + \mathbf{Q̂} \end{aligned}\]
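The same cycle with the covariance recursions, sketched for a hypothetical 1-state model (measured disturbances omitted for brevity):

```julia
using LinearAlgebra

Â, B̂u, Ĉm = [0.9;;], [0.5;;], [1.0;;]
Q̂, R̂ = [0.01;;], [0.1;;]
x̂, P̂ = [0.0], [1.0;;]
ym, u = [1.2], [0.3]

# Correction:
M̂ = Ĉm*P̂*Ĉm' + R̂
K̂ = (P̂*Ĉm') / M̂
x̂ = x̂ + K̂*(ym - Ĉm*x̂)
P̂ = (I - K̂*Ĉm)*P̂
# Prediction:
x̂ = Â*x̂ + B̂u*u
P̂ = Â*P̂*Â' + Q̂
```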
update_estimate!(estim::UnscentedKalmanFilter, y0m, d0, u0)
Update the `UnscentedKalmanFilter` state `estim.x̂0` and covariance estimate `estim.P̂`.
It implements the unscented Kalman filter based on the generalized unscented transform [3]. See `init_ukf` for the definition of the constants $\mathbf{m̂, Ŝ}$ and $γ$. The superscript in e.g. $\mathbf{X̂}_{k-1}^j(k)$ refers to the vector at the $j$th column of $\mathbf{X̂}_{k-1}(k)$. The symbol $\mathbf{0}$ is a vector of zeros. The number of sigma points is $n_σ = 2 n_\mathbf{x̂} + 1$. The matrices $\sqrt{\mathbf{P̂}_{k-1}(k)}$ and $\sqrt{\mathbf{P̂}_{k}(k)}$ are the lower-triangular factors of `cholesky` results. The correction and prediction step equations are provided below. The correction step is skipped if `estim.direct == true`, since it is already done by the user.
Correction Step
\[\begin{aligned} \mathbf{X̂}_{k-1}(k) &= \bigg[\begin{matrix} \mathbf{x̂}_{k-1}(k) & \mathbf{x̂}_{k-1}(k) & \cdots & \mathbf{x̂}_{k-1}(k) \end{matrix}\bigg] + \bigg[\begin{matrix} \mathbf{0} & γ \sqrt{\mathbf{P̂}_{k-1}(k)} & -γ \sqrt{\mathbf{P̂}_{k-1}(k)} \end{matrix}\bigg] \\ \mathbf{Ŷ^m}(k) &= \bigg[\begin{matrix} \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{1}(k) \Big) & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{2}(k) \Big) & \cdots & \mathbf{ĥ^m}\Big( \mathbf{X̂}_{k-1}^{n_σ}(k) \Big) \end{matrix}\bigg] \\ \mathbf{ŷ^m}(k) &= \mathbf{Ŷ^m}(k) \mathbf{m̂} \\ \mathbf{X̄}_{k-1}(k) &= \begin{bmatrix} \mathbf{X̂}_{k-1}^{1}(k) - \mathbf{x̂}_{k-1}(k) & \mathbf{X̂}_{k-1}^{2}(k) - \mathbf{x̂}_{k-1}(k) & \cdots & \mathbf{X̂}_{k-1}^{n_σ}(k) - \mathbf{x̂}_{k-1}(k) \end{bmatrix} \\ \mathbf{Ȳ^m}(k) &= \begin{bmatrix} \mathbf{Ŷ^m}^{1}(k) - \mathbf{ŷ^m}(k) & \mathbf{Ŷ^m}^{2}(k) - \mathbf{ŷ^m}(k) & \cdots & \mathbf{Ŷ^m}^{n_σ}(k) - \mathbf{ŷ^m}(k) \end{bmatrix} \\ \mathbf{M̂}(k) &= \mathbf{Ȳ^m}(k) \mathbf{Ŝ} \mathbf{Ȳ^m}'(k) + \mathbf{R̂} \\ \mathbf{K̂}(k) &= \mathbf{X̄}_{k-1}(k) \mathbf{Ŝ} \mathbf{Ȳ^m}'(k) \mathbf{M̂^{-1}}(k) \\ \mathbf{x̂}_k(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k) \big[ \mathbf{y^m}(k) - \mathbf{ŷ^m}(k) \big] \\ \mathbf{P̂}_k(k) &= \mathbf{P̂}_{k-1}(k) - \mathbf{K̂}(k) \mathbf{M̂}(k) \mathbf{K̂}'(k) \\ \end{aligned} \]
Prediction Step
\[\begin{aligned} \mathbf{X̂}_k(k) &= \bigg[\begin{matrix} \mathbf{x̂}_{k}(k) & \mathbf{x̂}_{k}(k) & \cdots & \mathbf{x̂}_{k}(k) \end{matrix}\bigg] + \bigg[\begin{matrix} \mathbf{0} & \gamma \sqrt{\mathbf{P̂}_{k}(k)} & - \gamma \sqrt{\mathbf{P̂}_{k}(k)} \end{matrix}\bigg] \\ \mathbf{X̂}_{k}(k+1) &= \bigg[\begin{matrix} \mathbf{f̂}\Big( \mathbf{X̂}_{k}^{1}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) & \mathbf{f̂}\Big( \mathbf{X̂}_{k}^{2}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) & \cdots & \mathbf{f̂}\Big( \mathbf{X̂}_{k}^{n_σ}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \end{matrix}\bigg] \\ \mathbf{x̂}_{k}(k+1) &= \mathbf{X̂}_{k}(k+1)\mathbf{m̂} \\ \mathbf{X̄}_k(k+1) &= \begin{bmatrix} \mathbf{X̂}_{k}^{1}(k+1) - \mathbf{x̂}_{k}(k+1) & \mathbf{X̂}_{k}^{2}(k+1) - \mathbf{x̂}_{k}(k+1) & \cdots &\, \mathbf{X̂}_{k}^{n_σ}(k+1) - \mathbf{x̂}_{k}(k+1) \end{bmatrix} \\ \mathbf{P̂}_k(k+1) &= \mathbf{X̄}_k(k+1) \mathbf{Ŝ} \mathbf{X̄}_k'(k+1) + \mathbf{Q̂} \end{aligned}\]
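Generating the sigma-point matrix used in both steps is the core operation; a sketch under the conventions above (assumed names and values, not the package internals):

```julia
using LinearAlgebra

nx̂ = 2
x̂  = [0.0, 1.0]
P̂  = [1.0 0.1; 0.1 1.0]
α, β, κ = 1e-3, 2.0, 0.0
γ  = α*sqrt(nx̂ + κ)

L = cholesky(P̂).L               # lower-triangular factor √P̂
X̂ = [x̂ (x̂ .+ γ*L) (x̂ .- γ*L)]   # nσ = 2nx̂ + 1 columns of sigma points
```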
update_estimate!(estim::ExtendedKalmanFilter, y0m, d0, u0)
Update the `ExtendedKalmanFilter` state `estim.x̂0` and covariance `estim.P̂`.
The equations are similar to `update_estimate!(::KalmanFilter)` but with the substitutions $\mathbf{Ĉ^m = Ĥ^m}(k)$ and $\mathbf{Â = F̂}(k)$, the Jacobians of the augmented process model:
\[\begin{aligned} \mathbf{Ĥ}(k) &= \left. \frac{∂\mathbf{ĥ}(\mathbf{x̂}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x̂ = x̂}_{k-1}(k),\, \mathbf{d = d}(k)} \\ \mathbf{F̂}(k) &= \left. \frac{∂\mathbf{f̂}(\mathbf{x̂}, \mathbf{u}, \mathbf{d})}{∂\mathbf{x̂}} \right|_{\mathbf{x̂ = x̂}_{k}(k), \, \mathbf{u = u}(k),\, \mathbf{d = d}(k)} \end{aligned}\]
The matrix $\mathbf{Ĥ^m}$ comprises the rows of $\mathbf{Ĥ}$ that correspond to measured outputs. The function `ForwardDiff.jacobian` automatically computes them. The correction and prediction step equations are provided below. The correction step is skipped if `estim.direct == true`, since it is already done by the user.
Correction Step
\[\begin{aligned} \mathbf{Ŝ}(k) &= \mathbf{Ĥ^m}(k)\mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k) + \mathbf{R̂} \\ \mathbf{K̂}(k) &= \mathbf{P̂}_{k-1}(k)\mathbf{Ĥ^m}'(k)\mathbf{Ŝ^{-1}}(k) \\ \mathbf{ŷ^m}(k) &= \mathbf{ĥ^m}\Big( \mathbf{x̂}_{k-1}(k), \mathbf{d}(k) \Big) \\ \mathbf{x̂}_{k}(k) &= \mathbf{x̂}_{k-1}(k) + \mathbf{K̂}(k)[\mathbf{y^m}(k) - \mathbf{ŷ^m}(k)] \\ \mathbf{P̂}_{k}(k) &= [\mathbf{I - K̂}(k)\mathbf{Ĥ^m}(k)]\mathbf{P̂}_{k-1}(k) \end{aligned}\]
Prediction Step
\[\begin{aligned} \mathbf{x̂}_{k}(k+1) &= \mathbf{f̂}\Big( \mathbf{x̂}_{k}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{P̂}_{k}(k+1) &= \mathbf{F̂}(k)\mathbf{P̂}_{k}(k)\mathbf{F̂}'(k) + \mathbf{Q̂} \end{aligned}\]
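As noted above, the Jacobians come from `ForwardDiff.jacobian`. A sketch with a toy $\mathbf{f̂}$ (the function and values are illustrative, not the package's augmented model):

```julia
using ForwardDiff

# Toy nonlinear state function f̂(x̂, u, d):
f̂(x̂, u, d) = [x̂[1] + 0.1*(u[1] - x̂[1]^2), 0.95*x̂[2] + 0.1*d[1]]
x̂, u, d = [1.0, 0.5], [0.2], [0.0]

F̂ = ForwardDiff.jacobian(x -> f̂(x, u, d), x̂)   # ∂f̂/∂x̂ at the current estimate
```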
update_estimate!(estim::Luenberger, y0m, d0, u0)
Same as `update_estimate!(::SteadyKalmanFilter)` but using `Luenberger`.
update_estimate!(estim::MovingHorizonEstimator, y0m, d0, u0)
Update the `MovingHorizonEstimator` state `estim.x̂0`.
The optimization problem of the `MovingHorizonEstimator` documentation is solved at each discrete time $k$. The prediction matrices are provided in the `init_predmat_mhe` documentation. Once solved, the optimal estimate $\mathbf{x̂}_k(k+p)$ is computed by inserting the optimal values of $\mathbf{x̂}_k(k-N_k+p)$ and $\mathbf{Ŵ}$ in the augmented model from $j = N_k-1$ to $0$, inclusively. Afterward, if $N_k = H_e$, the arrival covariance for the next time step, $\mathbf{P̂}_{k-N_k}(k-N_k+1)$, is estimated with the `estim.covestim` object.
update_estimate!(estim::InternalModel, _ , d0, u0)
Update `estim.x̂0`/`x̂d`/`x̂s` with the current inputs `u0` and measured disturbances `d0` (the measured outputs are only needed by `correct_estimate!`, hence the unused argument).
The `InternalModel` updates the deterministic `x̂d` and stochastic `x̂s` estimates with:
\[\begin{aligned} \mathbf{x̂_d}(k+1) &= \mathbf{f}\Big( \mathbf{x̂_d}(k), \mathbf{u}(k), \mathbf{d}(k) \Big) \\ \mathbf{x̂_s}(k+1) &= \mathbf{Â_s x̂_s}(k) + \mathbf{B̂_s ŷ_s}(k) \end{aligned}\]
This estimator does not augment the state vector, thus $\mathbf{x̂ = x̂_d}$. See `init_internalmodel` for details.
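Both updates written out for a toy model (a sketch; names and values are illustrative):

```julia
f(x, u, d) = [0.9x[1] + 0.1u[1] + 0.05d[1]]   # toy deterministic model
x̂d, x̂s = [0.0], [0.0]
Âs, B̂s = [0.5;;], [1.0;;]
u, d, ŷs = [1.0], [0.0], [0.2]

x̂d = f(x̂d, u, d)        # deterministic update
x̂s = Âs*x̂s + B̂s*ŷs    # stochastic update
```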
- [1] Desbiens, A., D. Hodouin & É. Plamondon. 2000, "Global predictive control: a unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics", IEE Proceedings - Control Theory and Applications, vol. 147, no. 4, p. 465–475, https://doi.org/10.1049/ip-cta:20000443, ISSN 1350-2379.
- [2] "Kalman Filter", Wikipedia: The Free Encyclopedia, https://en.wikipedia.org/wiki/Kalman_filter, accessed 2024-08-08.
- [3] Simon, D. 2006, "Chapter 14: The unscented Kalman filter", in "Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches", John Wiley & Sons, p. 433–459, https://doi.org/10.1002/0470045345.ch14, ISBN 9780470045343.