For robust analysis, see RobustAndOptimalControl.jl.

Analysis

ControlSystemsBase.damp - Method
Wn, zeta, ps = damp(sys)

Compute the natural frequencies, Wn, and damping ratios, zeta, of the poles, ps, of sys.
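
Example (a minimal sketch with an arbitrary second-order system):

using ControlSystemsBase
sys = tf(1, [1, 0.4, 1])   # ωₙ = 1, ζ = 0.2
Wn, zeta, ps = damp(sys)   # Wn ≈ [1, 1], zeta ≈ [0.2, 0.2]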

source
ControlSystemsBase.dcgain - Function
dcgain(sys, ϵ=0)

Compute the dcgain of system sys.

The dcgain is equal to G(0) for continuous-time systems and G(1) for discrete-time systems.

ϵ can be provided to evaluate the dcgain with a small perturbation into the stability region of the complex plane.
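
Example (a minimal sketch; the systems are arbitrary):

using ControlSystemsBase
dcgain(tf(2, [1, 3]))         # ≈ 0.667, i.e., G(0) = 2/3
dcgain(tf(1, [1, 0]), 1e-6)   # integrator: evaluate slightly inside the stability region instead of at s = 0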

source
ControlSystemsBase.delaymargin - Method
dₘ = delaymargin(G::LTISystem)

Return the delay margin, dₘ. For discrete-time systems, the delay margin is normalized by the sample time, i.e., the value represents the margin in number of sample times. Only supports SISO systems.
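
Example (a sketch with an assumed loop-transfer function):

using ControlSystemsBase
L = tf(1, [1, 1, 0])   # loop-transfer function 1/(s(s+1))
dₘ = delaymargin(L)    # additional loop delay (in seconds) tolerated before instability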

source
ControlSystemsBase.gangoffour - Method
S, PS, CS, T = gangoffour(P, C; minimal=true)
gangoffour(P::AbstractVector, C::AbstractVector; minimal=true)

Given a transfer function describing the plant P and a transfer function describing the controller C, computes the four transfer functions in the Gang-of-Four.

  • S = 1/(1+PC) Sensitivity function
  • PS = (1+PC)\P Load disturbance to measurement signal
  • CS = (1+PC)\C Measurement noise to control signal
  • T = PC/(1+PC) Complementary sensitivity function

If minimal=true, minreal will be applied to all transfer functions.
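
Example (a sketch with an assumed first-order plant and PI controller):

using ControlSystemsBase
P = tf(1, [1, 1])
C = tf([1, 1], [1, 0])            # PI controller (s + 1)/s
S, PS, CS, T = gangoffour(P, C)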

source
ControlSystemsBase.gangofseven - Method
S, PS, CS, T, RY, RU, RE = gangofseven(P,C,F)

Given transfer functions describing the plant P, the controller C, and a feedforward block F, computes the four transfer functions in the Gang-of-Four and the transfer functions corresponding to the feedforward.

  • S = 1/(1+PC) Sensitivity function
  • PS = P/(1+PC)
  • CS = C/(1+PC)
  • T = PC/(1+PC) Complementary sensitivity function
  • RY = PCF/(1+PC)
  • RU = CF/(1+PC)
  • RE = F/(1+PC)
source
ControlSystemsBase.markovparam - Method
markovparam(sys, n)

Compute the nth Markov parameter of the discrete-time state-space system sys, defined as follows:

h(0) = D

h(n) = C*A^(n-1)*B
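
Example (a sketch using a random discrete-time test system):

using ControlSystemsBase
sys = ssrand(1, 1, 3, Ts = 1)                 # random discrete-time system
markovparam(sys, 0) == sys.D                  # h(0) = D
markovparam(sys, 2) ≈ sys.C * sys.A * sys.B   # h(2) = C*A^1*B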

source
ControlSystemsBase.reduce_sys - Method
reduce_sys(A::AbstractMatrix, B::AbstractMatrix, C::AbstractMatrix, D::AbstractMatrix, meps::AbstractFloat)

Implements REDUCE in the Emami-Naeini & Van Dooren paper. Returns transformed A, B, C, D matrices. These are empty if there are no zeros.

source
ControlSystemsBase.relative_gain_array - Method
relative_gain_array(A::AbstractMatrix; tol = 1.0e-15)

Reference: "On the Relative Gain Array (RGA) with Singular and Rectangular Matrices" Jeffrey Uhlmann https://arxiv.org/pdf/1805.10312.pdf

source
ControlSystemsBase.relative_gain_array - Method
relative_gain_array(G, w::AbstractVector)
relative_gain_array(G, w::Number)

Calculate the relative gain array of G at the frequencies w, defined as G(iω) .* pinv(transpose(G(iω))).

The RGA can be used to find input-output pairings for MIMO control using individually tuned loops. Pair the inputs and outputs such that the RGA(ωc) at the crossover frequency becomes as close to diagonal as possible. Avoid pairings such that RGA(0) contains negative diagonal elements.

  • The sum of the absolute values of the entries in the RGA is a good measure of the "true condition number" of G, the best condition number that can be achieved by input/output scaling of G, -Glad, Ljung.
  • The RGA is invariant to input/output scaling of G.
  • If the RGA contains large entries, the system may be sensitive to model errors, -Skogestad, "Multivariable Feedback Control: Analysis and Design":
    • Uncertainty in the input channels (diagonal input uncertainty). Plants with large RGA-elements around the crossover frequency are fundamentally difficult to control because of sensitivity to input uncertainty (e.g., caused by uncertain or neglected actuator dynamics). In particular, decouplers or other inverse-based controllers should not be used for plants with large RGA-elements.
    • Element uncertainty. Large RGA-elements imply sensitivity to element-by-element uncertainty. However, this kind of uncertainty may not occur in practice due to physical couplings between the transfer function elements. Therefore, diagonal input uncertainty (which is always present) is usually of more concern for plants with large RGA-elements.

The relative gain array is computed using the unit-consistent (UC) generalized inverse. Reference: "On the Relative Gain Array (RGA) with Singular and Rectangular Matrices", Jeffrey Uhlmann, https://arxiv.org/pdf/1805.10312.pdf
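
Example (a sketch using a random test system):

using ControlSystemsBase
G = ssrand(2, 2, 4)
relative_gain_array(G, 0.0)            # RGA at ω = 0
relative_gain_array(G, [0.1, 1, 10])   # RGA at several frequencies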

source
ControlSystemsBase.tzeros - Method
tzeros(sys)

Compute the invariant zeros of the system sys. If sys is a minimal realization, these are also the transmission zeros.
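
Example (a minimal sketch):

using ControlSystemsBase
sys = ss(tf([1, -1], [1, 3, 2]))   # (s - 1)/((s + 1)(s + 2))
tzeros(sys)                        # ≈ [1.0]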

source
ControlSystemsBase.zpkdata - Method
z, p, k = zpkdata(sys)

Compute the zeros, poles, and gains of system sys.

Returns

  • z : Matrix{Vector{ComplexF64}}, (ny × nu)
  • p : Matrix{Vector{ComplexF64}}, (ny × nu)
  • k : Matrix{Float64}, (ny × nu)
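
Example (a minimal sketch):

using ControlSystemsBase
G = tf([1, 2], [1, 3, 2])   # (s + 2)/((s + 1)(s + 2))
z, p, k = zpkdata(G)
z[1, 1]                     # zeros of the (1,1) element, ≈ [-2.0]
p[1, 1]                     # poles, ≈ -1 and -2 (no pole-zero cancellation is performed)
k[1, 1]                     # 1.0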
source
ControlSystemsBase.are - Method
are(::Continuous, A, B, Q, R)

Compute 'X', the solution to the continuous-time algebraic Riccati equation, defined as A'X + XA - (XB)R^-1(B'X) + Q = 0, where R is non-singular.

In an LQR problem, Q is associated with the state penalty $x'Qx$ while R is associated with the control penalty $u'Ru$. See lqr for more details.

Uses MatrixEquations.arec. For keyword arguments, see the docstring of ControlSystemsBase.MatrixEquations.arec, note that they define the input arguments in a different order.
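
Example (a sketch for a double integrator; Continuous is passed as the first argument, analogously to lqr):

using ControlSystemsBase, LinearAlgebra
A = [0.0 1; 0 0]
B = reshape([0.0, 1.0], 2, 1)
Q = Matrix(1.0I, 2, 2)
R = Matrix(1.0I, 1, 1)
X = are(Continuous, A, B, Q, R)
norm(A'X + X*A - X*B*(R \ (B'X)) + Q)   # residual, ≈ 0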

source
ControlSystemsBase.are - Method
are(::Discrete, A, B, Q, R; kwargs...)

Compute X, the solution to the discrete-time algebraic Riccati equation, defined as A'XA - X - (A'XB)(B'XB + R)^-1(B'XA) + Q = 0, where Q>=0 and R>0

In an LQR problem, Q is associated with the state penalty $x'Qx$ while R is associated with the control penalty $u'Ru$. See lqr for more details.

Uses MatrixEquations.ared. For keyword arguments, see the docstring of ControlSystemsBase.MatrixEquations.ared, note that they define the input arguments in a different order.

source
ControlSystemsBase.balance - Function
S, P, B = balance(A[, perm=true])

Compute a similarity transform T = S*P resulting in B = T\A*T such that the row and column norms of B are approximately equivalent. If perm=false, the transformation will only scale A using diagonal S, and not permute A (i.e., set P=I).

source
ControlSystemsBase.balreal - Method

sysr, G, T = balreal(sys::StateSpace)

Calculates a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal, diagonal, and given by diagm(G). T is the similarity transform between the old state x and the new state z such that z = Tx.

See also gram, baltrunc.

Reference: Varga A., Balancing-free square-root algorithm for computing singular perturbation approximations.
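
Example (a sketch using a random stable test system):

using ControlSystemsBase, LinearAlgebra
sys = ssrand(1, 1, 4)
sysr, G, T = balreal(sys)
gram(sysr, :c) ≈ diagm(G)   # the controllability grammian is diagonal
gram(sysr, :o) ≈ diagm(G)   # and equal to the observability grammian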

source
ControlSystemsBase.baltrunc - Method
sysr, G, T = baltrunc(sys::StateSpace; atol = √ϵ, rtol=1e-3, n = nothing, residual = false)

Reduces the state dimension by calculating a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal, diagonal, and given by diagm(G), and truncating it to order n. If n is not provided, it is chosen such that all states corresponding to singular values less than atol and less than rtol*σmax are removed.

T is the projection matrix between the old state x and the new state z such that z = Tx. T will in general be a non-square matrix.

If residual = true, matched static gain is achieved through "residualization", i.e., setting

\[0 = A_{21}x_{1} + A_{22}x_{2} + B_{2}u\]

where indices 1/2 correspond to the remaining/truncated states respectively.

See also gram, balreal

Glad, Ljung, Reglerteori: Flervariabla och Olinjära metoder.

For more advanced model reduction, see RobustAndOptimalControl.jl - Model Reduction.

Extended help

Note: Gramian computations are sensitive to input-output scaling. For the result of a numerical balancing, gramian computation or truncation of MIMO systems to be meaningful, the inputs and outputs of the system must thus be scaled in a meaningful way. A common (but not the only) approach is:

  • The outputs are scaled such that the maximum allowed control error, the maximum expected reference variation, or the maximum expected variation, is unity.
  • The input variables are scaled to have magnitude one. This is done by dividing each variable by its maximum expected or allowed change, i.e., $u_{scaled} = u / u_{max}$

Without such scaling, the result of balancing will depend on the units used to measure the input and output signals, e.g., a change of unit for one output from meter to millimeter will make this output 1000x more important.
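
Example (a sketch using a random stable test system):

using ControlSystemsBase
sys = ssrand(1, 1, 10)
sysr, G, T = baltrunc(sys, n = 4)                     # keep the 4 dominant states
sysres, _, _ = baltrunc(sys, n = 4, residual = true)  # residualization
dcgain(sysres) ≈ dcgain(sys)                          # static gain is matched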

source
ControlSystemsBase.controllability - Method
controllability(A, B; atol, rtol)
controllability(sys; atol, rtol)

Check for controllability of the pair (A, B) or sys using the PBH test.

The return value contains the field iscontrollable which is true if the rank condition is met at all eigenvalues of A, and false otherwise. The returned structure also contains the rank and smallest singular value at each individual eigenvalue of A in the fields ranks and sigma_min.

Technically, this function checks for controllability from the origin, also called reachability.
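
Example (a sketch with a system whose second state is unaffected by the input):

using ControlSystemsBase
A = [-1.0 0; 0 -2]
B = reshape([1.0, 0.0], 2, 1)
c = controllability(A, B)
c.iscontrollable   # false
c.sigma_min        # smallest singular value at each eigenvalue of A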

source
ControlSystemsBase.covar - Method
P = covar(sys, W)

Calculate the stationary covariance P = E[y(t)y(t)'] of the output y of a StateSpace model sys driven by white Gaussian noise w with covariance E[w(t)w(τ)]=W*δ(t-τ) (δ is the Dirac delta).

Remark: If sys is unstable then the resulting covariance is a matrix of Infs. Entries corresponding to direct feedthrough (DWD' .!= 0) will equal Inf for continuous-time systems.

source
ControlSystemsBase.ctrb - Method
ctrb(A, B)
ctrb(sys)

Compute the controllability matrix for the system described by (A, B) or sys.

Note that checking for controllability by computing the rank of the matrix returned by ctrb is not the most numerically accurate method; a better approach is to check whether gram(sys, :c) is positive definite, or to call the function controllability.
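
Example (a minimal sketch for a double integrator):

using ControlSystemsBase, LinearAlgebra
A = [0.0 1; 0 0]
B = reshape([0.0, 1.0], 2, 1)
rank(ctrb(A, B))   # 2, i.e., full rank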

source
ControlSystemsBase.gram - Method
gram(sys, opt; kwargs...)

Compute the grammian of system sys. If opt is :c, computes the controllability grammian. If opt is :o, computes the observability grammian.

See also grampd. For keyword arguments, see grampd.

Extended help

Note: Gramian computations are sensitive to input-output scaling. For the result of a numerical balancing, gramian computation or truncation of MIMO systems to be meaningful, the inputs and outputs of the system must thus be scaled in a meaningful way. A common (but not the only) approach is:

  • The outputs are scaled such that the maximum allowed control error, the maximum expected reference variation, or the maximum expected variation, is unity.
  • The input variables are scaled to have magnitude one. This is done by dividing each variable by its maximum expected or allowed change, i.e., $u_{scaled} = u / u_{max}$

Without such scaling, the result of balancing will depend on the units used to measure the input and output signals, e.g., a change of unit for one output from meter to millimeter will make this output 1000x more important.

source
ControlSystemsBase.grampd - Method
U = grampd(sys, opt; kwargs...)

Return a Cholesky factor U of the grammian of system sys. If opt is :c, computes the controllability grammian G = U*U'. If opt is :o, computes the observability grammian G = U'U.

Obtain a Cholesky object via Cholesky(U) for the observability grammian.

Uses MatrixEquations.plyapc/plyapd. For keyword arguments, see the docstring of ControlSystemsBase.MatrixEquations.plyapc/plyapd
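
Example (a sketch using a random stable test system):

using ControlSystemsBase
sys = ssrand(1, 1, 3)
U = grampd(sys, :c)
U * U' ≈ gram(sys, :c)   # U is a Cholesky factor of the controllability grammian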

source
ControlSystemsBase.hinfnorm - Method
Ninf, ω_peak = hinfnorm(sys; tol=1e-6)

Compute the H∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.

Ninf := sup_ω σ_max[sys(iω)] if sys is stable (σ_max denotes the largest singular value), and Ninf := Inf if sys is unstable.

tol is an optional keyword argument for the desired relative accuracy for the computed H∞ norm (not an absolute certificate).

sys is first converted to a state space model if needed.

The continuous-time L∞ norm computation implements the 'two-step algorithm' in:
N.A. Bruinsma and M. Steinbuch, 'A fast algorithm to compute the H∞-norm of a transfer function matrix', Systems and Control Letters (1990), pp. 287-293.

For the discrete-time version, see:
P. Bongers, O. Bosgra, M. Steinbuch, 'L∞-norm calculation for generalized state space systems in continuous and discrete time', American Control Conference, 1991.

See also linfnorm.
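
Example (a sketch with a lightly damped second-order system):

using ControlSystemsBase
G = tf(1, [1, 0.2, 1])       # ζ = 0.1
Ninf, ω_peak = hinfnorm(G)   # Ninf ≈ 5.0 at ω_peak ≈ 0.99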

source
ControlSystemsBase.innovation_form - Method
sysi = innovation_form(sys, R1, R2[, R12])
sysi = innovation_form(sys; sysw=I, syse=I, R1=I, R2=I)

Takes a system

x' = Ax + Bu + w ~ R1
y  = Cx + Du + e ~ R2

and returns the system

x' = Ax + Kv
y  = Cx + v

where v is the innovation sequence.

If sysw (syse) is given, the covariance resulting from filtering noise with covariance R1 (R2) through sysw (syse) is used as the covariance.

See Stochastic Control, Chapter 4, Åström

source
ControlSystemsBase.innovation_form - Method
sysi = innovation_form(sys, K)

Takes a system

x' = Ax + Bu + Kv
y  = Cx + Du + v

and returns the system

x' = Ax + Kv
y  = Cx + v

where v is the innovation sequence.

See Stochastic Control, Chapter 4, Åström

source
ControlSystemsBase.linfnorm - Method
Ninf, ω_peak = linfnorm(sys; tol=1e-6)

Compute the L∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.

Ninf := sup_ω σ_max[sys(iω)] (σ_max denotes the largest singular value)

tol is an optional keyword argument representing the desired relative accuracy for the computed L∞ norm (this is not an absolute certificate however).

sys is first converted to a state space model if needed.

The continuous-time L∞ norm computation implements the 'two-step algorithm' in:
N.A. Bruinsma and M. Steinbuch, 'A fast algorithm to compute the H∞-norm of a transfer function matrix', Systems and Control Letters (1990), pp. 287-293.

For the discrete-time version, see:
P. Bongers, O. Bosgra, M. Steinbuch, 'L∞-norm calculation for generalized state space systems in continuous and discrete time', American Control Conference, 1991.

See also hinfnorm.
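
Example (a sketch contrasting linfnorm with hinfnorm for an unstable system):

using ControlSystemsBase
G = tf(1, [1, -1])   # unstable pole at s = 1
hinfnorm(G)[1]       # Inf, since G is unstable
linfnorm(G)[1]       # ≈ 1, the peak of |G(iω)| along the imaginary axis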

source
ControlSystemsBase.observability - Method
observability(A, C; atol, rtol)

Check for observability of the pair (A, C) or sys using the PBH test.

The return value contains the field isobservable which is true if the rank condition is met at all eigenvalues of A, and false otherwise. The returned structure also contains the rank and smallest singular value at each individual eigenvalue of A in the fields ranks and sigma_min.
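
Example (a minimal sketch):

using ControlSystemsBase
A = [1.0 1; 0 1]
C = [1.0 0]
o = observability(A, C)
o.isobservable   # true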

source
ControlSystemsBase.observer_controller - Method
cont = observer_controller(sys, L::AbstractMatrix, K::AbstractMatrix; direct=false)

If direct = false

Return the observer_controller cont that is given by ss(A - B*L - K*C + K*D*L, K, L, 0) such that feedback(sys, cont) produces a closed-loop system with eigenvalues given by A-KC and A-BL.

This controller does not have a direct term, and corresponds to state feedback operating on state estimated by observer_predictor. Use this form if the computed control signal is applied at the next sampling instant, or with an otherwise large delay in relation to the measurement fed into the controller.

Ref: "Computer-Controlled Systems" Eq 4.37

If direct = true

Return the observer controller cont that is given by ss((I-KC)(A-BL), (I-KC)(A-BL)K, L, LK) such that feedback(sys, cont) produces a closed-loop system with eigenvalues given by A-BL and A-BL-KC. This controller has a direct term, and corresponds to state feedback operating on state estimated by observer_filter. Use this form if the computed control signal is applied immediately after receiving a measurement. This version typically has better performance than the one without a direct term.

Note

To use this formulation, the observer gain K should have been designed for the pair (A, CA) rather than (A, C). To do this, pass direct = true when calling place or kalman.

Ref: "Computer-Controlled Systems" pp. 140 and 162, Problem 4.7.

Arguments:

  • sys: Model of system
  • L: State-feedback gain u = -Lx
  • K: Observer gain

See also observer_predictor and innovation_form.
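
Example (a sketch with an assumed second-order plant; the LQR and Kalman weights below are arbitrary):

using ControlSystemsBase, LinearAlgebra
P = ss(tf(1, [1, 1, 1]))
L = lqr(P, Matrix(1.0I, P.nx, P.nx), Matrix(1.0I, 1, 1))      # state-feedback gain
K = kalman(P, Matrix(1.0I, P.nx, P.nx), Matrix(1.0I, 1, 1))   # observer gain
cont = observer_controller(P, L, K)
Gcl = feedback(P, cont)                                       # closed-loop system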

source
ControlSystemsBase.observer_filter - Method
observer_filter(sys, K; output_state = false)

Return the observer filter

\[\begin{aligned} x̂(k|k) &= (I - KC)Ax̂(k-1|k-1) + (I - KC)Bu(k-1) + Ky(k) \\ \end{aligned}\]

with the input equation [(I - KC)B K] * [u(k-1); y(k)].

Note the time indices in the equations, the filter assumes that the user passes the current $y(k)$, but the past $u(k-1)$, that is, this filter is used to estimate the state before the current control input has been applied. This causes a state-feedback controller acting on the estimate produced by this observer to have a direct term.

This is similar to observer_predictor, but in contrast to the predictor, the filter output depends on the current measurement, whereas the predictor output only depends on past measurements.

The observer filter is equivalent to the observer_predictor for continuous-time systems.

Note

To use this formulation, the observer gain K should have been designed for the pair (A, CA) rather than (A, C). To do this, pass direct = true when calling place or kalman.

Ref: "Computer-Controlled Systems" Eq 4.32

source
ControlSystemsBase.observer_predictor - Method
observer_predictor(sys::AbstractStateSpace, K; h::Int = 1, output_state = false)
observer_predictor(sys::AbstractStateSpace, R1, R2[, R12]; output_state = false)

If sys is continuous, return the observer predictor system

\[\begin{aligned} x̂' &= (A - KC)x̂ + (B-KD)u + Ky \\ ŷ &= Cx̂ + Du \end{aligned}\]

with the input equation [B-KD K] * [u; y]

If sys is discrete, the prediction horizon h may be specified, in which case measurements up to and including time t-h and inputs up to and including time t are used to predict y(t).

If covariance matrices R1, R2 are given, the Kalman gain K is calculated using kalman.

If output_state is true, the output is the state estimate instead of the output estimate.

See also innovation_form, observer_controller and observer_filter.

source
ControlSystemsBase.obsv - Function
obsv(A, C, n=size(A,1))
obsv(sys, n=sys.nx)

Compute the observability matrix with n rows for the system described by (A, C) or sys. Providing the optional n > sys.nx returns an extended observability matrix.

Note that checking for observability by computing the rank of the matrix returned by obsv is not the most numerically accurate method; a better approach is to check whether gram(sys, :o) is positive definite, or to call the function observability.

source
ControlSystemsBase.plyap - Method
Xc = plyap(sys::AbstractStateSpace, Ql; kwargs...)

Lyapunov solver that takes the lower-triangular Cholesky factor Ql of Q and returns a triangular matrix Xc such that Xc*Xc' = X.

source
ControlSystemsBase.similarity_transform - Method
syst = similarity_transform(sys, T; unitary=false)

Perform a similarity transform T : Tx̃ = x on sys such that

à = T⁻¹AT
B̃ = T⁻¹ B
C̃ = CT
D̃ = D

If unitary=true, T is assumed unitary and the matrix adjoint is used instead of the inverse. See also balance_statespace.
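
Example (a sketch using a random test system and a random invertible T):

using ControlSystemsBase
sys = ssrand(1, 1, 3)
T = randn(3, 3)
syst = similarity_transform(sys, T)
evalfr(syst, 1im) ≈ evalfr(sys, 1im)   # the input-output behavior is unchanged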

source
ControlSystemsBase.time_scale - Method
time_scale(sys::AbstractStateSpace{Continuous}, a; balanced = false)
time_scale(G::TransferFunction{Continuous},     a; balanced = true)

Rescale the time axis (change time unit) of sys.

For systems where the dominant time constants are very far from 1, e.g., in electronics, rescaling the time axis may be beneficial for numerical performance, in particular for continuous-time simulations.

Scaling of time for a function $f(t)$ with Laplace transform $F(s)$ can be stated as

\[f(at) \leftrightarrow \dfrac{1}{a} F\big(\dfrac{s}{a}\big)\]

The keyword argument balanced indicates whether or not to apply a balanced scaling on the B and C matrices. For statespace systems, this defaults to false since it changes the state representation; only B will be scaled. For transfer functions, it defaults to true.

Example:

The following example shows how a system with a time constant on the order of one micro-second is rescaled such that the time constant becomes 1, i.e., the time unit is changed from seconds to micro-seconds.

Gs  = tf(1, [1e-6, 1])     # micro-second time scale modeled in seconds
Gms = time_scale(Gs, 1e-6) # Change to micro-second time scale
Gms == tf(1, [1, 1])       # Gms now has micro-seconds as time unit

The next example illustrates how the time axis of a time-domain simulation changes under time scaling:

t = 0:0.1:50 # original time axis
a = 10       # Scaling factor
sys1 = ssrand(1,1,5)
res1 = step(sys1, t)      # Perform original simulation
sys2 = time_scale(sys1, a) # Scale time
res2 = step(sys2, t ./ a) # Simulate on scaled time axis, note the `1/a`
isapprox(res1.y, res2.y, rtol=1e-3, atol=1e-3)
source
LinearAlgebra.lyap - Method
lyap(A, Q; kwargs...)

Compute the solution X to the discrete Lyapunov equation AXA' - X + Q = 0.

Uses MatrixEquations.lyapc / MatrixEquations.lyapd. For keyword arguments, see the docstring of ControlSystemsBase.MatrixEquations.lyapc / ControlSystemsBase.MatrixEquations.lyapd

source
LinearAlgebra.norm - Function
norm(sys, p=2; tol=1e-6)

norm(sys) or norm(sys,2) computes the H2 norm of the LTI system sys.

norm(sys, Inf) computes the H∞ norm of the LTI system sys. The H∞ norm is the same as the L∞ for stable systems, and Inf for unstable systems. If the peak gain frequency is required as well, use the function hinfnorm instead. See hinfnorm for further documentation.

tol is an optional keyword argument, used only for the computation of L∞ norms. It represents the desired relative accuracy for the computed L∞ norm (this is not an absolute certificate however).

sys is first converted to a StateSpace model if needed.
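
Example (a minimal sketch with a first-order lag):

using ControlSystemsBase, LinearAlgebra
G = tf(1, [1, 1])
norm(G)        # H2 norm, ≈ 1/√2
norm(G, Inf)   # H∞ norm, 1 for this system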

source
ControlSystemsBase.balance_statespace - Function
A, B, C, T = balance_statespace{S}(A::Matrix{S}, B::Matrix{S}, C::Matrix{S}, perm::Bool=false)
sys, T = balance_statespace(sys::StateSpace, perm::Bool=false)

Computes a balancing transformation T that attempts to scale the system so that the row and column norms of [TA/T TB; C/T 0] are approximately equal. If perm=true, the states in A are allowed to be reordered.

The inverse of the transformation given by sysb, T = balance_statespace(sys) is obtained with similarity_transform(sysb, T).

This is not the same as finding a balanced realization with equal and diagonal observability and reachability gramians; see balreal for that.
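
Example (a sketch using a random test system):

using ControlSystemsBase
sys = ssrand(2, 2, 4)
sysb, T = balance_statespace(sys)
similarity_transform(sysb, T)   # recovers a realization equivalent to sys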

source

Videos

Basic usage of robustness analysis with JuliaControl