Lecture notes in numerical linear algebra
Numerical methods for Lyapunov equations

Methods for Lyapunov equations

This chapter is about numerical methods for a particular type of equation expressed as a matrix equality.

Definition 4.0.1. Consider two square matrices A, W ∈ R^{n×n}. The problem to find a square matrix X ∈ R^{n×n} such that

    AX + XA^T = W                                                        (4.1)

is called the Lyapunov equation.

The Lyapunov equation is the most common problem in the class of problems called matrix equations. Other examples of matrix equations are the Sylvester equation, the Stein equation and the Riccati equation.
Different algorithms are suitable for different situations, depending on the properties of A and W, and we work out two algorithms. For dense A, the Bartels-Stewart algorithm is one of the most efficient approaches. The Bartels-Stewart algorithm is derived in Section 4.2. For sparse and large-scale A, and if W is of low rank, Krylov-type methods may be more efficient (Section 4.3). Some applications are given in Section 4.4 and in the exercises.

Traditionally driven by certain problems in systems and control, the Lyapunov equation now appears in a very large number of fields. The development and improvement of numerical methods for it have been, and continue to be, recognized as important in numerical linear algebra.
A naive approach

In the field of systems and control, equation (4.1) is sometimes called the continuous-time Lyapunov equation, for disambiguation with a different matrix equation called the discrete-time Lyapunov equation. Many of the algorithms we present here can be adapted for discrete-time Lyapunov equations.

Equation (4.1) is a linear system of equations, expressed in a somewhat unusual form. We will now see that (4.1) can be reformulated as a linear system of equations in standard form, by using techniques called vectorization and Kronecker products. If B ∈ R^{n×m} with columns b_1, ..., b_m, i.e., (b_1, ..., b_m) = B, the vectorization operation is defined as the stacking of the columns into a vector,

    vec(B) := [ b_1 ]
              [  ⋮  ]   ∈ R^{nm}.
              [ b_m ]
For two matrices A ∈ R^{n×m} and B ∈ R^{j×k}, the Kronecker product (⊗) is defined as

    A ⊗ B = [ a_{11}B  ⋯  a_{1m}B ]
            [    ⋮      ⋱     ⋮   ]   ∈ R^{nj×mk}.
            [ a_{n1}B  ⋯  a_{nm}B ]

In matlab, the vec-operation can be computed with b=B(:) and the inverse operation can be computed with B=reshape(b,n,length(b)/n). The Kronecker product is implemented in kron(A,B).
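As a small illustration of these matlab commands (the matrix sizes below are arbitrary and only chosen for the example):

    % vectorization and Kronecker product in matlab
    n = 3; m = 2;
    B = reshape(1:n*m, n, m);           % a small n-by-m example matrix
    b = B(:);                           % vec(B), a vector of length n*m
    B2 = reshape(b, n, length(b)/n);    % inverse operation: recover B from vec(B)
    A = [1 2; 3 4];                     % a 2-by-2 example matrix
    K = kron(A, B);                     % Kronecker product, size (2*n)-by-(2*m)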
With this notation we can derive a very useful identity. For matrices A, B, X of matching size, we have

    vec(AXB) = (B^T ⊗ A) vec(X).                                         (4.2)

By vectorizing the Lyapunov equation (4.1) we have

    vec(AX + XA^T) = vec(W)
    (I ⊗ A + A ⊗ I) vec(X) = vec(W).                                     (4.3)

(To obtain (4.3), apply (4.2) twice, once with A = I and once with B = I.)
The equation (4.3) is a linear system of equations in the standard matrix-times-vector form. The connection between (4.1) and (4.3) is useful, mostly for theoretical purposes. For instance, we can easily characterize the existence and uniqueness of a solution.
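A quick numerical sanity check of the identity (4.2) on random matrices (an illustration only, not part of the derivation):

    % check vec(A*X*B) = (B^T kron A) * vec(X) on random data
    A = randn(4, 3); X = randn(3, 5); B = randn(5, 2);
    lhs = A*X*B; lhs = lhs(:);          % vec(A*X*B)
    rhs = kron(B', A) * X(:);           % (B^T kron A) vec(X)
    norm(lhs - rhs)                     % should be of the order of machine precision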
Theorem 4.1.1 (Existence and uniqueness of solution). Let λ_i, i = 1, ..., n, be the eigenvalues of A.

(i) The equation (4.1) has a unique solution X ∈ R^{n×n} if and only if

        λ_i ≠ −λ_j,  for all i, j = 1, ..., n.

(ii) In particular, if A is strictly stable (that is, all eigenvalues of A have negative real part: Re λ_i < 0 for all i = 1, ..., n), then (4.1) has a unique solution.

Proof. This is an exercise.
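The condition in Theorem 4.1.1 can be checked numerically from the eigenvalues of A; the sketch below uses an arbitrary tolerance and assumes a matlab version with implicit expansion:

    % check the condition lambda_i + lambda_j ~= 0 for all i,j (up to a tolerance)
    lambda = eig(A);
    S = lambda + lambda.';              % S(i,j) = lambda_i + lambda_j
    unique_solution = all(abs(S(:)) > 1e-12)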
A naive computational approach based on (4.3)

It is tempting to approach the problem of solving the Lyapunov equation (4.1) by applying a generic method for linear systems of equations Bz = w to (4.3),

    vec(X) = (I ⊗ A + A ⊗ I)^{-1} vec(W).                                (4.4)
Suppose for simplicity that we have a numerical method that can solve a linear system of equations with N unknowns in O(N^3) operations (such as a primitive implementation of Gaussian elimination). Since (4.3) is a linear system with N = n^2, the computational complexity of such an approach is

    t_naive(n) = O(n^6).                                                 (4.5)

A generic method for linear systems applied to (4.3) will therefore have a high computational complexity. Some gains in complexity are feasible by using more advanced versions of Gaussian elimination to solve (4.3), or by exploiting sparsity in I ⊗ A + A ⊗ I. These improved variants will not be computationally competitive for large n in comparison to the algorithms in the following sections.

Although this approach is not competitive for large n, it can be useful for small n, and (as we shall see below) as a component in a numerical method for large n.
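A minimal matlab sketch of the naive approach (4.4), only sensible for small n since the linear system has n^2 unknowns. The shift of A below is just one way to produce a strictly stable example matrix, and the commented-out comparison with the built-in lyap (Control System Toolbox) uses its sign convention A*X + X*A' + Q = 0:

    % naive solution of A*X + X*A' = W via the Kronecker formulation (4.4)
    n = 10;
    A = randn(n); A = A - (norm(A) + 1)*eye(n);   % shifted so that A is strictly stable
    W = randn(n);
    K = kron(eye(n), A) + kron(A, eye(n));        % coefficient matrix in (4.3)
    x = K \ W(:);                                 % solve the n^2-by-n^2 system
    X = reshape(x, n, n);
    norm(A*X + X*A' - W)                          % residual, should be close to machine precision
    % norm(X - lyap(A, -W))                       % comparison with the built-in solver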
Bartels-Stewart algorithm

From previous lectures we know that there are efficient algorithms that can compute the Schur decomposition of a matrix. It turns out that this can be used as a precomputation such that we obtain a triangular Lyapunov equation. The resulting equation can be solved efficiently in a direct way with a finite number of operations, by using a type of substitution.

The Bartels-Stewart algorithm was initially presented, for slightly more general problems, in [1] and is one of the leading methods for dense Lyapunov equations. It is implemented in matlab in the command lyap.
Recall that all real matrices have a real Schur decomposition: there exist matrices Q and T such that

    A = Q T Q^T,

where Q is an orthogonal matrix and T ∈ R^{n×n} is a block-triangular matrix,

    T = [ T_{11}  ⋯  T_{1,r} ]
        [          ⋱    ⋮    ]                                           (4.6)
        [             T_{r,r} ]

with T_{jj} ∈ R^{n_j×n_j}, n_j ∈ {1,2}, j = 1, ..., r, and ∑_{j=1}^r n_j = n. For efficiency reasons we here work with the real Schur form where, in contrast to the complex Schur form, the complex triangular matrix is replaced by a real block-triangular matrix with blocks of size one or two as in (4.6).
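In matlab, the precomputation described above can be carried out with the schur command (a small illustration only):

    % real Schur decomposition A = Q*T*Q'
    A = randn(6);
    [Q, T] = schur(A, 'real');          % Q orthogonal, T block triangular with 1x1 and 2x2 blocks
    norm(A - Q*T*Q')                    % both residuals should be of the order
    norm(Q'*Q - eye(6))                 % of machine precision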
We multiply the Lyapunov equation (4.1) from the right and left with Q and Q^T respectively, and use QQ^T = I:

    Q^T W Q = Q^T A X Q + Q^T X A^T Q                                    (4.7a)
            = (Q^T A Q)(Q^T X Q) + (Q^T X Q)(Q^T A^T Q)                  (4.7b)
            = T Y + Y T^T,                                               (4.7c)

where Y = Q^T X Q. We introduce matrices and corresponding blocks such that

    Q^T W Q = [ C_{11}  C_{12} ],   Y = [ Z_{11}  Z_{12} ],   T = [ R_{11}  R_{12} ],        (4.8)
              [ C_{21}  C_{22} ]        [ Z_{21}  Z_{22} ]        [   0     R_{22} ]

where the blocks are such that Z_{22}, C_{22}, T_{rr} = R_{22} ∈ R^{n_r×n_r} (the size of the last block of T). This triangularized problem can now be solved with (what we call) backward substitution, similar to backward substitution in Gaussian elimination. By separating the four blocks in the equation (4.7), we have four equations:
    C_{11} = R_{11} Z_{11} + R_{12} Z_{21} + Z_{11} R_{11}^T + Z_{12} R_{12}^T      (4.9a)
    C_{12} = R_{11} Z_{12} + R_{12} Z_{22} + Z_{12} R_{22}^T                        (4.9b)
    C_{21} = R_{22} Z_{21} + Z_{21} R_{11}^T + Z_{22} R_{12}^T                      (4.9c)
    C_{22} = R_{22} Z_{22} + Z_{22} R_{22}^T.                                       (4.9d)

The general idea is to solve (4.9b)-(4.9d) explicitly, insert the solution into (4.9a), and repeat the process for the smaller equation, which is a Lyapunov equation with unknown Z_{11} ∈ R^{(n−n_r)×(n−n_r)}.
Due to the choice of block sizes, the last equation (4.9d) is of size n_r × n_r and can be solved explicitly since n_r ∈ {1,2}:
• If n_r = 1, the equation (4.9d) is scalar and we obviously have

      Z_{22} = C_{22} / (2 R_{22}).                                      (4.10)

• If n_r = 2 we can still solve (4.9d) cheaply since it is a 2×2 Lyapunov equation. For instance, we can use the naive approach (4.4), i.e.,

      vec(Z_{22}) = (I ⊗ R_{22} + R_{22} ⊗ I)^{-1} vec(C_{22}).          (4.11)

The eigenvalues of R_{22} are eigenvalues of A. Therefore the small Lyapunov equation (4.9d) has a unique solution if (4.1) has a unique solution.
Inserting the now known matrix Z_{22} into (4.9b) and into the transpose of (4.9c) yields

    C̃_{12}   := C_{12} − R_{12} Z_{22}     = R_{11} Z_{12} + Z_{12} R_{22}^T           (4.12a)
    C̃_{21}^T := C_{21}^T − R_{12} Z_{22}^T = R_{11} Z_{21}^T + Z_{21}^T R_{22}^T.      (4.12b)

The equations (4.12) have a particular structure that can be used to directly compute the solutions Z_{12} and Z_{21}. An explicit procedure for the construction of the solution of (4.12) is given by the following result.
Lemma 4.2.1. Consider two matrices X, D partitioned in blocks of size n_1 × n_p, n_2 × n_p, ..., n_{p−1} × n_p according to

    X = [ X_1     ]
        [  ⋮      ]   ∈ R^{N×n_p},
        [ X_{p−1} ]

    D = [ D_1     ]
        [  ⋮      ]   ∈ R^{N×n_p},
        [ D_{p−1} ]

where X_j, D_j ∈ R^{n_j×n_p} and N = ∑_{j=1}^{p−1} n_j. Let U ∈ R^{N×N} be a block triangular matrix partitioned as X and D,

    U = [ U_{11}  ⋯  U_{1,p−1}   ]
        [           ⋱     ⋮      ]
        [             U_{p−1,p−1} ]

with U_{i,j} ∈ R^{n_i×n_j}. For any R_{22} ∈ R^{n_p×n_p}, we have that if X satisfies the equation

    D = U X + X R_{22}^T,                                                (4.13)

then X_j, j = p−1, p−2, ..., 1, satisfy

    U_{jj} X_j + X_j R_{22}^T = D̃_j,                                     (4.14)

where D̃_j := D_j − ∑_{i=j+1}^{p−1} U_{ji} X_i.
Similar to the approach we used to compute Z_{22}, X_j can be expressed and computed explicitly from a small linear system,

    vec(X_j) = (I ⊗ U_{jj} + R_{22} ⊗ I)^{-1} vec(D̃_j).                  (4.15)

By solving (4.15) for j = p−1, ..., 1, for both equations (4.12a) and (4.12b), we obtain the solutions Z_{12} and Z_{21}. Insertion of (the now known solutions) Z_{12}, Z_{21} and Z_{22} into (4.9a) gives a new Lyapunov equation of size (n−n_r) × (n−n_r), and the process can be repeated for the smaller matrix.
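To summarize the procedure, here is a minimal matlab sketch of a Bartels-Stewart-type solver. For simplicity it assumes that A has only real eigenvalues, so that the real Schur factor T is genuinely triangular (only 1×1 diagonal blocks) and the backward substitution can be carried out column by column; the general case with 2×2 blocks, as handled by the built-in lyap, requires the block formulation derived above.

    % sketch of a Bartels-Stewart-type solver for A*X + X*A' = W,
    % assuming all diagonal blocks of the real Schur factor are 1x1
    % (i.e., A has only real eigenvalues); not the general algorithm
    function X = lyap_bs_sketch(A, W)
        [Q, T] = schur(A, 'real');      % A = Q*T*Q', here T is upper triangular
        C = Q' * W * Q;                 % transformed right-hand side, cf. (4.7)
        n = size(A, 1);
        Y = zeros(n);
        for k = n:-1:1
            % column k of T*Y + Y*T' = C gives
            % (T + T(k,k)*I) * Y(:,k) = C(:,k) - Y(:,k+1:n) * T(k,k+1:n)'
            rhs = C(:, k) - Y(:, k+1:n) * T(k, k+1:n)';
            Y(:, k) = (T + T(k, k)*eye(n)) \ rhs;     % triangular solve
        end
        X = Q * Y * Q';                 % transform back
    end

The cost is dominated by the Schur decomposition and the n triangular solves, i.e., O(n^3) operations, in contrast to the O(n^6) of the naive approach (4.5).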