lapack/complex16

# ---------------------------------
# Available SIMPLE DRIVER routines:
# ---------------------------------
file zcgesv.f  zcgesv.f plus dependencies
prec complex16
        ZCGESV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
      
        ZCGESV first attempts to factorize the matrix in COMPLEX and use this
        factorization within an iterative refinement procedure to produce a
        solution with COMPLEX*16 normwise backward error quality (see below).
        If the approach fails the method switches to a COMPLEX*16
        factorization and solve.
      
        Iterative refinement is not a winning strategy if the ratio of
        COMPLEX performance to COMPLEX*16 performance is too small. A
        reasonable strategy should take the number of right-hand sides and
        the size of the matrix into account. This might be done with a call
        to ILAENV in the future. For now, iterative refinement is always
        attempted.
      
        The iterative refinement process is stopped if
            ITER > ITERMAX
        or for all the RHS we have:
            RNRM < SQRT(N)*XNRM*ANRM*EPS*BWDMAX
        where
            o ITER is the number of the current iteration in the iterative
              refinement process
            o RNRM is the infinity-norm of the residual
            o XNRM is the infinity-norm of the solution
            o ANRM is the infinity-operator-norm of the matrix A
            o EPS is the machine epsilon returned by DLAMCH('Epsilon')
        The values ITERMAX and BWDMAX are fixed at 30 and 1.0D+00,
        respectively.
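
        The following is a minimal free-form Fortran sketch of a ZCGESV call
        (it is not part of the library sources); the 2-by-2 matrix, right-hand
        side, and program name are illustrative only.

          program demo_zcgesv
            implicit none
            integer, parameter :: n = 2, nrhs = 1
            complex*16       :: a(n,n), b(n,nrhs), x(n,nrhs), work(n*nrhs)
            complex          :: swork(n*(n+nrhs))   ! single-precision workspace
            double precision :: rwork(n)
            integer          :: ipiv(n), iter, info

            a(1,:) = [ (4.0d0, 0.0d0), (1.0d0,-1.0d0) ]
            a(2,:) = [ (1.0d0, 1.0d0), (3.0d0, 0.0d0) ]
            b(:,1) = [ (1.0d0, 0.0d0), (2.0d0, 0.0d0) ]

            ! ITER > 0 means mixed-precision refinement converged; ITER < 0
            ! means ZCGESV fell back to a COMPLEX*16 factorization.
            call zcgesv(n, nrhs, a, n, ipiv, b, n, x, n, work, swork, rwork, &
                        iter, info)
            print *, 'info =', info, ' iter =', iter, ' x =', x(:,1)
          end program demo_zcgesv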

file zcposv.f  zcposv.f plus dependencies
prec complex16
        ZCPOSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite matrix and X and B
        are N-by-NRHS matrices.
      
        ZCPOSV first attempts to factorize the matrix in COMPLEX and use this
        factorization within an iterative refinement procedure to produce a
        solution with COMPLEX*16 normwise backward error quality (see below).
        If the approach fails the method switches to a COMPLEX*16
        factorization and solve.
      
        Iterative refinement is not a winning strategy if the ratio of
        COMPLEX performance to COMPLEX*16 performance is too small. A
        reasonable strategy should take the number of right-hand sides and
        the size of the matrix into account. This might be done with a call
        to ILAENV in the future. For now, iterative refinement is always
        attempted.
      
        The iterative refinement process is stopped if
            ITER > ITERMAX
        or for all the RHS we have:
            RNRM < SQRT(N)*XNRM*ANRM*EPS*BWDMAX
        where
            o ITER is the number of the current iteration in the iterative
              refinement process
            o RNRM is the infinity-norm of the residual
            o XNRM is the infinity-norm of the solution
            o ANRM is the infinity-operator-norm of the matrix A
            o EPS is the machine epsilon returned by DLAMCH('Epsilon')
        The values ITERMAX and BWDMAX are fixed at 30 and 1.0D+00,
        respectively.

file zgbsv.f  zgbsv.f plus dependencies
prec complex16
        ZGBSV computes the solution to a complex system of linear equations
        A * X = B, where A is a band matrix of order N with KL subdiagonals
        and KU superdiagonals, and X and B are N-by-NRHS matrices.
      
        The LU decomposition with partial pivoting and row interchanges is
        used to factor A as A = L * U, where L is a product of permutation
        and unit lower triangular matrices with KL subdiagonals, and U is
        upper triangular with KL+KU superdiagonals.  The factored form of A
        is then used to solve the system of equations A * X = B.
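
        A minimal free-form Fortran sketch of a ZGBSV call on a complex
        tridiagonal system (KL = KU = 1); the band-storage indexing follows
        the routine's documentation, and the matrix values are illustrative.

          program demo_zgbsv
            implicit none
            integer, parameter :: n = 4, kl = 1, ku = 1, nrhs = 1
            integer, parameter :: ldab = 2*kl + ku + 1
            complex*16 :: ab(ldab,n), b(n,nrhs)
            integer    :: ipiv(n), info, j

            ab = (0.0d0, 0.0d0)
            do j = 1, n
              ab(kl+ku+1, j) = (4.0d0, 0.0d0)                 ! diagonal A(j,j)
              if (j > 1) ab(kl+ku,   j) = (1.0d0, 1.0d0)      ! super  A(j-1,j)
              if (j < n) ab(kl+ku+2, j) = (1.0d0,-1.0d0)      ! sub    A(j+1,j)
            end do
            b(:,1) = (1.0d0, 0.0d0)

            call zgbsv(n, kl, ku, nrhs, ab, ldab, ipiv, b, n, info)
            print *, 'info =', info, ' x =', b(:,1)
          end program demo_zgbsv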

file zgees.f  zgees.f plus dependencies
prec complex16
        ZGEES computes for an N-by-N complex nonsymmetric matrix A, the
        eigenvalues, the Schur form T, and, optionally, the matrix of Schur
        vectors Z.  This gives the Schur factorization A = Z*T*(Z**H).
      
        Optionally, it also orders the eigenvalues on the diagonal of the
        Schur form so that selected eigenvalues are at the top left.
        The leading columns of Z then form an orthonormal basis for the
        invariant subspace corresponding to the selected eigenvalues.
      
        A complex matrix is in Schur form if it is upper triangular.

file zgeev.f  zgeev.f plus dependencies
prec complex16
        ZGEEV computes for an N-by-N complex nonsymmetric matrix A, the
        eigenvalues and, optionally, the left and/or right eigenvectors.
      
        The right eigenvector v(j) of A satisfies
                         A * v(j) = lambda(j) * v(j)
        where lambda(j) is its eigenvalue.
        The left eigenvector u(j) of A satisfies
                      u(j)**H * A = lambda(j) * u(j)**H
        where u(j)**H denotes the conjugate transpose of u(j).
      
        The computed eigenvectors are normalized to have Euclidean norm
        equal to 1 and largest component real.
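
        A minimal sketch of a ZGEEV call that requests right eigenvectors
        only; the test matrix and program name are invented for illustration.

          program demo_zgeev
            implicit none
            integer, parameter :: n = 3, lwork = 4*n
            complex*16       :: a(n,n), w(n), vl(1,1), vr(n,n), work(lwork)
            double precision :: rwork(2*n)
            integer          :: info, i, j

            do j = 1, n                      ! arbitrary non-Hermitian test matrix
              do i = 1, n
                a(i,j) = cmplx(i, j-i, kind=kind(1.0d0))
              end do
            end do

            ! JOBVL = 'N': skip left eigenvectors; JOBVR = 'V': compute right ones.
            call zgeev('N', 'V', n, a, n, w, vl, 1, vr, n, work, lwork, rwork, info)
            print *, 'info =', info
            print *, 'eigenvalues =', w
          end program demo_zgeev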

file zgegs.f  zgegs.f plus dependencies
prec complex16
        This routine is deprecated and has been replaced by routine ZGGES.
      
        ZGEGS computes the eigenvalues, Schur form, and, optionally, the
        left and/or right Schur vectors of a complex matrix pair (A,B).
        Given two square matrices A and B, the generalized Schur
        factorization has the form
        
           A = Q*S*Z**H,  B = Q*T*Z**H
        
        where Q and Z are unitary matrices and S and T are upper triangular.
        The columns of Q are the left Schur vectors
        and the columns of Z are the right Schur vectors.
        
        If only the eigenvalues of (A,B) are needed, the driver routine
        ZGEGV should be used instead.  See ZGEGV for a description of the
        eigenvalues of the generalized nonsymmetric eigenvalue problem
        (GNEP).

file zgegv.f  zgegv.f plus dependencies
prec complex16
        This routine is deprecated and has been replaced by routine ZGGEV.
      
        ZGEGV computes the eigenvalues and, optionally, the left and/or right
        eigenvectors of a complex matrix pair (A,B).
        Given two square matrices A and B,
        the generalized nonsymmetric eigenvalue problem (GNEP) is to find the
        eigenvalues lambda and corresponding (non-zero) eigenvectors x such
        that
           A*x = lambda*B*x.
      
        An alternate form is to find the eigenvalues mu and corresponding
        eigenvectors y such that
           mu*A*y = B*y.
      
        These two forms are equivalent with mu = 1/lambda and x = y if
        neither lambda nor mu is zero.  In order to deal with the case that
        lambda or mu is zero or small, two values alpha and beta are returned
        for each eigenvalue, such that lambda = alpha/beta and
        mu = beta/alpha.
      
        The vectors x and y in the above equations are right eigenvectors of
        the matrix pair (A,B).  Vectors u and v satisfying
           u**H*A = lambda*u**H*B  or  mu*v**H*A = v**H*B
        are left eigenvectors of (A,B).
      
        Note: this routine performs "full balancing" on A and B -- see
        "Further Details", below.

file zgels.f  zgels.f plus dependencies
prec complex16
        ZGELS solves overdetermined or underdetermined complex linear systems
        involving an M-by-N matrix A, or its conjugate-transpose, using a QR
        or LQ factorization of A.  It is assumed that A has full rank.
      
        The following options are provided:
      
        1. If TRANS = 'N' and m >= n:  find the least squares solution of
           an overdetermined system, i.e., solve the least squares problem
                        minimize || B - A*X ||.
      
        2. If TRANS = 'N' and m < n:  find the minimum norm solution of
           an underdetermined system A * X = B.
      
        3. If TRANS = 'C' and m >= n:  find the minimum norm solution of
           an underdetermined system A**H * X = B.
      
        4. If TRANS = 'C' and m < n:  find the least squares solution of
           an overdetermined system, i.e., solve the least squares problem
                        minimize || B - A**H * X ||.
      
        Several right hand side vectors b and solution vectors x can be
        handled in a single call; they are stored as the columns of the
        M-by-NRHS right hand side matrix B and the N-by-NRHS solution
        matrix X.
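
        A minimal sketch of a ZGELS call solving a small overdetermined least
        squares problem; the data and the (generous) workspace size are
        illustrative only.

          program demo_zgels
            implicit none
            integer, parameter :: m = 4, n = 2, nrhs = 1, lwork = 64*(m+n)
            complex*16 :: a(m,n), b(m,nrhs), work(lwork)
            integer    :: info, i

            do i = 1, m                      ! a small overdetermined system
              a(i,1) = (1.0d0, 0.0d0)
              a(i,2) = cmplx(i, 0, kind=kind(1.0d0))
              b(i,1) = cmplx(2*i, 1, kind=kind(1.0d0))
            end do

            call zgels('N', m, n, nrhs, a, m, b, m, work, lwork, info)
            print *, 'info =', info, ' least squares solution =', b(1:n,1)
          end program demo_zgels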

file zgelsd.f  zgelsd.f plus dependencies
prec complex16
        ZGELSD computes the minimum-norm solution to a complex linear least
        squares problem:
            minimize 2-norm(| b - A*x |)
        using the singular value decomposition (SVD) of A. A is an M-by-N
        matrix which may be rank-deficient.
      
        Several right hand side vectors b and solution vectors x can be
        handled in a single call; they are stored as the columns of the
        M-by-NRHS right hand side matrix B and the N-by-NRHS solution
        matrix X.
      
        The problem is solved in three steps:
        (1) Reduce the coefficient matrix A to bidiagonal form with
            Householder transformations, reducing the original problem
            into a "bidiagonal least squares problem" (BLS)
        (2) Solve the BLS using a divide and conquer approach.
        (3) Apply back all the Householder transformations to solve
            the original least squares problem.
      
        The effective rank of A is determined by treating as zero those
        singular values which are less than RCOND times the largest singular
        value.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.
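
        A minimal sketch of a ZGELSD call that uses a workspace query
        (LWORK = -1) to size the complex, real, and integer workspaces before
        the actual solve; the rank-deficient test data are illustrative.

          program demo_zgelsd
            implicit none
            integer, parameter :: m = 4, n = 3, nrhs = 1
            complex*16        :: a(m,n), b(max(m,n),nrhs), wq(1)
            double precision  :: s(min(m,n)), rcond, rwq(1)
            integer           :: iwq(1), rank, info, i, j, lwork
            complex*16,       allocatable :: work(:)
            double precision, allocatable :: rwork(:)
            integer,          allocatable :: iwork(:)

            do j = 1, n                      ! rank-deficient test data (rank 2)
              do i = 1, m
                a(i,j) = cmplx(i+j, i-j, kind=kind(1.0d0))
              end do
            end do
            b = (1.0d0, 0.0d0)
            rcond = -1.0d0                   ! negative RCOND selects machine precision

            ! Workspace query, then the actual solve.
            call zgelsd(m, n, nrhs, a, m, b, max(m,n), s, rcond, rank, &
                        wq, -1, rwq, iwq, info)
            lwork = int(wq(1))
            allocate(work(lwork), rwork(int(rwq(1))), iwork(iwq(1)))
            call zgelsd(m, n, nrhs, a, m, b, max(m,n), s, rcond, rank, &
                        work, lwork, rwork, iwork, info)
            print *, 'info =', info, ' rank =', rank, ' x =', b(1:n,1)
          end program demo_zgelsd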

file zgelss.f  zgelss.f plus dependencies
prec complex16
        ZGELSS computes the minimum norm solution to a complex linear
        least squares problem:
      
        Minimize 2-norm(| b - A*x |).
      
        using the singular value decomposition (SVD) of A. A is an M-by-N
        matrix which may be rank-deficient.
      
        Several right hand side vectors b and solution vectors x can be
        handled in a single call; they are stored as the columns of the
        M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix
        X.
      
        The effective rank of A is determined by treating as zero those
        singular values which are less than RCOND times the largest singular
        value.

file zgelsy.f  zgelsy.f plus dependencies
prec complex16
        ZGELSY computes the minimum-norm solution to a complex linear least
        squares problem:
            minimize || A * X - B ||
        using a complete orthogonal factorization of A.  A is an M-by-N
        matrix which may be rank-deficient.
      
        Several right hand side vectors b and solution vectors x can be
        handled in a single call; they are stored as the columns of the
        M-by-NRHS right hand side matrix B and the N-by-NRHS solution
        matrix X.
      
        The routine first computes a QR factorization with column pivoting:
            A * P = Q * [ R11 R12 ]
                        [  0  R22 ]
        with R11 defined as the largest leading submatrix whose estimated
        condition number is less than 1/RCOND.  The order of R11, RANK,
        is the effective rank of A.
      
        Then, R22 is considered to be negligible, and R12 is annihilated
        by unitary transformations from the right, arriving at the
        complete orthogonal factorization:
           A * P = Q * [ T11 0 ] * Z
                       [  0  0 ]
        The minimum-norm solution is then
           X = P * Z' [ inv(T11)*Q1'*B ]
                      [        0       ]
        where Q1 consists of the first RANK columns of Q.
      
        This routine is basically identical to the original xGELSX except
        for three differences:
          o The permutation of matrix B (the right hand side) is faster and
            simpler.
          o The call to the subroutine xGEQPF has been replaced by a call to
            the subroutine xGEQP3, a BLAS-3 version of the QR factorization
            with column pivoting.
          o Matrix B (the right hand side) is updated with BLAS-3.

file zgesdd.f  zgesdd.f plus dependencies
prec complex16
        ZGESDD computes the singular value decomposition (SVD) of a complex
        M-by-N matrix A, optionally computing the left and/or right singular
        vectors, by using a divide-and-conquer method. The SVD is written
      
             A = U * SIGMA * conjugate-transpose(V)
      
        where SIGMA is an M-by-N matrix which is zero except for its
        min(m,n) diagonal elements, U is an M-by-M unitary matrix, and
        V is an N-by-N unitary matrix.  The diagonal elements of SIGMA
        are the singular values of A; they are real and non-negative, and
        are returned in descending order.  The first min(m,n) columns of
        U and V are the left and right singular vectors of A.
      
        Note that the routine returns VT = V**H, not V.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zgesv.f  zgesv.f plus dependencies
prec complex16
        ZGESV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
      
        The LU decomposition with partial pivoting and row interchanges is
        used to factor A as
           A = P * L * U,
        where P is a permutation matrix, L is unit lower triangular, and U is
        upper triangular.  The factored form of A is then used to solve the
        system of equations A * X = B.
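
        A minimal sketch of a ZGESV call; the 3-by-3 test matrix is arbitrary,
        and on exit B is overwritten with the solution X.

          program demo_zgesv
            implicit none
            integer, parameter :: n = 3, nrhs = 1
            complex*16 :: a(n,n), b(n,nrhs)
            integer    :: ipiv(n), info, i, j

            do j = 1, n                      ! arbitrary nonsingular test matrix
              do i = 1, n
                a(i,j) = cmplx(min(i,j), i-j, kind=kind(1.0d0))
              end do
            end do
            b(:,1) = (1.0d0, 0.0d0)

            call zgesv(n, nrhs, a, n, ipiv, b, n, info)   ! B is overwritten with X
            print *, 'info =', info, ' x =', b(:,1)
          end program demo_zgesv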

file zgesvd.f  zgesvd.f plus dependencies
prec complex16
        ZGESVD computes the singular value decomposition (SVD) of a complex
        M-by-N matrix A, optionally computing the left and/or right singular
        vectors. The SVD is written
      
             A = U * SIGMA * conjugate-transpose(V)
      
        where SIGMA is an M-by-N matrix which is zero except for its
        min(m,n) diagonal elements, U is an M-by-M unitary matrix, and
        V is an N-by-N unitary matrix.  The diagonal elements of SIGMA
        are the singular values of A; they are real and non-negative, and
        are returned in descending order.  The first min(m,n) columns of
        U and V are the left and right singular vectors of A.
      
        Note that the routine returns V**H, not V.
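
        A minimal sketch of a ZGESVD call computing all singular values and
        vectors of an arbitrary 4-by-3 test matrix; the workspace sizes
        follow the documented minima with some slack.

          program demo_zgesvd
            implicit none
            integer, parameter :: m = 4, n = 3
            integer, parameter :: lwork = 2*min(m,n) + max(m,n) + 64
            complex*16       :: a(m,n), u(m,m), vt(n,n), work(lwork)
            double precision :: s(min(m,n)), rwork(5*min(m,n))
            integer          :: info, i, j

            do j = 1, n                      ! arbitrary test matrix
              do i = 1, m
                a(i,j) = cmplx(i*j, i-j, kind=kind(1.0d0))
              end do
            end do

            ! 'A','A' requests all M columns of U and all N rows of V**H.
            call zgesvd('A', 'A', m, n, a, m, s, u, m, vt, n, work, lwork, rwork, info)
            print *, 'info =', info, ' singular values =', s
          end program demo_zgesvd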

file zgges.f  zgges.f plus dependencies
prec complex16
        ZGGES computes for a pair of N-by-N complex nonsymmetric matrices
        (A,B), the generalized eigenvalues, the generalized complex Schur
        form (S, T), and optionally left and/or right Schur vectors (VSL
        and VSR). This gives the generalized Schur factorization
      
                (A,B) = ( (VSL)*S*(VSR)**H, (VSL)*T*(VSR)**H )
      
        where (VSR)**H is the conjugate-transpose of VSR.
      
        Optionally, it also orders the eigenvalues so that a selected cluster
        of eigenvalues appears in the leading diagonal blocks of the upper
        triangular matrix S and the upper triangular matrix T. The leading
        columns of VSL and VSR then form a unitary basis for the
        corresponding left and right eigenspaces (deflating subspaces).
      
        (If only the generalized eigenvalues are needed, use the driver
        ZGGEV instead, which is faster.)
      
        A generalized eigenvalue for a pair of matrices (A,B) is a scalar w
        or a ratio alpha/beta = w, such that  A - w*B is singular.  It is
        usually represented as the pair (alpha,beta), as there is a
        reasonable interpretation for beta=0, and even for both being zero.
      
        A pair of matrices (S,T) is in generalized complex Schur form if S
        and T are upper triangular and, in addition, the diagonal elements
        of T are non-negative real numbers.

file zggev.f  zggev.f plus dependencies
prec complex16
        ZGGEV computes for a pair of N-by-N complex nonsymmetric matrices
        (A,B), the generalized eigenvalues, and optionally, the left and/or
        right generalized eigenvectors.
      
        A generalized eigenvalue for a pair of matrices (A,B) is a scalar
        lambda or a ratio alpha/beta = lambda, such that A - lambda*B is
        singular. It is usually represented as the pair (alpha,beta), as
        there is a reasonable interpretation for beta=0, and even for both
        being zero.
      
        The right generalized eigenvector v(j) corresponding to the
        generalized eigenvalue lambda(j) of (A,B) satisfies
      
                     A * v(j) = lambda(j) * B * v(j).
      
        The left generalized eigenvector u(j) corresponding to the
        generalized eigenvalue lambda(j) of (A,B) satisfies
      
                     u(j)**H * A = lambda(j) * u(j)**H * B
      
        where u(j)**H is the conjugate-transpose of u(j).
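
        A minimal sketch of a ZGGEV call on an arbitrary pair (A,B) with B a
        scaled identity; only right generalized eigenvectors are requested,
        and the (alpha, beta) pairs represent the generalized eigenvalues.

          program demo_zggev
            implicit none
            integer, parameter :: n = 3, lwork = 8*n
            complex*16       :: a(n,n), b(n,n), alpha(n), beta(n)
            complex*16       :: vl(1,1), vr(n,n), work(lwork)
            double precision :: rwork(8*n)
            integer          :: info, i, j

            do j = 1, n                      ! arbitrary A; B is a scaled identity
              do i = 1, n
                a(i,j) = cmplx(i+j, i-j, kind=kind(1.0d0))
                b(i,j) = merge((2.0d0,0.0d0), (0.0d0,0.0d0), i == j)
              end do
            end do

            call zggev('N', 'V', n, a, n, b, n, alpha, beta, vl, 1, vr, n, &
                       work, lwork, rwork, info)
            ! Finite generalized eigenvalues are alpha(j)/beta(j) where beta(j) /= 0.
            print *, 'info =', info
            print *, 'alpha =', alpha
            print *, 'beta  =', beta
          end program demo_zggev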

file zggglm.f  zggglm.f plus dependencies
prec complex16
        ZGGGLM solves a general Gauss-Markov linear model (GLM) problem:
      
                minimize || y ||_2   subject to   d = A*x + B*y
                    x
      
        where A is an N-by-M matrix, B is an N-by-P matrix, and d is a
        given N-vector. It is assumed that M <= N <= M+P, and
      
                   rank(A) = M    and    rank( A B ) = N.
      
        Under these assumptions, the constrained equation is always
        consistent, and there is a unique solution x and a minimal 2-norm
        solution y, which is obtained using a generalized QR factorization
        of the matrices (A, B) given by
      
           A = Q*(R),   B = Q*T*Z.
                 (0)
      
        In particular, if matrix B is square nonsingular, then the problem
        GLM is equivalent to the following weighted linear least squares
        problem
      
                     minimize || inv(B)*(d-A*x) ||_2
                         x
      
        where inv(B) denotes the inverse of B.
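
        A minimal sketch of a ZGGGLM call with N = 4, M = 2, P = 3 (so that
        M <= N <= M+P); the data are illustrative and chosen only to satisfy
        the rank assumptions stated above.

          program demo_zggglm
            implicit none
            integer, parameter :: n = 4, m = 2, p = 3, lwork = n + m + p + 64
            complex*16 :: a(n,m), b(n,p), d(n), x(m), y(p), work(lwork)
            integer    :: info, i, j

            a(:,1) = (1.0d0, 0.0d0)
            do i = 1, n
              a(i,2) = cmplx(i, 0, kind=kind(1.0d0))
              d(i)   = cmplx(i, 1, kind=kind(1.0d0))
            end do
            b = (0.0d0, 0.0d0)
            do j = 1, p                      ! B holds the first P columns of the identity
              b(j,j) = (1.0d0, 0.0d0)
            end do

            call zggglm(n, m, p, a, n, b, n, d, x, y, work, lwork, info)
            print *, 'info =', info
            print *, 'x =', x
            print *, 'y =', y
          end program demo_zggglm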

file zgglse.f  zgglse.f plus dependencies
prec complex16
        ZGGLSE solves the linear equality-constrained least squares (LSE)
        problem:
      
                minimize || c - A*x ||_2   subject to   B*x = d
      
        where A is an M-by-N matrix, B is a P-by-N matrix, c is a given
        M-vector, and d is a given P-vector. It is assumed that
        P <= N <= M+P, and
      
                 rank(B) = P and  rank( ( A ) ) = N.
                                      ( ( B ) )
      
        These conditions ensure that the LSE problem has a unique solution,
        which is obtained using a generalized RQ factorization of the
        matrices (B, A) given by
      
           B = (0 R)*Q,   A = Z*T*Q.
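
        A minimal sketch of a ZGGLSE call with M = 4, N = 3, P = 1, where the
        single constraint row forces the components of x to sum to d(1); the
        data are illustrative only.

          program demo_zgglse
            implicit none
            integer, parameter :: m = 4, n = 3, p = 1, lwork = m + n + p + 64
            complex*16 :: a(m,n), b(p,n), c(m), d(p), x(n), work(lwork)
            integer    :: info, i, j

            do j = 1, n
              do i = 1, m
                a(i,j) = cmplx(i**j, 0, kind=kind(1.0d0))   ! full column rank A
              end do
              b(1,j) = (1.0d0, 0.0d0)        ! single constraint: sum of x = d(1)
            end do
            c = (1.0d0, 0.0d0)
            d = (2.0d0, 0.0d0)

            call zgglse(m, n, p, a, m, b, p, c, d, x, work, lwork, info)
            print *, 'info =', info, ' constrained solution x =', x
          end program demo_zgglse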

file zggsvd.f  zggsvd.f plus dependencies
prec complex16
        ZGGSVD computes the generalized singular value decomposition (GSVD)
        of an M-by-N complex matrix A and P-by-N complex matrix B:
      
              U'*A*Q = D1*( 0 R ),    V'*B*Q = D2*( 0 R )
      
        where U, V and Q are unitary matrices, and Z' means the conjugate
        transpose of Z.  Let K+L be the effective numerical rank of the
        matrix (A',B')'.  Then R is a (K+L)-by-(K+L) nonsingular upper
        triangular matrix, D1 and D2 are M-by-(K+L) and P-by-(K+L) "diagonal"
        matrices and of the following structures, respectively:
      
        If M-K-L >= 0,
      
                            K  L
               D1 =     K ( I  0 )
                        L ( 0  C )
                    M-K-L ( 0  0 )
      
                          K  L
               D2 =   L ( 0  S )
                    P-L ( 0  0 )
      
                        N-K-L  K    L
          ( 0 R ) = K (  0   R11  R12 )
                    L (  0    0   R22 )
        where
      
          C = diag( ALPHA(K+1), ... , ALPHA(K+L) ),
          S = diag( BETA(K+1),  ... , BETA(K+L) ),
          C**2 + S**2 = I.
      
          R is stored in A(1:K+L,N-K-L+1:N) on exit.
      
        If M-K-L < 0,
      
                          K M-K K+L-M
               D1 =   K ( I  0    0   )
                    M-K ( 0  C    0   )
      
                            K M-K K+L-M
               D2 =   M-K ( 0  S    0  )
                    K+L-M ( 0  0    I  )
                      P-L ( 0  0    0  )
      
                           N-K-L  K   M-K  K+L-M
          ( 0 R ) =     K ( 0    R11  R12  R13  )
                      M-K ( 0     0   R22  R23  )
                    K+L-M ( 0     0    0   R33  )
      
        where
      
          C = diag( ALPHA(K+1), ... , ALPHA(M) ),
          S = diag( BETA(K+1),  ... , BETA(M) ),
          C**2 + S**2 = I.
      
          (R11 R12 R13 ) is stored in A(1:M, N-K-L+1:N), and R33 is stored
          ( 0  R22 R23 )
          in B(M-K+1:L,N+M-K-L+1:N) on exit.
      
        The routine computes C, S, R, and optionally the unitary
        transformation matrices U, V and Q.
      
        In particular, if B is an N-by-N nonsingular matrix, then the GSVD of
        A and B implicitly gives the SVD of A*inv(B):
                             A*inv(B) = U*(D1*inv(D2))*V'.
        If ( A',B')' has orthonormal columns, then the GSVD of A and B is also
        equal to the CS decomposition of A and B. Furthermore, the GSVD can
        be used to derive the solution of the eigenvalue problem:
                             A'*A x = lambda* B'*B x.
        In some literature, the GSVD of A and B is presented in the form
                         U'*A*X = ( 0 D1 ),   V'*B*X = ( 0 D2 )
        where U and V are orthogonal and X is nonsingular, and D1 and D2 are
        ``diagonal''.  The former GSVD form can be converted to the latter
        form by taking the nonsingular matrix X as
      
                              X = Q*(  I   0    )
                                    (  0 inv(R) )

file zhbev.f  zhbev.f plus dependencies
prec complex16
        ZHBEV computes all the eigenvalues and, optionally, eigenvectors of
        a complex Hermitian band matrix A.

file zhbevd.f  zhbevd.f plus dependencies
prec complex16
        ZHBEVD computes all the eigenvalues and, optionally, eigenvectors of
        a complex Hermitian band matrix A.  If eigenvectors are desired, it
        uses a divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zhbgv.f  zhbgv.f plus dependencies
prec complex16
        ZHBGV computes all the eigenvalues, and optionally, the eigenvectors
        of a complex generalized Hermitian-definite banded eigenproblem, of
        the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
        and banded, and B is also positive definite.

file zhbgvd.f  zhbgvd.f plus dependencies
prec complex16
        ZHBGVD computes all the eigenvalues, and optionally, the eigenvectors
        of a complex generalized Hermitian-definite banded eigenproblem, of
        the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
        and banded, and B is also positive definite.  If eigenvectors are
        desired, it uses a divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zheev.f  zheev.f plus dependencies
prec complex16
        ZHEEV computes all eigenvalues and, optionally, eigenvectors of a
        complex Hermitian matrix A.
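
        A minimal sketch of a ZHEEV call on a small Hermitian matrix stored
        in its upper triangle; on exit A holds the eigenvectors and W the
        eigenvalues. The matrix values are illustrative only.

          program demo_zheev
            implicit none
            integer, parameter :: n = 3, lwork = 2*n - 1 + 64
            complex*16       :: a(n,n), work(lwork)
            double precision :: w(n), rwork(3*n - 2)
            integer          :: info, i, j

            do j = 1, n                      ! Hermitian matrix; upper triangle used
              do i = 1, j
                a(i,j) = cmplx(i+j, j-i, kind=kind(1.0d0))
              end do
              a(j,j) = cmplx(2*j, 0, kind=kind(1.0d0))
            end do

            call zheev('V', 'U', n, a, n, w, work, lwork, rwork, info)
            print *, 'info =', info, ' eigenvalues =', w   ! eigenvectors now in A
          end program demo_zheev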

file zheevd.f  zheevd.f plus dependencies
prec complex16
        ZHEEVD computes all eigenvalues and, optionally, eigenvectors of a
        complex Hermitian matrix A.  If eigenvectors are desired, it uses a
        divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zheevr.f  zheevr.f plus dependencies
prec complex16
        ZHEEVR computes selected eigenvalues and, optionally, eigenvectors
        of a complex Hermitian matrix A.  Eigenvalues and eigenvectors can
        be selected by specifying either a range of values or a range of
        indices for the desired eigenvalues.
      
        ZHEEVR first reduces the matrix A to tridiagonal form T with a call
        to ZHETRD.  Then, whenever possible, ZHEEVR calls ZSTEMR to compute
        the eigenspectrum using Relatively Robust Representations.  ZSTEMR
        computes eigenvalues by the dqds algorithm, while orthogonal
        eigenvectors are computed from various "good" L D L^T representations
        (also known as Relatively Robust Representations). Gram-Schmidt
        orthogonalization is avoided as far as possible. More specifically,
        the various steps of the algorithm are as follows.
      
        For each unreduced block (submatrix) of T,
           (a) Compute T - sigma I  = L D L^T, so that L and D
               define all the wanted eigenvalues to high relative accuracy.
               This means that small relative changes in the entries of D and L
               cause only small relative changes in the eigenvalues and
               eigenvectors. The standard (unfactored) representation of the
               tridiagonal matrix T does not have this property in general.
           (b) Compute the eigenvalues to suitable accuracy.
               If the eigenvectors are desired, the algorithm attains full
               accuracy of the computed eigenvalues only right before
               the corresponding vectors have to be computed, see steps c) and d).
           (c) For each cluster of close eigenvalues, select a new
               shift close to the cluster, find a new factorization, and refine
               the shifted eigenvalues to suitable accuracy.
           (d) For each eigenvalue with a large enough relative separation compute
               the corresponding eigenvector by forming a rank revealing twisted
               factorization. Go back to (c) for any clusters that remain.
      
        The desired accuracy of the output can be specified by the input
        parameter ABSTOL.
      
        For more details, see DSTEMR's documentation and:
        - Inderjit S. Dhillon and Beresford N. Parlett: "Multiple representations
          to compute orthogonal eigenvectors of symmetric tridiagonal matrices,"
          Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
        - Inderjit Dhillon and Beresford Parlett: "Orthogonal Eigenvectors and
          Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25,
          2004.  Also LAPACK Working Note 154.
        - Inderjit Dhillon: "A new O(n^2) algorithm for the symmetric
          tridiagonal eigenvalue/eigenvector problem",
          Computer Science Division Technical Report No. UCB/CSD-97-971,
          UC Berkeley, May 1997.
      
      
        Note 1 : ZHEEVR calls ZSTEMR when the full spectrum is requested
        on machines which conform to the IEEE-754 floating point standard.
        ZHEEVR calls DSTEBZ and ZSTEIN on non-IEEE machines and
        when partial spectrum requests are made.

        Normal execution of ZSTEMR may create NaNs and infinities and
        hence may abort due to a floating point exception in environments
        which do not handle NaNs and infinities in the IEEE standard default
        manner.
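
        A minimal sketch of a ZHEEVR call requesting the full spectrum
        (RANGE = 'A') of a small Hermitian matrix; the workspace sizes follow
        the documented minima, and ABSTOL <= 0 selects the default tolerance.
        The matrix values are illustrative only.

          program demo_zheevr
            implicit none
            integer, parameter :: n = 4, lwork = 2*n + 64, lrwork = 24*n, liwork = 10*n
            complex*16       :: a(n,n), z(n,n), work(lwork)
            double precision :: w(n), rwork(lrwork), vl, vu, abstol
            integer          :: isuppz(2*n), iwork(liwork), m, info, i, j

            do j = 1, n                      ! arbitrary Hermitian test matrix
              do i = 1, j - 1
                a(i,j) = cmplx(1, j-i, kind=kind(1.0d0))
              end do
              a(j,j) = cmplx(j, 0, kind=kind(1.0d0))
            end do
            vl = 0.0d0; vu = 0.0d0           ! not referenced when RANGE = 'A'
            abstol = 0.0d0                   ! ABSTOL <= 0: use the default tolerance

            call zheevr('V', 'A', 'U', n, a, n, vl, vu, 1, n, abstol, m, w, z, n, &
                        isuppz, work, lwork, rwork, lrwork, iwork, liwork, info)
            print *, 'info =', info, ' found', m, 'eigenvalues:', w(1:m)
          end program demo_zheevr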

file zhegv.f  zhegv.f plus dependencies
prec complex16
        ZHEGV computes all the eigenvalues, and optionally, the eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.
        Here A and B are assumed to be Hermitian and B is also
        positive definite.

file zhegvd.f  zhegvd.f plus dependencies
prec complex16
        ZHEGVD computes all the eigenvalues, and optionally, the eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.  Here A and
        B are assumed to be Hermitian and B is also positive definite.
        If eigenvectors are desired, it uses a divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zhesv.f  zhesv.f plus dependencies
prec complex16
        ZHESV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS
        matrices.
      
        The diagonal pivoting method is used to factor A as
           A = U * D * U**H,  if UPLO = 'U', or
           A = L * D * L**H,  if UPLO = 'L',
        where U (or L) is a product of permutation and unit upper (lower)
        triangular matrices, and D is Hermitian and block diagonal with
        1-by-1 and 2-by-2 diagonal blocks.  The factored form of A is then
        used to solve the system of equations A * X = B.
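
        A minimal sketch of a ZHESV call on a small Hermitian system stored
        in its upper triangle; the matrix values and the workspace size are
        illustrative only.

          program demo_zhesv
            implicit none
            integer, parameter :: n = 3, nrhs = 1, lwork = 64*n
            complex*16 :: a(n,n), b(n,nrhs), work(lwork)
            integer    :: ipiv(n), info, i, j

            do j = 1, n                      ! Hermitian A; only the upper triangle is used
              do i = 1, j - 1
                a(i,j) = cmplx(1, i-j, kind=kind(1.0d0))
              end do
              a(j,j) = cmplx(j+3, 0, kind=kind(1.0d0))
            end do
            b(:,1) = (1.0d0, 0.0d0)

            call zhesv('U', n, nrhs, a, n, ipiv, b, n, work, lwork, info)
            print *, 'info =', info, ' x =', b(:,1)
          end program demo_zhesv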

file zhpev.f  zhpev.f plus dependencies
prec complex16
        ZHPEV computes all the eigenvalues and, optionally, eigenvectors of a
        complex Hermitian matrix in packed storage.

file zhpevd.f  zhpevd.f plus dependencies
prec complex16
        ZHPEVD computes all the eigenvalues and, optionally, eigenvectors of
        a complex Hermitian matrix A in packed storage.  If eigenvectors are
        desired, it uses a divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zhpgv.f  zhpgv.f plus dependencies
prec complex16
        ZHPGV computes all the eigenvalues and, optionally, the eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.
        Here A and B are assumed to be Hermitian, stored in packed format,
        and B is also positive definite.

file zhpgvd.f  zhpgvd.f plus dependencies
prec complex16
        ZHPGVD computes all the eigenvalues and, optionally, the eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.  Here A and
        B are assumed to be Hermitian, stored in packed format, and B is also
        positive definite.
        If eigenvectors are desired, it uses a divide and conquer algorithm.
      
        The divide and conquer algorithm makes very mild assumptions about
        floating point arithmetic. It will work on machines with a guard
        digit in add/subtract, or on those binary machines without guard
        digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or
        Cray-2. It could conceivably fail on hexadecimal or decimal machines
        without guard digits, but we know of none.

file zhpsv.f  zhpsv.f plus dependencies
prec complex16
        ZHPSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian matrix stored in packed format and X
        and B are N-by-NRHS matrices.
      
        The diagonal pivoting method is used to factor A as
           A = U * D * U**H,  if UPLO = 'U', or
           A = L * D * L**H,  if UPLO = 'L',
        where U (or L) is a product of permutation and unit upper (lower)
        triangular matrices, D is Hermitian and block diagonal with 1-by-1
        and 2-by-2 diagonal blocks.  The factored form of A is then used to
        solve the system of equations A * X = B.

file zpbsv.f  zpbsv.f plus dependencies
prec complex16
        ZPBSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite band matrix and X
        and B are N-by-NRHS matrices.
      
        The Cholesky decomposition is used to factor A as
           A = U**H * U,  if UPLO = 'U', or
           A = L * L**H,  if UPLO = 'L',
        where U is an upper triangular band matrix, and L is a lower
        triangular band matrix, with the same number of superdiagonals or
        subdiagonals as A.  The factored form of A is then used to solve the
        system of equations A * X = B.

file zposv.f  zposv.f plus dependencies
prec complex16
        ZPOSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite matrix and X and B
        are N-by-NRHS matrices.
      
        The Cholesky decomposition is used to factor A as
           A = U**H* U,  if UPLO = 'U', or
           A = L * L**H,  if UPLO = 'L',
        where U is an upper triangular matrix and  L is a lower triangular
        matrix.  The factored form of A is then used to solve the system of
        equations A * X = B.
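
        A minimal sketch of a ZPOSV call on a diagonally dominant (hence
        positive definite) Hermitian system; only the upper triangle of A is
        referenced with UPLO = 'U'. Values are illustrative only.

          program demo_zposv
            implicit none
            integer, parameter :: n = 3, nrhs = 1
            complex*16 :: a(n,n), b(n,nrhs)
            integer    :: info, i, j

            do j = 1, n                      ! diagonally dominant Hermitian positive definite A
              do i = 1, j - 1
                a(i,j) = (1.0d0, -1.0d0)     ! upper triangle only (UPLO = 'U')
              end do
              a(j,j) = (4.0d0, 0.0d0)
            end do
            b(:,1) = (1.0d0, 0.0d0)

            call zposv('U', n, nrhs, a, n, b, n, info)
            print *, 'info =', info, ' x =', b(:,1)
          end program demo_zposv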

file zppsv.f  zppsv.f plus dependencies
prec complex16
        ZPPSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite matrix stored in
        packed format and X and B are N-by-NRHS matrices.
      
        The Cholesky decomposition is used to factor A as
           A = U**H* U,  if UPLO = 'U', or
           A = L * L**H,  if UPLO = 'L',
        where U is an upper triangular matrix and L is a lower triangular
        matrix.  The factored form of A is then used to solve the system of
        equations A * X = B.

file zspsv.f  zspsv.f plus dependencies
prec complex16
        ZSPSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N symmetric matrix stored in packed format and X
        and B are N-by-NRHS matrices.
      
        The diagonal pivoting method is used to factor A as
           A = U * D * U**T,  if UPLO = 'U', or
           A = L * D * L**T,  if UPLO = 'L',
        where U (or L) is a product of permutation and unit upper (lower)
        triangular matrices, D is symmetric and block diagonal with 1-by-1
        and 2-by-2 diagonal blocks.  The factored form of A is then used to
        solve the system of equations A * X = B.

file zstemr.f  zstemr.f plus dependencies
prec complex16
        ZSTEMR computes selected eigenvalues and, optionally, eigenvectors
        of a real symmetric tridiagonal matrix T. Any such unreduced matrix has
        a well-defined set of pairwise different real eigenvalues; the
        corresponding real eigenvectors are pairwise orthogonal.
      
        The spectrum may be computed either completely or partially by specifying
        either an interval (VL,VU] or a range of indices IL:IU for the desired
        eigenvalues.
      
        Depending on the number of desired eigenvalues, these are computed either
        by bisection or the dqds algorithm. Numerically orthogonal eigenvectors are
        computed by the use of various suitable L D L^T factorizations near clusters
        of close eigenvalues (referred to as RRRs, Relatively Robust
        Representations). An informal sketch of the algorithm follows.
      
        For each unreduced block (submatrix) of T,
           (a) Compute T - sigma I  = L D L^T, so that L and D
               define all the wanted eigenvalues to high relative accuracy.
               This means that small relative changes in the entries of D and L
               cause only small relative changes in the eigenvalues and
               eigenvectors. The standard (unfactored) representation of the
               tridiagonal matrix T does not have this property in general.
           (b) Compute the eigenvalues to suitable accuracy.
               If the eigenvectors are desired, the algorithm attains full
               accuracy of the computed eigenvalues only right before
               the corresponding vectors have to be computed, see steps c) and d).
           (c) For each cluster of close eigenvalues, select a new
               shift close to the cluster, find a new factorization, and refine
               the shifted eigenvalues to suitable accuracy.
           (d) For each eigenvalue with a large enough relative separation compute
               the corresponding eigenvector by forming a rank revealing twisted
               factorization. Go back to (c) for any clusters that remain.
      
        For more details, see:
        - Inderjit S. Dhillon and Beresford N. Parlett: "Multiple representations
          to compute orthogonal eigenvectors of symmetric tridiagonal matrices,"
          Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004.
        - Inderjit Dhillon and Beresford Parlett: "Orthogonal Eigenvectors and
          Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25,
          2004.  Also LAPACK Working Note 154.
        - Inderjit Dhillon: "A new O(n^2) algorithm for the symmetric
          tridiagonal eigenvalue/eigenvector problem",
          Computer Science Division Technical Report No. UCB/CSD-97-971,
          UC Berkeley, May 1997.
      
        Further Details
        1. ZSTEMR works only on machines which follow the IEEE-754
        floating-point standard in their handling of infinities and NaNs.
        This permits the use of efficient inner loops avoiding a check for
        zero divisors.
      
        2. LAPACK routines can be used to reduce a complex Hermitian matrix to
        real symmetric tridiagonal form.

        (Any complex Hermitian tridiagonal matrix has real values on its diagonal
        and potentially complex numbers on its off-diagonals. By applying a
        similarity transform with an appropriate diagonal matrix
        diag(1, e^{i \phi_1}, ... , e^{i \phi_{n-1}}), the complex Hermitian
        matrix can be transformed into a real symmetric matrix and complex
        arithmetic can be entirely avoided.)
      
        While the eigenvectors of the real symmetric tridiagonal matrix are real,
        the eigenvectors of the original complex Hermitian matrix have complex
        entries in general.
        Since LAPACK drivers overwrite the matrix data with the eigenvectors,
        ZSTEMR accepts complex workspace to facilitate interoperability
        with ZUNMTR or ZUPMTR.

file zsysv.f  zsysv.f plus dependencies
prec complex16
        ZSYSV computes the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N symmetric matrix and X and B are N-by-NRHS
        matrices.
      
        The diagonal pivoting method is used to factor A as
           A = U * D * U**T,  if UPLO = 'U', or
           A = L * D * L**T,  if UPLO = 'L',
        where U (or L) is a product of permutation and unit upper (lower)
        triangular matrices, and D is symmetric and block diagonal with
        1-by-1 and 2-by-2 diagonal blocks.  The factored form of A is then
        used to solve the system of equations A * X = B.

# ---------------------------------
# Available EXPERT DRIVER routines:
# ---------------------------------
file zgbsvx.f  zgbsvx.f plus dependencies
prec complex16
        ZGBSVX uses the LU factorization to compute the solution to a complex
        system of linear equations A * X = B, A**T * X = B, or A**H * X = B,
        where A is a band matrix of order N with KL subdiagonals and KU
        superdiagonals, and X and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed by this subroutine:
      
        1. If FACT = 'E', real scaling factors are computed to equilibrate
           the system:
              TRANS = 'N':  diag(R)*A*diag(C)     *inv(diag(C))*X = diag(R)*B
              TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
              TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
           or diag(C)*B (if TRANS = 'T' or 'C').
      
        2. If FACT = 'N' or 'E', the LU decomposition is used to factor the
           matrix A (after equilibration if FACT = 'E') as
              A = L * U,
           where L is a product of permutation and unit lower triangular
           matrices with KL subdiagonals, and U is upper triangular with
           KL+KU superdiagonals.
      
        3. If some U(i,i)=0, so that U is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        4. The system of equations is solved for X using the factored form
           of A.
      
        5. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
      
        6. If equilibration was used, the matrix X is premultiplied by
           diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
           that it solves the original system before equilibration.

file zgbsvxx.f  zgbsvxx.f plus dependencies
prec complex16
           ZGBSVXX uses the LU factorization to compute the solution to a
           complex*16 system of linear equations  A * X = B,  where A is an
           N-by-N matrix and X and B are N-by-NRHS matrices.
      
           If requested, both normwise and maximum componentwise error bounds
           are returned. ZGBSVXX will return a solution with a tiny
           guaranteed error (O(eps) where eps is the working machine
           precision) unless the matrix is very ill-conditioned, in which
           case a warning is returned. Relevant condition numbers also are
           calculated and returned.
      
           ZGBSVXX accepts user-provided factorizations and equilibration
           factors; see the definitions of the FACT and EQUED options.
           Solving with refinement and using a factorization from a previous
           ZGBSVXX call will also produce a solution with either O(eps)
           errors or warnings, but we cannot make that claim for general
           user-provided factorizations and equilibration factors if they
           differ from what ZGBSVXX would itself produce.
      
           Description
           ===========
      
           The following steps are performed:
      
           1. If FACT = 'E', double precision scaling factors are computed
           to equilibrate the system:
           the system:
      
             TRANS = 'N':  diag(R)*A*diag(C)     *inv(diag(C))*X = diag(R)*B
             TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
             TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
      
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
           or diag(C)*B (if TRANS = 'T' or 'C').
      
           2. If FACT = 'N' or 'E', the LU decomposition is used to factor
           the matrix A (after equilibration if FACT = 'E') as
      
             A = P * L * U,
      
           where P is a permutation matrix, L is a unit lower triangular
           matrix, and U is upper triangular.
      
           3. If some U(i,i)=0, so that U is exactly singular, then the
           routine returns with INFO = i. Otherwise, the factored form of A
           is used to estimate the condition number of the matrix A (see
           argument RCOND). If the reciprocal of the condition number is less
           than machine precision, the routine still goes on to solve for X
           and compute error bounds as described below.
      
           4. The system of equations is solved for X using the factored form
           of A.
      
           5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
           the routine will use iterative refinement to try to get a small
           error and error bounds.  Refinement calculates the residual to at
           least twice the working precision.
      
           6. If equilibration was used, the matrix X is premultiplied by
           diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
           that it solves the original system before equilibration.

file zgeesx.f  zgeesx.f plus dependencies
prec complex16
        ZGEESX computes for an N-by-N complex nonsymmetric matrix A, the
        eigenvalues, the Schur form T, and, optionally, the matrix of Schur
        vectors Z.  This gives the Schur factorization A = Z*T*(Z**H).
      
        Optionally, it also orders the eigenvalues on the diagonal of the
        Schur form so that selected eigenvalues are at the top left;
        computes a reciprocal condition number for the average of the
        selected eigenvalues (RCONDE); and computes a reciprocal condition
        number for the right invariant subspace corresponding to the
        selected eigenvalues (RCONDV).  The leading columns of Z form an
        orthonormal basis for this invariant subspace.
      
        For further explanation of the reciprocal condition numbers RCONDE
        and RCONDV, see Section 4.10 of the LAPACK Users' Guide (where
        these quantities are called s and sep respectively).
      
        A complex matrix is in Schur form if it is upper triangular.

file zgeevx.f  zgeevx.f plus dependencies
prec complex16
        ZGEEVX computes for an N-by-N complex nonsymmetric matrix A, the
        eigenvalues and, optionally, the left and/or right eigenvectors.
      
        Optionally also, it computes a balancing transformation to improve
        the conditioning of the eigenvalues and eigenvectors (ILO, IHI,
        SCALE, and ABNRM), reciprocal condition numbers for the eigenvalues
        (RCONDE), and reciprocal condition numbers for the right
        eigenvectors (RCONDV).
      
        The right eigenvector v(j) of A satisfies
                         A * v(j) = lambda(j) * v(j)
        where lambda(j) is its eigenvalue.
        The left eigenvector u(j) of A satisfies
                      u(j)**H * A = lambda(j) * u(j)**H
        where u(j)**H denotes the conjugate transpose of u(j).
      
        The computed eigenvectors are normalized to have Euclidean norm
        equal to 1 and largest component real.
      
        Balancing a matrix means permuting the rows and columns to make it
        more nearly upper triangular, and applying a diagonal similarity
        transformation D * A * D**(-1), where D is a diagonal matrix, to
        make its rows and columns closer in norm and the condition numbers
        of its eigenvalues and eigenvectors smaller.  The computed
        reciprocal condition numbers correspond to the balanced matrix.
        Permuting rows and columns will not change the condition numbers
        (in exact arithmetic) but diagonal scaling will.  For further
        explanation of balancing, see section 4.10.2 of the LAPACK
        Users' Guide.

file zgelsx.f  zgelsx.f plus dependencies
prec complex16
        This routine is deprecated and has been replaced by routine ZGELSY.
      
        ZGELSX computes the minimum-norm solution to a complex linear least
        squares problem:
            minimize || A * X - B ||
        using a complete orthogonal factorization of A.  A is an M-by-N
        matrix which may be rank-deficient.
      
        Several right hand side vectors b and solution vectors x can be
        handled in a single call; they are stored as the columns of the
        M-by-NRHS right hand side matrix B and the N-by-NRHS solution
        matrix X.
      
        The routine first computes a QR factorization with column pivoting:
            A * P = Q * [ R11 R12 ]
                        [  0  R22 ]
        with R11 defined as the largest leading submatrix whose estimated
        condition number is less than 1/RCOND.  The order of R11, RANK,
        is the effective rank of A.
      
        Then, R22 is considered to be negligible, and R12 is annihilated
        by unitary transformations from the right, arriving at the
        complete orthogonal factorization:
           A * P = Q * [ T11 0 ] * Z
                       [  0  0 ]
        The minimum-norm solution is then
           X = P * Z' [ inv(T11)*Q1'*B ]
                      [        0       ]
        where Q1 consists of the first RANK columns of Q.

file zgesvx.f  zgesvx.f plus dependencies
prec complex16
        ZGESVX uses the LU factorization to compute the solution to a complex
        system of linear equations
           A * X = B,
        where A is an N-by-N matrix and X and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'E', real scaling factors are computed to equilibrate
           the system:
              TRANS = 'N':  diag(R)*A*diag(C)     *inv(diag(C))*X = diag(R)*B
              TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
              TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
           or diag(C)*B (if TRANS = 'T' or 'C').
      
        2. If FACT = 'N' or 'E', the LU decomposition is used to factor the
           matrix A (after equilibration if FACT = 'E') as
              A = P * L * U,
           where P is a permutation matrix, L is a unit lower triangular
           matrix, and U is upper triangular.
      
        3. If some U(i,i)=0, so that U is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        4. The system of equations is solved for X using the factored form
           of A.
      
        5. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
      
        6. If equilibration was used, the matrix X is premultiplied by
           diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
           that it solves the original system before equilibration.
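
        A minimal sketch of a ZGESVX call with FACT = 'E' (equilibrate if
        needed, factor, solve, and refine); the test matrix is arbitrary, and
        the returned RCOND, FERR, and BERR report the condition estimate and
        error bounds described above.

          program demo_zgesvx
            implicit none
            integer, parameter :: n = 3, nrhs = 1
            complex*16       :: a(n,n), af(n,n), b(n,nrhs), x(n,nrhs), work(2*n)
            double precision :: r(n), c(n), ferr(nrhs), berr(nrhs), rcond, rwork(2*n)
            integer          :: ipiv(n), info, i, j
            character        :: equed

            do j = 1, n                      ! arbitrary nonsingular test matrix
              do i = 1, n
                a(i,j) = cmplx(min(i,j), i-j, kind=kind(1.0d0))
              end do
            end do
            b(:,1) = (1.0d0, 0.0d0)

            ! FACT = 'E': equilibrate if needed, then factor and solve with
            ! iterative refinement; error bounds come back in FERR and BERR.
            call zgesvx('E', 'N', n, nrhs, a, n, af, n, ipiv, equed, r, c, &
                        b, n, x, n, rcond, ferr, berr, work, rwork, info)
            print *, 'info =', info, ' rcond =', rcond
            print *, 'x =', x(:,1), ' ferr =', ferr(1), ' berr =', berr(1)
          end program demo_zgesvx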

file zgesvxx.f  zgesvxx.f plus dependencies
prec complex16
           ZGESVXX uses the LU factorization to compute the solution to a
           complex*16 system of linear equations  A * X = B,  where A is an
           N-by-N matrix and X and B are N-by-NRHS matrices.
      
           If requested, both normwise and maximum componentwise error bounds
           are returned. ZGESVXX will return a solution with a tiny
           guaranteed error (O(eps) where eps is the working machine
           precision) unless the matrix is very ill-conditioned, in which
           case a warning is returned. Relevant condition numbers also are
           calculated and returned.
      
           ZGESVXX accepts user-provided factorizations and equilibration
           factors; see the definitions of the FACT and EQUED options.
           Solving with refinement and using a factorization from a previous
           ZGESVXX call will also produce a solution with either O(eps)
           errors or warnings, but we cannot make that claim for general
           user-provided factorizations and equilibration factors if they
           differ from what ZGESVXX would itself produce.
      
           Description
           ===========
      
           The following steps are performed:
      
           1. If FACT = 'E', double precision scaling factors are computed
           to equilibrate the system:
           the system:
      
             TRANS = 'N':  diag(R)*A*diag(C)     *inv(diag(C))*X = diag(R)*B
             TRANS = 'T': (diag(R)*A*diag(C))**T *inv(diag(R))*X = diag(C)*B
             TRANS = 'C': (diag(R)*A*diag(C))**H *inv(diag(R))*X = diag(C)*B
      
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(R)*A*diag(C) and B by diag(R)*B (if TRANS='N')
           or diag(C)*B (if TRANS = 'T' or 'C').
      
           2. If FACT = 'N' or 'E', the LU decomposition is used to factor
           the matrix A (after equilibration if FACT = 'E') as
      
             A = P * L * U,
      
           where P is a permutation matrix, L is a unit lower triangular
           matrix, and U is upper triangular.
      
           3. If some U(i,i)=0, so that U is exactly singular, then the
           routine returns with INFO = i. Otherwise, the factored form of A
           is used to estimate the condition number of the matrix A (see
           argument RCOND). If the reciprocal of the condition number is less
           than machine precision, the routine still goes on to solve for X
           and compute error bounds as described below.
      
           4. The system of equations is solved for X using the factored form
           of A.
      
           5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
           the routine will use iterative refinement to try to get a small
           error and error bounds.  Refinement calculates the residual to at
           least twice the working precision.
      
           6. If equilibration was used, the matrix X is premultiplied by
           diag(C) (if TRANS = 'N') or diag(R) (if TRANS = 'T' or 'C') so
           that it solves the original system before equilibration.
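            As a hedged illustration of the steps above (this sketch is not
            part of the LAPACK sources; the matrix, right-hand side, and the
            deliberately generous workspace sizes are assumptions to be
            checked against zgesvxx.f), a call requesting a fresh equilibrated
            factorization (FACT = 'E') might look like:

              program gesvxx_sketch
                 implicit none
                 integer, parameter :: n = 3, nrhs = 1, nerrb = 3, nparams = 1
                 complex*16       :: a(n,n), af(n,n), b(n,nrhs), x(n,nrhs)
                 complex*16       :: work(4*n)
                 double precision :: r(n), c(n), berr(nrhs), params(nparams)
                 double precision :: err_norm(nrhs,nerrb), err_comp(nrhs,nerrb)
                 double precision :: rwork(4*n), rcond, rpvgrw
                 integer          :: ipiv(n), info
                 character        :: equed
                 ! illustrative general system A * x = b
                 a = (0.0d0,0.0d0)
                 a(1,1) = (4.0d0,0.0d0);  a(2,2) = (3.0d0,0.0d0)
                 a(3,3) = (2.0d0,0.0d0)
                 a(1,2) = (0.0d0,1.0d0);  a(2,1) = (0.0d0,-1.0d0)
                 b = (1.0d0,0.0d0)
                 params(1) = 1.0d0   ! keep iterative refinement enabled (step 5)
                 call zgesvxx('E', 'N', n, nrhs, a, n, af, n, ipiv, equed, r, c, &
                              b, n, x, n, rcond, rpvgrw, berr, nerrb, err_norm,  &
                              err_comp, nparams, params, work, rwork, info)
                 if (info == 0) print *, 'rcond =', rcond, '  berr =', berr(1)
              end program gesvxx_sketch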

file zggesx.f  zggesx.f plus dependencies
prec complex16
        ZGGESX computes for a pair of N-by-N complex nonsymmetric matrices
        (A,B), the generalized eigenvalues, the complex Schur form (S,T),
        and, optionally, the left and/or right matrices of Schur vectors (VSL
        and VSR).  This gives the generalized Schur factorization
      
             (A,B) = ( (VSL) S (VSR)**H, (VSL) T (VSR)**H )
      
        where (VSR)**H is the conjugate-transpose of VSR.
      
        Optionally, it also orders the eigenvalues so that a selected cluster
        of eigenvalues appears in the leading diagonal blocks of the upper
        triangular matrix S and the upper triangular matrix T; computes
        a reciprocal condition number for the average of the selected
        eigenvalues (RCONDE); and computes a reciprocal condition number for
        the right and left deflating subspaces corresponding to the selected
        eigenvalues (RCONDV). The leading columns of VSL and VSR then form
        an orthonormal basis for the corresponding left and right eigenspaces
        (deflating subspaces).
      
        A generalized eigenvalue for a pair of matrices (A,B) is a scalar w
        or a ratio alpha/beta = w, such that  A - w*B is singular.  It is
        usually represented as the pair (alpha,beta), as there is a
        reasonable interpretation for beta=0 or for both being zero.
      
        A pair of matrices (S,T) is in generalized complex Schur form if T is
        upper triangular with non-negative diagonal and S is upper
        triangular.

file zggevx.f  zggevx.f plus dependencies
prec complex16
        ZGGEVX computes for a pair of N-by-N complex nonsymmetric matrices
        (A,B) the generalized eigenvalues, and optionally, the left and/or
        right generalized eigenvectors.
      
        Optionally, it also computes a balancing transformation to improve
        the conditioning of the eigenvalues and eigenvectors (ILO, IHI,
        LSCALE, RSCALE, ABNRM, and BBNRM), reciprocal condition numbers for
        the eigenvalues (RCONDE), and reciprocal condition numbers for the
        right eigenvectors (RCONDV).
      
        A generalized eigenvalue for a pair of matrices (A,B) is a scalar
        lambda or a ratio alpha/beta = lambda, such that A - lambda*B is
        singular. It is usually represented as the pair (alpha,beta), as
        there is a reasonable interpretation for beta=0, and even for both
        being zero.
      
        The right eigenvector v(j) corresponding to the eigenvalue lambda(j)
        of (A,B) satisfies
                         A * v(j) = lambda(j) * B * v(j) .
        The left eigenvector u(j) corresponding to the eigenvalue lambda(j)
        of (A,B) satisfies
                         u(j)**H * A  = lambda(j) * u(j)**H * B,
        where u(j)**H is the conjugate-transpose of u(j).
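        The following sketch (illustrative only; the 2-by-2 pencil and the
        workspace sizes are assumptions of this example, not taken from the
        LAPACK sources) calls ZGGEVX with balancing and right eigenvectors,
        then recovers each finite eigenvalue as alpha(j)/beta(j):

          program ggevx_sketch
             implicit none
             integer, parameter :: n = 2, lwork = 8*n
             complex*16       :: a(n,n), b(n,n), alpha(n), beta(n)
             complex*16       :: vl(n,n), vr(n,n), work(lwork)
             double precision :: lscale(n), rscale(n), abnrm, bbnrm
             double precision :: rconde(n), rcondv(n), rwork(6*n)
             integer          :: iwork(n+2), ilo, ihi, info, j
             logical          :: bwork(n)
             ! a tiny diagonal pencil (A,B); B is the identity
             a = (0.0d0,0.0d0);  b = (0.0d0,0.0d0)
             a(1,1) = (2.0d0,0.0d0);  a(2,2) = (0.0d0,3.0d0)
             b(1,1) = (1.0d0,0.0d0);  b(2,2) = (1.0d0,0.0d0)
             call zggevx('B', 'N', 'V', 'N', n, a, n, b, n, alpha, beta,    &
                         vl, n, vr, n, ilo, ihi, lscale, rscale, abnrm,     &
                         bbnrm, rconde, rcondv, work, lwork, rwork, iwork,  &
                         bwork, info)
             do j = 1, n
                if (beta(j) /= (0.0d0,0.0d0)) then
                   print *, 'lambda(', j, ') =', alpha(j)/beta(j)
                else
                   print *, 'lambda(', j, ') is infinite (beta = 0)'
                end if
             end do
          end program ggevx_sketch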
      

file zhbevx.f  zhbevx.f plus dependencies
prec complex16
        ZHBEVX computes selected eigenvalues and, optionally, eigenvectors
        of a complex Hermitian band matrix A.  Eigenvalues and eigenvectors
        can be selected by specifying either a range of values or a range of
        indices for the desired eigenvalues.
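        A minimal sketch of selecting eigenvalues by value range (the band
        matrix below and its contents are illustrative assumptions, not part
        of the LAPACK sources; ABSTOL follows the documented recommendation of
        twice the underflow threshold):

          program hbevx_sketch
             implicit none
             integer, parameter :: n = 4, kd = 1, ldab = kd + 1
             complex*16       :: ab(ldab,n), q(n,n), z(n,n), work(n)
             double precision :: w(n), rwork(7*n), abstol
             integer          :: iwork(5*n), ifail(n), m, info, j
             double precision, external :: dlamch
             ! Hermitian tridiagonal matrix in upper band storage:
             ! AB(kd+1,j) = A(j,j),  AB(kd,j) = A(j-1,j)
             do j = 1, n
                ab(2,j) = (2.0d0, 0.0d0)
             end do
             do j = 2, n
                ab(1,j) = (0.0d0, 1.0d0)
             end do
             abstol = 2.0d0*dlamch('S')
             ! eigenvalues in the half-open interval (0, 3], with eigenvectors
             call zhbevx('V', 'V', 'U', n, kd, ab, ldab, q, n, 0.0d0, 3.0d0, &
                         0, 0, abstol, m, w, z, n, work, rwork, iwork,       &
                         ifail, info)
             if (info == 0) print *, m, 'eigenvalues found:', w(1:m)
          end program hbevx_sketch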

file zhbgvx.f  zhbgvx.f plus dependencies
prec complex16
        ZHBGVX computes selected eigenvalues, and optionally, the eigenvectors
        of a complex generalized Hermitian-definite banded eigenproblem, of
        the form A*x=(lambda)*B*x. Here A and B are assumed to be Hermitian
        and banded, and B is also positive definite.  Eigenvalues and
        eigenvectors can be selected by specifying either all eigenvalues,
        a range of values or a range of indices for the desired eigenvalues.

file zheevx.f  zheevx.f plus dependencies
prec complex16
        ZHEEVX computes selected eigenvalues and, optionally, eigenvectors
        of a complex Hermitian matrix A.  Eigenvalues and eigenvectors can
        be selected by specifying either a range of values or a range of
        indices for the desired eigenvalues.
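        For example, a hedged sketch (the 4-by-4 matrix and the minimal
        workspace size are assumptions of this example) that asks ZHEEVX for
        the two smallest eigenvalues, selected by index range:

          program heevx_sketch
             implicit none
             integer, parameter :: n = 4, lwork = 2*n
             complex*16       :: a(n,n), z(n,n), work(lwork)
             double precision :: w(n), rwork(7*n), abstol
             integer          :: iwork(5*n), ifail(n), m, info
             double precision, external :: dlamch
             ! only the upper triangle is referenced when UPLO = 'U'
             a = (0.0d0,0.0d0)
             a(1,1) = (2.0d0,0.0d0);  a(2,2) = (3.0d0,0.0d0)
             a(3,3) = (4.0d0,0.0d0);  a(4,4) = (5.0d0,0.0d0)
             a(1,2) = (0.0d0,1.0d0)
             abstol = 2.0d0*dlamch('S')     ! documented recommendation
             call zheevx('V', 'I', 'U', n, a, n, 0.0d0, 0.0d0, 1, 2, abstol, &
                         m, w, z, n, work, lwork, rwork, iwork, ifail, info)
             if (info == 0) print *, 'two smallest eigenvalues:', w(1:m)
          end program heevx_sketch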

file zhegvx.f  zhegvx.f plus dependencies
prec complex16
        ZHEGVX computes selected eigenvalues, and optionally, eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.  Here A and
        B are assumed to be Hermitian and B is also positive definite.
        Eigenvalues and eigenvectors can be selected by specifying either a
        range of values or a range of indices for the desired eigenvalues.
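        A hedged sketch of the first problem type, A*x = (lambda)*B*x
        (ITYPE = 1); the matrices and the minimal workspace size below are
        assumptions of this example, not taken from the LAPACK sources:

          program hegvx_sketch
             implicit none
             integer, parameter :: n = 3, lwork = 2*n
             complex*16       :: a(n,n), b(n,n), z(n,n), work(lwork)
             double precision :: w(n), rwork(7*n), abstol
             integer          :: iwork(5*n), ifail(n), m, info, j
             double precision, external :: dlamch
             a = (0.0d0,0.0d0);  b = (0.0d0,0.0d0)
             a(1,1) = (1.0d0,0.0d0);  a(2,2) = (2.0d0,0.0d0)
             a(3,3) = (3.0d0,0.0d0);  a(1,2) = (0.0d0,0.5d0)
             do j = 1, n
                b(j,j) = (2.0d0,0.0d0)   ! B = 2*I is Hermitian positive definite
             end do
             abstol = 2.0d0*dlamch('S')
             ! smallest eigenvalue only (index range 1..1), with its eigenvector
             call zhegvx(1, 'V', 'I', 'U', n, a, n, b, n, 0.0d0, 0.0d0, 1, 1, &
                         abstol, m, w, z, n, work, lwork, rwork, iwork,       &
                         ifail, info)
             if (info == 0) print *, 'lambda(1) =', w(1)
          end program hegvx_sketch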

file zhesvx.f  zhesvx.f plus dependencies
prec complex16
        ZHESVX uses the diagonal pivoting factorization to compute the
        solution to a complex system of linear equations A * X = B,
        where A is an N-by-N Hermitian matrix and X and B are N-by-NRHS
        matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'N', the diagonal pivoting method is used to factor A.
           The form of the factorization is
              A = U * D * U**H,  if UPLO = 'U', or
              A = L * D * L**H,  if UPLO = 'L',
           where U (or L) is a product of permutation and unit upper (lower)
           triangular matrices, and D is Hermitian and block diagonal with
           1-by-1 and 2-by-2 diagonal blocks.
      
        2. If some D(i,i)=0, so that D is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        3. The system of equations is solved for X using the factored form
           of A.
      
        4. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
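        A hedged sketch of the call sequence (the Hermitian indefinite matrix
        and right-hand side below are illustrative assumptions, not taken from
        the LAPACK sources):

          program hesvx_sketch
             implicit none
             integer, parameter :: n = 3, nrhs = 1, lwork = 2*n
             complex*16       :: a(n,n), af(n,n), b(n,nrhs), x(n,nrhs), work(lwork)
             double precision :: rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: ipiv(n), info
             ! Hermitian indefinite matrix; only the upper triangle is referenced
             a = (0.0d0,0.0d0)
             a(1,1) = ( 2.0d0,0.0d0);  a(2,2) = (-1.0d0,0.0d0)
             a(3,3) = ( 3.0d0,0.0d0);  a(1,3) = ( 0.0d0,1.0d0)
             b = (1.0d0,0.0d0)
             call zhesvx('N', 'U', n, nrhs, a, n, af, n, ipiv, b, n, x, n, &
                         rcond, ferr, berr, work, lwork, rwork, info)
             if (info == 0) then
                print *, 'x     =', x(:,1)
                print *, 'rcond =', rcond, '  ferr =', ferr(1), '  berr =', berr(1)
             else if (info == n+1) then
                print *, 'A is nearly singular; rcond =', rcond
             end if
          end program hesvx_sketch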

file zhesvxx.f  zhesvxx.f plus dependencies
prec complex16
           ZHESVXX uses the diagonal pivoting factorization to compute the
           solution to a complex*16 system of linear equations A * X = B, where
            A is an N-by-N Hermitian matrix and X and B are N-by-NRHS
           matrices.
      
           If requested, both normwise and maximum componentwise error bounds
           are returned. ZHESVXX will return a solution with a tiny
           guaranteed error (O(eps) where eps is the working machine
           precision) unless the matrix is very ill-conditioned, in which
           case a warning is returned. Relevant condition numbers also are
           calculated and returned.
      
           ZHESVXX accepts user-provided factorizations and equilibration
           factors; see the definitions of the FACT and EQUED options.
           Solving with refinement and using a factorization from a previous
           ZHESVXX call will also produce a solution with either O(eps)
           errors or warnings, but we cannot make that claim for general
           user-provided factorizations and equilibration factors if they
           differ from what ZHESVXX would itself produce.
      
           Description
           ===========
      
           The following steps are performed:
      
           1. If FACT = 'E', double precision scaling factors are computed to equilibrate
           the system:
      
             diag(S)*A*diag(S)     *inv(diag(S))*X = diag(S)*B
      
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
            2. If FACT = 'N' or 'E', the diagonal pivoting method is used to
            factor the matrix A (after equilibration if FACT = 'E') as

               A = U * D * U**H,  if UPLO = 'U', or
               A = L * D * L**H,  if UPLO = 'L',

            where U (or L) is a product of permutation and unit upper (lower)
            triangular matrices, and D is Hermitian and block diagonal with
            1-by-1 and 2-by-2 diagonal blocks.
      
           3. If some D(i,i)=0, so that D is exactly singular, then the
           routine returns with INFO = i. Otherwise, the factored form of A
           is used to estimate the condition number of the matrix A (see
           argument RCOND).  If the reciprocal of the condition number is
           less than machine precision, the routine still goes on to solve
           for X and compute error bounds as described below.
      
           4. The system of equations is solved for X using the factored form
           of A.
      
           5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
           the routine will use iterative refinement to try to get a small
           error and error bounds.  Refinement calculates the residual to at
           least twice the working precision.
      
           6. If equilibration was used, the matrix X is premultiplied by
            diag(S) so that it solves the original system before
           equilibration.

file zhpevx.f  zhpevx.f plus dependencies
prec complex16
        ZHPEVX computes selected eigenvalues and, optionally, eigenvectors
        of a complex Hermitian matrix A in packed storage.
        Eigenvalues/vectors can be selected by specifying either a range of
        values or a range of indices for the desired eigenvalues.

file zhpgvx.f  zhpgvx.f plus dependencies
prec complex16
        ZHPGVX computes selected eigenvalues and, optionally, eigenvectors
        of a complex generalized Hermitian-definite eigenproblem, of the form
        A*x=(lambda)*B*x,  A*B*x=(lambda)*x,  or B*A*x=(lambda)*x.  Here A and
        B are assumed to be Hermitian, stored in packed format, and B is also
        positive definite.  Eigenvalues and eigenvectors can be selected by
        specifying either a range of values or a range of indices for the
        desired eigenvalues.

file zhpsvx.f  zhpsvx.f plus dependencies
prec complex16
        ZHPSVX uses the diagonal pivoting factorization A = U*D*U**H or
        A = L*D*L**H to compute the solution to a complex system of linear
        equations A * X = B, where A is an N-by-N Hermitian matrix stored
        in packed format and X and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'N', the diagonal pivoting method is used to factor A as
              A = U * D * U**H,  if UPLO = 'U', or
              A = L * D * L**H,  if UPLO = 'L',
           where U (or L) is a product of permutation and unit upper (lower)
           triangular matrices and D is Hermitian and block diagonal with
           1-by-1 and 2-by-2 diagonal blocks.
      
        2. If some D(i,i)=0, so that D is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        3. The system of equations is solved for X using the factored form
           of A.
      
        4. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
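        A hedged sketch showing the packed-storage convention together with
        the call (the 2-by-2 matrix and right-hand side below are illustrative
        assumptions of this example):

          program hpsvx_sketch
             implicit none
             integer, parameter :: n = 2, nrhs = 1
             complex*16       :: ap(n*(n+1)/2), afp(n*(n+1)/2)
             complex*16       :: b(n,nrhs), x(n,nrhs), work(2*n)
             double precision :: rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: ipiv(n), info
             ! upper packed storage: AP(i + (j-1)*j/2) = A(i,j) for i <= j
             ap(1) = ( 2.0d0, 0.0d0)       ! A(1,1)
             ap(2) = ( 0.0d0, 1.0d0)       ! A(1,2)
             ap(3) = (-1.0d0, 0.0d0)       ! A(2,2), so A is indefinite
             b = (1.0d0, 0.0d0)
             call zhpsvx('N', 'U', n, nrhs, ap, afp, ipiv, b, n, x, n, &
                         rcond, ferr, berr, work, rwork, info)
             if (info == 0) print *, 'x =', x(:,1), '  rcond =', rcond
          end program hpsvx_sketch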

file zpbsvx.f  zpbsvx.f plus dependencies
prec complex16
        ZPBSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
        compute the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite band matrix and X
        and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'E', real scaling factors are computed to equilibrate
           the system:
              diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
        2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
           factor the matrix A (after equilibration if FACT = 'E') as
              A = U**H * U,  if UPLO = 'U', or
              A = L * L**H,  if UPLO = 'L',
           where U is an upper triangular band matrix, and L is a lower
           triangular band matrix.
      
        3. If the leading i-by-i principal minor is not positive definite,
           then the routine returns with INFO = i. Otherwise, the factored
           form of A is used to estimate the condition number of the matrix
           A.  If the reciprocal of the condition number is less than machine
           precision, INFO = N+1 is returned as a warning, but the routine
           still goes on to solve for X and compute error bounds as
           described below.
      
        4. The system of equations is solved for X using the factored form
           of A.
      
        5. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
      
        6. If equilibration was used, the matrix X is premultiplied by
           diag(S) so that it solves the original system before
           equilibration.
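        A hedged sketch with equilibration requested (FACT = 'E'); the
        tridiagonal test matrix in band storage is an illustrative assumption
        of this example, not something taken from the LAPACK sources:

          program pbsvx_sketch
             implicit none
             integer, parameter :: n = 4, kd = 1, ldab = kd + 1, nrhs = 1
             complex*16       :: ab(ldab,n), afb(ldab,n), b(n,nrhs), x(n,nrhs)
             complex*16       :: work(2*n)
             double precision :: s(n), rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: info, j
             character        :: equed
             ! Hermitian positive definite tridiagonal matrix, upper band storage:
             ! AB(kd+1,j) = A(j,j),  AB(kd,j) = A(j-1,j)
             do j = 1, n
                ab(2,j) = (2.0d0, 0.0d0)
             end do
             do j = 2, n
                ab(1,j) = (0.0d0, 0.5d0)
             end do
             b = (1.0d0, 0.0d0)
             call zpbsvx('E', 'U', n, kd, nrhs, ab, ldab, afb, ldab, equed, s, &
                         b, n, x, n, rcond, ferr, berr, work, rwork, info)
             ! EQUED reports whether steps 1 and 6 above were actually applied
             if (info == 0) print *, 'equed = ', equed, '  rcond =', rcond
          end program pbsvx_sketch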

file zposvx.f  zposvx.f plus dependencies
prec complex16
        ZPOSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
        compute the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite matrix and X and B
        are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'E', real scaling factors are computed to equilibrate
           the system:
              diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
        2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
           factor the matrix A (after equilibration if FACT = 'E') as
               A = U**H * U,  if UPLO = 'U', or
              A = L * L**H,  if UPLO = 'L',
           where U is an upper triangular matrix and L is a lower triangular
           matrix.
      
        3. If the leading i-by-i principal minor is not positive definite,
           then the routine returns with INFO = i. Otherwise, the factored
           form of A is used to estimate the condition number of the matrix
           A.  If the reciprocal of the condition number is less than machine
           precision, INFO = N+1 is returned as a warning, but the routine
           still goes on to solve for X and compute error bounds as
           described below.
      
        4. The system of equations is solved for X using the factored form
           of A.
      
        5. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
      
        6. If equilibration was used, the matrix X is premultiplied by
           diag(S) so that it solves the original system before
           equilibration.
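        A hedged sketch in which equilibration is requested and, for this
        deliberately badly scaled, made-up matrix, actually applied, so that
        steps 1 and 6 above take effect:

          program posvx_sketch
             implicit none
             integer, parameter :: n = 2, nrhs = 1
             complex*16       :: a(n,n), af(n,n), b(n,nrhs), x(n,nrhs), work(2*n)
             double precision :: s(n), rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: info
             character        :: equed
             ! badly scaled Hermitian positive definite matrix (upper triangle)
             a = (0.0d0, 0.0d0)
             a(1,1) = (1.0d12, 0.0d0)
             a(1,2) = (1.0d5,  1.0d5)
             a(2,2) = (1.0d0,  0.0d0)
             b(1,1) = (1.0d12, 0.0d0)
             b(2,1) = (1.0d0,  0.0d0)
             call zposvx('E', 'U', n, nrhs, a, n, af, n, equed, s, b, n, x, n, &
                         rcond, ferr, berr, work, rwork, info)
             ! on exit EQUED = 'Y' if A and B were overwritten by the scaled
             ! system; X always refers to the original, unscaled system
             if (info == 0) print *, 'equed = ', equed, '  x =', x(:,1)
          end program posvx_sketch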

file zposvxx.f  zposvxx.f plus dependencies
prec complex16
            ZPOSVXX uses the Cholesky factorization A = U**H*U or A = L*L**H
            to compute the solution to a complex*16 system of linear equations
            A * X = B, where A is an N-by-N Hermitian positive definite matrix
           and X and B are N-by-NRHS matrices.
      
           If requested, both normwise and maximum componentwise error bounds
           are returned. ZPOSVXX will return a solution with a tiny
           guaranteed error (O(eps) where eps is the working machine
           precision) unless the matrix is very ill-conditioned, in which
           case a warning is returned. Relevant condition numbers also are
           calculated and returned.
      
           ZPOSVXX accepts user-provided factorizations and equilibration
           factors; see the definitions of the FACT and EQUED options.
           Solving with refinement and using a factorization from a previous
           ZPOSVXX call will also produce a solution with either O(eps)
           errors or warnings, but we cannot make that claim for general
           user-provided factorizations and equilibration factors if they
           differ from what ZPOSVXX would itself produce.
      
           Description
           ===========
      
           The following steps are performed:
      
           1. If FACT = 'E', double precision scaling factors are computed to equilibrate
           the system:
      
             diag(S)*A*diag(S)     *inv(diag(S))*X = diag(S)*B
      
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
           2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
           factor the matrix A (after equilibration if FACT = 'E') as
               A = U**H * U,  if UPLO = 'U', or
               A = L * L**H,  if UPLO = 'L',
           where U is an upper triangular matrix and L is a lower triangular
           matrix.
      
           3. If the leading i-by-i principal minor is not positive definite,
           then the routine returns with INFO = i. Otherwise, the factored
           form of A is used to estimate the condition number of the matrix
           A (see argument RCOND).  If the reciprocal of the condition number
           is less than machine precision, the routine still goes on to solve
           for X and compute error bounds as described below.
      
           4. The system of equations is solved for X using the factored form
           of A.
      
           5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
           the routine will use iterative refinement to try to get a small
           error and error bounds.  Refinement calculates the residual to at
           least twice the working precision.
      
           6. If equilibration was used, the matrix X is premultiplied by
           diag(S) so that it solves the original system before
           equilibration.

file zppsvx.f  zppsvx.f plus dependencies
prec complex16
        ZPPSVX uses the Cholesky factorization A = U**H*U or A = L*L**H to
        compute the solution to a complex system of linear equations
           A * X = B,
        where A is an N-by-N Hermitian positive definite matrix stored in
        packed format and X and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'E', real scaling factors are computed to equilibrate
           the system:
              diag(S) * A * diag(S) * inv(diag(S)) * X = diag(S) * B
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
        2. If FACT = 'N' or 'E', the Cholesky decomposition is used to
           factor the matrix A (after equilibration if FACT = 'E') as
               A = U**H * U,  if UPLO = 'U', or
               A = L * L**H,  if UPLO = 'L',
            where U is an upper triangular matrix and L is a lower
            triangular matrix.
      
        3. If the leading i-by-i principal minor is not positive definite,
           then the routine returns with INFO = i. Otherwise, the factored
           form of A is used to estimate the condition number of the matrix
           A.  If the reciprocal of the condition number is less than machine
           precision, INFO = N+1 is returned as a warning, but the routine
           still goes on to solve for X and compute error bounds as
           described below.
      
        4. The system of equations is solved for X using the factored form
           of A.
      
        5. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
      
        6. If equilibration was used, the matrix X is premultiplied by
           diag(S) so that it solves the original system before
           equilibration.
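        A hedged sketch using packed storage (the matrix entries below are
        illustrative assumptions; AP holds the upper triangle columnwise):

          program ppsvx_sketch
             implicit none
             integer, parameter :: n = 2, nrhs = 1
             complex*16       :: ap(n*(n+1)/2), afp(n*(n+1)/2)
             complex*16       :: b(n,nrhs), x(n,nrhs), work(2*n)
             double precision :: s(n), rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: info
             character        :: equed
             ! upper packed storage: AP(i + (j-1)*j/2) = A(i,j) for i <= j
             ap(1) = (4.0d0, 0.0d0)        ! A(1,1)
             ap(2) = (0.0d0, 1.0d0)        ! A(1,2)
             ap(3) = (3.0d0, 0.0d0)        ! A(2,2)
             b = (1.0d0, 0.0d0)
             call zppsvx('N', 'U', n, nrhs, ap, afp, equed, s, b, n, x, n, &
                         rcond, ferr, berr, work, rwork, info)
             if (info == 0) print *, 'x =', x(:,1), '  rcond =', rcond
          end program ppsvx_sketch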

file zspsvx.f  zspsvx.f plus dependencies
prec complex16
        ZSPSVX uses the diagonal pivoting factorization A = U*D*U**T or
        A = L*D*L**T to compute the solution to a complex system of linear
        equations A * X = B, where A is an N-by-N symmetric matrix stored
        in packed format and X and B are N-by-NRHS matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'N', the diagonal pivoting method is used to factor A as
              A = U * D * U**T,  if UPLO = 'U', or
              A = L * D * L**T,  if UPLO = 'L',
           where U (or L) is a product of permutation and unit upper (lower)
           triangular matrices and D is symmetric and block diagonal with
           1-by-1 and 2-by-2 diagonal blocks.
      
        2. If some D(i,i)=0, so that D is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        3. The system of equations is solved for X using the factored form
           of A.
      
        4. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.

file zsysvx.f  zsysvx.f plus dependencies
prec complex16
        ZSYSVX uses the diagonal pivoting factorization to compute the
        solution to a complex system of linear equations A * X = B,
        where A is an N-by-N symmetric matrix and X and B are N-by-NRHS
        matrices.
      
        Error bounds on the solution and a condition estimate are also
        provided.
      
        Description
        ===========
      
        The following steps are performed:
      
        1. If FACT = 'N', the diagonal pivoting method is used to factor A.
           The form of the factorization is
              A = U * D * U**T,  if UPLO = 'U', or
              A = L * D * L**T,  if UPLO = 'L',
           where U (or L) is a product of permutation and unit upper (lower)
           triangular matrices, and D is symmetric and block diagonal with
           1-by-1 and 2-by-2 diagonal blocks.
      
        2. If some D(i,i)=0, so that D is exactly singular, then the routine
           returns with INFO = i. Otherwise, the factored form of A is used
           to estimate the condition number of the matrix A.  If the
           reciprocal of the condition number is less than machine precision,
           INFO = N+1 is returned as a warning, but the routine still goes on
           to solve for X and compute error bounds as described below.
      
        3. The system of equations is solved for X using the factored form
           of A.
      
        4. Iterative refinement is applied to improve the computed solution
           matrix and calculate error bounds and backward error estimates
           for it.
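        A hedged sketch; note that the matrix is complex symmetric
        (A(i,j) = A(j,i), with no conjugation), not Hermitian.  The data and
        the minimal workspace size are assumptions of this example:

          program sysvx_sketch
             implicit none
             integer, parameter :: n = 2, nrhs = 1, lwork = 2*n
             complex*16       :: a(n,n), af(n,n), b(n,nrhs), x(n,nrhs), work(lwork)
             double precision :: rcond, ferr(nrhs), berr(nrhs), rwork(n)
             integer          :: ipiv(n), info
             ! complex symmetric matrix; only the upper triangle is referenced
             a(1,1) = (1.0d0, 2.0d0);  a(1,2) = (3.0d0,-1.0d0)
             a(2,1) = (3.0d0,-1.0d0);  a(2,2) = (2.0d0, 1.0d0)
             b = (1.0d0, 0.0d0)
             call zsysvx('N', 'U', n, nrhs, a, n, af, n, ipiv, b, n, x, n, &
                         rcond, ferr, berr, work, lwork, rwork, info)
             if (info == 0) print *, 'x =', x(:,1), '  rcond =', rcond
          end program sysvx_sketch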

file zsysvxx.f  zsysvxx.f plus dependencies
prec complex16
           ZSYSVXX uses the diagonal pivoting factorization to compute the
           solution to a complex*16 system of linear equations A * X = B, where
           A is an N-by-N symmetric matrix and X and B are N-by-NRHS
           matrices.
      
           If requested, both normwise and maximum componentwise error bounds
           are returned. ZSYSVXX will return a solution with a tiny
           guaranteed error (O(eps) where eps is the working machine
           precision) unless the matrix is very ill-conditioned, in which
           case a warning is returned. Relevant condition numbers also are
           calculated and returned.
      
           ZSYSVXX accepts user-provided factorizations and equilibration
           factors; see the definitions of the FACT and EQUED options.
           Solving with refinement and using a factorization from a previous
           ZSYSVXX call will also produce a solution with either O(eps)
           errors or warnings, but we cannot make that claim for general
           user-provided factorizations and equilibration factors if they
           differ from what ZSYSVXX would itself produce.
      
           Description
           ===========
      
           The following steps are performed:
      
           1. If FACT = 'E', double precision scaling factors are computed to equilibrate
           the system:
      
             diag(S)*A*diag(S)     *inv(diag(S))*X = diag(S)*B
      
           Whether or not the system will be equilibrated depends on the
           scaling of the matrix A, but if equilibration is used, A is
           overwritten by diag(S)*A*diag(S) and B by diag(S)*B.
      
            2. If FACT = 'N' or 'E', the diagonal pivoting method is used to
            factor the matrix A (after equilibration if FACT = 'E') as
      
              A = U * D * U**T,  if UPLO = 'U', or
              A = L * D * L**T,  if UPLO = 'L',
      
           where U (or L) is a product of permutation and unit upper (lower)
           triangular matrices, and D is symmetric and block diagonal with
           1-by-1 and 2-by-2 diagonal blocks.
      
           3. If some D(i,i)=0, so that D is exactly singular, then the
           routine returns with INFO = i. Otherwise, the factored form of A
           is used to estimate the condition number of the matrix A (see
           argument RCOND).  If the reciprocal of the condition number is
           less than machine precision, the routine still goes on to solve
           for X and compute error bounds as described below.
      
           4. The system of equations is solved for X using the factored form
           of A.
      
           5. By default (unless PARAMS(LA_LINRX_ITREF_I) is set to zero),
           the routine will use iterative refinement to try to get a small
           error and error bounds.  Refinement calculates the residual to at
           least twice the working precision.
      
           6. If equilibration was used, the matrix X is premultiplied by
            diag(S) so that it solves the original system before
           equilibration.