LAPACK 3.3.1
Linear Algebra PACKage
      SUBROUTINE SLAQR5( WANTT, WANTZ, KACC22, N, KTOP, KBOT, NSHFTS,
     $                   SR, SI, H, LDH, ILOZ, IHIZ, Z, LDZ, V, LDV, U,
     $                   LDU, NV, WV, LDWV, NH, WH, LDWH )
*
*  -- LAPACK auxiliary routine (version 3.3.0) --
*  -- LAPACK is a software package provided by Univ. of Tennessee,    --
*  -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
*     November 2010
*
*     .. Scalar Arguments ..
      INTEGER            IHIZ, ILOZ, KACC22, KBOT, KTOP, LDH, LDU, LDV,
     $                   LDWH, LDWV, LDZ, N, NH, NSHFTS, NV
      LOGICAL            WANTT, WANTZ
*     ..
*     .. Array Arguments ..
      REAL               H( LDH, * ), SI( * ), SR( * ), U( LDU, * ),
     $                   V( LDV, * ), WH( LDWH, * ), WV( LDWV, * ),
     $                   Z( LDZ, * )
*     ..
*
*     This auxiliary subroutine called by SLAQR0 performs a
*     single small-bulge multi-shift QR sweep.
*
*     WANTT   (input) logical scalar
*             WANTT = .true. if the quasi-triangular Schur factor
*             is being computed.  WANTT is set to .false. otherwise.
*
*     WANTZ   (input) logical scalar
*             WANTZ = .true. if the orthogonal Schur factor is being
*             computed.  WANTZ is set to .false. otherwise.
*
*     KACC22  (input) integer with value 0, 1, or 2.
*             Specifies the computation mode of far-from-diagonal
*             orthogonal updates.
*             = 0: SLAQR5 does not accumulate reflections and does not
*                  use matrix-matrix multiply to update far-from-diagonal
*                  matrix entries.
*             = 1: SLAQR5 accumulates reflections and uses matrix-matrix
*                  multiply to update the far-from-diagonal matrix entries.
*             = 2: SLAQR5 accumulates reflections, uses matrix-matrix
*                  multiply to update the far-from-diagonal matrix entries,
*                  and takes advantage of 2-by-2 block structure during
*                  matrix multiplies.
*
*     N       (input) integer scalar
*             N is the order of the Hessenberg matrix H upon which this
*             subroutine operates.
*
*     KTOP    (input) integer scalar
*     KBOT    (input) integer scalar
*             These are the first and last rows and columns of an
*             isolated diagonal block upon which the QR sweep is to be
*             applied.  It is assumed without a check that
*                       either KTOP = 1  or   H(KTOP,KTOP-1) = 0
*             and
*                       either KBOT = N  or   H(KBOT+1,KBOT) = 0.
*
*     NSHFTS  (input) integer scalar
*             NSHFTS gives the number of simultaneous shifts.  NSHFTS
*             must be positive and even.
*
*     SR      (input/output) REAL array of size (NSHFTS)
*     SI      (input/output) REAL array of size (NSHFTS)
*             SR contains the real parts and SI contains the imaginary
*             parts of the NSHFTS shifts of origin that define the
*             multi-shift QR sweep.  On output SR and SI may be
*             reordered.
*
*     H       (input/output) REAL array of size (LDH,N)
*             On input H contains a Hessenberg matrix.  On output a
*             multi-shift QR sweep with shifts SR(J)+i*SI(J) is applied
*             to the isolated diagonal block in rows and columns KTOP
*             through KBOT.
*
*     LDH     (input) integer scalar
*             LDH is the leading dimension of H just as declared in the
*             calling procedure.  LDH.GE.MAX(1,N).
*
*     ILOZ    (input) INTEGER
*     IHIZ    (input) INTEGER
*             Specify the rows of Z to which transformations must be
*             applied if WANTZ is .TRUE..  1 .LE. ILOZ .LE. IHIZ .LE. N.
*
*     Z       (input/output) REAL array of size (LDZ,IHIZ)
*             If WANTZ = .TRUE., then the QR sweep orthogonal
*             similarity transformation is accumulated into
*             Z(ILOZ:IHIZ,ILOZ:IHIZ) from the right.
*             If WANTZ = .FALSE., then Z is unreferenced.
*
*     LDZ     (input) integer scalar
*             LDZ is the leading dimension of Z just as declared in
*             the calling procedure.  LDZ.GE.N.
*
*     V       (workspace) REAL array of size (LDV,NSHFTS/2)
*
*     LDV     (input) integer scalar
*             LDV is the leading dimension of V as declared in the
*             calling procedure.  LDV.GE.3.
*
*     U       (workspace) REAL array of size
*             (LDU,3*NSHFTS-3)
*
*     LDU     (input) integer scalar
*             LDU is the leading dimension of U just as declared in
*             the calling subroutine.  LDU.GE.3*NSHFTS-3.
*
*     NH      (input) integer scalar
*             NH is the number of columns in array WH available for
*             workspace.  NH.GE.1.
*
*     WH      (workspace) REAL array of size (LDWH,NH)
*
*     LDWH    (input) integer scalar
*             Leading dimension of WH just as declared in the
*             calling procedure.  LDWH.GE.3*NSHFTS-3.
*
*     NV      (input) integer scalar
*             NV is the number of rows in WV available for workspace.
*             NV.GE.1.
*
*     WV      (workspace) REAL array of size
*             (LDWV,3*NSHFTS-3)
*
*     LDWV    (input) integer scalar
*             LDWV is the leading dimension of WV as declared in
*             the calling subroutine.  LDWV.GE.NV.
*
*     ================================================================
*     Based on contributions by
*        Karen Braman and Ralph Byers, Department of Mathematics,
*        University of Kansas, USA
*
*     ================================================================
*     Reference:
*
*     K. Braman, R. Byers and R. Mathias, The Multi-Shift QR
*     Algorithm Part I: Maintaining Well Focused Shifts and
*     Level 3 Performance, SIAM Journal on Matrix Analysis and
*     Applications, volume 23, pages 929--947, 2002.
*
*     ================================================================
*     .. Parameters ..
      REAL               ZERO, ONE
      PARAMETER          ( ZERO = 0.0e0, ONE = 1.0e0 )
*     ..
*     .. Local Scalars ..
      REAL               ALPHA, BETA, H11, H12, H21, H22, REFSUM,
     $                   SAFMAX, SAFMIN, SCL, SMLNUM, SWAP, TST1, TST2,
     $                   ULP
      INTEGER            I, I2, I4, INCOL, J, J2, J4, JBOT, JCOL, JLEN,
     $                   JROW, JTOP, K, K1, KDU, KMS, KNZ, KRCOL, KZS,
     $                   M, M22, MBOT, MEND, MSTART, MTOP, NBMPS, NDCOL,
     $                   NS, NU
      LOGICAL            ACCUM, BLK22, BMP22
*     ..
*     .. External Functions ..
      REAL               SLAMCH
      EXTERNAL           SLAMCH
*     ..
*     .. Intrinsic Functions ..
*
      INTRINSIC          ABS, MAX, MIN, MOD, REAL
*     ..
*     .. Local Arrays ..
      REAL               VT( 3 )
*     ..
*     .. External Subroutines ..
      EXTERNAL           SGEMM, SLABAD, SLACPY, SLAQR1, SLARFG, SLASET,
     $                   STRMM
*     ..
*     .. Executable Statements ..
*
*     ==== If there are no shifts, then there is nothing to do. ====
*
      IF( NSHFTS.LT.2 )
     $   RETURN
*
*     ==== If the active block is empty or 1-by-1, then there
*     .    is nothing to do. ====
*
      IF( KTOP.GE.KBOT )
     $   RETURN
*
*     ==== Shuffle shifts into pairs of real shifts and pairs
*     .    of complex conjugate shifts assuming complex
*     .    conjugate shifts are already adjacent to one
*     .    another. ====
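*
*     ==== (When the shift in position I does not pair with the one
*     .    in position I+1, it is a real shift under the adjacency
*     .    assumption above; the three-way swap below moves it to
*     .    position I+2 and moves the conjugate pair that followed
*     .    it into positions I and I+1.) ====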
*
      DO 10 I = 1, NSHFTS - 2, 2
         IF( SI( I ).NE.-SI( I+1 ) ) THEN
*
            SWAP = SR( I )
            SR( I ) = SR( I+1 )
            SR( I+1 ) = SR( I+2 )
            SR( I+2 ) = SWAP
*
            SWAP = SI( I )
            SI( I ) = SI( I+1 )
            SI( I+1 ) = SI( I+2 )
            SI( I+2 ) = SWAP
         END IF
   10 CONTINUE
*
*     ==== NSHFTS is supposed to be even, but if it is odd,
*     .    then simply reduce it by one.  The shuffle above
*     .    ensures that the dropped shift is real and that
*     .    the remaining shifts are paired. ====
*
      NS = NSHFTS - MOD( NSHFTS, 2 )
*
*     ==== Machine constants for deflation ====
*
      SAFMIN = SLAMCH( 'SAFE MINIMUM' )
      SAFMAX = ONE / SAFMIN
      CALL SLABAD( SAFMIN, SAFMAX )
      ULP = SLAMCH( 'PRECISION' )
      SMLNUM = SAFMIN*( REAL( N ) / ULP )
*
*     ==== Use accumulated reflections to update far-from-diagonal
*     .    entries ? ====
*
      ACCUM = ( KACC22.EQ.1 ) .OR. ( KACC22.EQ.2 )
*
*     ==== If so, exploit the 2-by-2 block structure? ====
*
      BLK22 = ( NS.GT.2 ) .AND. ( KACC22.EQ.2 )
*
*     ==== clear trash ====
*
      IF( KTOP+2.LE.KBOT )
     $   H( KTOP+2, KTOP ) = ZERO
*
*     ==== NBMPS = number of 2-shift bulges in the chain ====
*
      NBMPS = NS / 2
*
*     ==== KDU = width of slab ====
*
      KDU = 6*NBMPS - 3
*
*     ==== Create and chase chains of NBMPS bulges ====
*
      DO 220 INCOL = 3*( 1-NBMPS ) + KTOP - 1, KBOT - 2, 3*NBMPS - 2
         NDCOL = INCOL + KDU
         IF( ACCUM )
     $      CALL SLASET( 'ALL', KDU, KDU, ZERO, ONE, U, LDU )
*
*        ==== Near-the-diagonal bulge chase.  The following loop
*        .    performs the near-the-diagonal part of a small bulge
*        .    multi-shift QR sweep.  Each 6*NBMPS-2 column diagonal
*        .    chunk extends from column INCOL to column NDCOL
*        .    (including both column INCOL and column NDCOL).  The
*        .    following loop chases a 3*NBMPS column long chain of
*        .    NBMPS bulges 3*NBMPS-2 columns to the right.  (INCOL
*        .    may be less than KTOP and NDCOL may be greater than
*        .    KBOT indicating phantom columns from which to chase
*        .    bulges before they are actually introduced or to which
*        .    to chase bulges beyond column KBOT.) ====
*
         DO 150 KRCOL = INCOL, MIN( INCOL+3*NBMPS-3, KBOT-2 )
*
*           ==== Bulges numbered MTOP to MBOT are active double
*           .    implicit shift bulges.  There may or may not also be
*           .    a small 2-by-2 bulge, if there is room.  The inactive
*           .    bulges (if any) must wait until the active bulges
*           .    have moved down the diagonal to make room.  The
*           .    phantom matrix paradigm described above helps keep
*           .    track. ====
*
            MTOP = MAX( 1, ( ( KTOP-1 )-KRCOL+2 ) / 3+1 )
            MBOT = MIN( NBMPS, ( KBOT-KRCOL ) / 3 )
            M22 = MBOT + 1
            BMP22 = ( MBOT.LT.NBMPS ) .AND. ( KRCOL+3*( M22-1 ) ).EQ.
     $              ( KBOT-2 )
*
*           ==== Generate reflections to chase the chain right
*           .    one column.  (The minimum value of K is KTOP-1.) ====
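*
*           ==== (Bulge M has its 3-by-3 reflector acting on rows
*           .    K+1:K+3 of H, where K = KRCOL + 3*(M-1).  The case
*           .    K.EQ.KTOP-1 below means the bulge is just being
*           .    introduced at the top of the active block: its first
*           .    column is computed by SLAQR1 directly from a pair of
*           .    shifts and then turned into a reflector by SLARFG.) ====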
*
            DO 20 M = MTOP, MBOT
               K = KRCOL + 3*( M-1 )
               IF( K.EQ.KTOP-1 ) THEN
                  CALL SLAQR1( 3, H( KTOP, KTOP ), LDH, SR( 2*M-1 ),
     $                         SI( 2*M-1 ), SR( 2*M ), SI( 2*M ),
     $                         V( 1, M ) )
                  ALPHA = V( 1, M )
                  CALL SLARFG( 3, ALPHA, V( 2, M ), 1, V( 1, M ) )
               ELSE
                  BETA = H( K+1, K )
                  V( 2, M ) = H( K+2, K )
                  V( 3, M ) = H( K+3, K )
                  CALL SLARFG( 3, BETA, V( 2, M ), 1, V( 1, M ) )
*
*                 ==== A bulge may collapse because of vigilant
*                 .    deflation or destructive underflow.  In the
*                 .    underflow case, try the two-small-subdiagonals
*                 .    trick to try to reinflate the bulge. ====
*
                  IF( H( K+3, K ).NE.ZERO .OR. H( K+3, K+1 ).NE.
     $                ZERO .OR. H( K+3, K+2 ).EQ.ZERO ) THEN
*
*                    ==== Typical case: not collapsed (yet). ====
*
                     H( K+1, K ) = BETA
                     H( K+2, K ) = ZERO
                     H( K+3, K ) = ZERO
                  ELSE
*
*                    ==== Atypical case: collapsed.  Attempt to
*                    .    reintroduce ignoring H(K+1,K) and H(K+2,K).
*                    .    If the fill resulting from the new
*                    .    reflector is too large, then abandon it.
*                    .    Otherwise, use the new one. ====
*
                     CALL SLAQR1( 3, H( K+1, K+1 ), LDH, SR( 2*M-1 ),
     $                            SI( 2*M-1 ), SR( 2*M ), SI( 2*M ),
     $                            VT )
                     ALPHA = VT( 1 )
                     CALL SLARFG( 3, ALPHA, VT( 2 ), 1, VT( 1 ) )
                     REFSUM = VT( 1 )*( H( K+1, K )+VT( 2 )*
     $                        H( K+2, K ) )
*
                     IF( ABS( H( K+2, K )-REFSUM*VT( 2 ) )+
     $                   ABS( REFSUM*VT( 3 ) ).GT.ULP*
     $                   ( ABS( H( K, K ) )+ABS( H( K+1,
     $                   K+1 ) )+ABS( H( K+2, K+2 ) ) ) ) THEN
*
*                       ==== Starting a new bulge here would
*                       .    create non-negligible fill.  Use
*                       .    the old one with trepidation. ====
*
                        H( K+1, K ) = BETA
                        H( K+2, K ) = ZERO
                        H( K+3, K ) = ZERO
                     ELSE
*
*                       ==== Starting a new bulge here would
*                       .    create only negligible fill.
*                       .    Replace the old reflector with
*                       .    the new one. ====
*
                        H( K+1, K ) = H( K+1, K ) - REFSUM
                        H( K+2, K ) = ZERO
                        H( K+3, K ) = ZERO
                        V( 1, M ) = VT( 1 )
                        V( 2, M ) = VT( 2 )
                        V( 3, M ) = VT( 3 )
                     END IF
                  END IF
               END IF
   20       CONTINUE
*
*           ==== Generate a 2-by-2 reflection, if needed. ====
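*
*           ==== (BMP22 is set when an extra bulge sits at the very
*           .    bottom of the active block, in rows KBOT-1:KBOT,
*           .    where there is only room for a 2-by-2 reflection.
*           .    That bulge is number M22 = MBOT+1.) ====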
*
            K = KRCOL + 3*( M22-1 )
            IF( BMP22 ) THEN
               IF( K.EQ.KTOP-1 ) THEN
                  CALL SLAQR1( 2, H( K+1, K+1 ), LDH, SR( 2*M22-1 ),
     $                         SI( 2*M22-1 ), SR( 2*M22 ), SI( 2*M22 ),
     $                         V( 1, M22 ) )
                  BETA = V( 1, M22 )
                  CALL SLARFG( 2, BETA, V( 2, M22 ), 1, V( 1, M22 ) )
               ELSE
                  BETA = H( K+1, K )
                  V( 2, M22 ) = H( K+2, K )
                  CALL SLARFG( 2, BETA, V( 2, M22 ), 1, V( 1, M22 ) )
                  H( K+1, K ) = BETA
                  H( K+2, K ) = ZERO
               END IF
            END IF
*
*           ==== Multiply H by reflections from the left ====
*
            IF( ACCUM ) THEN
               JBOT = MIN( NDCOL, KBOT )
            ELSE IF( WANTT ) THEN
               JBOT = N
            ELSE
               JBOT = KBOT
            END IF
            DO 40 J = MAX( KTOP, KRCOL ), JBOT
               MEND = MIN( MBOT, ( J-KRCOL+2 ) / 3 )
               DO 30 M = MTOP, MEND
                  K = KRCOL + 3*( M-1 )
                  REFSUM = V( 1, M )*( H( K+1, J )+V( 2, M )*
     $                     H( K+2, J )+V( 3, M )*H( K+3, J ) )
                  H( K+1, J ) = H( K+1, J ) - REFSUM
                  H( K+2, J ) = H( K+2, J ) - REFSUM*V( 2, M )
                  H( K+3, J ) = H( K+3, J ) - REFSUM*V( 3, M )
   30          CONTINUE
   40       CONTINUE
            IF( BMP22 ) THEN
               K = KRCOL + 3*( M22-1 )
               DO 50 J = MAX( K+1, KTOP ), JBOT
                  REFSUM = V( 1, M22 )*( H( K+1, J )+V( 2, M22 )*
     $                     H( K+2, J ) )
                  H( K+1, J ) = H( K+1, J ) - REFSUM
                  H( K+2, J ) = H( K+2, J ) - REFSUM*V( 2, M22 )
   50          CONTINUE
            END IF
*
*           ==== Multiply H by reflections from the right.
*           .    Delay filling in the last row until the
*           .    vigilant deflation check is complete. ====
*
            IF( ACCUM ) THEN
               JTOP = MAX( KTOP, INCOL )
            ELSE IF( WANTT ) THEN
               JTOP = 1
            ELSE
               JTOP = KTOP
            END IF
            DO 90 M = MTOP, MBOT
               IF( V( 1, M ).NE.ZERO ) THEN
                  K = KRCOL + 3*( M-1 )
                  DO 60 J = JTOP, MIN( KBOT, K+3 )
                     REFSUM = V( 1, M )*( H( J, K+1 )+V( 2, M )*
     $                        H( J, K+2 )+V( 3, M )*H( J, K+3 ) )
                     H( J, K+1 ) = H( J, K+1 ) - REFSUM
                     H( J, K+2 ) = H( J, K+2 ) - REFSUM*V( 2, M )
                     H( J, K+3 ) = H( J, K+3 ) - REFSUM*V( 3, M )
   60             CONTINUE
*
                  IF( ACCUM ) THEN
*
*                    ==== Accumulate U.  (If necessary, update Z later
*                    .    with an efficient matrix-matrix
*                    .    multiply.) ====
*
                     KMS = K - INCOL
                     DO 70 J = MAX( 1, KTOP-INCOL ), KDU
                        REFSUM = V( 1, M )*( U( J, KMS+1 )+V( 2, M )*
     $                           U( J, KMS+2 )+V( 3, M )*U( J, KMS+3 ) )
                        U( J, KMS+1 ) = U( J, KMS+1 ) - REFSUM
                        U( J, KMS+2 ) = U( J, KMS+2 ) - REFSUM*V( 2, M )
                        U( J, KMS+3 ) = U( J, KMS+3 ) - REFSUM*V( 3, M )
   70                CONTINUE
                  ELSE IF( WANTZ ) THEN
*
*                    ==== U is not accumulated, so update Z
*                    .    now by multiplying by reflections
*                    .    from the right. ====
*
                     DO 80 J = ILOZ, IHIZ
                        REFSUM = V( 1, M )*( Z( J, K+1 )+V( 2, M )*
     $                           Z( J, K+2 )+V( 3, M )*Z( J, K+3 ) )
                        Z( J, K+1 ) = Z( J, K+1 ) - REFSUM
                        Z( J, K+2 ) = Z( J, K+2 ) - REFSUM*V( 2, M )
                        Z( J, K+3 ) = Z( J, K+3 ) - REFSUM*V( 3, M )
   80                CONTINUE
                  END IF
               END IF
   90       CONTINUE
*
*           ==== Special case: 2-by-2 reflection (if needed) ====
*
            K = KRCOL + 3*( M22-1 )
            IF( BMP22 ) THEN
               IF ( V( 1, M22 ).NE.ZERO ) THEN
                  DO 100 J = JTOP, MIN( KBOT, K+3 )
                     REFSUM = V( 1, M22 )*( H( J, K+1 )+V( 2, M22 )*
     $                        H( J, K+2 ) )
                     H( J, K+1 ) = H( J, K+1 ) - REFSUM
                     H( J, K+2 ) = H( J, K+2 ) - REFSUM*V( 2, M22 )
  100             CONTINUE
*
                  IF( ACCUM ) THEN
                     KMS = K - INCOL
                     DO 110 J = MAX( 1, KTOP-INCOL ), KDU
                        REFSUM = V( 1, M22 )*( U( J, KMS+1 )+
     $                           V( 2, M22 )*U( J, KMS+2 ) )
                        U( J, KMS+1 ) = U( J, KMS+1 ) - REFSUM
                        U( J, KMS+2 ) = U( J, KMS+2 ) - REFSUM*
     $                                  V( 2, M22 )
  110                CONTINUE
                  ELSE IF( WANTZ ) THEN
                     DO 120 J = ILOZ, IHIZ
                        REFSUM = V( 1, M22 )*( Z( J, K+1 )+V( 2, M22 )*
     $                           Z( J, K+2 ) )
                        Z( J, K+1 ) = Z( J, K+1 ) - REFSUM
                        Z( J, K+2 ) = Z( J, K+2 ) - REFSUM*V( 2, M22 )
  120                CONTINUE
                  END IF
               END IF
            END IF
*
*           ==== Vigilant deflation check ====
*
            MSTART = MTOP
            IF( KRCOL+3*( MSTART-1 ).LT.KTOP )
     $         MSTART = MSTART + 1
            MEND = MBOT
            IF( BMP22 )
     $         MEND = MEND + 1
            IF( KRCOL.EQ.KBOT-2 )
     $         MEND = MEND + 1
            DO 130 M = MSTART, MEND
               K = MIN( KBOT-1, KRCOL+3*( M-1 ) )
*
*              ==== The following convergence test requires that
*              .    the traditional small-compared-to-nearby-diagonals
*              .    criterion and the Ahues & Tisseur (LAWN 122, 1997)
*              .    criteria both be satisfied.  The latter improves
*              .    accuracy in some examples.  Falling back on an
*              .    alternate convergence criterion when TST1 or TST2
*              .    is zero (as done here) is traditional but probably
*              .    unnecessary. ====
*
               IF( H( K+1, K ).NE.ZERO ) THEN
                  TST1 = ABS( H( K, K ) ) + ABS( H( K+1, K+1 ) )
                  IF( TST1.EQ.ZERO ) THEN
                     IF( K.GE.KTOP+1 )
     $                  TST1 = TST1 + ABS( H( K, K-1 ) )
                     IF( K.GE.KTOP+2 )
     $                  TST1 = TST1 + ABS( H( K, K-2 ) )
                     IF( K.GE.KTOP+3 )
     $                  TST1 = TST1 + ABS( H( K, K-3 ) )
                     IF( K.LE.KBOT-2 )
     $                  TST1 = TST1 + ABS( H( K+2, K+1 ) )
                     IF( K.LE.KBOT-3 )
     $                  TST1 = TST1 + ABS( H( K+3, K+1 ) )
                     IF( K.LE.KBOT-4 )
     $                  TST1 = TST1 + ABS( H( K+4, K+1 ) )
                  END IF
                  IF( ABS( H( K+1, K ) ).LE.MAX( SMLNUM, ULP*TST1 ) )
     $                THEN
                     H12 = MAX( ABS( H( K+1, K ) ), ABS( H( K, K+1 ) ) )
                     H21 = MIN( ABS( H( K+1, K ) ), ABS( H( K, K+1 ) ) )
                     H11 = MAX( ABS( H( K+1, K+1 ) ),
     $                     ABS( H( K, K )-H( K+1, K+1 ) ) )
                     H22 = MIN( ABS( H( K+1, K+1 ) ),
     $                     ABS( H( K, K )-H( K+1, K+1 ) ) )
                     SCL = H11 + H12
                     TST2 = H22*( H11 / SCL )
*
                     IF( TST2.EQ.ZERO .OR. H21*( H12 / SCL ).LE.
     $                   MAX( SMLNUM, ULP*TST2 ) )H( K+1, K ) = ZERO
                  END IF
               END IF
  130       CONTINUE
*
*           ==== Fill in the last row of each bulge. ====
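*
*           ==== (Applying the reflector for bulge M from the right
*           .    touches columns K+1:K+3.  In row K+4 only H(K+4,K+3)
*           .    was nonzero before the update, so the new entries of
*           .    that row can be written out directly.) ====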
*
            MEND = MIN( NBMPS, ( KBOT-KRCOL-1 ) / 3 )
            DO 140 M = MTOP, MEND
               K = KRCOL + 3*( M-1 )
               REFSUM = V( 1, M )*V( 3, M )*H( K+4, K+3 )
               H( K+4, K+1 ) = -REFSUM
               H( K+4, K+2 ) = -REFSUM*V( 2, M )
               H( K+4, K+3 ) = H( K+4, K+3 ) - REFSUM*V( 3, M )
  140       CONTINUE
*
*           ==== End of near-the-diagonal bulge chase. ====
*
  150    CONTINUE
*
*        ==== Use U (if accumulated) to update far-from-diagonal
*        .    entries in H.  If required, use U to update Z as
*        .    well. ====
*
         IF( ACCUM ) THEN
            IF( WANTT ) THEN
               JTOP = 1
               JBOT = N
            ELSE
               JTOP = KTOP
               JBOT = KBOT
            END IF
            IF( ( .NOT.BLK22 ) .OR. ( INCOL.LT.KTOP ) .OR.
     $          ( NDCOL.GT.KBOT ) .OR. ( NS.LE.2 ) ) THEN
*
*              ==== Updates not exploiting the 2-by-2 block
*              .    structure of U.  K1 and NU keep track of
*              .    the location and size of U in the special
*              .    cases of introducing bulges and chasing
*              .    bulges off the bottom.  In these special
*              .    cases and in case the number of shifts
*              .    is NS = 2, there is no 2-by-2 block
*              .    structure to exploit. ====
*
               K1 = MAX( 1, KTOP-INCOL )
               NU = ( KDU-MAX( 0, NDCOL-KBOT ) ) - K1 + 1
*
*              ==== Horizontal Multiply ====
*
               DO 160 JCOL = MIN( NDCOL, KBOT ) + 1, JBOT, NH
                  JLEN = MIN( NH, JBOT-JCOL+1 )
                  CALL SGEMM( 'C', 'N', NU, JLEN, NU, ONE, U( K1, K1 ),
     $                        LDU, H( INCOL+K1, JCOL ), LDH, ZERO, WH,
     $                        LDWH )
                  CALL SLACPY( 'ALL', NU, JLEN, WH, LDWH,
     $                         H( INCOL+K1, JCOL ), LDH )
  160          CONTINUE
*
*              ==== Vertical multiply ====
*
               DO 170 JROW = JTOP, MAX( KTOP, INCOL ) - 1, NV
                  JLEN = MIN( NV, MAX( KTOP, INCOL )-JROW )
                  CALL SGEMM( 'N', 'N', JLEN, NU, NU, ONE,
     $                        H( JROW, INCOL+K1 ), LDH, U( K1, K1 ),
     $                        LDU, ZERO, WV, LDWV )
                  CALL SLACPY( 'ALL', JLEN, NU, WV, LDWV,
     $                         H( JROW, INCOL+K1 ), LDH )
  170          CONTINUE
*
*              ==== Z multiply (also vertical) ====
*
               IF( WANTZ ) THEN
                  DO 180 JROW = ILOZ, IHIZ, NV
                     JLEN = MIN( NV, IHIZ-JROW+1 )
                     CALL SGEMM( 'N', 'N', JLEN, NU, NU, ONE,
     $                           Z( JROW, INCOL+K1 ), LDZ, U( K1, K1 ),
     $                           LDU, ZERO, WV, LDWV )
                     CALL SLACPY( 'ALL', JLEN, NU, WV, LDWV,
     $                            Z( JROW, INCOL+K1 ), LDZ )
  180             CONTINUE
               END IF
            ELSE
*
*              ==== Updates exploiting U's 2-by-2 block structure.
*              .    (I2, I4, J2, J4 are the last rows and columns
*              .    of the blocks.) ====
*
               I2 = ( KDU+1 ) / 2
               I4 = KDU
               J2 = I4 - I2
               J4 = KDU
*
*              ==== KZS and KNZ deal with the band of zeros
*              .    along the diagonal of one of the triangular
*              .    blocks. ====
*
               KZS = ( J4-J2 ) - ( NS+1 )
               KNZ = NS + 1
*
*              ==== Horizontal multiply ====
*
               DO 190 JCOL = MIN( NDCOL, KBOT ) + 1, JBOT, NH
                  JLEN = MIN( NH, JBOT-JCOL+1 )
*
*                 ==== Copy bottom of H to top+KZS of scratch ====
*                 .    (The first KZS rows get multiplied by zero.) ====
*
                  CALL SLACPY( 'ALL', KNZ, JLEN, H( INCOL+1+J2, JCOL ),
     $                         LDH, WH( KZS+1, 1 ), LDWH )
*
*                 ==== Multiply by U21**T ====
*
                  CALL SLASET( 'ALL', KZS, JLEN, ZERO, ZERO, WH, LDWH )
                  CALL STRMM( 'L', 'U', 'C', 'N', KNZ, JLEN, ONE,
     $                        U( J2+1, 1+KZS ), LDU, WH( KZS+1, 1 ),
     $                        LDWH )
*
*                 ==== Multiply top of H by U11**T ====
*
                  CALL SGEMM( 'C', 'N', I2, JLEN, J2, ONE, U, LDU,
     $                        H( INCOL+1, JCOL ), LDH, ONE, WH, LDWH )
*
*                 ==== Copy top of H to bottom of WH ====
*
                  CALL SLACPY( 'ALL', J2, JLEN, H( INCOL+1, JCOL ), LDH,
     $                         WH( I2+1, 1 ), LDWH )
*
*                 ==== Multiply by U21**T ====
*
                  CALL STRMM( 'L', 'L', 'C', 'N', J2, JLEN, ONE,
     $                        U( 1, I2+1 ), LDU, WH( I2+1, 1 ), LDWH )
*
*                 ==== Multiply by U22 ====
*
                  CALL SGEMM( 'C', 'N', I4-I2, JLEN, J4-J2, ONE,
     $                        U( J2+1, I2+1 ), LDU,
     $                        H( INCOL+1+J2, JCOL ), LDH, ONE,
     $                        WH( I2+1, 1 ), LDWH )
*
*                 ==== Copy it back ====
*
                  CALL SLACPY( 'ALL', KDU, JLEN, WH, LDWH,
     $                         H( INCOL+1, JCOL ), LDH )
  190          CONTINUE
*
*              ==== Vertical multiply ====
*
               DO 200 JROW = JTOP, MAX( INCOL, KTOP ) - 1, NV
                  JLEN = MIN( NV, MAX( INCOL, KTOP )-JROW )
*
*                 ==== Copy right of H to scratch (the first KZS
*                 .    columns get multiplied by zero) ====
*
                  CALL SLACPY( 'ALL', JLEN, KNZ, H( JROW, INCOL+1+J2 ),
     $                         LDH, WV( 1, 1+KZS ), LDWV )
*
*                 ==== Multiply by U21 ====
*
                  CALL SLASET( 'ALL', JLEN, KZS, ZERO, ZERO, WV, LDWV )
                  CALL STRMM( 'R', 'U', 'N', 'N', JLEN, KNZ, ONE,
     $                        U( J2+1, 1+KZS ), LDU, WV( 1, 1+KZS ),
     $                        LDWV )
*
*                 ==== Multiply by U11 ====
*
                  CALL SGEMM( 'N', 'N', JLEN, I2, J2, ONE,
     $                        H( JROW, INCOL+1 ), LDH, U, LDU, ONE, WV,
     $                        LDWV )
*
*                 ==== Copy left of H to right of scratch ====
*
                  CALL SLACPY( 'ALL', JLEN, J2, H( JROW, INCOL+1 ), LDH,
     $                         WV( 1, 1+I2 ), LDWV )
*
*                 ==== Multiply by U21 ====
*
                  CALL STRMM( 'R', 'L', 'N', 'N', JLEN, I4-I2, ONE,
     $                        U( 1, I2+1 ), LDU, WV( 1, 1+I2 ), LDWV )
*
*                 ==== Multiply by U22 ====
*
                  CALL SGEMM( 'N', 'N', JLEN, I4-I2, J4-J2, ONE,
     $                        H( JROW, INCOL+1+J2 ), LDH,
     $                        U( J2+1, I2+1 ), LDU, ONE, WV( 1, 1+I2 ),
     $                        LDWV )
*
*                 ==== Copy it back ====
*
                  CALL SLACPY( 'ALL', JLEN, KDU, WV, LDWV,
     $                         H( JROW, INCOL+1 ), LDH )
  200          CONTINUE
*
*              ==== Multiply Z (also vertical) ====
*
               IF( WANTZ ) THEN
                  DO 210 JROW = ILOZ, IHIZ, NV
                     JLEN = MIN( NV, IHIZ-JROW+1 )
*
*                    ==== Copy right of Z to left of scratch (first
*                    .    KZS columns get multiplied by zero) ====
*
                     CALL SLACPY( 'ALL', JLEN, KNZ,
     $                            Z( JROW, INCOL+1+J2 ), LDZ,
     $                            WV( 1, 1+KZS ), LDWV )
*
*                    ==== Multiply by U12 ====
*
                     CALL SLASET( 'ALL', JLEN, KZS, ZERO, ZERO, WV,
     $                            LDWV )
                     CALL STRMM( 'R', 'U', 'N', 'N', JLEN, KNZ, ONE,
     $                           U( J2+1, 1+KZS ), LDU, WV( 1, 1+KZS ),
     $                           LDWV )
*
*                    ==== Multiply by U11 ====
*
                     CALL SGEMM( 'N', 'N', JLEN, I2, J2, ONE,
     $                           Z( JROW, INCOL+1 ), LDZ, U, LDU, ONE,
     $                           WV, LDWV )
*
*                    ==== Copy left of Z to right of scratch ====
*
                     CALL SLACPY( 'ALL', JLEN, J2, Z( JROW, INCOL+1 ),
     $                            LDZ, WV( 1, 1+I2 ), LDWV )
*
*                    ==== Multiply by U21 ====
*
                     CALL STRMM( 'R', 'L', 'N', 'N', JLEN, I4-I2, ONE,
     $                           U( 1, I2+1 ), LDU, WV( 1, 1+I2 ),
     $                           LDWV )
*
*                    ==== Multiply by U22 ====
*
                     CALL SGEMM( 'N', 'N', JLEN, I4-I2, J4-J2, ONE,
     $                           Z( JROW, INCOL+1+J2 ), LDZ,
     $                           U( J2+1, I2+1 ), LDU, ONE,
     $                           WV( 1, 1+I2 ), LDWV )
*
*                    ==== Copy the result back to Z ====
*
                     CALL SLACPY( 'ALL', JLEN, KDU, WV, LDWV,
     $                            Z( JROW, INCOL+1 ), LDZ )
  210             CONTINUE
               END IF
            END IF
         END IF
  220 CONTINUE
*
*     ==== End of SLAQR5 ====
*
      END