SCALAPACK 2.2.2
pdsyev.f
1 SUBROUTINE pdsyev( JOBZ, UPLO, N, A, IA, JA, DESCA, W,
2 $ Z, IZ, JZ, DESCZ, WORK, LWORK, INFO )
3*
4* -- ScaLAPACK routine (version 1.7) --
5* University of Tennessee, Knoxville, Oak Ridge National Laboratory,
6* and University of California, Berkeley.
7* May 25, 2001
8*
9* .. Scalar Arguments ..
10 CHARACTER JOBZ, UPLO
11 INTEGER IA, INFO, IZ, JA, JZ, LWORK, N
12* ..
13* .. Array Arguments ..
14 INTEGER DESCA( * ), DESCZ( * )
15 DOUBLE PRECISION A( * ), W( * ), WORK( * ), Z( * )
16* ..
17*
18* Purpose
19* =======
20*
21* PDSYEV computes all eigenvalues and, optionally, eigenvectors
22* of a real symmetric matrix A by calling the recommended sequence
23* of ScaLAPACK routines.
24*
25* In its present form, PDSYEV assumes a homogeneous system and makes
26* no checks for consistency of the eigenvalues or eigenvectors across
27* the different processes. Because of this, it is possible that a
28* heterogeneous system may return incorrect results without any error
29* messages.
30*
31* Notes
32* =====
33* A description vector is associated with each 2D block-cyclically
34* distributed matrix. This vector stores the information required to
35* establish the mapping between a matrix entry and its corresponding
36* process and memory location.
37*
38* In the following comments, the character _ should be read as
39* "of the distributed matrix". Let A be a generic term for any 2D
40* block-cyclically distributed matrix. Its description vector is DESCA:
41*
42* NOTATION STORED IN EXPLANATION
43* --------------- -------------- --------------------------------------
44* DTYPE_A(global) DESCA( DTYPE_) The descriptor type.
45* CTXT_A (global) DESCA( CTXT_ ) The BLACS context handle, indicating
46* the BLACS process grid A is distribu-
47* ted over. The context itself is glo-
48* bal, but the handle (the integer
49* value) may vary.
50* M_A (global) DESCA( M_ ) The number of rows in the distributed
51* matrix A.
52* N_A (global) DESCA( N_ ) The number of columns in the distri-
53* buted matrix A.
54* MB_A (global) DESCA( MB_ ) The blocking factor used to distribute
55* the rows of A.
56* NB_A (global) DESCA( NB_ ) The blocking factor used to distribute
57* the columns of A.
58* RSRC_A (global) DESCA( RSRC_ ) The process row over which the first
59* row of the matrix A is distributed.
60* CSRC_A (global) DESCA( CSRC_ ) The process column over which the
61* first column of A is distributed.
62* LLD_A (local) DESCA( LLD_ ) The leading dimension of the local
63* array storing the local blocks of the
64* distributed matrix A.
65* LLD_A >= MAX(1,LOCr(M_A)).
66*
67* Let K be the number of rows or columns of a distributed matrix,
68* and assume that its process grid has dimension p x q.
69* LOCr( K ) denotes the number of elements of K that a process
70* would receive if K were distributed over the p processes of its
71* process column.
72* Similarly, LOCc( K ) denotes the number of elements of K that a
73* process would receive if K were distributed over the q processes of
74* its process row.
75* The values of LOCr() and LOCc() may be determined via a call to the
76* ScaLAPACK tool function, NUMROC:
77* LOCr( M ) = NUMROC( M, MB_A, MYROW, RSRC_A, NPROW ),
78* LOCc( N ) = NUMROC( N, NB_A, MYCOL, CSRC_A, NPCOL ).
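*
*  As an illustration only, the local piece of an N-by-N matrix
*  distributed with square NB-by-NB blocks from process (0,0) could be
*  sized as sketched below (ICTXT, NB, MLOC, NLOC and LLDA are names
*  used only in this sketch):
*
*     CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
*     MLOC = NUMROC( N, NB, MYROW, 0, NPROW )
*     NLOC = NUMROC( N, NB, MYCOL, 0, NPCOL )
*     LLDA = MAX( 1, MLOC )
*     CALL DESCINIT( DESCA, N, N, NB, NB, 0, 0, ICTXT, LLDA, INFO )
*
*  The local array holding A then has dimensions ( LLDA, NLOC ).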
79*
80* Arguments
81* =========
82*
83* NP = the number of rows local to a given process.
84* NQ = the number of columns local to a given process.
85*
86* JOBZ (global input) CHARACTER*1
87* Specifies whether or not to compute the eigenvectors:
88* = 'N': Compute eigenvalues only.
89* = 'V': Compute eigenvalues and eigenvectors.
90*
91* UPLO (global input) CHARACTER*1
92* Specifies whether the upper or lower triangular part of the
93* symmetric matrix A is stored:
94* = 'U': Upper triangular
95* = 'L': Lower triangular
96*
97* N (global input) INTEGER
98* The number of rows and columns of the matrix A. N >= 0.
99*
100* A (local input/workspace) block cyclic DOUBLE PRECISION array,
101* global dimension (N, N), local dimension ( LLD_A,
102* LOCc(JA+N-1) )
103*
104* On entry, the symmetric matrix A. If UPLO = 'U', only the
105* upper triangular part of A is used to define the elements of
106* the symmetric matrix. If UPLO = 'L', only the lower
107* triangular part of A is used to define the elements of the
108* symmetric matrix.
109*
110* On exit, the lower triangle (if UPLO='L') or the upper
111* triangle (if UPLO='U') of A, including the diagonal, is
112* destroyed.
113*
114* IA (global input) INTEGER
115* A's global row index, which points to the beginning of the
116* submatrix which is to be operated on.
117*
118* JA (global input) INTEGER
119* A's global column index, which points to the beginning of
120* the submatrix which is to be operated on.
121*
122* DESCA (global and local input) INTEGER array of dimension DLEN_.
123* The array descriptor for the distributed matrix A.
124* If DESCA( CTXT_ ) is incorrect, PDSYEV cannot guarantee
125* correct error reporting.
126*
127* W (global output) DOUBLE PRECISION array, dimension (N)
128* If INFO=0, the eigenvalues in ascending order.
129*
130* Z (local output) DOUBLE PRECISION array,
131* global dimension (N, N),
132* local dimension ( LLD_Z, LOCc(JZ+N-1) )
133* If JOBZ = 'V', then on normal exit the first N columns of Z
134* contain the orthonormal eigenvectors of the matrix A,
135* corresponding to the eigenvalues in W.
136* If JOBZ = 'N', then Z is not referenced.
137*
138* IZ (global input) INTEGER
139* Z's global row index, which points to the beginning of the
140* submatrix which is to be operated on.
141*
142* JZ (global input) INTEGER
143* Z's global column index, which points to the beginning of
144* the submatrix which is to be operated on.
145*
146* DESCZ (global and local input) INTEGER array of dimension DLEN_.
147* The array descriptor for the distributed matrix Z.
148* DESCZ( CTXT_ ) must equal DESCA( CTXT_ )
149*
150* WORK (local workspace/output) DOUBLE PRECISION array,
151* dimension (LWORK)
152* Version 1.0: on output, WORK(1) returns the workspace
153* needed to guarantee completion.
154* If the input parameters are incorrect, WORK(1) may also be
155* incorrect.
156*
157* If JOBZ='N', WORK(1) = minimal (= optimal) amount of workspace.
158* If JOBZ='V' WORK(1) = minimal workspace required to
159* generate all the eigenvectors.
160*
161*
162* LWORK (local input) INTEGER
163* See below for definitions of variables used to define LWORK.
164* If no eigenvectors are requested (JOBZ = 'N') then
165* LWORK >= 5*N + SIZESYTRD + 1
166* where
167* SIZESYTRD = The workspace requirement for PDSYTRD
168* and is MAX( NB * ( NP +1 ), 3 * NB )
169* If eigenvectors are requested (JOBZ = 'V' ) then
170* the amount of workspace required to guarantee that all
171* eigenvectors are computed is:
172*
173* QRMEM = 2*N-2
174* LWMIN = 5*N + N*LDC + MAX( SIZEMQRLEFT, QRMEM ) + 1
175*
176* Variable definitions:
177* NB = DESCA( MB_ ) = DESCA( NB_ ) =
178* DESCZ( MB_ ) = DESCZ( NB_ )
179* NN = MAX( N, NB, 2 )
180* DESCA( RSRC_ ) = DESCA( CSRC_ ) = DESCZ( RSRC_ ) =
181* DESCZ( CSRC_ ) = 0
182* NP = NUMROC( NN, NB, 0, 0, NPROW )
183* NQ = NUMROC( MAX( N, NB, 2 ), NB, 0, 0, NPCOL )
184* NRC = NUMROC( N, NB, MYPROWC, 0, NPROCS)
185* LDC = MAX( 1, NRC )
186* SIZEMQRLEFT = The workspace requirement for PDORMTR
187* when its SIDE argument is 'L'.
188*
189* With MYPROWC defined when a new context is created as:
190* CALL BLACS_GET( DESCA( CTXT_ ), 0, CONTEXTC )
191* CALL BLACS_GRIDINIT( CONTEXTC, 'R', NPROCS, 1 )
192* CALL BLACS_GRIDINFO( CONTEXTC, NPROWC, NPCOLC, MYPROWC,
193* MYPCOLC )
194*
195* If LWORK = -1, then LWORK is global input and a workspace
196* query is assumed; the routine only calculates the minimum
197* size for the WORK array. The required workspace is returned
198* as the first element of WORK and no error message is issued
199* by PXERBLA.
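*
*          For example (an illustrative sketch only), a caller can
*          query first and then perform the computation:
*
*             CALL PDSYEV( JOBZ, UPLO, N, A, IA, JA, DESCA, W,
*                          Z, IZ, JZ, DESCZ, WORK, -1, INFO )
*             LWORK = INT( WORK( 1 ) )
*             ... allocate WORK with at least LWORK elements, then
*             call PDSYEV again with this LWORK to do the computation.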
200*
201* INFO (global output) INTEGER
202* = 0: successful exit
203* < 0: If the i-th argument is an array and the j-entry had
204* an illegal value, then INFO = -(i*100+j), if the i-th
205* argument is a scalar and had an illegal value, then
206* INFO = -i.
207* > 0: If INFO = i, 1 <= i <= N, then the i-th eigenvalue did not
208* converge in DSTEQR2 after a total of 30*N iterations.
209* If INFO = N+1, then PDSYEV has detected heterogeneity
210* by finding that eigenvalues were not identical across
211* the process grid. In this case, the accuracy of
212* the results from PDSYEV cannot be guaranteed.
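*
*          As an example of the INFO = -(i*100+j) encoding: INFO = -706
*          means that entry 6 ( NB_ ) of argument 7 ( DESCA ) had an
*          illegal value.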
213*
214* Alignment requirements
215* ======================
216*
217* The distributed submatrices A( IA:IA+N-1, JA:JA+N-1 ) and
218* Z( IZ:IZ+N-1, JZ:JZ+N-1 ) must satisfy the following alignment
219* properties, i.e. the following expressions should be true:
220*
221* ( MB_A.EQ.NB_A.EQ.MB_Z .AND. IROFFA.EQ.IROFFZ .AND. IROFFA.EQ.0 .AND.
222* IAROW.EQ.IZROW )
223* where
224* IROFFA = MOD( IA-1, MB_A ), ICOFFA = MOD( JA-1, NB_A ),
*             IROFFZ = MOD( IZ-1, MB_Z ).
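*
*  For example, all of the above conditions hold in the common case
*  MB_A = NB_A = MB_Z = NB_Z, IA = JA = IZ = JZ = 1, and
*  RSRC_A = CSRC_A = RSRC_Z = CSRC_Z = 0.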
225*
226* =====================================================================
227*
228* Version 1.4 limitations:
229* DESCA(MB_) = DESCA(NB_)
230* DESCA(M_) = DESCZ(M_)
231* DESCA(N_) = DESCZ(N_)
232* DESCA(MB_) = DESCZ(MB_)
233* DESCA(NB_) = DESCZ(NB_)
234* DESCA(RSRC_) = DESCZ(RSRC_)
235*
236* .. Parameters ..
237 INTEGER BLOCK_CYCLIC_2D, DLEN_, DTYPE_, CTXT_, M_, N_,
238 $ mb_, nb_, rsrc_, csrc_, lld_
239 parameter( block_cyclic_2d = 1, dlen_ = 9, dtype_ = 1,
240 $ ctxt_ = 2, m_ = 3, n_ = 4, mb_ = 5, nb_ = 6,
241 $ rsrc_ = 7, csrc_ = 8, lld_ = 9 )
242 DOUBLE PRECISION FIVE, ONE, TEN, ZERO
243 parameter( zero = 0.0d+0, one = 1.0d+0,
244 $ ten = 10.0d+0, five = 5.0d+0 )
245 INTEGER IERREIN, IERRCLS, IERRSPC, IERREBZ, ITHVAL
246 parameter( ierrein = 1, ierrcls = 2, ierrspc = 4,
247 $ ierrebz = 8, ithval = 10 )
248* ..
249* .. Local Scalars ..
250 LOGICAL LOWER, WANTZ
251 INTEGER CONTEXTC, CSRC_A, I, IACOL, IAROW, ICOFFA,
252 $ iinfo, indd, indd2, inde, inde2, indtau,
253 $ indwork, indwork2, iroffa, iroffz, iscale,
254 $ izrow, j, k, ldc, llwork, lwmin, mb_a, mb_z,
255 $ mycol, mypcolc, myprowc, myrow, nb, nb_a, nb_z,
256 $ np, npcol, npcolc, nprocs, nprow, nprowc, nq,
257 $ nrc, qrmem, rsrc_a, rsrc_z, sizemqrleft,
258 $ sizesytrd
259 DOUBLE PRECISION ANRM, BIGNUM, EPS, RMAX, RMIN, SAFMIN, SIGMA,
260 $ smlnum
261* ..
262* .. Local Arrays ..
263 INTEGER DESCQR( 9 ), IDUM1( 3 ), IDUM2( 3 )
264* ..
265* .. External Functions ..
266 LOGICAL LSAME
267 INTEGER INDXG2P, NUMROC, SL_GRIDRESHAPE
268 DOUBLE PRECISION PDLAMCH, PDLANSY
269 EXTERNAL lsame, numroc, pdlamch, pdlansy,
270 $ sl_gridreshape
271* ..
272* .. External Subroutines ..
273 EXTERNAL blacs_gridexit, blacs_gridinfo, chk1mat, dcopy,
274 $ descinit, dscal, dsteqr2, pchk1mat, pchk2mat,
275 $ pdelget, pdgemr2d, pdlascl, pdlaset, pdormtr,
276 $ pdsytrd, pxerbla
277* ..
278* .. Intrinsic Functions ..
279 INTRINSIC abs, dble, ichar, max, min, mod, sqrt, int
280* ..
281* .. Executable Statements ..
282* This is just to keep ftnchek and toolpack/1 happy
283 IF( block_cyclic_2d*csrc_*ctxt_*dlen_*dtype_*lld_*mb_*m_*nb_*n_*
284 $ rsrc_.LT.0 )RETURN
285*
286* Quick return
287*
288 IF( n.EQ.0 ) RETURN
289*
290* Test the input arguments.
291*
292 CALL blacs_gridinfo( desca( ctxt_ ), nprow, npcol, myrow, mycol )
293 info = 0
294*
295 wantz = lsame( jobz, 'V' )
296 IF( nprow.EQ.-1 ) THEN
297 info = -( 700+ctxt_ )
298 ELSE IF( wantz ) THEN
299 IF( desca( ctxt_ ).NE.descz( ctxt_ ) ) THEN
300 info = -( 1200+ctxt_ )
301 END IF
302 END IF
303 IF( info .EQ. 0 ) THEN
304 CALL chk1mat( n, 3, n, 3, ia, ja, desca, 7, info )
305 IF( wantz )
306 $ CALL chk1mat( n, 3, n, 3, iz, jz, descz, 12, info )
307*
308 IF( info.EQ.0 ) THEN
309*
310* Get machine constants.
311*
312 safmin = pdlamch( desca( ctxt_ ), 'Safe minimum' )
313 eps = pdlamch( desca( ctxt_ ), 'Precision' )
314 smlnum = safmin / eps
315 bignum = one / smlnum
316 rmin = sqrt( smlnum )
317 rmax = min( sqrt( bignum ), one / sqrt( sqrt( safmin ) ) )
318*
319 nprocs = nprow*npcol
320 nb_a = desca( nb_ )
321 mb_a = desca( mb_ )
322 nb = nb_a
323 lower = lsame( uplo, 'L' )
324*
325 rsrc_a = desca( rsrc_ )
326 csrc_a = desca( csrc_ )
327 iroffa = mod( ia-1, mb_a )
328 icoffa = mod( ja-1, nb_a )
329 iarow = indxg2p( 1, nb_a, myrow, rsrc_a, nprow )
330 iacol = indxg2p( 1, mb_a, mycol, csrc_a, npcol )
331 np = numroc( n+iroffa, nb, myrow, iarow, nprow )
332 nq = numroc( n+icoffa, nb, mycol, iacol, npcol )
333
334 IF( wantz ) THEN
335 nb_z = descz( nb_ )
336 mb_z = descz( mb_ )
337 rsrc_z = descz( rsrc_ )
338 iroffz = mod( iz-1, mb_a )
339 izrow = indxg2p( 1, nb_a, myrow, rsrc_z, nprow )
340 sizemqrleft = max( ( nb_a*( nb_a-1 ) ) / 2, ( np+nq )*
341 $ nb_a ) + nb_a*nb_a
342 ELSE
343 sizemqrleft = 0
344 iroffz = 0
345 izrow = 0
346 END IF
347 sizesytrd = max( nb * ( np +1 ), 3 * nb )
348*
349* Initialize the context of the single column distributed
350* matrix required by DSTEQR2. This specific distribution
351* allows each process to do 1/pth of the work updating matrix
352* Q during DSTEQR2 and achieve some parallelization to an
353* otherwise serial subroutine.
354*
355 ldc = 0
356 IF( wantz ) THEN
357 contextc = sl_gridreshape( desca( ctxt_ ), 0, 1, 1,
358 $ nprocs, 1 )
359 CALL blacs_gridinfo( contextc, nprowc, npcolc, myprowc,
360 $ mypcolc )
361 nrc = numroc( n, nb_a, myprowc, 0, nprocs)
362 ldc = max( 1, nrc )
363 CALL descinit( descqr, n, n, nb, nb, 0, 0, contextc,
364 $ ldc, info )
365 END IF
366
367*
368* Set up pointers into the WORK array
369*
370 indtau = 1
371 inde = indtau + n
372 indd = inde + n
373 indd2 = indd + n
374 inde2 = indd2 + n
375 indwork = inde2 + n
376 indwork2 = indwork + n*ldc
377 llwork = lwork - indwork + 1
378*
379* Compute the total amount of space needed
380*
381 qrmem = 2*n-2
382 IF( wantz ) THEN
383 lwmin = 5*n + n*ldc + max( sizemqrleft, qrmem ) + 1
384 ELSE
385 lwmin = 5*n + sizesytrd + 1
386 END IF
387*
388 END IF
389 IF( info.EQ.0 ) THEN
390 IF( .NOT.( wantz .OR. lsame( jobz, 'N' ) ) ) THEN
391 info = -1
392 ELSE IF( .NOT.( lower .OR. lsame( uplo, 'U' ) ) ) THEN
393 info = -2
394 ELSE IF( lwork.LT.lwmin .AND. lwork.NE.-1 ) THEN
395 info = -14
396 ELSE IF( iroffa.NE.0 ) THEN
397 info = -5
398 ELSE IF( desca( mb_ ).NE.desca( nb_ ) ) THEN
399 info = -( 700+nb_ )
400 END IF
401 IF( wantz ) THEN
402 IF( iroffa.NE.iroffz ) THEN
403 info = -10
404 ELSE IF( iarow.NE.izrow ) THEN
405 info = -10
406 ELSE IF( desca( m_ ).NE.descz( m_ ) ) THEN
407 info = -( 1200+m_ )
408 ELSE IF( desca( n_ ).NE.descz( n_ ) ) THEN
409 info = -( 1200+n_ )
410 ELSE IF( desca( mb_ ).NE.descz( mb_ ) ) THEN
411 info = -( 1200+mb_ )
412 ELSE IF( desca( nb_ ).NE.descz( nb_ ) ) THEN
413 info = -( 1200+nb_ )
414 ELSE IF( desca( rsrc_ ).NE.descz( rsrc_ ) ) THEN
415 info = -( 1200+rsrc_ )
416 ELSE IF( desca( ctxt_ ).NE.descz( ctxt_ ) ) THEN
417 info = -( 1200+ctxt_ )
418 ENDIF
419 END IF
420 END IF
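*
*        The global scalar inputs (JOBZ, UPLO and the LWORK-query flag)
*        are packed into IDUM1/IDUM2 below so that PCHK1MAT/PCHK2MAT
*        can also check them for consistency across the process grid.
*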
421 IF( wantz ) THEN
422 idum1( 1 ) = ichar( 'V' )
423 ELSE
424 idum1( 1 ) = ichar( 'N' )
425 END IF
426 idum2( 1 ) = 1
427 IF( lower ) THEN
428 idum1( 2 ) = ichar( 'L' )
429 ELSE
430 idum1( 2 ) = ichar( 'U' )
431 END IF
432 idum2( 2 ) = 2
433 IF( lwork.EQ.-1 ) THEN
434 idum1( 3 ) = -1
435 ELSE
436 idum1( 3 ) = 1
437 END IF
438 idum2( 3 ) = 3
439 IF( lsame( jobz, 'V' ) ) THEN
440 CALL pchk2mat( n, 3, n, 3, ia, ja, desca, 7, n, 3, n, 3,
441 $ iz, jz, descz, 12, 3, idum1, idum2, info )
442 ELSE
443 CALL pchk1mat( n, 3, n, 3, ia, ja, desca, 7, 3, idum1,
444 $ idum2, info )
445 END IF
446*
447* Write the required workspace for lwork queries.
448*
449 work( 1 ) = dble( lwmin )
450 END IF
451*
452 IF( info.NE.0 ) THEN
453 CALL pxerbla( desca( ctxt_ ), 'PDSYEV', -info )
454 IF( wantz ) CALL blacs_gridexit( contextc )
455 RETURN
456 ELSE IF( lwork .EQ. -1 ) THEN
457 IF( wantz ) CALL blacs_gridexit( contextc )
458 RETURN
459 END IF
460*
461* Scale matrix to allowable range, if necessary.
462*
463 iscale = 0
464*
465 anrm = pdlansy( 'M', uplo, n, a, ia, ja, desca, work( indwork ) )
466*
467
468 IF( anrm.GT.zero .AND. anrm.LT.rmin ) THEN
469 iscale = 1
470 sigma = rmin / anrm
471 ELSE IF( anrm.GT.rmax ) THEN
472 iscale = 1
473 sigma = rmax / anrm
474 END IF
475*
476 IF( iscale.EQ.1 ) THEN
477 CALL pdlascl( uplo, one, sigma, n, n, a, ia, ja, desca, iinfo )
478 END IF
479*
480* Reduce symmetric matrix to tridiagonal form.
481*
482 CALL pdsytrd( uplo, n, a, ia, ja, desca, work( indd ),
483 $ work( inde ), work( indtau ), work( indwork ),
484 $ llwork, iinfo )
485*
486* Copy the values of D, E to all processes.
487*
488 DO 10 i=1,n
489 CALL pdelget( 'A', ' ', work(indd2+i-1), a,
490 $ i+ia-1, i+ja-1, desca )
491 10 CONTINUE
492 IF( lsame( uplo, 'U') ) THEN
493 DO 20 i=1,n-1
494 CALL pdelget( 'A', ' ', work(inde2+i-1), a,
495 $ i+ia-1, i+ja, desca )
496 20 CONTINUE
497 ELSE
498 DO 30 i=1,n-1
499 CALL pdelget( 'A', ' ', work(inde2+i-1), a,
500 $ i+ia, i+ja-1, desca )
501 30 CONTINUE
502 ENDIF
503*
504 IF( wantz ) THEN
505*
506 CALL pdlaset( 'Full', n, n, zero, one, work( indwork ), 1, 1,
507 $ descqr )
508*
509* DSTEQR2 is a modified version of LAPACK's DSTEQR. The
510* modifications allow each process to perform partial updates
511* to matrix Q.
512*
513 CALL dsteqr2( 'I', n, work( indd2 ), work( inde2 ),
514 $ work( indwork ), ldc, nrc, work( indwork2 ),
515 $ info )
516*
517 CALL pdgemr2d( n, n, work( indwork ), 1, 1, descqr, z, ia, ja,
518 $ descz, contextc )
519*
520 CALL pdormtr( 'L', uplo, 'N', n, n, a, ia, ja, desca,
521 $ work( indtau ), z, iz, jz, descz,
522 $ work( indwork ), llwork, iinfo )
523*
524 ELSE
525*
526 CALL dsteqr2( 'N', n, work( indd2 ), work( inde2 ),
527 $ work( indwork ), 1, 1, work( indwork2 ),
528 $ info )
529 ENDIF
530*
531* Copy eigenvalues from workspace to output array
532*
533 CALL dcopy( n, work( indd2 ), 1, w, 1 )
534*
535* If matrix was scaled, then rescale eigenvalues appropriately.
536*
537 IF( iscale .EQ. 1 ) THEN
538 CALL dscal( n, one / sigma, w, 1 )
539 END IF
540*
541* Free up resources
542*
543 IF( wantz ) THEN
544 CALL blacs_gridexit( contextc )
545 END IF
546*
547* Compare every ITHVAL-th eigenvalue, or all of them if there are
548* only a few, across the process grid to check for heterogeneity.
549*
550 IF( n.LE.ithval ) THEN
551 j = n
552 k = 1
553 ELSE
554 j = n/ithval
555 k = ithval
556 END IF
557*
558 DO 40 i = 1, j
559 work( i+indtau ) = w( (i-1)*k+1 )
560 work( i+inde ) = w( (i-1)*k+1 )
561 40 CONTINUE
562*
563 CALL dgamn2d( desca( ctxt_ ), 'a', ' ', j, 1, work( 1+indtau ),
564 $ j, 1, 1, -1, -1, 0 )
565 CALL dgamx2d( desca( ctxt_ ), 'a', ' ', j, 1, work( 1+inde ),
566 $ j, 1, 1, -1, -1, 0 )
567*
568 DO 50 i = 1, j
569 IF( info.EQ.0 .AND. ( work( i+indtau )-work( i+inde )
570 $ .NE. zero ) )THEN
571 info = n+1
572 END IF
573 50 CONTINUE
574*
575 RETURN
576*
577* End of PDSYEV
578*
579 END