
Parallel Implementation

  In this section we describe the parallel implementation of LU factorization, with partial pivoting over rows, for a block-partitioned matrix. The matrix, A, to be factored is assumed to have a block cyclic decomposition, and at the end of the computation is overwritten by the lower and upper triangular factors, L and U. This implicitly determines the decomposition of L and U. Quite a high-level description is given here since the details of the parallel implementation involve optimization issues that will be addressed in Section 7.

  
Figure 10: Block cyclic decomposition of an M × N matrix with a block size of r × s, onto a P × Q process template. Each small rectangle represents one matrix block - individual matrix elements are not shown. In (a), shading is used to emphasize the process template that is periodically stamped over the matrix, and each block is labeled with the process to which it is assigned. In (b), each shaded region shows the blocks in one process, and is labeled with the corresponding global block indices. In both figures, the black rectangles indicate the blocks assigned to process (0,0).

Figure 11: The same matrix decomposition as shown in Figure 10, but with a nonzero template offset. Dashed entries in (b) indicate that the block does not contain any data. In both figures, the black rectangles indicate the blocks assigned to process (0,0).

Figure 12: These 4 figures show different ways of decomposing a matrix. Each cell represents a matrix element, and is labeled by the position, (p,q), in the template of the process to which it is assigned. To emphasize the pattern of decomposition, the matrix entries assigned to the process in the first row and column of the template are shown shaded, and each separate shaded region represents a matrix block. Figures (a) and (b) show block and cyclic row-oriented decompositions, respectively, for 4 nodes. In figures (c) and (d) the corresponding column-oriented decompositions are shown. Below each figure we give the values of r, s, P, and Q corresponding to the decomposition. In all cases the matrix being decomposed is the same size.

Figure 13: These 4 figures show different ways of decomposing a matrix over 16 processes arranged as a P × Q template. Below each figure we give the values of r, s, P, and Q corresponding to the decomposition. In all cases the matrix being decomposed is the same size.

The sequential LU factorization algorithm described in Section 4.1 uses square blocks. Although in the parallel algorithm we could choose to decompose the matrix using nonsquare blocks, this would result in a more complicated code and additional sources of concurrent overhead. For LU factorization we therefore restrict the decomposition to square blocks, so that the blocks used to decompose the matrix are the same as those used to partition the computation. If the block size is r × r, then an M × N matrix consists of Mb × Nb blocks, where Mb = ⌈M/r⌉ and Nb = ⌈N/r⌉.
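
As a point of reference, the mapping from global block indices to template positions can be written down directly. The following is a minimal sketch in Python; the function names, the zero-based indexing, and the convention assumed for the template offset (p0, q0) are ours, not the paper's.

    from math import ceil

    def block_counts(M, N, r):
        """Number of r x r blocks covering an M x N matrix (the last blocks may be partial)."""
        return ceil(M / r), ceil(N / r)

    def owner_of_block(I, J, P, Q, p0=0, q0=0):
        """Template position (p, q) owning global block (I, J) under a block cyclic
        decomposition over a P x Q template with offset (p0, q0); all indices zero based."""
        return (p0 + I) % P, (q0 + J) % Q

    # Example: a 13 x 13 matrix with 4 x 4 blocks on a 2 x 3 template.
    Mb, Nb = block_counts(13, 13, 4)            # -> (4, 4); the last row/column of blocks is partial
    print(Mb, Nb, owner_of_block(2, 3, 2, 3))   # block (2, 3) lives on process (0, 0)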

As discussed in Section 4.1, LU factorization proceeds in a series of sequential steps indexed by k, in each of which the following three tasks are performed (a sequential sketch of one complete step is given after the list):

  1. factor the kth column of blocks, performing pivoting if necessary. This evaluates the matrices L0, L1, and U0 in Figure 6.
  2. evaluate the kth block row of U by solving the lower triangular system L0 U1 = C.
  3. do a rank-r update on the trailing submatrix E, replacing it with E - L1 U1.
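
To make the three tasks concrete, here is a minimal sequential sketch in Python/NumPy, using the L0, L1, U1, C, E labels of this section. The helper names and the use of SciPy's triangular solver are our choices; the parallel algorithm distributes exactly these operations over the process template.

    import numpy as np
    from scipy.linalg import solve_triangular

    def lu_block_step(A, k, r, piv):
        """One step k of the block LU factorization, applied in place to the square
        matrix A. piv[j] records the row exchanged with row j."""
        n = A.shape[0]
        c0, c1 = k * r, min((k + 1) * r, n)            # columns of the kth panel

        # Task 1: factor the kth column of blocks, pivoting over entire rows.
        for j in range(c0, c1):
            p = j + int(np.argmax(np.abs(A[j:, j])))   # pivot search from row j down
            piv[j] = p
            if p != j:
                A[[j, p], :] = A[[p, j], :]            # exchange whole rows
            A[j + 1:, j] /= A[j, j]                    # column of L below the pivot
            A[j + 1:, j + 1:c1] -= np.outer(A[j + 1:, j], A[j, j + 1:c1])  # rest of the panel

        if c1 < n:
            panel = A[c0:c1, c0:c1]   # packed: unit lower triangle is L0, upper triangle is U0
            L1 = A[c1:, c0:c1]        # blocks of L below L0
            # Task 2: solve L0 U1 = C for the kth block row of U (only L0's triangle is read).
            A[c0:c1, c1:] = solve_triangular(panel, A[c0:c1, c1:],
                                             lower=True, unit_diagonal=True)
            # Task 3: rank-r update of the trailing submatrix E.
            A[c1:, c1:] -= L1 @ A[c0:c1, c1:]

    def block_lu(A, r):
        """Factor A in place into packed L and U; returns the pivot vector."""
        n = A.shape[0]
        piv = np.arange(n)
        for k in range((n + r - 1) // r):
            lu_block_step(A, k, r, piv)
        return piv

    # Quick check: the packed factors reproduce the row-permuted matrix.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((9, 9))
    LU = A.copy()
    piv = block_lu(LU, r=4)
    L = np.tril(LU, -1) + np.eye(9)
    U = np.triu(LU)
    PA = A.copy()
    for j, p in enumerate(piv):
        PA[[j, p], :] = PA[[p, j], :]
    assert np.allclose(PA, L @ U)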

We now consider the parallel implementation of each of these tasks. The computation in the factorization step involves a single column of blocks, and these lie in a single column of the process template. In the kth factorization step, each of the r columns in block column k is processed in turn. Consider the ith column in block column k. The pivot is selected by finding the element with largest absolute value in this column between row kr+i and the last row, inclusive. The elements involved in the pivot search at this stage are shown shaded in Figure 14. Having selected the pivot, the value of the pivot and the index of its row are broadcast to all other processes. Next, pivoting is performed by exchanging the entire row kr+i with the row containing the pivot. We exchange entire rows, rather than just the part to the right of the columns already factored, in order to simplify the application of the pivots to the right-hand side in any subsequent solve phase. Finally, each value in the column below the pivot is divided by the pivot. If a cyclic column decomposition is used, like that shown in Figure 12(d), only one process is involved in the factorization of the block column, and no communication is necessary between processes. However, in general P processes are involved, and communication is necessary to select the pivot and to exchange the pivot rows.
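
One way to realize the pivot agreement is sketched below with mpi4py (our illustrative choice, not the paper's communication layer). Each process in the template column holding the panel contributes its best local candidate, and a collective exchange lets every process determine the pivot value and its global row; the function and argument names are hypothetical.

    from mpi4py import MPI
    import numpy as np

    def agree_on_pivot(col_comm, local_col, local_global_rows):
        """Processes in one template column agree on the pivot for column kr+i.

        local_col         : this process's elements of column kr+i, rows kr+i and below
        local_global_rows : the global row index of each of those elements
        Returns (pivot_value, pivot_global_row), identical on every participating process.
        """
        if local_col.size:
            j = int(np.argmax(np.abs(local_col)))
            candidate = (abs(float(local_col[j])), float(local_col[j]), int(local_global_rows[j]))
        else:
            candidate = (-1.0, 0.0, -1)          # no eligible rows on this process
        # Collective exchange of candidates; the winner has the largest |value|
        # (ties are broken arbitrarily here).
        best = max(col_comm.allgather(candidate))
        return best[1], best[2]

    # Once the pivot row is known everywhere, row kr+i and the pivot row are exchanged in
    # full: a local swap if both rows live in the same process, otherwise a pairwise
    # send/receive between the two owning processes within each template column.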

  figure502
Figure 14: This figure shows pivoting for step i of the kth stage of LU factorization. The element with largest absolute value in the gray shaded part of column kr+i is found, and the row containing it is exchanged with row kr+i. If the rows exchanged lie in different processes, communication may be necessary.

The solution of the lower triangular system L0 U1 = C to evaluate the kth block row of U involves a single row of blocks, and these lie in a single row of the process template. If a cyclic row decomposition is used, like that shown in Figure 12(b), only one process is involved in the triangular solve, and no communication is necessary between processes. However, in general Q processes are involved, and communication is necessary to broadcast the lower triangular matrix, L0, to all processes in the row. Once this has been done, each process in the row independently performs a lower triangular solve for the blocks of C that it holds.
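
The per-process work in this phase reduces to independent triangular solves once L0 has arrived. A minimal sketch, with function and argument names of our choosing:

    from scipy.linalg import solve_triangular

    def solve_block_row(L0, local_C_blocks):
        """Per-process part of the triangular-solve phase.

        L0             : the r x r unit lower triangular block, received via the rowcast
        local_C_blocks : the blocks of C (the kth block row) held locally by this process
        Each block is solved independently; no further communication is needed.
        """
        return [solve_triangular(L0, C, lower=True, unit_diagonal=True)
                for C in local_C_blocks]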

The communication necessary to update the trailing submatrix at step k takes place in two steps. First, each process holding part of L1 broadcasts these blocks to the other processes in the same row of the template. This may be done in conjunction with the broadcast of L0, mentioned in the preceding paragraph, so that all of the factored panel is broadcast together. Next, each process holding part of U1 broadcasts these blocks to the other processes in the same column of the template. Each process can then complete the update of the blocks that it holds with no further communication.
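
After these two broadcasts, the update is purely local. A minimal sketch, assuming (only for illustration) that each process keeps its blocks of E in a dictionary keyed by global block indices:

    def update_trailing(local_E, local_L1, local_U1):
        """Per-process part of the update, after the rowcast of L1 and the colcast of U1.

        local_E  : dict mapping global block indices (I, J) to this process's blocks of E
        local_L1 : dict mapping block row index I to the received block of L1
        local_U1 : dict mapping block column index J to the received block of U1
        """
        for (I, J), E_block in local_E.items():
            E_block -= local_L1[I] @ local_U1[J]   # rank-r update of one block (a local xGEMM)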

  figure508
Figure 15: Pseudocode for the basic parallel block-partitioned LU factorization algorithm. This code is executed by each process. The first box inside the k loop factors the kth column of blocks. The second box solves a lower triangular system to evaluate the kth row of blocks of U, and the third box updates the trailing submatrix. The code is parameterized by the template offset and by the position, (p,q), of a process in the template.

A pseudocode outline of the parallel LU factorization algorithm is given in Figure 15. There are two points worth noting in Figure 15. First, the triangular solve and update phases operate on matrix blocks and may, therefore, be done with parallel versions of the Level 3 BLAS (specifically, xTRSM and xGEMM, respectively). The factorization of the column of blocks, however, involves a loop over matrix columns. Hence, it is not a block-oriented computation, and cannot be performed using the Level 3 BLAS. The second point to note is that most of the parallelism in the code comes from updating the trailing submatrix, since this is the only phase in which all the processes are busy.
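
To illustrate the first point, the two block phases each map onto a single Level 3 BLAS call. The snippet below uses SciPy's sequential BLAS wrappers as a stand-in for the parallel Level 3 BLAS the text refers to; the data is random and purely illustrative.

    import numpy as np
    from scipy.linalg import blas

    rng = np.random.default_rng(0)
    r, m = 4, 8
    L0 = np.tril(rng.standard_normal((r, r)), -1) + np.eye(r)   # unit lower triangular block
    C  = rng.standard_normal((r, m))        # kth block row, to the right of the panel
    L1 = rng.standard_normal((m, r))        # blocks of L below L0
    E  = rng.standard_normal((m, m))        # trailing submatrix

    # Triangular-solve phase: U1 <- L0^{-1} C, an xTRSM call (left side, lower, unit diagonal).
    U1 = blas.dtrsm(1.0, L0, C, side=0, lower=1, diag=1)

    # Update phase: E <- E - L1 U1, an xGEMM call with alpha = -1 and beta = 1.
    E_new = blas.dgemm(-1.0, L1, U1, beta=1.0, c=E)

    assert np.allclose(L0 @ U1, C) and np.allclose(E_new, E - L1 @ U1)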

Figure 15 also shows quite clearly where communication is required; namely, in finding the pivot, exchanging pivot rows, and performing various types of broadcast. The exact way in which these communications are done and interleaved with computation generally has an important effect on performance, and will be discussed in more detail in Section 7.

Figure 15 refers to broadcasting data to all processes in the same row or column of the template. This is a common operation in parallel linear algebra algorithms, so the idea will be described here in a little more detail. Consider, for example, the task of broadcasting the lower triangular block, L0, to all processes in the same row of the template, as required before solving L0 U1 = C. If L0 is in process (p,q), then it will be broadcast to all processes in row p of the process template. As a second example, consider the broadcast of L1 to all processes in the same template row, as required before updating the trailing submatrix. This type of ``rowcast'' is shown schematically in Figure 16(a). If L1 is in column q of the template, then each process (p,q) broadcasts its blocks of L1 to the other processes in row p of the template. Loosely speaking, we can say that L0 and L1 are broadcast along the rows of the template. This type of data movement is the same as that performed by the Fortran 90 routine SPREAD [7]. The broadcast of U1 to all processes in the same template column is very similar. This type of communication is sometimes referred to as a ``colcast'', and is shown in Figure 16(b).
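
The rowcast and colcast patterns can be expressed with one communicator per template row and per template column. The sketch below uses mpi4py (again our illustrative choice; the template shape, source positions, and placeholder data are assumptions, not the paper's implementation):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    P, Q = 2, 2                          # assumed template shape; run with P*Q processes
    p, q = divmod(comm.Get_rank(), Q)    # this process's position (p, q) in the template

    # One communicator spanning each template row and each template column.
    row_comm = comm.Split(color=p, key=q)    # processes sharing template row p
    col_comm = comm.Split(color=q, key=p)    # processes sharing template column q

    r, kc, kr = 2, 0, 0   # block size, template column of the panel, template row of U1

    # "Rowcast" (Figure 16(a)): processes in template column kc broadcast the factored
    # panel (L0 and their local blocks of L1) along their own template row.
    panel = (np.eye(r), [np.zeros((r, r))]) if q == kc else None
    L0, L1_blocks = row_comm.bcast(panel, root=kc)

    # "Colcast" (Figure 16(b)): processes in template row kr broadcast their local
    # blocks of U1 down their own template column.
    u1_blocks = [np.zeros((r, r))] if p == kr else None
    u1_blocks = col_comm.bcast(u1_blocks, root=kr)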

Figure 16: Schematic representation of broadcast along rows and columns of a P × Q process template. In (a), each shaded process broadcasts to the processes in the same row of the process template. In (b), each shaded process broadcasts to the processes in the same column of the process template.

