Developing MPI Programs with Domain Decomposition

Domain decomposition is a parallelization method used for developing MPI programs: the domain is partitioned into portions, and the portions are assigned to different processes. Three common ways of partitioning are block, cyclic, and block-cyclic, each with its own communication requirements. Considerations for partitioning multi-dimensional domains are also discussed, with the goal of simplifying communication and minimizing overhead. An example code (pi_mpi.c) illustrates domain decomposition in a distributed memory system and the impact of partitioning on communication. The appropriate partitioning scheme should be chosen based on communication requirements, distinguishing embarrassingly parallel computations from those in which each domain portion needs data from other portions.



Presentation Transcript


  1. Programming distributed memory systems: Developing MPI programs with domain decomposition

  2. Domain decomposition A parallelization method especially useful for developing MPI programs. o Partition the domain into portions and assign different domain portions to different processes. o Add the necessary communication where needed. Example: consider domain decomposition of a 1D domain of size 100 (indices 0 to 99) among 4 processes.

  3. Domain decomposition of a 1D domain of size 100 among 4 processes. In general, the domain can be partitioned in three different ways: block, cyclic, and block-cyclic. Block: P0 gets indices 0-24, P1 gets 25-49, P2 gets 50-74, P3 gets 75-99. Cyclic: indices are dealt out round-robin, one at a time (index i goes to process i mod 4). Block-cyclic: small fixed-size blocks are dealt out round-robin among the processes.
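As a quick illustration (not from the slides), the three schemes can be written as ownership functions that return which rank owns a given global index, assuming the domain size is a multiple of the number of processes:

  #include <stdio.h>

  /* Which rank owns global index i: n = domain size, p = number of
     processes, b = block size for the block-cyclic scheme. */
  int owner_block(int i, int n, int p)        { return i / (n / p); }
  int owner_cyclic(int i, int p)              { return i % p; }
  int owner_block_cyclic(int i, int p, int b) { return (i / b) % p; }

  int main(void) {
      /* Element 30 of the size-100 domain on 4 processes. */
      printf("block: P%d  cyclic: P%d  block-cyclic(b=5): P%d\n",
             owner_block(30, 100, 4), owner_cyclic(30, 4),
             owner_block_cyclic(30, 4, 5));
      return 0;
  }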

  4. Partitioning a multi-dimensional domain (figure: a 2D domain partitioned along one dimension into strips owned by P0-P3, versus partitioned along both dimensions into a 4 x 4 grid of blocks owned by P0-P15). With multiple dimensions, one can choose which dimension(s) to partition. o What is the best way to partition? Simplify and minimize the communication!
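For the 4 x 4 case, one hedged way to compute the block owned by each rank is sketched below (the helper name and the q x q process-grid layout are assumptions, not from the slides):

  /* The sub-block of an N x N domain owned by a rank in a q x q process
     grid, assuming N is a multiple of q.  Partitioning both dimensions
     keeps each block compact, which shrinks the boundary a process must
     communicate. */
  void block_2d_bounds(int rank, int q, int N,
                       int *row_lo, int *row_hi, int *col_lo, int *col_hi)
  {
      int pr = rank / q;   /* process row in the q x q grid    */
      int pc = rank % q;   /* process column in the q x q grid */
      int bs = N / q;      /* block size along each dimension  */
      *row_lo = pr * bs;   *row_hi = (pr + 1) * bs - 1;
      *col_lo = pc * bs;   *col_hi = (pc + 1) * bs - 1;
  }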

  5. Domain decomposition example: what is the partitioning scheme used in pi_mpi.c? The program approximates PI = lim (n -> infinity) of the sum over i = 1..n of 4 / (1 + x_i^2) * h, where h = 1/n and x_i = (i - 0.5) * h. Sequential computation:

  h = 1.0 / (double) n;
  sum = 0.0;
  for (i = 1; i <= n; i++) {
      x = h * ((double)i - 0.5);
      sum += 4.0 / (1.0 + x*x);
  }
  mypi = h * sum;

  MPI version (see pi_mpi.c):

  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  h = 1.0 / (double) n;
  sum = 0.0;
  for (i = myid + 1; i <= n; i += numprocs) {
      x = h * ((double)i - 0.5);
      sum += 4.0 / (1.0 + x*x);
  }
  mypi = h * sum;
  if (myid == 0) {
      for (i = 1; i < numprocs; i++) {
          MPI_Recv(&tmp, 1, MPI_DOUBLE, i, 0, MPI_COMM_WORLD, &status);
          mypi += tmp;
      }
  } else
      MPI_Send(&mypi, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);

  6. Domain decomposition in pi_mpi.c It uses cyclic partitioning. Can we change it to block partitioning? o Yes: the partitioning can be computed from numprocs and myid (a block version is sketched below). pi_mpi.c is unusual in that the computation for each domain portion does not require data from any other portion. This is called embarrassingly parallel. In most applications, the computation for each domain portion requires data from other portions, resulting in communication! o The communication requirement is what decides whether a partitioning is a good partitioning.
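A sketch of what the loop could look like with block partitioning, reusing the variables already declared in pi_mpi.c (the chunk computation is an assumption about how one might split the range [1, n]):

  /* Block partitioning: each rank gets a contiguous chunk of [1, n];
     the last rank also picks up the remainder when n is not divisible. */
  int chunk = n / numprocs;
  int start = myid * chunk + 1;
  int end   = (myid == numprocs - 1) ? n : start + chunk - 1;

  h = 1.0 / (double) n;
  sum = 0.0;
  for (i = start; i <= end; i++) {
      x = h * ((double) i - 0.5);
      sum += 4.0 / (1.0 + x * x);
  }
  mypi = h * sum;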

  7. Rough tasks in developing an MPI program (from a sequential code) with domain decomposition: 1. Break up the domain into portions and assign each portion to a process. 2. Provide a map of all domain portions to each process (each process knows who owns which data). 3. Orchestrate the computation: insert communication and synchronization calls where necessary, and modify the code (e.g., mapping local indices to global indices) so that each process finds and computes only its own domain portion (a sketch of this index mapping follows).
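A minimal sketch of the local-to-global index mapping mentioned in step 3, for block partitioning (local_data and compute() are hypothetical names, not from the course code):

  int local_size   = n / numprocs;        /* elements owned by each process   */
  int global_start = myid * local_size;   /* first global index owned by myid */

  for (int local_i = 0; local_i < local_size; local_i++) {
      int global_i = global_start + local_i;     /* local -> global index     */
      local_data[local_i] = compute(global_i);   /* compute only my portion
                                                    (compute() is hypothetical) */
  }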

  8. Domain decomposition example: matrix multiplication C[N][K] = A[N][M] * B[M][K], where C[i][j] = A[i][0]*B[0][j] + A[i][1]*B[1][j] + ... + A[i][M-1]*B[M-1][j]
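The formula written out as C code: a straightforward triple loop over row-major arrays (mm is an assumed name; the actual course routine may differ):

  void mm(int N, int M, int K, const double *A, const double *B, double *C)
  {
      for (int i = 0; i < N; i++)
          for (int j = 0; j < K; j++) {
              double s = 0.0;
              for (int l = 0; l < M; l++)
                  s += A[i*M + l] * B[l*K + j];   /* A[i][l] * B[l][j] */
              C[i*K + j] = s;
          }
  }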

  9. Step 1: break up domains and assign them to processes (figure: the rows of C[N][K] distributed among P0-P3, alongside A[N][M] and B[M][K]). Distribute the rows of the C matrix: each process will compute a number of rows of C. How should A and B be partitioned?

  10. Step 1 (continued): break up domains (figure: the rows of C[N][K] and A[N][M] distributed among P0-P3, and B[M][K] split into column blocks among P0-P3). Distribute the rows of the C and A matrices. How should B be partitioned? Each process needs the whole B array to complete its computation, so any partition would work, but different partitions have different communication requirements. A block distribution of B's columns keeps the communication relatively simple.

  11. Step 3: orchestrate the computation (figure: the distributed blocks of C, A, and B among P0-P3). Pseudocode: for (i = 0; i < nprocs; i++) { process i sends its block of B to all other processes; all other processes receive the block; call mm(localN, M, localK, a, receivedB, workC); copy workC into the corresponding part of C; } A possible MPI realization is sketched below.
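One possible MPI realization of this loop, sketched with MPI_Bcast (localA, localB, localC, the flat row-major layout, and divisibility of N and K by nprocs are assumptions; mm is the sequential routine sketched earlier):

  #include <mpi.h>
  #include <stdlib.h>
  #include <string.h>

  void mm(int N, int M, int K, const double *A, const double *B, double *C);

  /* localA: my localN x M rows of A; localB: my M x localK columns of B;
     localC: my localN x K rows of C, filled column block by column block. */
  void parallel_mm(int N, int M, int K, const double *localA,
                   const double *localB, double *localC, int rank, int nprocs)
  {
      int localN = N / nprocs;   /* rows of C and A owned by each process */
      int localK = K / nprocs;   /* columns of B owned by each process    */
      double *recvB = malloc((size_t)M * localK * sizeof(double));
      double *workC = malloc((size_t)localN * localK * sizeof(double));

      for (int i = 0; i < nprocs; i++) {
          /* Process i's column block of B is broadcast to every process. */
          if (rank == i)
              memcpy(recvB, localB, (size_t)M * localK * sizeof(double));
          MPI_Bcast(recvB, M * localK, MPI_DOUBLE, i, MPI_COMM_WORLD);

          /* Multiply my rows of A by the received columns of B ...       */
          mm(localN, M, localK, localA, recvB, workC);

          /* ... and copy workC into column block i of my rows of C.      */
          for (int r = 0; r < localN; r++)
              memcpy(&localC[(size_t)r * K + (size_t)i * localK],
                     &workC[(size_t)r * localK], localK * sizeof(double));
      }
      free(recvB);
      free(workC);
  }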

  12. Example: SOR (successive over-relaxation)

  13. Step 1: partition the domain (figure: the grid and temp arrays are each partitioned by rows among P0-P3).

  14. Step 3: orchestrate the computation (figure: the row partition of grid among P0-P3). To update grid, the top row in P1 needs data from P0 (the bottom row of P0), and the bottom row in P1 needs the top row of P2. Boundary elements must be communicated.
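For concreteness, a sketch of the local update once the neighboring rows are in place, in the spirit of jacobi_mpi.c: xlocal holds my rows plus one ghost row above and below (the plain averaging and the name xnew are assumptions, not copied from the course file):

  for (i = 1; i <= maxn/size; i++)        /* my interior rows                  */
      for (j = 1; j < maxn - 1; j++)      /* skip the fixed left/right columns */
          xnew[i][j] = 0.25 * (xlocal[i-1][j] + xlocal[i+1][j] +
                               xlocal[i][j-1] + xlocal[i][j+1]);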

  15. Communication of boundary elements Processes 0, 1, 2 send their lower row to processes 1, 2, 3. Processes 1, 2, 3 receive the upper row from processes 0, 1, 2. Processes 1, 2, 3 send their upper row to processes 0, 1, 2. Processes 0, 1, 2 receive the lower row from processes 1, 2, 3.

  16. MPI code for communicating boundary elements (see lect22/jacobi_mpi.c):

  /* Send up unless I'm at the top */
  if (rank < size - 1)
      MPI_Send( xlocal[maxn/size], maxn, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD );
  if (rank > 0)
      MPI_Recv( xlocal[0], maxn, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD, &status );
  /* Send down unless I'm at the bottom */
  if (rank > 0)
      MPI_Send( xlocal[1], maxn, MPI_DOUBLE, rank - 1, 1, MPI_COMM_WORLD );
  if (rank < size - 1)
      MPI_Recv( xlocal[maxn/size+1], maxn, MPI_DOUBLE, rank + 1, 1, MPI_COMM_WORLD, &status );
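As a possible variant (not shown in the course file), the same exchange can be written with MPI_Sendrecv, using MPI_PROC_NULL at the ends so no rank needs special-casing; the variable names follow the excerpt above:

  int up   = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
  int down = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;

  /* Send my last row up, receive the ghost row from the process below. */
  MPI_Sendrecv( xlocal[maxn/size], maxn, MPI_DOUBLE, up,   0,
                xlocal[0],         maxn, MPI_DOUBLE, down, 0,
                MPI_COMM_WORLD, &status );
  /* Send my first row down, receive the ghost row from the process above. */
  MPI_Sendrecv( xlocal[1],             maxn, MPI_DOUBLE, down, 1,
                xlocal[maxn/size + 1], maxn, MPI_DOUBLE, up,   1,
                MPI_COMM_WORLD, &status );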

  17. Ideas for parallelizing our DNN code with MPI Option 1: Each process handles one layer (in forward and backward propagation). o Each process except the one for the input layer receives data from the previous layer, does the computation, and sends the results to the next layer (a sketch of this pipeline appears below). o This is OK, but the number of processes that can be used is limited. For the 4-layer DNN, at most 8 processes can be used.
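A minimal sketch of what one forward-pass step could look like under Option 1 (in_act, out_act, apply_layer, and the sizes are hypothetical names, not taken from the course DNN code):

  /* Rank r owns layer r: receive activations from the previous layer,
     apply my layer, and pass the result to the next layer. */
  if (rank > 0)                 /* every layer except the input layer  */
      MPI_Recv(in_act, in_size, MPI_DOUBLE, rank - 1, 0,
               MPI_COMM_WORLD, MPI_STATUS_IGNORE);

  apply_layer(in_act, out_act); /* hypothetical per-layer forward step */

  if (rank < size - 1)          /* every layer except the last one     */
      MPI_Send(out_act, out_size, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);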

  18. Option 1

  19. Ideas for parallelizing our DNN code with MPI Option 2: Distribute each of the large layers among all processes. o Each process maintains a portion of the neurons in each layer. o To do the update, each process often needs all input data (in both forward and backward propagation), which are distributed among the processes. This results in many MPI_Allgather operations (see the sketch below). o This will allow more processes to be used and achieve better speedup.
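A sketch of the MPI_Allgather step this implies (local_act, full_act, and local_n are illustrative names): each process contributes its local_n activations, and every process ends up with the complete layer before computing the next one.

  MPI_Allgather(local_act, local_n, MPI_DOUBLE,   /* my portion of the layer */
                full_act,  local_n, MPI_DOUBLE,   /* the gathered full layer */
                MPI_COMM_WORLD);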
