Shared-Memory Computing with OpenMP

 
Lecture 5: Shared-Memory Computing with OpenMP
 
Shared Memory Computing
 
Getting Started: Example 1 (Hello)
 
#include "stdafx.h"
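The rest of Example 1 appears only as an image on the original slide. A minimal hello-world sketch consistent with the rest of the lecture (the Hello function name is an assumption; the thread count is read from the command line as described below, and the _OPENMP guard from a later slide keeps it compiling without OpenMP):

/* #include "stdafx.h"  -- precompiled header; add back under Visual Studio */
#ifdef _OPENMP
#include <omp.h>
#endif
#include <stdio.h>
#include <stdlib.h>

void Hello(void) {
#ifdef _OPENMP
   int my_rank = omp_get_thread_num();
   int thread_count = omp_get_num_threads();
#else
   int my_rank = 0;
   int thread_count = 1;
#endif
   printf("Hello from thread %d of %d\n", my_rank, thread_count);
}

int main(int argc, char* argv[]) {
   int thread_count = strtol(argv[1], NULL, 10);   /* e.g. Command Arguments: 4 */

#  pragma omp parallel num_threads(thread_count)
   Hello();

   return 0;
}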
 
Getting Started: Example 1 (Hello)
 
Set the compiler option in the Visual Studio development environment
1. Open the project's Property Pages dialog box.
2. Expand the Configuration Properties node.
3. Expand the C/C++ node.
4. Select the Language property page.
5. Modify the OpenMP Support property.
 
Set the arguments to the main function parameters in the Visual Studio
development environment (e.g., int main(int argc, char* argv[]))
1. In Project Properties (click the project name in Solution Explorer),
expand Configuration Properties, and then click Debugging.
2. In the pane on the right, in the textbox to the right of Command Arguments,
type the arguments to main you want to use. For example: one two
 
In case the compiler doesn’t support OpenMP
 
CS4230
 
Instead of including the header unconditionally:

# include <omp.h>

guard the include so the program still compiles when OpenMP is unavailable:

#ifdef _OPENMP
# include <omp.h>
#endif
 
08/30/2012
 
5
 
In case the compiler doesn’t support OpenMP
 
CS4230
 
# ifdef _OPENMP
   int my_rank = omp_get_thread_num ( );
   int thread_count = omp_get_num_threads ( );
# else
   int my_rank = 0;
   int thread_count = 1;
# endif
 
08/30/2012
 
6
 
OpenMP: Prevailing Shared Memory Programming Approach
 
Model for shared-memory parallel programming
Portable across shared-memory architectures
Scalable (on shared-memory platforms)
Incremental parallelization
-
Parallelize individual computations in a program while leaving
the rest of the program sequential
Compiler based
-
Compiler generates thread program and synchronization
Extensions to existing programming languages
(Fortran, C and C++)
-
mainly by directives
-
a few library routines
 
See http://www.openmp.org
 
09/06/2011
 
CS4961
 
OpenMP Execution Model
 
09/04/2012
 
CS4230
 
[Figure: fork-join execution model]
 
OpenMP uses Pragmas
 
Pragmas are special preprocessor instructions.
Typically added to a system to allow behaviors that
aren’t part of the basic C specification.
Compilers that don’t support the pragmas ignore them.
The interpretation of OpenMP pragmas
-
They modify the statement immediately following the pragma
-
This could be a compound statement such as a loop
 
#pragma omp …
 
08/30/2012
 
CS4230
 
9
 
OpenMP parallel region construct
 
Block of code to be executed by multiple threads in
parallel
Each thread executes the same code redundantly (SPMD)
-
Work within work-sharing constructs is distributed among
the threads in a team
Example with C/C++ syntax
 
#pragma omp parallel 
[ clause [ clause ] ... ] new-line
  
structured-block
clause can include the following:
private (list)
shared (list)
 
09/04/2012
 
CS4230
Programming Model – Data Sharing
Parallel programs often employ
two types of data
-
Shared data, visible to all
threads, similarly named
-
Private data, visible to a single
thread (often stack-allocated)
 
OpenMP:
shared
 variables are shared
private
 variables are private
Default is 
shared
Loop index is
 private
 
 
 
 
 
 
 
 
 
 
 
// shared, globals
int bigdata[1024];

void* foo(void* bar) {
  // private, stack
  int tid;

  #pragma omp parallel \
      shared ( bigdata ) \
      private ( tid )
  {
    /* Calculation goes here */
  }
}
 
OpenMP critical directive
 
Enclosed code – executed by all threads, but restricted to only one thread at a time
#pragma omp critical [ ( name ) ] new-line
    structured-block
A thread waits at the beginning of a critical region until no
other thread in the team is executing a critical region with
the same name.
All unnamed critical directives map to the same
unspecified name.
 
09/04/2012
 
CS4230
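The directive applies to the statement or block that follows it. A minimal, self-contained illustration (not from the slides; the counter and the thread count of 4 are made up) of several threads updating a shared variable one at a time:

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

int main(void) {
   int count = 0;

#  pragma omp parallel num_threads(4)
   {
      /* Only one thread at a time executes the update below. */
#     pragma omp critical
      count++;
   }

   printf("count = %d\n", count);   /* 4 with OpenMP, 1 without */
   return 0;
}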
 
Example 2: calculate the area under a curve
 
CS4230
 
[Figures: the trapezoidal rule; serial program; parallel program for my_rank,
illustrated for my_rank = 0 and my_rank = 1]
CS4230
 
Example 2: calculate the area under a curve using
critical directive
 
CS4230
 
Example 2: calculate the area under a curve using
critical directive
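The code on these two slides appears only as images. A sketch in the usual style of this example is given below — the function names Trap and f, the integrand, and the bounds are assumptions; the point is that each thread handles its own subinterval and adds its partial result inside a critical section:

#include <stdio.h>
#include <stdlib.h>
#ifdef _OPENMP
#include <omp.h>
#endif

double f(double x) { return x * x; }   /* example integrand (assumption) */

/* Each thread applies the trapezoidal rule to its own subinterval and then
   adds its result into *global_result_p inside a critical section. */
void Trap(double a, double b, int n, double* global_result_p) {
   double h = (b - a) / n;
#ifdef _OPENMP
   int my_rank = omp_get_thread_num();
   int thread_count = omp_get_num_threads();
#else
   int my_rank = 0;
   int thread_count = 1;
#endif
   int local_n = n / thread_count;             /* assumes n divisible by thread_count */
   double local_a = a + my_rank * local_n * h;
   double local_b = local_a + local_n * h;
   double my_result = (f(local_a) + f(local_b)) / 2.0;
   for (int i = 1; i <= local_n - 1; i++)
      my_result += f(local_a + i * h);
   my_result = my_result * h;

#  pragma omp critical
   *global_result_p += my_result;
}

int main(int argc, char* argv[]) {
   double global_result = 0.0;
   int thread_count = strtol(argv[1], NULL, 10);

#  pragma omp parallel num_threads(thread_count)
   Trap(0.0, 3.0, 1024, &global_result);

   printf("Estimate of the integral: %f\n", global_result);
   return 0;
}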
 
OpenMP Reductions

OpenMP has a reduction operation
sum = 0;
#pragma omp parallel for reduction(+:sum)
for (i=0; i < 100; i++)     {
sum += array[i];
}
 
Reduction operators and their initial values (C and C++):
+   0         bitwise &   ~0      logical &&   1
-   0         bitwise |    0      logical ||   0
*   1         bitwise ^    0
FORTRAN also supports min and max reductions
 
09/04/2012
 
CS4230
 
CS4230
 
Example 2: calculate the area under a curve using
reduction clause
 
[Code shown as an image: a parallel region in which each thread calls Local_trap
and the partial results are combined with a reduction clause]
 
CS4230
 
09/04/2012
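A sketch of the idea behind this slide, assuming Local_trap returns the calling thread's partial trapezoidal estimate instead of updating shared state (integrand, bounds, and names are the same assumptions as above):

#include <stdio.h>
#include <stdlib.h>
#ifdef _OPENMP
#include <omp.h>
#endif

double f(double x) { return x * x; }   /* example integrand (assumption) */

/* Returns this thread's share of the trapezoidal estimate; no shared updates. */
double Local_trap(double a, double b, int n) {
   double h = (b - a) / n;
#ifdef _OPENMP
   int my_rank = omp_get_thread_num();
   int thread_count = omp_get_num_threads();
#else
   int my_rank = 0;
   int thread_count = 1;
#endif
   int local_n = n / thread_count;             /* assumes n divisible by thread_count */
   double local_a = a + my_rank * local_n * h;
   double local_b = local_a + local_n * h;
   double my_result = (f(local_a) + f(local_b)) / 2.0;
   for (int i = 1; i <= local_n - 1; i++)
      my_result += f(local_a + i * h);
   return my_result * h;
}

int main(int argc, char* argv[]) {
   double global_result = 0.0;
   int thread_count = strtol(argv[1], NULL, 10);

   /* The reduction clause gives each thread a private copy of global_result,
      initialized to 0, and adds the copies together at the end of the region. */
#  pragma omp parallel num_threads(thread_count) reduction(+: global_result)
   global_result += Local_trap(0.0, 3.0, 1024);

   printf("Estimate of the integral: %f\n", global_result);
   return 0;
}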
 
Example 2: calculate the area under a curve using loop
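This slide's code is also shown only as an image. With the parallel for directive the loop iterations themselves are divided among the threads, so no manual subinterval bookkeeping is needed; a hedged sketch (same assumed integrand and bounds as above):

#include <stdio.h>
#include <stdlib.h>
#ifdef _OPENMP
#include <omp.h>
#endif

double f(double x) { return x * x; }   /* example integrand (assumption) */

int main(int argc, char* argv[]) {
   double a = 0.0, b = 3.0, approx;
   int n = 1024;
   int thread_count = strtol(argv[1], NULL, 10);
   double h = (b - a) / n;

   approx = (f(a) + f(b)) / 2.0;
   /* Iterations are split among the threads; the partial sums are
      combined through the reduction clause. */
#  pragma omp parallel for num_threads(thread_count) reduction(+: approx)
   for (int i = 1; i <= n - 1; i++)
      approx += f(a + i * h);
   approx = h * approx;

   printf("Estimate of the integral: %f\n", approx);
   return 0;
}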
 
Example 3: calculate π
 
#include "stdafx.h"
#ifdef _OPENMP
#include <omp.h>
#endif
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

void PI(double a, double b, int numIntervals, double* global_result_p);  /* defined on the next slide */

int main(int argc, char* argv[]) {
   double global_result = 0.0;
   volatile DWORD dwStart;

   int n = 100000000;
   printf("numberInterval %d \n", n);
   int numThreads = strtol(argv[1], NULL, 10);
   dwStart = GetTickCount();
#  pragma omp parallel num_threads(numThreads)
   PI(0, 1, n, &global_result);

   printf("number of threads %d \n", numThreads);
   printf("Pi = %f \n", global_result);
   printf_s("milliseconds %d \n", GetTickCount() - dwStart);
}
 
Example 3: calculate π
 
void PI(double a, double b, int numIntervals, double* global_result_p) {
   int i;
   double x, my_result, sum = 0.0, interval, local_a, local_b, local_numIntervals;
   int myThread = omp_get_thread_num();
   int numThreads = omp_get_num_threads();

   interval = (b - a) / (double) numIntervals;
   local_numIntervals = numIntervals / numThreads;   /* assumes numIntervals is divisible by numThreads */
   local_a = a + myThread * local_numIntervals * interval;
   local_b = local_a + local_numIntervals * interval;
   sum = 0.0;
   for (i = 0; i < local_numIntervals; i++) {
      x = local_a + i * interval;
      sum = sum + 4.0 / (1.0 + x * x);
   }
   my_result = interval * sum;

   /* only one thread at a time adds its partial result to the shared total */
#  pragma omp critical
   *global_result_p += my_result;
}
 
A Programmer’s View of OpenMP
 
OpenMP is a portable, threaded, shared-memory
programming 
specification
 with “light” syntax
-
Exact behavior depends on OpenMP implementation!
-
Requires compiler support (
C/C++
 or Fortran)
OpenMP will:
-
Allow a programmer to separate a program into 
serial regions
and 
parallel regions, 
rather than concurrently-executing
threads
.
-
Hide stack management
-
Provide synchronization constructs
OpenMP will not:
-
Parallelize automatically
-
Guarantee speedup
-
Provide freedom from data races
 
08/30/2012
 
CS4230
 
21
 
OpenMP runtime library, Query Functions
 
omp_get_num_threads
:
Returns the number of threads currently in the team executing the
parallel region from which it is called
int omp_get_num_threads(void);
 
omp_get_thread_num:
Returns the thread number, within the team, which lies between
0 and omp_get_num_threads() - 1, inclusive. The master thread of the
team is thread 0
int omp_get_thread_num(void);
 
08/30/2012
 
CS4230
 
22
 
Impact of Scheduling Decision
 
Load balance
-
Same work in each iteration?
-
Processors working at same speed?
Scheduling overhead
-
Static decisions are cheap because they require no run-time
coordination
-
Dynamic decisions have overhead that is impacted by
complexity and frequency of decisions
Data locality
-
Particularly within cache lines for small chunk sizes
-
Also impacts data reuse on same processor
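The slides discuss this trade-off without showing code; a small, self-contained sketch of how it maps onto OpenMP's schedule clause (the array sizes, chunk sizes, and work() function here are illustrative assumptions):

#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>
#endif

#define N 1000

/* Work that grows with i, so iterations are deliberately uneven. */
static double work(int i) {
    double s = 0.0;
    for (int j = 0; j < i; j++) s += 1.0 / (j + 1.0);
    return s;
}

int main(void) {
    static double a[N], b[N];

    /* Uniform iterations: static scheduling is cheap (no run-time coordination). */
#   pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Uneven iterations: dynamic scheduling trades overhead for load balance. */
#   pragma omp parallel for schedule(dynamic, 8)
    for (int i = 0; i < N; i++)
        b[i] = work(i);

    printf("a[N-1] = %f, b[N-1] = %f\n", a[N - 1], b[N - 1]);
    return 0;
}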
 
09/04/2012
 
CS4230
 
09/04/2012
 
CS4230
 
Summary of Lecture
 
OpenMP, data-parallel constructs only
-
Task-parallel constructs later
What’s good?
-
Small changes are required to produce a parallel program from
sequential (parallel formulation)
-
Avoid having to express low-level mapping details
-
Portable and scalable, correct on 1 processor
What is missing?
-
Not completely natural if want to write a parallel code from
scratch
-
Not always possible to express certain common parallel
constructs
-
Locality management
-
Control of performance
 
Exercise

(1) Read examples 1-3. Compile them and run them, respectively.
 
(2) Revise the programs in (1) using the reduction clause and using a loop, respectively.
 
(3) For each value of N obtained in (2), get the running time when the number of threads is
1, 2, 4, 8, 16, respectively, find the speed-up rate (notice that the ideal rate is the number of
threads), investigate the change in the speed-up rates, and discuss the reasons.
 
(4) Suppose matrix A and matrix B are saved in two-dimensional arrays. Write OpenMP
programs for A+B and A*B, respectively. Run them using different numbers of threads, and
find the speed-up rate.