2.3 Parallelizing a Simple Loop
Enough of high-level concepts: let us look at a simple parallel program
that shows how to execute a simple loop in parallel. Consider Example
2.2: a multiply-add, or saxpy loop as it is often called (for “single-precision
a*x plus y”). In a true saxpy the variable y is an array, but for simplicity
here we have y as just a scalar variable.
Example 2.2. A simple saxpy-like loop.

      subroutine saxpy(z, a, x, y, n)
      integer i, n
      real z(n), a, x(n), y

      do i = 1, n
         z(i) = a * x(i) + y
      enddo
      return
      end
The loop in Example 2.2 has no dependences. In other words, the result
of one loop iteration does not depend on the result of any other iteration.
This means that two different iterations could be executed simultaneously
by two different processors. We parallelize this loop using the parallel do
construct as shown in Example 2.3.
Example 2.3. The saxpy-like loop parallelized using OpenMP.

      subroutine saxpy(z, a, x, y, n)
      integer i, n
      real z(n), a, x(n), y

!$omp parallel do
      do i = 1, n
         z(i) = a * x(i) + y
      enddo
      return
      end
As we can see, the only change to the original program is the addition
of the parallel do directive. This directive must be followed by a do loop
construct and specifies that the iterations of the do loop be executed con-
currently across multiple threads. An OpenMP compiler must create a set
of threads and distribute the iterations of the do loop across those threads
for parallel execution.
Before describing the runtime execution in detail, notice the minimal
changes to the original sequential program. Furthermore, the original pro-
gram remains “unchanged.” When compiled using a non-OpenMP com-
piler, the parallel do directive is simply ignored, and the program
continues to run serially and correctly.
2.3.1 Runtime Execution Model of an OpenMP Program
Let us examine what happens when the saxpy subroutine is invoked.
This is most easily explained through the execution diagram depicted in
Figure 2.2, which presents the execution of our example on four threads.
Each vertical line represents a thread of execution. Time is measured
along the vertical axis and increases as we move downwards along that
axis. As we can see, the original program executes serially, that is, with a
single thread, referred to as the “master thread” in an OpenMP program.
This master thread invokes the saxpy subroutine and encounters the paral-
lel do directive. At this point the master thread creates some additional
threads (three in this example), and together with these additional threads
(often referred to as “slave threads”) forms a team of four parallel threads.
These four threads divide the iterations of the do loop among themselves,
with each thread executing a subset of the total number of iterations.
There is an implicit barrier at the end of the parallel do construct. There-
fore, after finishing its portions of the iterations, each thread waits for the
remaining threads to finish their iterations. Once all the threads have fin-
ished and all iterations have been executed, the barrier condition is com-
plete and all threads are released from the barrier. At this point, the slave
threads disappear and the master thread resumes execution of the code
past the do loop of the parallel do construct.
A “thread” in this discussion refers to an independent locus of control
that executes within the same shared address space as the original
sequential program, with direct access to all of its variables. OpenMP does not
specify the underlying execution vehicle; that is exclusively an implemen-
tation issue. An OpenMP implementation may choose to provide this
abstraction in any of multiple ways—for instance, one implementation
may map an OpenMP thread onto an operating system process, while
another may map it onto a lightweight thread such as a Pthread. The
choice of implementation does not affect the thread abstraction that we
have just described.
A second issue is the manner in which iterations of the do loop are
divided among the multiple threads. The iterations of the do loop are not
replicated; rather, each thread is assigned a unique and distinct set of iter-
ations to execute. Since the iterations of the do loop are assumed to be
independent and can execute concurrently, OpenMP does not specify how
the iterations are to be divided among the threads; this choice is left to the
OpenMP compiler implementation. However, since the distribution of loop
iterations across threads can significantly affect performance, OpenMP
does supply additional attributes that can be provided with the parallel do
directive and used to specify how the iterations are to be distributed across
threads. These mechanisms are discussed later in Chapters 3 and 6.
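As a preview of those mechanisms, one such attribute is the schedule clause. The sketch below applies it to the saxpy loop; the dynamic policy and the chunk size of 100 are arbitrary illustrative choices, not a recommendation, and the clause is described properly in the later chapters mentioned above.

!$omp parallel do schedule(dynamic, 100)
      do i = 1, n
         z(i) = a * x(i) + y
      enddo

With this clause, each thread repeatedly grabs the next available chunk of 100 iterations rather than receiving a fixed subset of the iterations up front.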
2.3.2 Communication and Data Scoping
The shared memory model within OpenMP prescribes that multiple
OpenMP threads execute within the same shared address space; we now
examine the data references in the previous example within this memory
model.

Figure 2.2: Runtime execution of the saxpy parallel program. The master
thread executes the serial portion of the code, enters the saxpy
subroutine, and encounters the parallel do directive, at which point it
creates the slave threads. The master and slave threads then divide the
iterations of the parallel do loop and execute them concurrently; at the
implicit barrier each thread waits for all threads to finish their
iterations; finally, the master thread resumes execution after the do
loop and the slave threads disappear.
Each iteration of the loop reads the scalar variables a and y, as well as
an element of the array x; it updates the corresponding element of the
array z. What happens when multiple iterations of the loop execute con-
currently on different threads? The variables that are being read—a, y, and
x—can be read directly from shared memory since their values remain
unchanged for the duration of the loop. Furthermore, since each iteration
updates a distinct element of z, updates from different iterations are really
to different memory locations and do not step over each other; these
updates too can be made directly to shared memory. However, the loop
index variable i presents a problem: since each iteration needs a distinct
value for the loop index variable, multiple iterations cannot use the same
memory location for i and still execute concurrently.
This issue is addressed in OpenMP through the notion of private vari-
ables. For a parallel do directive, in OpenMP the default rules state that the
do loop index variable is private to each thread, and all other variable ref-
erences are shared. We illustrate this further in Figure 2.3. The top of the
figure illustrates the data context during serial execution. Only the master
thread is executing, and there is a single globally shared instance of each
variable in the program as shown. All variable references made by the sin-
gle master thread refer to the instance in the global address space.
The bottom of the figure illustrates the context during parallel execu-
tion. As shown, multiple threads are executing concurrently within the
same global address space used during serial execution. However,  along
with the global memory as before, every thread also has a private copy of
the variable i within its context. All variable references to variable i within
a thread refer to the private copy of the variable within that thread; refer-
ences to variables other than i, such as a, x, and so on, refer to the in-
stance in global shared memory.
Let us describe the behavior of private variables in more detail. As
shown in the figure, during parallel execution each thread works with a
new, private instance of the variable i. This variable is newly created
within each thread at the start of the parallel construct parallel do. Since a
private variable gets a new memory location with each thread, its initial
value is undefined within the parallel construct. Furthermore, once the
parallel do construct has completed and the master thread resumes serial
execution, all private instances of the variable i disappear, and the master
thread continues execution as before, within the shared address space
including i. Since the private copies of the variable i have disappeared and
we revert back to the single global address space, the value of i is assumed
to be undefined after the parallel construct. 

This example was deliberately chosen to be simple and does not need
any explicit scoping attributes. The scope of each variable is therefore
automatically determined by the default OpenMP rules. Subsequent exam-
ples in this chapter will present explicit data scoping clauses that may be
provided by the programmer. These clauses allow the programmer to eas-
ily specify the sharing properties of variables in the program for a parallel
construct, and depend upon the implementation to provide the desired
behavior.
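For instance, the default scoping of the saxpy loop could be spelled out with such clauses. The sketch below is equivalent to relying on the defaults; the private and shared clauses themselves are presented later in this chapter and in Chapter 3.

!$omp parallel do private(i) shared(z, a, x, y, n)
      do i = 1, n
         z(i) = a * x(i) + y
      enddo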
2.3.3 Synchronization in the Simple Loop Example
Our simple example did not include any explicit synchronization con-
struct; let us now examine the synchronization requirements of the code
and understand why it still executes correctly.
Synchronization is primarily used to control access to shared objects.
There are two potential synchronization requirements in this example.
First, the shared variable z is modified by multiple threads. However,
recall that since each thread modifies a distinct element of z, there are no
data conflicts and the updates can proceed concurrently without
synchronization.

Figure 2.3: The behavior of private variables in an OpenMP program.
During serial execution (master thread only), there is a single instance
of each variable (z, a, x, y, n, and i) in global shared memory, and all
data references made by the master thread are to these global shared
instances. During parallel execution (multiple threads), each thread has
a private copy of i; references to i within a thread go to that private
copy, while references to z, a, x, y, and n still go to the global
shared instances.
The second requirement is that all the values for the array z must have
been updated when execution continues after the parallel do loop. Other-
wise the master thread (recall that only the master thread executes the
code after the parallel do construct) may not see the “latest” values of z in
subsequent code because there may still be slave threads executing their
sets of iterations. This requirement is met by the following property of the
parallel do construct in OpenMP: the parallel do directive has an implicit
barrier at its end—that is, each thread waits for all other threads to com-
plete their set of iterations. In particular the master thread waits for all
slave threads to complete, before resuming execution after the parallel do
construct. This in turn guarantees that all iterations have completed, and
all the values of z have been computed.
The default properties of the parallel do construct obviated the need
for explicit synchronization in this example. Later in the chapter we
present more complex codes that illustrate the synchronization constructs
provided in OpenMP.
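To make the role of the implicit barrier concrete, the following sketch extends the saxpy routine with a serial summation over z after the parallel loop; the routine name saxpy_sum and the total argument are hypothetical additions for illustration, not part of the original example. Because of the implicit barrier, the summation is guaranteed to see every updated element of z.

      subroutine saxpy_sum(z, a, x, y, n, total)
      integer i, n
      real z(n), a, x(n), y, total

!$omp parallel do
      do i = 1, n
         z(i) = a * x(i) + y
      enddo
! Only the master thread executes past this point, and the implicit
! barrier guarantees that every element of z has already been computed
! before it is read here.
      total = 0.0
      do i = 1, n
         total = total + z(i)
      enddo
      return
      end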
2.3.4 Final Words on the Simple Loop Example
The kind of parallelism exposed in this example is known as loop-level
parallelism. As we saw, this type of parallelism is relatively easy to express
and can be used to parallelize large codes in a straightforward manner sim-
ply by incrementally parallelizing individual loops, perhaps one at a time.
However, loop-level parallelism does have its limitations. Applications that
spend substantial portions of their execution time in noniterative (i.e., non-
loop) constructs are less amenable to this form of parallelization. Further-
more, each parallel loop incurs the overhead for joining the threads at the
end of the loop. As described above, each join is a synchronization point
where all the threads must wait for the slowest one to arrive. This has a
negative impact on performance and scalability since the program can only
run as fast as the slowest thread in each parallel loop. 
Nonetheless, if the goal is to parallelize a code for a modest-size mul-
tiprocessor, loop-level parallelism remains a very attractive approach. In a
later section we will describe a more general method for parallelizing
applications that can be used for nonloop constructs as well. First, how-
ever, we will consider a slightly more complicated loop and further inves-
tigate the issues that arise in parallelizing loops.

2.4 A More Complicated Loop
It would be nice if all loops were as simple as the one in Example 2.3;
however, that is seldom the case. In this section we look at a slightly more
complicated loop and how OpenMP is used to address the new issues that
arise. The loop we examine is the outer loop in a Mandelbrot generator. A
Mandelbrot generator is simply a program that determines which points in
a plane belong to the Mandelbrot set. This is done by computing an itera-
tive equation for each point we consider. The result of this calculation can
then be used to color the corresponding pixel to generate the ubiquitous
Mandelbrot image. This is a nice example because the Mandelbrot image
is a visual representation of the underlying computation. However, we
need not overly concern ourselves with the mechanics of computing the
Mandelbrot set but rather focus on the structure of the loop.
Example 2.4. Mandelbrot generator: serial version.

      real*8 x, y
      integer i, j, m, n, maxiter
      integer depth(*, *)
      integer mandel_val
      ...
      maxiter = 200
      do i = 1, m
          do j = 1, n
              x = i/real(m)
              y = j/real(n)
              depth(j, i) = mandel_val(x, y, maxiter)
          enddo
      enddo
Example 2.4 presents the sequential code for a Mandelbrot generator.
For simplicity we have left out the code for evaluating the Mandelbrot
equation (it is in the function mandel_val). The code we present is fairly
straightforward: we loop for m points in the i direction and n points in the
j direction, generate an x and y coordinate for each point (here restricted
to the range (0,1]), and then compute the Mandelbrot equation for the
given coordinates. The result is assigned to the two-dimensional array
depth. Presumably this array will next be passed to a graphics routine for
drawing the image.
Our strategy for parallelizing this loop remains the same as for the
saxpy loop—we would like to use the parallel do directive. However, we
must first convince ourselves that different iterations of the loop are actu-
ally independent and can execute in parallel, that is, that there are no data
dependences in the loop from one iteration to another. Our brief descrip-
tion of a Mandelbrot generator would indicate that there are no depen-
dences. In other words, the result for computing the Mandelbrot equation
on a particular point does not depend on the result from any other point.
This is evident in the code itself. The function mandel_val only takes x, y,
and maxiter as arguments, so it can only know about the one point it is
working on. If mandel_val included i or j as arguments, then there might
be reason to suspect a dependence because the function could conceivably
reference values for some other point than the one it is currently working
on. Of course in practice we would want to look at the source code for
mandel_val to be absolutely sure there are no dependences. There is
always  the possibility that the function modifies global structures not
passed through the argument list, but that is not the case in this example.
As a matter of jargon, a function such as this one that can safely be exe-
cuted in parallel is referred to as being thread-safe. More interesting from a
programming point of view, of course, are those functions that are not
inherently thread-safe but must be made so through the use of synchroni-
zation.
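As a hypothetical illustration (not part of the book's example), a function that updates saved global state is not thread-safe. If mandel_val kept a call counter like the one in the sketch below, concurrent calls from different threads could race on ncalls and lose updates; making such a function thread-safe would require one of the synchronization constructs presented later in this chapter.

      integer function count_calls()
      integer ncalls
      save ncalls
      data ncalls /0/
! Two threads may read the same old value of ncalls and both write back
! old + 1, losing an update; this unsynchronized update of shared state
! is what makes the function unsafe to call concurrently.
      ncalls = ncalls + 1
      count_calls = ncalls
      return
      end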
Having convinced ourselves that there are no dependences, let us look
more closely at what the loop is doing and how it differs from the saxpy
loop of Example 2.3. The two loops differ in some fundamental ways.
Additional complexities in this loop include a nested loop, three more sca-
lar variables being assigned ( j, x, and y), and a function call in the inner-
most loop. Let us consider each of these added complexities in terms of
our runtime execution model.
Our understanding of the parallel do/end parallel do directive pair is
that it will take the iterations of the enclosed loop, divide them among
some number of parallel threads, and let each parallel thread execute its
set of iterations. This does not change in the presence of a nested loop or a
called function. Each thread will simply execute the nested loop and call
the function in parallel with the other threads. So as far as control con-
structs are concerned, given that the called function is thread-safe, there is
no additional difficulty on account of the added complexity of the loop.
As one may suspect, things are not so simple with the data environ-
ment. Any time we have variables being assigned values inside a parallel
loop, we need to concern ourselves with the data environment. We know
from the saxpy example that by default the loop index variable i will be
private and everything else in the loop will be shared. This is appropriate
for m and n since they are only read across different iterations and don’t
change in value from one iteration to the next. Looking at the other
variables, though, the default rules are not accurate. We have a nested loop
index variable j, and as a rule loop index variables should be private. The
reason of course is that we want each thread to work on its own set of
iterations. If j were shared, we would have the problem that there would be
just one "global" value (meaning the same for all threads) for j.
Consequently, each time a thread would increment j in its nested loop, it
would also inadvertently modify the index variable for all other threads as
well. So j must be a private variable within each thread. We do this with
the private clause by simply specifying private(j) on the parallel do
directive. Since this is almost always the desired behavior, loop index
variables in Fortran are treated as having private scope by default, unless
specified otherwise.[1]
What about the other variables, x and y? The same reasoning as above
leads us to conclude that these variables also need to be private. Consider
that i and j are private, and x and y are calculated based on the values of i
and j; therefore it follows that x and y should be private. Alternatively,
remember that x and y store the coordinates for the point in the plane for
which we will compute the Mandelbrot equation. Since we wish each
thread to work concurrently on a set of points in the plane, we need to
have “parallel” (meaning here “multiple”) storage for describing a point
such that each thread can work independently. Therefore we must specify
x and y to be private.
Finally we must consider synchronization in this loop. Recall that the
main use of synchronization is to control access to shared objects. In the
saxpy example, the only synchronization requirement was an implicit bar-
rier at the close of the parallel do. This example is no different. There are
two shared objects, maxiter and depth. The variable maxiter is only read
and never written (we would need to check the mandel_val function to
confirm this); consequently there is no reason or need to synchronize ref-
erences to this shared variable. On the other hand, the array depth is mod-
ified in the loop. However, as with the saxpy example, the elements of the
array are modified independently by each parallel thread, so the only syn-
chronization necessary is at the end of the loop. In other words, we cannot
allow the master thread to reference the depth array until all the array ele-
ments have been updated. This necessitates a barrier at the end of the
loop, but we do not have to explicitly insert one since it is implicit in the
end parallel do directive.
[1] This is actually a tricky area where the Fortran default rules differ
from the C/C++ rules. In Fortran, the compiler will make j private by
default, but in C/C++ the programmer must declare it private. Chapter 3
discusses the default rules for Fortran and C/C++ in greater detail.

At this point we are ready to present the parallelized version of Exam-
ple 2.4. As with the saxpy example, we parallelized the loop in Example
2.5 with just the parallel do directive. We also see our first example of the
private clause. There are many more ways to modify the behavior of the
parallel do directive; these are described in Chapter 3.
      maxiter = 200
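! The source extract is truncated at this point. Based on the preceding
! discussion, the remainder of Example 2.5 presumably applies the
! parallel do directive with j, x, and y scoped private; the following
! is a sketch of that parallelization, not the book's verbatim listing.
!$omp parallel do private(j, x, y)
      do i = 1, m
          do j = 1, n
              x = i/real(m)
              y = j/real(n)
              depth(j, i) = mandel_val(x, y, maxiter)
          enddo
      enddo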