v1.0 2015-11

1. Preface

The goal of this book is to introduce the reader to the following:

  1. The concepts of parallel computing,

  2. Some basic parallel algorithm design principles and techniques,

  3. Real-world performance and efficiency concerns in writing parallel software and techniques for dealing with them, and

  4. Parallel programming in C++.

For parallel programming in C++, we use a library, called PASL, that we have been developing over the past 5 years. The implementation of the library uses advanced scheduling techniques to run parallel programs efficiently on modern multicores and provides a range of utilities for understanding the behavior of parallel programs.

PASL stands for Parallel Algorithm Scheduling Library. It also sounds a bit like the French phrase "pas seul" (pronounced "pa-sole"), meaning "not alone".

We expect the instructions in this book to allow the reader to write performant parallel programs at a relatively high level (essentially at the same level of abstraction as ordinary C++ code) without having to worry too much about lower-level details, such as machine-specific optimizations, that might otherwise be necessary.

All code associated with this book can be found at the Github repository linked by the following URL:

This code base includes the examples presented in the book; see the file minicourse/examples.hpp.

Some of the material in this book is based on the course, 15-210, co-taught with Guy Blelloch at CMU.

This book does not focus on the design and analysis of parallel algorithms. The interested reader can find more details on this topic in the book linked here.

2. C++ Background

The material is entirely based on C++ and a library for writing parallel programs in C++. We use recent features of C++ such as closures or lambda expressions and templates. A deep understanding of these topics is not necessary to follow the course notes, because we explain them at a high level as we go, but such prior knowledge might be helpful; some pointers are provided below.

2.1. Template metaprogramming

Templates are C++'s way of providing for parametric polymorphism, which allows using the same code at multiple types. For example, in modern functional languages such as the ML family or Haskell, you can write a function $\lambda~x.x$ as an identity function that returns its argument for any type of $x$. You don't have to rewrite the function for every type at which you plan to apply it. Since functional languages such as ML and Haskell rely on type inference and have powerful type systems, they can infer from your code the most general type (within the constraints of the type system). For example, the function $\lambda~x.x$ can be given the type $\forall \alpha. \alpha \rightarrow \alpha$. This type says that the function works for any type $\alpha$ and, given an argument of type $\alpha$, it returns a value of type $\alpha$.

C++ provides for polymorphism with templates. In its most basic form, a template is a class or function declaration that is explicitly stated to be polymorphic by making the type variable explicit. Since C++ does not in general perform type inference (in a rigorous sense of the word), it requires some help from the programmer.

For example, the following code defines an array class that is parametric in the type of its elements. The declaration template <class T> says that the declaration of class array, which follows, is parameterized by the identifier T. The definition of class array then uses T as a type variable. For example, the array holds a pointer to a sequence of elements of type T, and the sub function returns an element of type T.

template <class T>
class array {
  public:
    array (int size) { a = new T[size]; }
    T sub (int i) { return a[i]; }

  private:
    T* a;
};

Note that the only part of the syntax template <class T> that is changeable is the identifier T. In other words, you should think of the syntax template <class T> as a binding form that allows you to pick an identifier (in this case T). You might ask why the type identifier/variable T is a class. This is a good question. The authors find it most helpful to not think much about such questions, especially in the context of the C++ language.

Once defined, a template class can be instantiated at different types by using the < > syntax. For example, we can define different arrays such as the following.

array<int> myFavoriteNumbers(7);
array<char*> myFavoriteNames(7);

Again, since C++ does not perform type inference for class instances, the C++ compiler expects the programmer to eliminate parametricity explicitly by specifying the argument type.

It is also possible to define polymorphic or generic functions. For example, the following declaration defines a generic identity function.

template <class T>
T identity(T x) { return x;}

Once defined, this function can be used without explicitly specializing it at various types. In contrast to templated classes, C++ does provide some type inference for calls to templated functions. So generic functions can be specialized implicitly, as shown in the examples below.

int i = identity(3);
const char* s = identity("template programming can be ugly");

This brief summary of templates should suffice for the purposes of the material covered in this book. Templates are covered in significant detail by many books, blogs, and discussion boards. We refer the interested reader to those sources for further information.

2.2. Lambda expressions

The C++11 reference provides good documentation on lambda expressions.
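
For readers who have not seen the syntax before, the small example below (our own illustration, not taken from the PASL code base) shows the two capture modes used in this book: capturing a local variable by value and capturing it by reference.

#include <iostream>

int main() {
  int x = 10;
  // Captures x by value: the lambda keeps a private copy of x as it is now.
  auto plus_x = [=] (int y) { return x + y; };
  // Captures x by reference: the lambda reads and writes the original x.
  auto add_to_x = [&] (int y) { x += y; };
  add_to_x(5);
  std::cout << x << std::endl;          // prints 15
  std::cout << plus_x(1) << std::endl;  // prints 11: the copy was made when x was 10
  return 0;
}

The branches passed to fork2() throughout this book use the [&] form, so that they can read and write the variables of the enclosing scope.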

3. Chapter: Multithreading

The term multithreading refers to computing with multiple threads of control. Once created, a thread performs a computation by executing the instructions specified until it terminates. A multithreaded computation starts by executing a main thread, which is the thread at which the execution starts. A thread can create or spawn another thread and synchronize with other threads by using a variety of synchronization constructs such as locks, mutexes, synchronization variables, and semaphores.

DAG Representation

A multithreaded computation can be represented by a DAG (Directed Acyclic Graph), also written more simply as dag, of vertices. The figure below shows an example multithreaded computation and its dag. Each vertex represents the execution of an instruction, such as an addition, a multiplication, a memory operation, a (thread) spawn operation, or a synchronization operation. A vertex representing a spawn operation has outdegree two. A synchronization operation waits for an operation belonging to another thread to complete, and thus a vertex representing a synchronization operation has indegree two.

Figure 1. A multithreaded computation.

Throughout this book, we make two assumptions about the structure of the dag:

  1. Each vertex has outdegree at most two.

  2. The DAG has exactly one root vertex with indegree zero and one final vertex with outdegree zero. The root is the first instruction of the root thread.

The outdegree assumption follows naturally from the fact that each vertex represents an instruction, which can create at most one thread.

Execution

Execution of a multithreaded computation or its dag traverses vertices in some partial order and executes them. During an execution, we say that a vertex in the dag is ready if all of its ancestors have been executed. Similarly, we say that a thread is ready if it contains a ready vertex. Note that a thread can contain only one ready vertex at any time.

Multithreaded programs are executed by using a scheduler that assigns vertices of the dag to processes. A process here is a generic term that refers to any kind of system-level thread.

In this book, we are interested in a class of scheduling algorithms that maintain a work pool consisting of ready threads. Execution starts with the root thread in the pool and ends when the final vertex is executed.

To obtain work, a process removes a thread from the pool and executes its ready vertex. We refer to the thread executed by a process as its assigned thread. When executed, the ready vertex can make the next vertex of the thread ready, which then also gets executed, and so on, until one of the following synchronization actions occurs.

  1. Die: The process executes the last vertex of the thread, causing the thread to die. The process then obtains other work.

  2. Block: The assigned vertex executes but the next vertex does not become ready. This blocks the thread and thus the process obtains other work.

  3. Enable: The assigned vertex makes ready the continuation of the vertex and unblocks another previously blocked thread by making a vertex from that thread ready. In this case, the process inserts one of the two ready threads into the work pool and continues to execute the other.

  4. Spawn: The assigned vertex spawns another thread. As in the previous case, the process inserts one thread into the work pool and continues to execute the other.

These actions are not mutually exclusive. For example, a thread may spawn/enable a thread and die. In this case, the process performs the corresponding steps for each action.

Enabling Tree

Consider the execution of a dag. If the execution of a vertex $u$ enables another vertex $v$, then we call the edge $(u,v)$ an enabling edge and we call $u$ the designated parent of $v$.

Note that any vertex other than the root vertex has one designated parent. Thus the subgraph induced by the enabling edges is a rooted tree that we call the enabling tree.

Cost Model: Work and Span

For analyzing the efficiency and performance of multithreaded programs, we use several cost measures; the most important ones are work and span. We define the work of a computation as the number of vertices in the dag and the span as the length of the longest path in the dag.
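
As a small, hypothetical example of these definitions, consider a dag consisting of a spawn vertex, two independent branches of 10 and 20 vertices, and a final synchronization vertex. Counting vertices, we have

$$ \begin{array}{lcl} \mbox{work} & = & 1 + 10 + 20 + 1 = 32 \\ \mbox{span} & = & 1 + 20 + 1 = 22 \end{array} $$

because the work counts every vertex of the dag, whereas the span counts only the vertices on the longest path, which here runs through the 20-vertex branch.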

3.1. Writing Multithreaded Programs: Pthreads

Multithreaded programs can be written using a variety of language abstractions and interfaces. One of the most widely used is the POSIX Threads, or Pthreads, interface, which specifies a standardized C-language programming interface in the IEEE POSIX 1003.1c standard. Pthreads provides a rich interface that enables the programmer to create multiple threads of control that can synchronize by using nearly the whole range of the synchronization facilities mentioned above.

Hello world with Pthreads

An example Pthreads program is shown below. The main thread (executing function main) creates 8 child threads and terminates. Each child in turn runs the function helloWorld and immediately terminates. Since the main thread does not wait for the children to terminate, it may terminate before the children do, depending on how threads are scheduled on the available processors.

#include <iostream>
#include <cstdlib>
#include <pthread.h>

using namespace std;

#define NTHREADS 8

void *helloWorld(void *threadid)
{
   long tid;
   tid = (long)threadid;
   cout << "Hello world! It is me, 00" << tid << endl;
   pthread_exit(NULL);
}

int main ()
{
   pthread_t threads[NTHREADS];
   int rc;
   long i;
   for( i=0; i < NTHREADS; i++ ){
      cout << "main: creating thread 00" << i << endl;
      rc = pthread_create(&threads[i], NULL, helloWorld, (void *) i);
      if (rc) {
         cout << "Error: unable to create thread," << rc << endl;
         exit(-1);
      }
   }
   pthread_exit(NULL);
}

When executed, this program may print the following.

main: creating thread 000
main: creating thread 001
main: creating thread 002
main: creating thread 003
main: creating thread 004
main: creating thread 005
main: creating thread 006
main: creating thread 007
Hello world! It is me, 000
Hello world! It is me, 001
Hello world! It is me, 002
Hello world! It is me, 003
Hello world! It is me, 004
Hello world! It is me, 005
Hello world! It is me, 006
Hello world! It is me, 007

But that would be unlikely; a more likely output would look like this:

main: creating thread 000
main: creating thread 001
main: creating thread 002
main: creating thread 003
main: creating thread 004
main: creating thread 005
main: creating thread 006
main: creating thread 007
Hello world! It is me, 000
Hello world! It is me, 001
Hello world! It is me, 006
Hello world! It is me, 003
Hello world! It is me, 002
Hello world! It is me, 005
Hello world! It is me, 004
Hello world! It is me, 007

And it may even look like this:

main: creating thread 000
main: creating thread 001
main: creating thread 002
main: creating thread 003
Hello world! It is me, 000
Hello world! It is me, 001
Hello world! It is me, 003
Hello world! It is me, 002
main: creating thread 004
main: creating thread 005
main: creating thread 006
main: creating thread 007
Hello world! It is me, 006
Hello world! It is me, 005
Hello world! It is me, 004
Hello world! It is me, 007

3.2. Writing Multithreaded Programs: Structured Multithreading

Interfaces such as Pthreads enable the programmer to create a wide variety of multithreaded computations that can be structured in many different ways. Large classes of interesting multithreaded computations, however, can be expressed using a more structured approach, where threads are restricted in the way that they synchronize with other threads. One such interesting class of computations is fork-join computations, where a thread can spawn or "fork" another thread and can "join" with another existing thread. Joining a thread is the only mechanism through which threads synchronize. The figure below illustrates a fork-join computation. The main thread M forks thread A, which then spawns thread B. Thread B then joins thread A, which then joins thread M.

Figure 2. A multithreaded fork-join computation.

In addition to fork-join, there are other interfaces for structured multithreading such as async-finish, and futures. These interfaces are adopted in many programming languages: the Cilk language is primarily based on fork-join but also has some limited support for async-finish; the X10 language is primarily based on async-finish but also supports futures; the Haskell language provides support for fork-join and futures as well as others; the Parallel ML language as implemented by the Manticore project is primarily based on fork-join parallelism.

3.3. Parallelism versus concurrency

Structured multithreading offers important benefits both in terms of efficiency and expressiveness. Using programming constructs such as fork-join and futures, it is usually possible to write parallel programs such that the program accepts a "sequential semantics" but executes in parallel. The sequential semantics enables the programmer to treat the program as a serial program for the purposes of correctness. A run-time system then creates threads as necessary to execute the program in parallel. This offers the best of both worlds: the programmer can for the most part reason sequentially, but the program executes in parallel. The benefit of structured multithreading in terms of efficiency stems from the fact that threads are restricted in the way that they communicate, which makes it possible to implement them very efficiently.

More precisely, consider some sequential language such as the untyped lambda calculus, with its sequential dynamic semantics specified as a strict, small-step transition relation. We can extend this language with structured multithreading by enriching its syntax with "fork-join" and "futures" constructs. We can then extend the dynamic semantics of the language in two ways: 1) trivially ignore these constructs and execute serially as usual, or 2) execute in parallel by creating parallel threads. We can then show that these two semantics are in fact identical, i.e., that they produce the same value for the same expressions. In other words, we can extend a rich programming language with fork-join and futures and still give the language a sequential semantics. This is a powerful conclusion: it shows that structured multithreading is nothing but an efficiency and performance concern; it can be ignored from the perspective of correctness.

We refer to such languages that accept both a sequential and a multithreaded semantics as parallel languages. Broadly, we use the term parallelism to refer to the idea of computing in parallel by using such structured multithreading constructs. As we shall see, we can write parallel algorithms for many interesting problems; for example, we compute the Fibonacci numbers in parallel later in this book.

While parallel algorithms and applications constitute a large class, they don't cover all applications. Specifically, applications that can be expressed by using richer forms of multithreading, such as those offered by Pthreads, do not always accept a sequential semantics. In such concurrent applications, threads can communicate and coordinate in complex ways to accomplish the intended result. A classic example of concurrency is the "producer-consumer problem", where a producer and a consumer thread coordinate by using a fixed-size buffer of items. The producer fills the buffer with items and the consumer removes items from the buffer, and they coordinate to make sure that the buffer never holds more items than it has room for. We can use operating-system-level processes instead of threads to implement similar concurrent applications.
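
The sketch below (our own, written with the C++ standard library rather than with PASL) illustrates the producer-consumer coordination just described: the producer blocks when the fixed-size buffer is full, the consumer blocks when it is empty, and a mutex protects the buffer itself.

#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

const std::size_t capacity = 4;   // fixed size of the shared buffer
std::queue<int> buffer;           // shared buffer of items
std::mutex m;                     // protects buffer
std::condition_variable not_full, not_empty;

void producer(int nb_items) {
  for (int i = 0; i < nb_items; i++) {
    std::unique_lock<std::mutex> lock(m);
    // Wait until there is room in the buffer.
    not_full.wait(lock, [] { return buffer.size() < capacity; });
    buffer.push(i);
    not_empty.notify_one();
  }
}

void consumer(int nb_items) {
  for (int i = 0; i < nb_items; i++) {
    std::unique_lock<std::mutex> lock(m);
    // Wait until there is an item to remove.
    not_empty.wait(lock, [] { return ! buffer.empty(); });
    int item = buffer.front();
    buffer.pop();
    not_full.notify_one();
    std::cout << "consumed " << item << std::endl;
  }
}

int main() {
  std::thread p(producer, 10);
  std::thread c(consumer, 10);
  p.join();
  c.join();
  return 0;
}

Note that, unlike the fork-join programs in the rest of this book, this program has no sequential semantics: its correctness depends on the coordination between the two threads.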

In other words, parallelism is a property of the hardware or the software platform where the computation takes place, whereas concurrency is a property of the application.

Parallelism and concurrency are orthogonal dimensions in the space of all applications. Some applications are concurrent, some are not. Many concurrent applications can benefit from parallelism. For example, a browser, which is a concurrent application itself, may use a parallel algorithm to perform certain tasks. On the other hand, there is often no need to add concurrency to a parallel application, because doing so unnecessarily complicates software. It can, however, lead to improvements in efficiency.

The following quote from Dijkstra suggests pursuing the approach of making parallelism just a matter of execution (not one of semantics), which is the goal of much of the work on the development of programming languages today. Note that in this particular quote, Dijkstra does not mention that parallel algorithm design requires thinking carefully about parallelism, which is one aspect where parallel and serial computations differ.

From the past terms such as "sequential programming" and "parallel programming" are still with us, and we should try to get rid of them, for they are a great source of confusion. They date from the period that it was the purpose of our programs to instruct our machines, now it is the purpose of the machines to execute our programs. Whether the machine does so sequentially, one thing at a time, or with considerable amount of concurrency, is a matter of implementation, and should not be regarded as a property of the programming language.
Selected Writings on Computing: A Personal Perspective (EWD 508)
— Edsger W. Dijkstra

4. Chapter: Fork-join parallelism

Fork-join parallelism, a fundamental model in parallel computing, dates back to 1963 and has since been widely used in parallel computing. In fork join parallelism, computations create opportunities for parallelism by branching at certain points that are specified by annotations in the program text.

Each branching point forks the control flow of the computation into two or more logical threads. When control reaches the branching point, the branches start running. When all branches complete, the control joins back to unify the flows from the branches. Results computed by the branches are typically read from memory and merged at the join point. Parallel regions can fork and join recursively in the same manner that divide and conquer programs split and join recursively. In this sense, fork join is the divide and conquer of parallel computing.

As we will see, it is often possible to extend an existing language with support for fork-join parallelism by providing libraries or compiler extensions that support a few simple primitives. Such extensions to a language make it easy to derive a sequential program from a parallel program by syntactically substituting the parallelism annotations with corresponding serial annotations. This in turn enables reasoning about the semantics or the meaning of parallel programs by essentially "ignoring" parallelism. This approach to parallel computing is sometimes called implicit parallelism or structured parallelism. Programming languages that support implicit parallelism are called implicitly parallel languages.

PASL is a C++ library that enables writing implicitly parallel programs. In PASL, fork join is expressed by application of the fork2() function. The function expects two arguments: one for each of the two branches. Each branch is specified by one C++ lambda expression.

Example 1. Fork join

In the sample code below, the first branch writes the value 1 into the cell b1 and the second writes the value 2 into the cell b2; at the join point, the sum of the contents of b1 and b2 is written into the cell j.

long b1 = 0;
long b2 = 0;
long j  = 0;

fork2([&] {
  // first branch
  b1 = 1;
}, [&] {
  // second branch
  b2 = 2;
});
// join point
j = b1 + b2;

std::cout << "b1 = " << b1 << "; b2 = " << b2 << "; ";
std::cout << "j = " << j << ";" << std::endl;

Output:

b1 = 1; b2 = 2; j = 3;

When this code runs, the two branches of the fork join are both run to completion. The branches may or may not run in parallel (i.e., on different cores). In general, whether any two such branches run in parallel is decided by the PASL runtime system. The join point is scheduled to run by the PASL runtime only after both branches complete. Before both branches complete, the join point is effectively blocked. Later, we will explain in some more detail the scheduling algorithms that PASL uses to handle such load balancing and synchronization duties.

In fork-join programs, a thread is a sequence of instructions that do not contain calls to fork2(). A thread is essentially a piece of sequential computation. The two branches passed to fork2() in the example above correspond, for example, to two independent threads. Moreover, the statement following the join point (i.e., the continuation) is also a thread.

Note
If the syntax in the code above is unfamiliar, it might be a good idea to review the notes on lambda expressions in C++11. In a nutshell, the two branches of fork2() are provided as lambda-expressions where all free variables are passed by reference.
Note
Fork join of arbitrary arity is readily derived by repeated application of binary fork join. As such, binary fork join is universal because it is powerful enough to generalize to fork join of arbitrary arity.
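
For instance, a three-way fork join can be derived from two nested binary fork joins. The helper below is a sketch of ours, not a PASL primitive; it runs the three branches in parallel and returns only after all three complete.

// Three-way fork join derived from binary fork join.
template <class F1, class F2, class F3>
void fork3(const F1& f1, const F2& f2, const F3& f3) {
  fork2([&] {
    f1();
  }, [&] {
    fork2([&] {
      f2();
    }, [&] {
      f3();
    });
  });
}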

All writes performed by the branches of the binary fork join are guaranteed by the PASL runtime to be committed to memory before the join statement runs. In terms of our code snippet, all writes performed by the two branches of fork2 are committed to memory before the join point is scheduled. The PASL runtime guarantees this property by using a local barrier. Such barriers are efficient, because they involve just a single dynamic synchronization point between at most two processors.

Example 2. Writes and the join statement

In the example below, both writes into b1 and b2 are guaranteed to be performed before the print statement.

long b1 = 0;
long b2 = 0;

fork2([&] {
  b1 = 1;
}, [&] {
  b2 = 2;
});

std::cout << "b1 = " << b1 << "; b2 = " << b2 << std::endl;

Output:

b1 = 1; b2 = 2

PASL provides no guarantee on the visibility of writes between any two parallel branches. In the code just above, for example, writes performed by the first branch (e.g., the write to b1) may or may not be visible to the second, and vice versa.

4.1. Parallel Fibonacci

Now, we have all the tools we need to describe our first parallel code: the recursive Fibonacci function. Although useless in practice because of its inefficiency, this example is the "hello world" program of parallel computing.

Recall that the $n^{th}$ Fibonacci number is defined by the recurrence relation

$$ \begin{array}{lcl} F(n) & = & F(n-1) + F(n-2) \end{array} $$

with base cases

$$ F(0) = 0, \, F(1) = 1 $$

Let us start by considering a sequential algorithm. Following the definition of Fibonacci numbers, we can write the code for (inefficiently) computing the $n^{th}$ Fibonacci number as follows. This function is inefficient because the algorithm takes exponential time, whereas there exist dynamic programming solutions that take linear time; a sketch of one such solution appears after the code below.

long fib_seq (long n) {
  long result;
  if (n < 2) {
    result = n;
  } else {
    long a, b;
    a = fib_seq(n-1);
    b = fib_seq(n-2);
    result = a + b;
  }
  return result;
}
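
For comparison, a linear-time sequential solution maintains just the two most recently computed Fibonacci numbers. The sketch below is our own and is not part of the book's code base.

// Computes the nth Fibonacci number in linear time by keeping only the
// two most recently computed values.
long fib_seq_linear(long n) {
  if (n < 2)
    return n;
  long prev = 0;  // F(0)
  long curr = 1;  // F(1)
  for (long i = 2; i <= n; i++) {
    long next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr;
}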

To write a parallel version, we remark that the two recursive calls are completely independent: they do not depend on each other (neither uses a piece of data generated or written by another). We can therefore perform the recursive calls in parallel. In general, any two independent functions can be run in parallel. To indicate that two functions can be run in parallel, we use fork2().

long fib_par(long n) {
  long result;
  if (n < 2) {
    result = n;
  } else {
    long a, b;
    fork2([&] {
      a = fib_par(n-1);
    }, [&] {
      b = fib_par(n-2);
    });
    result = a + b;
  }
  return result;
}

4.2. Incrementing an array, in parallel

Suppose that we wish to map an array to another by incrementing each element by one. We can write the code for a function map_incr that performs this computation serially.

void map_incr(const long* source, long* dest, long n) {
  for (long i = 0; i < n; i++)
    dest[i] = source[i] + 1;
}
Example 3. Using map_incr.

The code below illustrates an example use of map_incr.

const long n = 4;
long xs[n] = { 1, 2, 3, 4 };
long ys[n];
map_incr(xs, ys, n);
for (long i = 0; i < n; i++)
  std::cout << ys[i] << " ";
std::cout << std::endl;

Output:

2 3 4 5

The serial algorithm above exposes no parallelism, but it is not difficult to give a parallel algorithm for incrementing an array. The code for such an algorithm is given below.

void map_incr_rec(const long* source, long* dest, long lo, long hi) {
  long n = hi - lo;
  if (n == 0) {
    // do nothing
  } else if (n == 1) {
    dest[lo] = source[lo] + 1;
  } else {
    long mid = (lo + hi) / 2;
    fork2([&] {
      map_incr_rec(source, dest, lo, mid);
    }, [&] {
      map_incr_rec(source, dest, mid, hi);
    });
  }
}

It is easy to see that this algorithm has $O(n)$ work and $O(\log{n})$ span.
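
To see why, write $W(n)$ and $S(n)$ for the work and span of map_incr_rec on a range of $n$ elements, and charge constant cost to the fork and to the arithmetic performed in each call. A standard divide-and-conquer analysis gives the recurrences

$$ \begin{array}{lcl} W(n) & = & W(\lceil n/2 \rceil) + W(\lfloor n/2 \rfloor) + O(1) \\ S(n) & = & \max(S(\lceil n/2 \rceil), S(\lfloor n/2 \rfloor)) + O(1) \end{array} $$

which solve to $W(n) = O(n)$ and $S(n) = O(\log{n})$.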

4.3. The sequential elision

In the Fibonacci example, we started with a sequential algorithm and derived a parallel algorithm by annotating independent functions. It is also possible to go the other way and derive a sequential algorithm from a parallel one. As you have probably guessed, this direction is easier, because all we have to do is remove the calls to the fork2 function. The sequential elision of our parallel Fibonacci code can be written by replacing the call to fork2() with a statement that performs the two calls (the arguments of fork2()) sequentially, as follows.

long fib_par(long n) {
  long result;
  if (n < 2) {
    result = n;
  } else {
    long a, b;
    ([&] {
      a = fib_par(n-1);
    })();
    ([&] {
      b = fib_par(n-2);
    })();
    result = a + b;
  }
  return result;
}
Note
Although this code is slightly different from the sequential version that we wrote, it is not too far away, because the only difference is the creation and application of the lambda expressions. An optimizing compiler for C++ can easily "inline" such computations. Indeed, after an optimizing compiler applies certain optimizations, the performance of this code is the same as the performance of fib_seq.

The sequential elision is often useful for debugging and for optimization. It is useful for debugging because it is usually easier to find bugs in sequential runs of parallel code than in parallel runs of the same code. It is useful in optimization because the sequentialized code helps us to isolate the purely algorithmic overheads that are introduced by parallelism. By isolating these costs, we can more effectively pinpoint inefficiencies in our code.

5. Critical Sections and Mutual Exclusion

In a multithreaded program, a critical section is a part of the program that may not be executed by more than one thread at the same time. Critical sections typically contain code that alters shared objects, such as shared (e.g., global) variables. This means that a critical section requires mutual exclusion: only one thread can be inside the critical section at any time.

Since only one thread can be inside a critical section at a time, threads must coordinate to make sure that they don’t enter the critical section at the same time. If threads do not coordinate and multiple threads enter the critical section at the same time, we say that a race condition occurs, because the outcome of the program depends on the relative timing of the threads, and thus can vary from one execution to another. Race conditions are sometimes benign but usually not so, because they can lead to incorrect behavior. Spectacular examples of race conditions' effects include the "Northeast Blackout" of 2003, which affected 45 million people in the US and 10 million people in Canada.

It can be extremely difficult to find a race condition, because of the non-determinacy of execution. A race condition may lead to an incorrect behavior only a tiny fraction of the time, making it extremely difficult to observe and reproduce it. For example, the software fault that led to the Northeast blackout took software engineers "weeks of poring through millions of lines of code and data to find it", according to one of the companies involved.

The problem of designing algorithms or protocols for ensuring mutual exclusion is called the mutual exclusion problem or the critical section problem. There are many ways of solving instances of the mutual exclusion problem, but broadly, we can distinguish two categories: spin locks and blocking locks. The idea in spin locks is to busy wait until the critical section is clear of other threads. Solutions based on blocking locks are similar, except that instead of busy waiting, threads simply block. When the critical section is clear, a blocked thread receives a signal that allows it to proceed. The term mutex, short for "mutual exclusion", is sometimes used to refer to a lock.
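
To illustrate the busy-waiting approach, the sketch below (our own, written with the C++ standard library rather than with any PASL facility) implements a simple spin lock on top of an atomic boolean and uses it to protect the critical section that increments a shared counter.

#include <atomic>
#include <iostream>
#include <thread>

class spin_lock {
public:
  spin_lock() : locked(false) {}
  void lock() {
    // Busy wait until we succeed in changing locked from false to true.
    while (locked.exchange(true, std::memory_order_acquire)) { }
  }
  void unlock() {
    locked.store(false, std::memory_order_release);
  }
private:
  std::atomic<bool> locked;
};

spin_lock counter_lock;
long counter = 0;

void increment_many(long nb) {
  for (long i = 0; i < nb; i++) {
    // The increment of the shared counter is the critical section.
    counter_lock.lock();
    counter++;
    counter_lock.unlock();
  }
}

int main() {
  std::thread t1(increment_many, 1000000);
  std::thread t2(increment_many, 1000000);
  t1.join();
  t2.join();
  std::cout << "counter = " << counter << std::endl;  // always prints 2000000
  return 0;
}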

Mutual exclusion problems have been studied extensively in the context of several areas of computer science. For example, in operating systems research, processes, which like threads are independent threads of control belonging usually but not always to different programs, can share certain system resources. To enable such sharing safely and efficiently, researchers have proposed various forms of locks, such as semaphores, which accept both a busy-waiting and a blocking semantics. Another class of locks, called condition variables, enables blocking synchronization by conditioning on the value of a variable.

5.1. Parallelism and Mutual Exclusion

In parallel programming, mutual exclusion problems do not have to arise. For example, if we program in a purely functional language extended with structured multithreading primitives such as fork-join and futures, programs remain purely functional and mutual-exclusion problems, and hence race conditions, do not arise. If we program in an imperative language, however, where memory is always a shared resource, even when it is not intended to be so, threads can easily share memory objects, even unintentionally, leading to race conditions.

Example 4. Writing to the same location in parallel.

In the code below, both branches of fork2 are writing into b. What should the output of this program be?

long b = 0;

fork2([&] {
  b = 1;
}, [&] {
  b = 2;
});

std::cout << "b = " << std::endl;

At the time of the print, the contents of b is determined by the last write. Thus, depending on which of the two branches performs its write last, we can see both possibilities:

Output:

b = 1

Output:

b = 2
Example 5. Fibonacci

Consider the following alternative implementation of the Fibonacci function. By "inlining" the plus operation in both branches, the programmer got rid of the addition operation after the fork2.

long fib_par_racy(long n) {
  long result = 0;
  if (n < 2) {
    result = n;
  } else {
    fork2([&] {
      result += fib_par_racy(n-1);
    },[&] {
      result += fib_par_racy(n-2);
    });
  }
  return result;
}

This code is not correct because it has a race condition.

As the example shows, separate threads are updating the value of result, but it might look like this is not a race condition, because the update consists of an addition operation, which reads the value of result and then writes to it. The race condition might be easier to see if we expand out the applications of the += operator.

long fib_par_racy(long n) {
  long result = 0;
  if (n < 2) {
    result = n;
  } else {
    fork2([&] {
      long a1 = fib_par_racy(n-1);
      long a2 = result;
      result = a1 + a2;
    },[&] {
      long b1 = fib_par_racy(n-2);
      long b2 = result;
      result = b1 + b2;
    });
  }
  return result;
}

When written in this way, it is clear that these two parallel threads are not independent: they both read result and write to result. Thus the outcome depends on the order in which these reads and writes are performed, as shown in the next example.

Example 6. Execution trace of a race condition

The following table takes us through one possible execution trace of the call fib_par_racy(2). The number to the left of each instruction describes the time at which the instruction is executed. Note that since this is a parallel program, multiple instructions can be executed at the same time. The particular execution that we have in this example gives us a bogus result: the result is 0, not 1 as it should be.

Time step | Thread 1             | Thread 2
1         | a1 = fib_par_racy(1) | b1 = fib_par_racy(0)
2         | a2 = result          | b2 = result
3         | result = a1 + a2     | _
4         | _                    | result = b1 + b2

The reason we get a bogus result is that both threads read the initial value of result at the same time and thus do not see each other's write. In this example, the second thread "wins the race" and writes into result last. The value 1 written by the first thread is effectively lost, overwritten by the second thread.

5.2. Synchronization Hardware

Since mutual exclusion is a common problem in computer science, many hardware systems provide specific synchronization operations that can help solve instances of the problem. These operations may allow, for example, testing the contents of a (machine) word and then modifying it, perhaps by swapping it with another word. Such operations are sometimes called atomic read-modify-write, or RMW for short, operations.

A handful of different RMW operations have been proposed. They include operations such as load-link/store-conditional, fetch-and-add, and compare-and-swap. They typically take a memory location x and a value v, and replace the value stored at x with f(x,v). For example, the fetch-and-add operation takes the location x and an increment amount, and atomically increments the value at that location by the specified amount, i.e., f(x,v) = *x + v.

The compare-and-swap operation takes the location x and a pair of values (a,b) as its second argument, and stores b into x if the value stored in x is a, i.e., f(x,(a,b)) = if *x = a then b else *x; the operation returns a boolean indicating whether it successfully stored a new value in x. Compare-and-swap is a reasonably powerful synchronization operation: it can be used by arbitrarily many threads to agree (reach consensus) on a value. This instruction is therefore frequently provided by modern parallel architectures such as Intel's x86.

In C++, the atomic class can be used to perform synchronization. Objects of this type are guaranteed to be free of race conditions; in fact, in C++, they are the only objects that are guaranteed to be free from race conditions. The contents of an atomic object can be accessed by load operations, updated by store operations, and also updated by the compare_exchange_weak and compare_exchange_strong operations, which implement the compare-and-swap operation.

Example 7. Accessing the contents of atomic memory cells

Access to the contents of any given cell is achieved by the load() and store() methods.

std::atomic<bool> flag;

flag.store(false);
std::cout << flag.load() << std::endl;
flag.store(true);
std::cout << flag.load() << std::endl;

Output:

0
1
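
Atomic integer types also provide the fetch-and-add operation mentioned earlier through the fetch_add method, which atomically adds a given amount to the cell and returns the previous value. The fragment below is our own, written in the style of the surrounding examples; it increments a shared counter from both branches of a fork2 without creating a race condition.

std::atomic<long> counter(0);

fork2([&] {
  // Atomically adds 1 to counter; no race, even if the branches run in parallel.
  counter.fetch_add(1);
}, [&] {
  counter.fetch_add(1);
});

std::cout << "counter = " << counter.load() << std::endl;

Output:

counter = 2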

The key operation that helps with race conditions is the compare-and-exchange operation.

Definition: compare and swap

When executed with a target atomic object, an expected value expected, and a new value new, this operation performs the following steps, atomically:

  1. Read the contents of target.

  2. If the contents equal the value of expected, then writes new into the target and returns true.

  3. Otherwise, returns false.

Example 8. Reading and writing atomic objects
std::atomic<bool> flag;

flag.store(false);
bool expected = false;
bool was_successful = flag.compare_exchange_strong(expected, true);
std::cout << "was_successful = " << was_successful << "; flag = " << flag.load() << std::endl;
bool expected2 = false;
bool was_successful2 = flag.compare_exchange_strong(expected2, true);
std::cout << "was_successful2 = " << was_successful2 << "; flag = " <<
flag.load() << std::endl;

Output:

was_successful = 1; flag = 1
was_successful2 = 0; flag = 1

As another example use of the atomic class, recall our Fibonacci example with the race condition. In that example, the race condition arises because of concurrent writes to the result variable. We can eliminate this kind of race condition either by using different memory locations for the two writes, or by declaring result atomic and updating it with a compare_exchange_strong operation.

Example 9. Fibonacci

The following implementation of Fibonacci is not safe because the variable result is shared and updated by multiple threads.

long fib_par_racy(long n) {
  long result = 0;
  if (n < 2) {
    result = n;
  } else {
    fork2([&] {
      result += fib_par_racy(n-1);
    },[&] {
      result += fib_par_racy(n-2);
    });
  }
  return result;
}

We can solve this problem by declaring result to be an atomic type and using a standard busy-waiting protocol based on compare-and-swap.


long fib_par_atomic(long n) {
  atomic<long> result(0);
  if (n < 2) {
    result.store(n);
  } else {
    fork2([&] {
      long r = fib_par_atomic(n-1);
      // Atomically update result.
      while (true) {
        long exp = result.load();
        bool flag = result.compare_exchange_strong(exp,exp+r);
        if (flag) {break;}
      }
    },[&] {
      long r = fib_par_atomic(n-2);
      // Atomically update result.
      while (true) {
        long exp = result.load();
        bool flag = result.compare_exchange_strong(exp,exp+r);
        if (flag) {break;}
      }
    });
  }
  return result.load();
}

The idea behind the solution is to load the current value of result and atomically update result only if it has not been modified (by another thread) since it was loaded. This guarantees that the result is always updated (read and modified) correctly without missing an update from another thread.

The example above illustrates a typical use of the compare-and-swap operation. In this particular example, we can probably prove our code is correct. But this is not always as easy due to a problem called the "ABA problem."

5.3. ABA problem

While reasonably powerful, compare-and-swap suffers from the so-called ABA problem. To see this, consider the following scenario where a shared variable result is updated by multiple threads in parallel: a thread, say $T$, reads result and stores its current value, say 2, in current. In the meantime, some other thread also reads result and performs some operations on it, eventually setting it back to 2. Now, thread $T$ takes its turn again and attempts to store a new value into result by using 2 as the old value, and it succeeds, because the value stored in result appears not to have changed. The trouble is that the value has actually changed and has merely been changed back to what it used to be. Compare-and-swap was not able to detect this change because it relies only on a simple, shallow notion of equality. If, for example, the value stored in result were a pointer, the fact that the pointer remains the same does not mean that the values accessible from the pointer have not been modified; if the pointer led to a tree structure, an update deep in the tree could leave the pointer unchanged, even though the tree has changed.

This problem is called the ABA problem because it involves cycling the atomic memory cell through the values $A$, then $B$, and then $A$ again. The ABA problem is an important limitation of compare-and-swap: an update protocol built on compare-and-swap behaves atomically only if the equality test on the subject memory cell suffices for correctness.

In the example below, the ABA problem may happen (if the counter is incremented and decremented again in between a load and a store), but it is impossible to observe because it is harmless. If, however, the compare-and-swap were on a memory object with references, the ABA problem could have observable effects.

The ABA problem can be exploited to give seemingly correct implementations that are in fact incorrect. To reduce the chances of bugs due to the ABA problem, memory objects subject to compare-and-swap are usually tagged with an additional field that counts the number of updates. This solves the basic problem, but only up to a point, because the counter itself can wrap around. The load-link/store-conditional operation solves this problem by performing the write only if the memory location has not been updated since the last read (load), but its practical implementations are hard to come by.
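
The sketch below (our own illustration, not PASL code) shows the tagging technique: a 32-bit value and a 32-bit update counter are packed into a single 64-bit word, so that one compare-and-swap covers both and an A-B-A sequence of values is detected because the counter no longer matches.

#include <atomic>
#include <cstdint>

class tagged_value {
public:
  tagged_value() : cell(0) {}

  std::uint32_t read() {
    return value_of(cell.load());
  }

  // Attempts to replace expected by desired; fails if the value differs or
  // if any update (even one that restored the value) happened in between.
  bool try_update(std::uint32_t expected, std::uint32_t desired) {
    std::uint64_t old_packed = cell.load();
    if (value_of(old_packed) != expected)
      return false;
    std::uint64_t new_packed = pack(desired, count_of(old_packed) + 1);
    return cell.compare_exchange_strong(old_packed, new_packed);
  }

private:
  std::atomic<std::uint64_t> cell;

  static std::uint64_t pack(std::uint32_t value, std::uint32_t count) {
    return (std::uint64_t(count) << 32) | std::uint64_t(value);
  }
  static std::uint32_t value_of(std::uint64_t packed) {
    return std::uint32_t(packed & 0xffffffffu);
  }
  static std::uint32_t count_of(std::uint64_t packed) {
    return std::uint32_t(packed >> 32);
  }
};

As the text notes, the update counter itself can wrap around, so this technique reduces, but does not completely eliminate, the chances of an undetected ABA sequence.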

6. Chapter: Experimenting with PASL

We are now going to study the practical performance of our parallel algorithms written with PASL on multicore computers.

To be concrete with our instructions, we assume that our username is pasl and that our home directory is /home/pasl/. You need to replace these settings with your own where appropriate.

6.1. Software Setup

You can skip this section if you are using a computer already setup by us or you have installed an image file containing our software. To skip this part and use installed binaries, see the heading "Starting with installed binaries", below.

6.1.1. Check for software dependencies

Currently, the software associated with this course supports Linux only. Any machine that is configured with a recent version of Linux and has access to at least two processors should be fine for the purposes of this course. Before we can get started, however, the following packages need to be installed on your system.

Software dependency   Version     Nature of dependency

gcc                   >= 4.9.0    required to build PASL binaries
php                   >= 5.3.10   required by PASL makefiles to build PASL binaries
ocaml                 >= 4.0.0    required to build the benchmarking tools (i.e., pbench and pview)
R                     >= 2.4.1    required by benchmarking tools to generate reports in bar plot and scatter plot form
latex                 recent      optional; required by benchmarking tools to generate reports in tabular form
git                   recent      optional; can be used to access PASL source files
tcmalloc              >= 2.2      optional; may be useful to improve performance of PASL binaries
hwloc                 recent      optional; might be useful to improve performance on large systems with NUMA (see below)

6.1.2. Fetch source files and configure

Let us change to our home directory: /home/pasl.

The PASL sources that we are going to use are part of a branch that we created specifically for this course. You can access the sources either via the tarball linked by the github webpage or, if you have git, via the command below.

$ git clone -b edu https://github.com/deepsea-inria/pasl.git

The rest of this section explains what the optional software dependencies are and how to configure PASL to use them. We are going to assume that all of these software dependencies have been installed in the folder /home/pasl/Installs/.

6.1.3. Use a custom parallel heap allocator

At the time of writing this document, the system-default implementations of malloc and free that are provided by Linux distributions do not scale well with even moderately large amounts of concurrent allocations. Fortunately, organizations such as Google and Facebook have implemented their own scalable allocators that serve as drop-in replacements for malloc and free. We have observed the best results from Google's allocator, namely tcmalloc. Using tcmalloc for your own experiments is easy. Just add to the /home/pasl/pasl/minicourse folder a file named settings.sh with the following contents.

Example 10. Configuration to select tcmalloc

We assume that the package that contains tcmalloc, namely gperftools, has been installed already in the folder /home/pasl/Installs/gperftools-install/. The following lines need to be in the settings.sh file in the /home/pasl/pasl/minicourse folder.

USE_ALLOCATOR=tcmalloc
TCMALLOC_PATH=/home/pasl/Installs/gperftools-install/lib/

Also, the dynamic linker needs to be instructed where to find tcmalloc.

export LD_PRELOAD=/home/pasl/Installs/gperftools-install/lib/libtcmalloc.so

This assignment can be issued either at the command line or in the environment loader script, e.g., ~/.bashrc.

Warning
Changes to the settings.sh file take effect only after recompiling the binaries.

6.1.4. Use hwloc

If your system has a non-uniform memory architecture (i.e., NUMA), then you may improve the performance of PASL applications by using optional support for hwloc, which is a library that reports detailed information about the host system, such as the NUMA layout. Currently, PASL leverages hwloc to configure the NUMA allocation policy for the program. The particular policy that works best for our applications is round-robin NUMA page allocation. Do not worry if that term is unfamiliar: the policy simply spreads memory pages evenly across the NUMA nodes, which largely hides the NUMA structure from the program.

Example 11. How to know whether my machine has NUMA

Run the following command.

$ dmesg | grep -i numa

If the output that you see is something like the following, then your machine has NUMA. Otherwise, it probably does not.

[    0.000000] NUMA: Initialized distance table, cnt=8
[    0.000000] NUMA: Node 4 [0,80000000) + [100000000,280000000) -> [0,280000000)

We are going to assume that hwloc has been installed already and is located at /home/pasl/Installs/hwloc-install/. To configure PASL to use hwloc, add the following lines to the settings.sh file in the /home/pasl/pasl/minicourse folder.

Example 12. Configuration to use hwloc
USE_HWLOC=1
HWLOC_PATH=/home/pasl/Installs/hwloc-install/

6.2. Starting with installed binaries

At this point, you have either installed all the necessary software to work with PASL or it has been installed for you. In either case, make sure that your PATH variable makes the software visible. For setting up your PATH variable on the andrew.cmu.edu domain, see below.

6.2.1. Specific set up for the andrew.cmu domain

We have installed much of the needed software on andrew.cmu.edu, so you only need to go through a relatively minimal setup.

First, set up your PATH variable to refer to the right directories. Using csh:

setenv PATH  /opt/rh/devtoolset-3/root/usr/bin:/usr/lib64/qt-3.3/bin:/usr/lib64/ccache:/usr/local/bin:/bin:/usr/bin:./

The part added to the default PATH on andrew is

/opt/rh/devtoolset-3/root/usr/bin

It is important that this is at the beginning of the PATH variable. To make interaction easier, we also added the relative path ./ to the PATH variable.

6.2.2. Fetch the benchmarking tools (pbench)

We are going to use two command-line tools to help us to run experiments and to analyze the data. These tools are part of a library that we developed, which is named pbench. The pbench sources are available via github.

$ cd /home/pasl
$ git clone https://github.com/deepsea-inria/pbench.git

The tarball of the sources can be downloaded from the github page.

6.2.3. Build the tools

The following command builds the tools, namely prun and pplot. The former handles the collection of data and the latter the human-readable output (e.g., plots, tables, etc.).

$ make -C /home/pasl/pbench/

Make sure that the build succeeded by checking the pbench directory for the files prun and pplot. If these files do not appear, then the build failed.

6.2.4. Create aliases

We recommend creating the following aliases.

$ alias prun '/home/pasl/pbench/prun'
$ alias pplot '/home/pasl/pbench/pplot'

It will be convenient for you to make these aliases persistent, so that next time you log in, the aliases will be set. Add the commands above to your shell configuration file.

6.2.5. Visualizer Tool

When we are tuning our parallel algorithms, it can be helpful to visualize their processor utilization over time, just in case there are patterns that help to assign blame to certain regions of code. Later, we are going to use the utilization visualizer that comes packaged along with PASL. To build the tool, run the following make command.

$ make -C /home/pasl/pasl/tools/pview pview

Let us create an alias for the tool.

$ alias pview '/home/pasl/pasl/tools/pview/pview'

We recommend that you make this alias persistent by putting it into your shell configuration file (as you did above for the pbench tools).

6.3. Using the Makefile

PASL comes equipped with a Makefile that can generate several different kinds of executables. These different kinds of executables, and how they can be generated, are described below for a benchmark program pgm.

  • baseline: build the baseline with command make pgm.baseline

  • elision: build the sequential elision with command make pgm.elision

  • optimized: build the optimized binary with command make pgm.opt

  • log: build the log binary with command make pgm.log

  • debug: build the debug binary with the command make pgm.dbg

To speed up the build process, add the option -j to the make command (e.g., make -j pgm.opt). This option enables make to parallelize the build process. Note that, if the build fails, the error messages that are printed to the terminal may be somewhat garbled. As such, it is better to use -j only after the debugging process is complete.

6.4. Task 1: Run the baseline Fibonacci

We are going to start our experimentation with three different instances of the same program, namely bench. This program serves as a "driver" for the benchmarks that we have implemented. These implementations are good parallel codes that we expect to deliver good performance. We first build the baseline version.

$ cd /home/pasl/pasl/minicourse
$ make bench.baseline
Warning
The command-line examples that we show here assume that you have . in your $PATH. If not, you may need to prefix command-line calls to binaries with ./ (e.g., ./bench.baseline).

The file extension .baseline means that every benchmark in the binary uses the sequential-baseline version of the specified algorithm.

We can now run the baseline for one of our benchmarks, say Fibonacci, by using the -bench argument to specify the benchmark and the -n argument to specify the input value for the Fibonacci function.

$ bench.baseline -bench fib -n 39

On our machine, the output of this run is the following.

exectime 0.556
utilization 1.0000
result 63245986

The three lines above provide useful information about the run.

  • The exectime indicates the wall-clock time in seconds that is taken by the benchmark. In general, this time measures only the time taken by the benchmark under consideration. It does not include the time taken to generate the input data, for example.

  • The utilization relates to the utilization of the processors available to the program. In the present case, for a single-processor run, the utilization is by definition 100%. We will return to this measure soon.

  • The result field reports a value computed by the benchmark. In this case, the value is the $39^{th}$ Fibonacci number.

6.5. Task 2: Run the sequential elision of Fibonacci

The .elision extension means that parallel algorithms (not sequential baseline algorithms) are compiled. However, all instances of fork2() are erased as described in an earlier chapter.

$ make bench.elision
$ bench.elision -bench fib -n 39

The run time of the sequential elision in this case is similar to the run time of the sequential baseline because the two are similar codes. However, for most other algorithms, the baseline will typically be at least a little faster.

exectime 0.553
utilization 1.0000
result 63245986

6.6. Task 3: Run parallel Fibonacci

The .opt extension means that the program is compiled with full support for parallel execution. Unless specified otherwise, however, the parallel binary uses just one processor.

$ make bench.opt
$ bench.opt -bench fib -n 39

The output of this program is similar to the output of the previous two programs.

exectime 0.553
utilization 1.0000
result 63245986

Because our machine has 40 processors, we can run the same application using all available processors. Before running this command, please adjust the -proc option to match the number of cores that your machine has. Note that you can use any number of cores up to the number you have available. You can use nproc or lscpu to determine the number of cores your machine has.

$ bench.opt -bench fib -n 39 -proc 40

We see from the output of the 40-processor run that our program ran faster than the sequential runs. Moreover, the utilization field tells us that approximately 86% of the total time spent by the 40 processors was spent performing useful work, not idling.

exectime 0.019
utilization 0.8659
result 63245986
Warning
PASL allows the user to select the number of processors by the -proc key. The maximum value for this key is the number of processors that are available on the machine. PASL raises an error if the programmer asks for more processors than are available.

6.7. Measuring performance with "speedup"

We may ask at this point: What is the improvement that we just observed from the parallel run of our program? One common way to answer this question is to measure the "speedup".

Definition: $P$-processor speedup

The speedup on $P$ processors is the ratio $T_B/T_P$, where the term $T_B$ represents the run time of the sequential baseline program and the term $T_P$ the time measured for the $P$-processor run.

Important
The importance of selecting a good baseline

Note that speedup is defined with respect to a baseline program. How exactly should this baseline program be chosen? One option is to take the sequential elision as a baseline. The speedup curve with such a baseline can be helpful in determining the scalability of a parallel algorithm, but it can also be misleading, especially if speedups are taken as an indicator of good performance, which they are not, because they are only relative to a specific baseline. For speedups to be a valid indication of good performance, they must be calculated against an optimized implementation of the best serial algorithm (for the same problem).

The speedup at a given number of processors is a good starting point on the way to evaluating the scalability of the implementation of a parallel algorithm. The next step typically involves considering speedups taken from varying numbers of processors available to the program. The data collected from such a speedup experiment yields a speedup curve, which is a curve that plots the trend of the speedup as the number of processors increases. The shape of the speedup curve provides valuable clues for performance and possibly for tuning: a flattening curve suggests lack of parallelism; a curve that arcs up and then downward suggests that processors may be wasting time by accessing a shared resource in an inefficient manner (e.g., false sharing); a speedup curve with a constant slope indicates at least some scaling.

Example 13. Speedup for our run of Fibonacci on 40 processors

The speedup $T_B/T_{40}$ equals $0.556/0.019 = 29.26$x. Although not linear (i.e., 40x), this speedup is decent considering factors such as: the capabilities of our machine; the overheads relating to parallelism; and the small size of the problem compared to the computing power that our machine offers.

6.7.1. Generate a speedup plot

Let us see what a speedup curve can tell us about our parallel Fibonacci program. We need to first get some data. The following command performs a sequence of runs of the Fibonacci program for varying numbers of processors. You can now run the command yourself.

$ prun speedup -baseline "bench.baseline" -parallel "bench.opt -proc 1,10,20,30,40" -bench fib -n 39

Here is another example on a 24-core machine.

$ prun speedup -baseline "bench.baseline" -parallel "bench.opt -proc 1,4,8,16,24" -bench fib -n 39

Run the following command to generate the speedup plot.

$ pplot speedup

If successful, the command generates a file named plots.pdf. The output should look something like the speedup plot shown below.

Starting to generate 1 charts.
Produced file plots.pdf.
Figure 3. Speedup curve for the computation of the 39th Fibonacci number.

The plot shows that our Fibonacci application scales well up to about twenty processors. As expected, at around twenty processors, the curve starts to dip downward somewhat. We know that the problem size is the primary factor leading to this dip. How much does the problem size matter? The speedup plot in the Figure below clearly shows the trend: as the problem size grows, the speedup improves, until, at the calculation of the $45^{th}$ Fibonacci number, the speedup curve is close to linear.

Figure 4. Speedup plot showing speedup curves at different problem sizes.
Note
The prun and pplot tools have many more features than those demonstrated here. For details, see the documentation provided with the tools in the file named README.md.
Warning
Noise in experiments

The run time that a given parallel program takes to solve the same problem can vary noticeably because of certain effects that are not under our control, such as OS scheduling, cache effects, paging, etc. We can treat such variation in our experiments as random noise. Noise can be a problem because it can lead us to make incorrect conclusions when, say, comparing the performance of two algorithms that perform roughly the same. To deal with randomness, we can perform multiple runs for each data point that we want to measure and consider the mean over these runs. The prun tool enables taking multiple runs via the -runs argument. Moreover, the pplot tool by default shows mean values for any given set of runs and optionally shows error bars. The documentation for these tools gives more detail on how to use the statistics-related features.
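
As an illustration of the kind of aggregation such tools perform (this is our own sketch, not code from prun or pplot), the following C++ program computes the mean and standard deviation over a set of hypothetical measured run times.

#include <cmath>
#include <iostream>
#include <vector>

// Illustrative sketch, not part of prun/pplot: aggregate several
// measured run times into a mean and a standard deviation.
int main() {
  std::vector<double> runs = { 0.019, 0.021, 0.018, 0.020, 0.019 };  // hypothetical measurements (seconds)
  double sum = 0.0;
  for (double t : runs) sum += t;
  double mean = sum / runs.size();
  double sq = 0.0;
  for (double t : runs) sq += (t - mean) * (t - mean);
  double stddev = std::sqrt(sq / runs.size());
  std::cout << "mean " << mean << " stddev " << stddev << std::endl;
  return 0;
}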

6.7.2. Superlinear speedup

Suppose that, on our 40-processor machine, the speedup that we observe is larger than 40x. It might sound improbable or even impossible. But it can happen. Ordinary circumstances should preclude such a superlinear speedup, because, after all, we have only forty processors helping to speed up the computation. Superlinear speedups often indicate that the sequential baseline program is suboptimal. This situation is easy to check: just compare its run time with that of the sequential elision. If the sequential elision is faster, then the baseline is suboptimal. Other factors can cause superlinear speedup: sometimes parallel programs running on multiple processors with private caches benefit from the larger cache capacity. These issues are, however, outside the scope of this course. As a rule of thumb, superlinear speedups should be regarded with suspicion and the cause should be investigated.

6.8. Visualize processor utilization

The 29x speedup that we just calculated for our Fibonacci benchmark was a little disappointing, and the 86% processor utilization of the run leaves 14% of the processor time as room for improvement. We should be suspicious that, although seemingly large, the problem size that we chose, that is, $n = 39$, was probably a little too small to yield enough work to keep all the processors well fed. To put this hunch to the test, let us examine the utilization of the processors in our system. We need to first build a binary that collects and outputs logging data.

$ make bench.log

We run the program with the new binary in the same fashion as before.

$ bench.log -bench fib -proc 40 -n 39

The output looks something like the following.

exectime 0.019
launch_duration 0.019
utilization     0.8639
thread_send     205
thread_exec     4258
thread_alloc    2838
utilization 0.8639
result 63245986

We need to explain what the new fields mean.

  • The thread_send field tells us that 205 threads were exchanged between processors for the purpose of load balancing;

  • the thread_exec field that 4258 threads were executed by the scheduler;

  • the thread_alloc field that 2838 threads were freshly allocated.

Each of these fields can be useful for tracking down inefficiencies. The number of freshly allocated threads can be a strong indicator, because in C++ thread allocation can sometimes add up to a significant cost. In the present case, however, none of the new values shown above are highly suspicious, considering that they are all at most in the thousands.

Since we have not yet found the problem, let us look at the visualization of the processor utilization using our pview tool. To get the necessary logging data, we need to run our program again, this time passing the argument --pview.

$ bench.log -bench fib -n 39 -proc 40 --pview

When the run completes, a binary log file named LOG_BIN should be generated in the current directory. Every time we run with --pview this binary file is overwritten. To see the visualization of the log data, we call the visualizer tool from the same directory.

$ pview

The output we see on our 40-processor machine is shown in the Figure below. The window shows one bar per processor. Time goes from left to right. Idle time is represented by red and time spent busy with work by grey. You can zoom in on any part of the plot by clicking on the region with the mouse. To reset to the original plot, press the space bar. From the visualization, we can see that most of the time, particularly in the middle, all of the processors are kept busy. However, there is a lot of idle time at the beginning and end of the run. This pattern suggests that there just is not enough parallelism in the early and late stages of our Fibonacci computation.

Figure 5. Utilization plot for computation of 39th Fibonacci number.

6.9. Strong versus weak scaling

We are pretty sure that our Fibonacci program is not scaling as well as it could. But poor scaling on one particular input for $n$ does not necessarily mean there is a problem with the scalability of our parallel Fibonacci program in general. What is important is to know more precisely what it is that we want our Fibonacci program to achieve. To this end, let us consider a distinction that is important in high-performance computing: the distinction between strong and weak scaling. So far, we have been studying the strong-scaling profile of the computation of the $39^{th}$ Fibonacci number. In general, strong scaling concerns how the run time varies with the number of processors for a fixed problem size. Sometimes strong scaling is either too ambitious, owing to hardware limitations, or not necessary, because the programmer is happy to live with a looser notion of scaling, namely weak scaling. In weak scaling, the programmer considers a fixed-size problem per processor. We are going to consider something similar to weak scaling. In the Figure below, we have a plot showing how processor utilization varies with the input size. The situation dramatically improves from 12% idle time for the $39^{th}$ Fibonacci number down to 5% idle time for the $41^{st}$ and finally to 1% for the $45^{th}$. At just 1% idle time, the utilization is excellent.
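
To make the distinction concrete, one common way to quantify the two notions (these are standard definitions from the literature, not formulas used elsewhere in this book) is the following, where $T_P(n)$ denotes the run time on $P$ processors for a problem of size $n$:

$$ \mbox{strong-scaling speedup:} \quad \frac{T_1(n)}{T_P(n)} \quad \mbox{(total problem size } n \mbox{ held fixed)} $$

$$ \mbox{weak-scaling efficiency:} \quad \frac{T_1(n)}{T_P(P \cdot n)} \quad \mbox{(problem size } n \mbox{ per processor held fixed)} $$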

Figure 6. How processor utilization of Fibonacci computation varies with input size.

The scenario that we just observed is typical of multicore systems. For computations that perform relatively little work, such as the computation of the $39^{th}$ Fibonacci number, properties that are specific to the hardware, OS, and PASL load-balancing algorithm can noticeably limit processor utilization. For computations that perform lots of highly parallel work, such limitations are barely noticeable, because processors spend most of their time performing useful work. Let us return to the largest Fibonacci instance that we considered, namely the computation of the $45^{th}$ Fibonacci number, and consider its utilization plot.

$ bench.log -bench fib -n 45 -proc 40 --pview
$ pview

The utilization plot is shown in the Figure below. Compared to the utilization plot we saw in the Figure above for n=39, the red regions are much less prominent overall and the idle regions at the beginning and end are barely noticeable.

Figure 7. Utilization plot for computation of 45th Fibonacci number.

6.10. Chapter Summary

We have seen in this chapter how to build, run, and evaluate our parallel programs. Concepts that we have seen, such as speedup curves, are going to be useful for evaluating the scalability of our future solutions. Strong scaling is the gold standard for a parallel implementation. But as we have seen, weak scaling is a more realistic target in most cases.

7. Chapter: Executing parallel algorithms

Implicit parallelism allows writing parallel programs at a high level of abstraction. In this section, we discuss techniques for executing such parallel programs on hardware shared-memory computers such as multicores. As our running example, we use the map_incr_rec function, whose code is reproduced below.

void map_incr_rec(const long* source, long* dest, long lo, long hi) {
  long n = hi - lo;
  if (n == 0) {
    // do nothing
  } else if (n == 1) {
    dest[lo] = source[lo] + 1;
  } else {
    long mid = (lo + hi) / 2;
    fork2([&] {
      map_incr_rec(source, dest, lo, mid);
    }, [&] {
      map_incr_rec(source, dest, mid, hi);
    });
  }
}
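
As a small usage sketch (ours, not from the book's code base), the following fragment applies map_incr_rec to an array of 8 elements; it assumes that fork2 and the surrounding PASL setup from the earlier chapters are in scope.

// Minimal usage sketch; assumes fork2 and the PASL setup from the
// earlier chapters are in scope.
void example_map_incr() {
  const long n = 8;
  long source[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };
  long dest[8];
  map_incr_rec(source, dest, 0, n);
  // After the call, dest[i] == source[i] + 1 for every i in [0, n).
}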

The basic idea is to partition a computation, that is, a run of a parallel algorithm on a specified input, into pieces of serial computation, called threads, and to map them to available processors while observing the dependencies between them. The task of mapping the threads to available processors is called thread scheduling or simply scheduling.

We call a piece of serial computation a thread if it executes without performing parallel operations (fork2) except perhaps as its last action. The term thread is short for user-level thread (as opposed to an operating-system thread). When partitioning the computation into threads, it is important for threads to be maximal in order to minimize scheduling overhead (technically, a thread can be as small as a single instruction).

Definition: Thread

A thread is a maximal piece of a computation (an execution) consisting of a sequence of instructions that contains no calls to fork2() except perhaps as its last action.

When scheduling a parallel computation, it is important that we don’t alter the intended meaning of the computation. Specifically, if a thread depends on another thread, because, for example, it reads a piece of data generated by the latter, it cannot be executed before the thread that it depends on. We can conservatively approximate such dependencies by observing the fork2 expressions and by organizing dependencies consistently with them. More specifically, we can represent a computation as a graph where each vertex represents a thread and each edge represents a dependency. Vertices and edges are created by the execution of fork2. Each fork2 creates two threads (vertices) corresponding to the two branches and inserts an edge between each branch and the thread that performs the fork2; in addition, each fork2 creates a join or continuation thread (vertex) that depends on the two branches. Since such a graph cannot contain cycles, it is a Directed Acyclic Graph (DAG).

Figure 8. DAG for parallel increment on an array of $8$ elements: each vertex corresponds to a call to map_incr_rec, excluding the fork2 or the continuation of the fork2 (which is empty), and is annotated with the interval of the input array that it operates on (its argument).

The Figure above illustrates the DAG for an execution of map_incr_rec. We partition each invocation of this function into two threads, labeled "M" and "C" respectively. The thread labeled $M\,[i,j)$ corresponds to the part of the invocation of map_incr_rec with arguments lo and hi set to $i$ and $j$ respectively; this first part covers the execution up to and including the fork2, or all of the function if this is a base case. The second thread, labeled $C\,[i,j)$, corresponds to the "continuation" of the fork2, which in this case includes no computation.

There is an important connection between computation DAGs and work and span. Suppose that we assign to each vertex a weight of at least $1$ that corresponds to the work of that vertex (since threads are serial, the work and the span of a vertex are the same). We can then calculate the total weight of the DAG by summing the weights of all vertices, and its total depth by taking the heaviest weighted path from the root to a leaf. The total weight of the DAG corresponds to the work of the computation and the depth corresponds to its span. In our example, each vertex has weight $O(1)$. Thus for an array with $n$ elements, the total weight (work) is $O(n)$ and the depth (span) is $O(\log{n})$.
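
To make this calculation concrete, here is a small self-contained C++ sketch (our own illustration, not PASL code) that represents a DAG of weighted threads and computes its work by summing the weights and its span by taking the heaviest weighted path; the vertex representation is hypothetical.

#include <algorithm>
#include <iostream>
#include <vector>

// Illustrative sketch, not PASL code: a DAG of threads, each with a
// weight; work = sum of weights, span = heaviest root-to-sink path.
struct vertex {
  long weight;
  std::vector<int> children;  // indices of the vertices that depend on this one
};

long work(const std::vector<vertex>& dag) {
  long w = 0;
  for (const vertex& v : dag) w += v.weight;
  return w;
}

// Assumes vertices are numbered in topological order (parents before children).
long span(const std::vector<vertex>& dag) {
  std::vector<long> longest(dag.size(), 0);  // heaviest path ending at each vertex
  long s = 0;
  for (size_t i = 0; i < dag.size(); i++) {
    longest[i] += dag[i].weight;
    s = std::max(s, longest[i]);
    for (int c : dag[i].children)
      longest[c] = std::max(longest[c], longest[i]);
  }
  return s;
}

int main() {
  // The four-vertex DAG created by a single fork2: the forking thread,
  // the two branches, and the join (continuation) thread.
  std::vector<vertex> dag = { {1, {1, 2}}, {1, {3}}, {1, {3}}, {1, {}} };
  std::cout << "work " << work(dag) << " span " << span(dag) << std::endl;  // work 4, span 3
  return 0;
}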

Having talked about DAGs, we are now ready to talk about how to map parallel computations to actual hardware so as to minimize their run time, i.e., scheduling. But before we move on to scheduling, let us observe a few properties of implicitly parallel computations.

  1. The computation DAG of a parallel algorithm applied to an input unfolds dynamically as the algorithm executes. For example, when we run map_incr_rec on an input with $n$ elements, the DAG initially contains just the root vertex (thread) corresponding to the first call to map_incr_rec, but it grows as the execution proceeds.

  2. An execution of a parallel algorithm can generate a massive number of threads. For example, our map_incr_rec function generates approximately $4n$ threads for an input with $n$ elements.

  3. The work/span of each thread can vary from a small amount to a very large amount depending on the algorithm. In our example, each thread performs either a conditional, possibly an addition, and a fork operation, or it performs no actual computation at all (continuation threads).

Suppose now we are given a computation DAG and we wish to execute the DAG by mapping each thread to one of the $P$ processors available on the hardware. When a thread is mapped to a processor, it will be executed, requiring time proportional to the work (weight) of the thread. Once the processor completes the thread, we can map another thread to the processor, so that the processor does not idle unnecessarily. As we map threads to processors, it is important that we observe the dependencies between threads, i.e., we don’t execute a thread before its parents have completed.

Definition: Scheduling

The (thread) scheduling problem requires assigning to each thread in a given DAG a processor number and a time step such that

  1. each thread is assigned to a unique processor for as many consecutive steps as its weight,

  2. no thread is executed before its ancestors (the threads that it depends on) in the DAG, and

  3. no processor is assigned more than one thread at a time.

The goal of scheduling is to minimize critical resources such as time. Computing the shortest schedule for a DAG turns out to be highly nontrivial. In fact, the related decision problem is NP-complete. It is possible, however, to give a good approximation algorithm for the offline version of the problem that generates a 2-factor approximation. Such an approximation yields a schedule for a given DAG within a factor of two of the shortest schedule. In the online version of the problem, where the DAG unfolds as the computation executes, we don’t know the DAG a priori, and we have to account for scheduling costs such as migrating threads between processors and finding work. To execute parallel programs, we need a solution to this online version of the problem.

An online scheduler, or simply a scheduler, is an algorithm that performs scheduling by mapping threads to available processors. For example, if only one processor is available, a scheduler can map all threads to that one processor. If two processors are available, then the scheduler can divide the threads between the two processors as evenly as possible in an attempt to keep the two processors as busy as possible by load balancing.

Example 14. An example 2-processor schedule

The following is a valid 2-processor schedule for the DAG shown in the Figure above, assuming that each thread takes unit time.

Time Step    Processor 1    Processor 2
    1        M [0,8)        _
    2        M [0,4)        M [4,8)
    3        M [0,2)        M [4,6)
    4        M [0,1)        M [4,5)
    5        M [1,2)        M [5,6)
    6        C [0,2)        C [4,6)
    7        M [2,4)        M [6,8)
    8        M [2,3)        M [6,7)
    9        M [3,4)        M [7,8)
   10        C [2,4)        C [6,8)
   11        C [0,4)        C [4,8)
   12        C [0,8)        _

We say that a scheduler is greedy if, whenever there is a processor available and a thread ready to be executed, then the scheduler assigns the thread to the processor and starts running the thread immediately. Greedy schedulers have a nice property that is summarized by the following theorem.

Theorem: Greedy Scheduling Principle

If a computation is run on $P$ processors using a perfect greedy scheduler that incurs no costs in creating, locating, and moving threads, then the total time (clock cycles) for running the computation $T_P$ is bounded by

$$ T_P < \frac{W}{P} + S. $$

Here $W$ is the work of the computation, and $S$ is the span of the computation (both measured in units of clock cycles).

This simple statement is powerful. To see this, note that the time to execute the computation is at least $\frac{W}{P}$ because we have a total of $W$ work. As such, the best possible execution strategy is to divide it evenly among the processors. Furthermore, execution time cannot be less than $S$ since $S$ represents the longest chain of sequential dependencies. Thus we have: $ T_P \geq \max\left(\frac{W}{P},S\right). $

This means that a greedy scheduler yields a schedule that is within a factor of two of optimal: $\frac{W}{P} + S$ is never more than twice $\max(\frac{W}{P},S)$. Furthermore, when $\frac{W}{P} \gg S$, the difference between the greedy scheduler and the optimal scheduler is very small. In fact, we can rewrite the equation above in terms of the average parallelism $\mathbb{P} = W/S$ as follows:

$$ \begin{array}{rcl} T_P & < & \frac{W}{P} + S \\ & = & \frac{W}{P} + \frac{W}{\mathbb{P}}\\ & = & \frac{W}{P}\left(1 + \frac{P}{\mathbb{P}}\right) \end{array} $$

Therefore, as long as $\mathbb{P} \gg P$ (the parallelism is much greater than the number of processors), we obtain near-perfect speedup. (Speedup is $W/T_P$ and perfect speedup would be $P$.)

The quantity $\mathbb{P}$, sometimes called average parallelism, is usually quite large, because it usually grows polynomially with the input size.
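
To make the bound concrete, consider some purely illustrative (made-up) numbers: $W = 10^9$ cycles, $S = 10^6$ cycles, and $P = 40$ processors, so that the average parallelism is $\mathbb{P} = W/S = 1000 \gg 40$. The greedy bound then gives

$$ T_{40} < \frac{10^9}{40} + 10^6 = 2.5 \times 10^7 + 10^6 = 2.6 \times 10^7 \mbox{ cycles}, $$

which is within $4\%$ of the lower bound $W/P = 2.5 \times 10^7$ cycles.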

Example 15. Scheduler with a global thread queue.

We can give a simple greedy scheduler by using a queue of threads. At the start of the execution, the scheduler places the root of the DAG into the queue and then repeats the following step until the queue becomes empty: for each idle processor, take the vertex at the front of the queue and assign it to that processor; let each processor run for one step; if, at the end of the step, there is a vertex in the DAG whose parents have all completed their execution, then insert that vertex at the tail of the queue.
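
The following self-contained C++ sketch (our own illustration, not the scheduler implemented in PASL) simulates this global-queue greedy scheduler on a DAG of unit-weight threads; the vertex representation and function names are hypothetical.

#include <deque>
#include <iostream>
#include <vector>

// Rough sequential simulation of a greedy scheduler with a global
// thread queue (our own sketch, not the PASL scheduler). Each vertex
// has unit weight; `children` lists its dependents and `in_degree`
// counts its unfinished parents.
struct vertex {
  std::vector<int> children;
  int in_degree;
};

// Returns the number of time steps taken; assumes vertex 0 is the root
// and every vertex is reachable from it.
int greedy_schedule(std::vector<vertex> dag, int nb_proc) {
  std::deque<int> queue;
  queue.push_back(0);           // the root of the DAG is initially ready
  int time_steps = 0;
  int remaining = (int)dag.size();
  while (remaining > 0) {
    // Assign up to nb_proc ready vertices to the idle processors.
    std::vector<int> running;
    while (!queue.empty() && (int)running.size() < nb_proc) {
      running.push_back(queue.front());
      queue.pop_front();
    }
    // Run one step; every assigned unit-weight vertex completes.
    time_steps++;
    remaining -= (int)running.size();
    for (int v : running)
      for (int c : dag[v].children)
        if (--dag[c].in_degree == 0)
          queue.push_back(c);   // all parents done: vertex becomes ready
  }
  return time_steps;
}

int main() {
  // The four-vertex DAG of a single fork2: fork, two branches, join.
  std::vector<vertex> dag = { {{1, 2}, 0}, {{3}, 1}, {{3}, 1}, {{}, 2} };
  std::cout << "2-processor schedule length: "
            << greedy_schedule(dag, 2) << std::endl;  // 3 steps
  return 0;
}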