On the shuttle to the UPCRC (Universal Parallel Computing Research Center) Annual Summit meeting on the Microsoft campus in Redmond, Wash., I was listening in on a discussion about parallel programming patterns. Being a parallel programmer, I was interested in what people (and these were some of the experts in the field) had to say about these patterns, how they are evolving, and how they will affect future parallel coders.
The discussion turned to whether patterns would affect programming languages directly or remain something constructed from the statements of a language. I think I’m in the former camp. Here’s why.
For those of us who were programming when Elvis was still alive, think back to writing in assembly language. For the most part, there were instructions for Load, Store, Add, Compare, Jump, plus some variations on these and other miscellaneous instructions. To implement a counting/indexing loop, you would use something like the following:
      Initialize counter
LOOP: test end condition
      goto EXIT if done
      Loop Body
      increment counter
      goto LOOP
EXIT: next statement
This is a programming pattern. Surprised? With the proper conditional testing and jumping (goto) instructions within the programming language, this pattern can be implemented in any imperative language. Since this pattern proved to be so useful and pervasive in the computations being written, programming language designers added syntax to “automate” the steps above. For example, the for-loop in C:
for (i = 0; i < N; ++i) {
Loop Body
}
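To see what that one statement is buying us, here’s my own sketch of the same pattern spelled out by hand in C with a label and a goto (the label names simply mirror the pseudocode above):

i = 0;
LOOP: if (i >= N) goto EXIT;
Loop Body
++i;
goto LOOP;
EXIT: ; /* next statement */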
Once we had threads and the supporting libraries to create and manage threads, parallel coding in shared memory was feasible, but at a pretty crude level since the programmer had to be sure the code handled everything explicitly. For example, dividing the loop iterations among threads can be done with each thread executing code that looks something like this:
start = (N/num_threads) * myid;
end = (N/num_threads) * (myid + 1);
if (myid == LAST) end = N;
for (i = start; i < end; ++i) {
Loop Body
}
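None of that bookkeeping is difficult, but every program has to carry it, along with the thread creation and joining around it. As a rough sketch of the whole ritual using Pthreads (the array, its size, and the worker function are stand-ins I’ve made up for illustration; error checking is omitted to keep the pattern visible), it might look something like this:

#include <pthread.h>
#include <stdio.h>

#define N 1000
#define NUM_THREADS 4

static double data[N];

/* Each thread handles its own contiguous block of iterations. */
static void *worker(void *arg)
{
    int myid = (int)(long)arg;
    int start = (N / NUM_THREADS) * myid;
    int end = (N / NUM_THREADS) * (myid + 1);
    if (myid == NUM_THREADS - 1) end = N; /* last thread takes the leftovers */

    for (int i = start; i < end; ++i)
        data[i] = 2.0 * i; /* stand-in for the real Loop Body */

    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (int t = 0; t < NUM_THREADS; ++t)
        pthread_create(&threads[t], NULL, worker, (void *)(long)t);
    for (int t = 0; t < NUM_THREADS; ++t)
        pthread_join(threads[t], NULL);

    printf("data[%d] = %f\n", N - 1, data[N - 1]);
    return 0;
}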
Parallel programming patterns will be abstractions that can be “crudely” implemented in current languages and parallel libraries, like the pseudocode above. New languages (or language extensions) will make programming parallel patterns easier and less error prone. For the example above, OpenMP already has the syntax to do this; it takes only a single line added to the serial code:
#pragma omp parallel for
for (i = 0; i < N; ++i) {
Loop Body
}
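Dropped into a complete program (the array and the loop body are, again, stand-ins of mine), the compiler and the OpenMP runtime take care of creating the threads and dividing the iterations among them:

#include <stdio.h>

#define N 1000

int main(void)
{
    static double data[N];

    /* One line replaces all of the manual start/end bookkeeping above. */
    #pragma omp parallel for
    for (int i = 0; i < N; ++i)
        data[i] = 2.0 * i; /* stand-in for the real Loop Body */

    printf("data[%d] = %f\n", N - 1, data[N - 1]);
    return 0;
}

Build it with your compiler’s OpenMP flag (with GCC, for example, that’s -fopenmp); without the flag, the pragma is ignored and the same source still compiles and runs serially.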
From the evidence above, I think future parallel programming languages, or language extensions supporting parallelism, will be influenced by the parallel programming patterns we define and use today. And nothing will remain static. During his UPCRC presentation, Design Patterns’ Ralph Johnson remarked that some of the original patterns saw early use, but that use has since slacked off. Two reasons he noted were that some of the patterns couldn’t easily be implemented in Java and that modern OO languages had better ways to accomplish the same tasks; most likely those newer languages found inspiration in the patterns and their usage.
As for the question posed in the title, it boils down (no pun intended) to the old chicken-and-egg paradox. There were algorithms (patterns) for doing computations before there were computers, and before that, those algorithms were modifications of earlier algorithms, influenced by the tools available. Looking forward, though, we’re still in the relative infancy of programming, let alone parallel programming. Clearly, the next generation of parallel programming languages, libraries, or extensions bolted onto serial languages will be influenced by the patterns we use now for specifying parallel computations.