Abstract
The goal of this work is to simplify parallel application development and thus lower the learning barriers faced by non-experts. It is especially useful where there is little data parallelism for a compiler to recognize. The applications programmer needs to learn the intricacies of only one primary subroutine in order to obtain the full benefits of the parallel interface. The programmer defines a high-level concept, the task, that depends only on the application and not on any particular parallel library. A task is defined by its three phases: (a) the task input, (b) sequential code to execute the task, and (c) any modifications of global variables that occur as a result of the task. In particular, side effects (changes to global variable values) must not occur in phase (b). Requiring the user to reorganize the computation in these terms allows us to present the programmer with a single global environment visible to all processors (whether on an SMP or a NOW architecture), in the context of a master-slave architecture. Both a shared-memory implementation (running on SGI and Sun Solaris architectures) and a network-of-workstations (NOW) implementation (running on top of MPI) are described. The implementations were tested with a naive program for integer factorization and with a more sophisticated Todd-Coxeter coset enumeration. Integer factorization was chosen so as to exercise the major features of TOP-C in an unambiguous context.
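To make the three-phase decomposition concrete, here is a minimal C sketch built around the paper's naive integer-factorization test. All names in it (task_t, result_t, do_task, update_globals, run_master_slave) are hypothetical illustrations, not the TOP-C API, and a sequential loop stands in for the parallel master-slave dispatcher: phase (b) is a pure function of the task input (it may read, but never modify, globals), so a library could safely farm it out to slave processors, while all global updates are confined to phase (c) on the master.

```c
/* Hypothetical sketch of the three-phase task model; not the TOP-C API.
 * Tasks are ranges of candidate divisors for a naive trial-division
 * factorization, processed here sequentially and in increasing order. */
#include <stdio.h>

#define MAX_DIVISORS 64
#define CHUNK        10000ULL                /* candidate divisors per task */

static const unsigned long long n = 1299709ULL * 104729ULL; /* number to factor */
static unsigned long long residue;           /* global state, updated only in phase (c) */

typedef struct { unsigned long long lo, hi; } task_t;           /* phase (a): task input */
typedef struct { unsigned long long d[MAX_DIVISORS]; int k; } result_t;

/* Phase (b): sequential code to execute the task.  Reads the read-only
 * global n but modifies no global variable: no side effects. */
static result_t do_task(task_t t) {
    result_t r = { .k = 0 };
    for (unsigned long long d = t.lo; d < t.hi && r.k < MAX_DIVISORS; d++)
        if (n % d == 0)
            r.d[r.k++] = d;
    return r;
}

/* Phase (c): the only place global state changes.  Because divisors arrive
 * in increasing order, composite divisors of n are no-ops by the time they
 * are processed (their prime factors have already been divided out). */
static void update_globals(result_t r) {
    for (int i = 0; i < r.k; i++)
        while (residue % r.d[i] == 0) {
            printf("factor: %llu\n", r.d[i]);
            residue /= r.d[i];
        }
}

/* Stand-in for the master-slave dispatcher: generate tasks, run phase (b)
 * (the part a parallel library would hand to slaves), apply phase (c) on
 * the master. */
static void run_master_slave(void) {
    residue = n;
    for (unsigned long long lo = 2; lo * lo <= n; lo += CHUNK)
        update_globals(do_task((task_t){ lo, lo + CHUNK }));
    if (residue > 1)                         /* leftover factor above sqrt(n) */
        printf("factor: %llu\n", residue);
}

int main(void) {
    run_master_slave();
    return 0;
}
```

Note one subtlety the in-order sequential stand-in hides: if a parallel run applied phase (c) results out of order, a composite divisor could be divided out before its prime factors, so a real task-parallel version must account for task completion order when global updates interact.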
Publication Info
- Year: 1996
- Type: article
- Volume: 8
- Pages: 141-150
Identifiers
- DOI: 10.1109/hpdc.1996.546183