Elastic Tasks: Unifying Task Parallelism and SPMD Parallelism with an Adaptive Runtime
In this paper, we introduce elastic tasks, a new high-level parallel programming primitive that unifies task parallelism and SPMD parallelism in a common adaptive scheduling framework. Elastic tasks are internally parallel tasks that can run on a single worker or expand to take over multiple workers. An elastic task can be an ordinary task or an SPMD region that must be executed by one or more workers simultaneously, in a tightly coupled manner. As demonstrated in this paper, the gains obtained by using elastic tasks are three-fold: (1) they offer theoretical guarantees: given a computation with work W and span S executing on P cores, a work-sharing runtime guarantees a completion time of O(W/P + S + E), and a work-stealing runtime completes the computation in expected time O(W/P + S + E lg P), where E is the number of elastic tasks in the computation; (2) they offer performance benefits in practice by co-scheduling tightly coupled parallel/SPMD subcomputations within a single elastic task; and (3) they can adapt at runtime to the state of the application and the workload of the machine. We also introduce ElastiJ, a runtime system that includes work-sharing and work-stealing scheduling algorithms to support computations with regular and elastic tasks. This scheduler dynamically decides the allocation for each elastic task in a decentralized manner, and provides close to asymptotically optimal running times for computations that use elastic tasks. We have implemented ElastiJ and present experimental results showing that elastic tasks provide the aforementioned benefits. We also study the sensitivity of elastic tasks to the theoretical assumptions and the user parameters.
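To make the primitive concrete, below is a minimal, hypothetical Java sketch of an elastic SPMD task. The names `runElastic`, `SpmdBody`, and the `[minW, maxW]` worker bounds are illustrative, not taken from the paper, and the worker count here is fixed once at launch from machine availability; the actual ElastiJ scheduler decides each elastic task's allocation dynamically and in a decentralized manner. The `CyclicBarrier` stands in for the tight coupling among the workers of one SPMD region.

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of an elastic SPMD task (names are illustrative,
// not the ElastiJ API). The task's body runs on w workers, where w is
// chosen at launch time and clamped to [minW, maxW].
public class ElasticTaskDemo {
    interface SpmdBody {
        void run(int rank, int numWorkers, CyclicBarrier barrier);
    }

    // Launch the SPMD body on w workers; returns the w actually used.
    static int runElastic(int minW, int maxW, SpmdBody body) throws Exception {
        int avail = Runtime.getRuntime().availableProcessors();
        int w = Math.max(minW, Math.min(maxW, avail));
        CyclicBarrier barrier = new CyclicBarrier(w);
        Thread[] workers = new Thread[w];
        for (int r = 0; r < w; r++) {
            final int rank = r;
            workers[r] = new Thread(() -> body.run(rank, w, barrier));
            workers[r].start();
        }
        for (Thread t : workers) t.join();
        return w;
    }

    public static void main(String[] args) throws Exception {
        AtomicLong sum = new AtomicLong();
        final int n = 1_000_000;
        int used = runElastic(1, 4, (rank, w, barrier) -> {
            // Each worker sums a strided slice of [0, n); the barrier
            // models the phase boundary of a tightly coupled SPMD region.
            long local = 0;
            for (int i = rank; i < n; i += w) local += i;
            sum.addAndGet(local);
            try {
                barrier.await();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        System.out.println(used + " workers, sum=" + sum.get());
    }
}
```

The same body runs correctly whether the task is granted one worker or several, which is the essence of elasticity: the decision can track the current load of the machine without changing the task's code.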