Specialization for Synchronization

Eric Koskinen

Nondeterminism is a prevalent concept in the design and implementation of software algorithms. A key property of nondeterminism is its latent parallelism: a nondeterministic action can evaluate to multiple behaviors, and if at least one of these behaviors commutes with concurrent tasks, then there is a serializable execution of the action in parallel with those tasks. Unfortunately, existing implementations of the atomic paradigm -- optimistic as well as pessimistic -- are unable to fully exploit this latent parallelism, because they lack the means to guide concurrent tasks toward nondeterministic choices that minimize interference.
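
To make the idea of latent parallelism concrete, here is a minimal illustrative sketch in Java (not taken from our system; the Worklist class and takeAny method are hypothetical). The operation "take any element" is nondeterministic: any element of the set is a legal result, so two tasks calling it concurrently interfere only if they happen to resolve the choice to the same element; resolving it to different elements makes the two operations commute, yielding a serializable parallel execution.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class Worklist<T> {
        private final Set<T> items = ConcurrentHashMap.newKeySet();

        void add(T item) { items.add(item); }

        // Nondeterministic action: any element of the set is a legal result.
        T takeAny() {
            for (T candidate : items) {
                if (items.remove(candidate)) { // succeeds only if this task won the race
                    return candidate;
                }
            }
            return null; // set was empty
        }
    }
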

I will discuss our recent work on exploiting the parallelism that arises from nondeterminism. We introduce a novel concurrency paradigm in which tasks decide what to do based on the potential future behavior of other tasks. This stands in contrast with transactional-memory (TM) systems, in which conflict is determined by what has been done in the past (operations on, or commits to, the shared state). This form of coordination -- which we prove to be serializable and, in some cases, progress-ensuring -- allows tasks to specialize their nondeterministic computations so as to exploit the parallelism available to them.
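
The following sketch illustrates the flavor of future-directed coordination; the FutureFootprint interface, the Specializer class, and their methods are invented for illustration and are not the API of our system. Before resolving its nondeterministic choice, a task consults an over-approximation of what concurrent tasks may still do, and prefers a choice outside that footprint so that the operations commute.

    import java.util.Set;

    // Over-approximation of the elements concurrent tasks may still access.
    interface FutureFootprint<T> {
        Set<T> mayAccess();
    }

    class Specializer<T> {
        // Resolve a nondeterministic choice toward a non-interfering candidate.
        T pick(Set<T> candidates, FutureFootprint<T> others) {
            for (T c : candidates) {
                if (!others.mayAccess().contains(c)) {
                    return c; // choice commutes with the other tasks' future behavior
                }
            }
            // No conflict-free choice exists: fall back to any candidate,
            // accepting that coordination with the other tasks may be needed.
            return candidates.isEmpty() ? null : candidates.iterator().next();
        }
    }
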

I will also report on our Java/C# implementation, which has been applied to 12 greedy and dynamic programming algorithms and performs competitively with specialized parallelization schemes, achieving speedups of up to 5x.