The Contention Avoiding Priority Queue
Efficient and scalable concurrent priority queues are crucial for the performance of many multicore applications, e.g., for task scheduling and the parallelization of various algorithms. Linearizable concurrent priority queues with traditional semantics suffer from an inherent sequential bottleneck at the head of the queue. This bottleneck is the motivation for some recently proposed priority queues with more relaxed semantics. We present the contention avoiding concurrent priority queue (CA-PQ), a data structure that functions as a linearizable concurrent priority queue with traditional semantics under low contention, but activates contention avoiding techniques that give it more relaxed semantics when high contention is detected. CA-PQ avoids contention at the head of the queue by removing items in bulk from the global data structure, which also allows it to often serve DelMin operations without accessing memory that is modified by several threads. We show that CA-PQ scales well. Its cache-friendly design achieves performance twice that of state-of-the-art concurrent priority queues on several instances of a parallel shortest path benchmark.
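The bulk-removal idea described above can be sketched as follows. This is a deliberately simplified, single-lock illustration (not the authors' implementation, which uses a concurrent global data structure and contention detection): each thread drains a small batch of the smallest items from the shared queue into a thread-local buffer, so most DelMin calls touch only thread-local memory. The class name `BulkRemovalPQ` and the `batchSize` parameter are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;

// Simplified sketch of CA-PQ's bulk-removal technique: DelMin is served from
// a thread-local buffer and the shared structure is touched only once per
// batch, rather than once per operation.
public class BulkRemovalPQ {
    private final PriorityQueue<Integer> global = new PriorityQueue<>();
    private final ThreadLocal<ArrayDeque<Integer>> local =
            ThreadLocal.withInitial(ArrayDeque::new);
    private final int batchSize;  // hypothetical tuning parameter

    public BulkRemovalPQ(int batchSize) { this.batchSize = batchSize; }

    public void insert(int item) {
        synchronized (global) { global.add(item); }
    }

    // With multiple threads, items may be returned out of global priority
    // order once batches have been taken: this is the kind of relaxed
    // semantics CA-PQ accepts under contention.
    public Integer delMin() {
        ArrayDeque<Integer> buf = local.get();
        if (buf.isEmpty()) {
            synchronized (global) {  // one shared access per batch
                for (int i = 0; i < batchSize && !global.isEmpty(); i++) {
                    buf.addLast(global.poll());
                }
            }
        }
        return buf.pollFirst();  // null when buffer and global queue are empty
    }
}
```

In the real data structure the global component is itself a scalable concurrent structure, and the buffering is only activated when contention is detected; the sketch shows only why buffered DelMin operations avoid shared memory traffic.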
Publications
- Konstantinos Sagonas and Kjell Winblad. The Contention Avoiding Concurrent Priority Queue. In Languages and Compilers for Parallel Computing - 29th International Workshop (LCPC 2016), Revised Papers, pages 314-330, volume 10136 of LNCS, Rochester, NY, USA, September 2016. Springer.
An authors' pre-print containing an additional appendix with more benchmark results is also available here:
The Contention Avoiding Concurrent Priority Queue.
Code
- The code for the benchmark used in the paper.
- A cleaned-up version of the CA-PQ code and the single-source shortest path benchmark used in the paper has been included in the KLSM benchmark repository.