A Methodology for Implementing Highly Concurrent Data Structures
Description
A concurrent object is a data structure shared by concurrent processes. Conventional techniques for implementing concurrent objects typically rely on critical sections: ensuring that only one process at a time can operate on the object. Unfortunately, critical sections are poorly suited for asynchronous systems: if one process is halted or delayed in a critical section, other, non-faulty processes will be unable to progress. By contrast, a concurrent object implementation is non-blocking if it always guarantees that some process will complete an operation in a finite number of steps, and it is wait-free if it guarantees that each process will complete an operation in a finite number of steps. This paper proposes a new methodology for constructing non-blocking and wait-free implementations of concurrent objects. The object's representation and operations are written as stylized sequential programs, with no explicit synchronization. Each sequential operation is automatically transformed into a non-blocking or wait-free operation using novel synchronization and memory management algorithms. These algorithms are presented for a multiple instruction/multiple data (MIMD) architecture in which n processes communicate by applying read, write, and compare&swap operations to a shared memory. Link to the paper:
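To make the "stylized sequential programs, with no explicit synchronization" concrete, here is a minimal sketch in C of a small object whose operation is written in the functional style the methodology requires. The names (counter_t, counter_inc) and the use of malloc are illustrative assumptions; the paper allocates versions from a managed pool rather than from the heap.

    #include <stdlib.h>

    /* Illustrative small object: a counter (not taken from the paper). */
    typedef struct {
        int value;
    } counter_t;

    /* Functional-style operation: the old version is never modified; the
       operation returns a freshly allocated copy with the change applied.
       No locks or other explicit synchronization appear here. */
    counter_t *counter_inc(const counter_t *old)
    {
        counter_t *copy = malloc(sizeof *copy);   /* the paper uses a pool */
        copy->value = old->value + 1;
        return copy;
    }

It is this side-effect-free form that lets the transformation discussed in the answers below install the new version with a single compare&swap.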
Questions
Question 1:
This paper proposes a methodology to transform sequential procedures into concurrent ones. The sequential procedures have to fulfill certain criteria. What are they?
Question 2:
Please summarize how a sequential procedure dealing with small objects (such as the example in Figure 6) is transformed.
Stavros
Q1: Q2:
Kjell
Q1: Q2:
Martin
Q1: The procedures must be pure functions, that is, they must not modify the object but instead return a new copy of the object with the operation applied. Q2: Each call to the sequential procedure is replaced with a retry loop: the current version of the object is read, the sequential procedure produces a new version, and a CAS attempts to install it; the procedure is repeated until the CAS succeeds. Memory is managed by keeping a pool of objects and attaching a usage counter to each object, which is zero when the object is unused. To allocate an object, the pool is searched until an object whose counter is 0 is found. To deallocate an object, its counter is decreased. The usage counter is increased for all accessed objects, and three sets are used to keep track of which objects need to be released again in the different situations.
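A rough sketch of the pool-with-usage-counters scheme described above; the structure, names, and sizes (block_t, POOL_SIZE, alloc_block, retain_block, release_block) are illustrative rather than taken from the paper, and a real implementation would manipulate the counters with atomic operations:

    #define POOL_SIZE 64

    typedef struct {
        int  used;             /* usage counter; 0 means the block is free */
        char data[32];         /* payload: one version of the object       */
    } block_t;

    static block_t pool[POOL_SIZE];

    /* Allocate: scan the pool for a block whose usage counter is zero. */
    block_t *alloc_block(void)
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (pool[i].used == 0) {
                pool[i].used = 1;
                return &pool[i];
            }
        }
        return 0;              /* pool exhausted */
    }

    /* Accessing a block bumps its counter; releasing it decrements the
       counter, and the block becomes reusable once it drops back to zero. */
    void retain_block(block_t *b)  { b->used++; }
    void release_block(block_t *b) { b->used--; }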
Jonatan
Q1: They must be written in a functional style. Q2:
Magnus
Q1: The sequential operations must be implemented in a "functional style". Thus, such an operation on an object is not allowed to change the state of the object. Q2: The small object protocol starts off with a pointer to the initial shared object (the root pointer). The block pointed to by the root pointer is "frozen", so that it cannot be modified by another process until it is unfrozen. The process then performs the serial operation, obtaining a pointer to the modified object, and attempts to compare-and-swap the new pointer into the shared root pointer. If the CAS was successful, we are done (after some unfreezing and tidying up). Otherwise, the whole procedure is repeated until it succeeds.
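As a hedged illustration of this retry loop, here is a sketch using C11 atomics. It leaves out the freeze/unfreeze steps and the pool-based reclamation (the displaced version is simply never reclaimed here), and all names (obj_t, root, concurrent_inc) are invented for the example:

    #include <stdatomic.h>
    #include <stdlib.h>

    typedef struct { int value; } obj_t;

    /* Shared root pointer, assumed to be initialized to a first version. */
    static _Atomic(obj_t *) root;

    void concurrent_inc(void)
    {
        for (;;) {
            obj_t *old = atomic_load(&root);

            /* Apply the sequential, functional-style operation to a copy. */
            obj_t *copy = malloc(sizeof *copy);
            copy->value = old->value + 1;

            /* Try to swing the root pointer to the new version. */
            if (atomic_compare_exchange_strong(&root, &old, copy))
                return;                    /* success: the operation took effect */

            free(copy);                    /* another process won; retry */
        }
    }

The displaced old version is deliberately not freed in this sketch: reclaiming it safely is exactly what the usage counters and bookkeeping sets mentioned above are for.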
Sofia
Q1: They must be written such that any operation that changes the object state returns a new, different copy of the object instead of modifying it in-place. Q2:
Afshin
Q1: Q2:
result = freeze(object)
execute the seq_proc
IF object's state has changed after this operation THEN
    unfreeze all freed blocks
ELSE
    unfreeze all allocated blocks
END IF
Yunyun
Q1 Q2
David
Q1 Q2
Andreas
Q1 Q2
Fred
Q1: Copy the data (locally) if you want to update them. Q2: Retry-loop