Wednesday, August 18, 2010

Concurrent Python using transactional memory

Speaker: Fuad Tabba

"Parallelism is hard"
  • Figure out the parallelism in the application.
  • Figure out the required synchronisation: how do you protect the critical sections while avoiding race conditions, deadlocks, and livelocks? Coarse locking takes away the parallelism you had hoped to achieve.
  • Locks have inherent overhead.
Python (CPython) uses one "Global Interpreter Lock" (GIL) to protect the interpreter's shared internal state, so only one thread executes Python bytecode at a time.
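The point about locks serialising work fits in a few lines of Python. Note that the GIL protects the interpreter's internals, not application data: `counter += 1` compiles to several bytecodes and can still race across threads, so an explicit lock is needed, and while it is held the increments run one at a time. (A minimal sketch; the names are illustrative.)

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    """Increment the shared counter n times inside a critical section."""
    global counter
    for _ in range(n):
        with lock:  # serialises the threads: correct, but no parallelism here
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock; without it, the result can fall short
```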

Transactional memory borrows from database transactions (ACID: Atomicity, Consistency, Isolation, Durability), though durability does not really apply to volatile memory.
Transactional memory abstracts away critical sections so that parallel programs become easier to write: you declare what must be atomic, not how to lock it.
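The idea can be sketched in pure Python as a toy software transactional memory. The names (`TVar`, `Transaction`, `atomic`) are illustrative, not a real library API: reads record the version they saw, writes go into a private buffer, and commit applies the buffer only if no read version changed, retrying the whole transaction otherwise.

```python
import threading

class TVar:
    """Transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> version seen at first read
        self.writes = {}  # TVar -> tentative value (the "write buffer")

    def read(self, tvar):
        if tvar in self.writes:           # read your own tentative write
            return self.writes[tvar]
        if tvar not in self.reads:        # remember the version we saw
            self.reads[tvar] = tvar.version
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value         # buffered until commit

    def commit(self):
        tvars = sorted(set(self.reads) | set(self.writes), key=id)
        for tv in tvars:                  # fixed lock order avoids deadlock
            tv.lock.acquire()
        try:
            for tv, seen in self.reads.items():
                if tv.version != seen:
                    return False          # conflict detected: abort
            for tv, val in self.writes.items():
                tv.value = val
                tv.version += 1
            return True
        finally:
            for tv in tvars:
                tv.lock.release()

def atomic(fn):
    """Run fn(tx) until it commits cleanly."""
    while True:
        tx = Transaction()
        fn(tx)
        if tx.commit():
            return

# Example: atomically move 30 from one account to another.
a, b = TVar(100), TVar(0)
def transfer(tx):
    tx.write(a, tx.read(a) - 30)
    tx.write(b, tx.read(b) + 30)
atomic(transfer)
```

With this style the programmer writes `transfer` as straight-line code; the retry-on-conflict machinery replaces hand-placed locks.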

In the Sun Rock Processor (which has limited hardware support for transactional memory), there are two caches:
  • L1 Cache tracks memory locations that have been read and written to.
  • Write buffer stores tentative writes (uncommitted transactions).
The cache coherence protocol is used to detect transaction conflicts. The physical size of the caches limits the size of a transaction. When a transaction commits, the new values have to be propagated to every processor. Transactions can abort/fail for unspecified reasons.
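Because best-effort hardware like Rock's can abort a transaction for unspecified reasons, software needs a fallback path. A common pattern is to retry a few times and then take a global lock instead. The sketch below only simulates this (the abort probability stands in for cache overflows, interrupts, and conflicts); the class and method names are hypothetical.

```python
import random
import threading

class BestEffortTM:
    """Toy model of best-effort hardware TM: transactions may abort
    for unspecified reasons, so software keeps a lock-based fallback."""

    def __init__(self, abort_rate=0.5, seed=0):
        self._rng = random.Random(seed)   # deterministic "spurious" aborts
        self.abort_rate = abort_rate
        self.fallback_lock = threading.Lock()

    def _try_transaction(self, fn):
        # Simulated spurious abort (real causes include cache overflow,
        # interrupts, and coherence conflicts).
        if self._rng.random() < self.abort_rate:
            return False
        fn()                              # "transaction" committed
        return True

    def run(self, fn, max_retries=3):
        for _ in range(max_retries):
            if self._try_transaction(fn):
                return "hw"               # committed on the fast path
        with self.fallback_lock:          # give up and serialise
            fn()
        return "lock"

# Example: every operation completes, by one path or the other.
tm = BestEffortTM(abort_rate=0.9, seed=42)
state = {"n": 0}

def bump():
    state["n"] += 1

paths = [tm.run(bump) for _ in range(10)]
```

The guarantee is that the work always finishes: optimistically in "hardware" when the transaction survives, pessimistically under the fallback lock when it keeps aborting.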
