Adventures with concurrent programming in Java: A quest for predictable latency

Concurrent programming with locks is hard. Concurrent programming without locks can be really hard. Concurrent programming with relaxed memory ordering and predictable latency semantics is said to be only for wizards. This talk focuses on a decade-long quest to discover algorithms that provide very high throughput while keeping latency low and predictable.
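As a flavour of the difference, here is a minimal sketch (not from the talk, class names are illustrative) contrasting a lock-based counter with a lock-free one built on compare-and-swap:

    import java.util.concurrent.atomic.AtomicLong;

    public final class Counters {

        // Lock-based: every increment contends for the monitor, so latency
        // grows unpredictably as more threads queue up on the lock.
        static final class LockedCounter {
            private long value;

            public synchronized long increment() {
                return ++value;
            }
        }

        // Lock-free: threads race with compareAndSet and retry on failure.
        // No thread is ever blocked, which helps keep latency predictable.
        static final class CasCounter {
            private final AtomicLong value = new AtomicLong();

            public long increment() {
                long current;
                do {
                    current = value.get();
                } while (!value.compareAndSet(current, current + 1));
                return current + 1;
            }
        }
    }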

We will cover some fundamental theory of concurrency and then compare various approaches to the same problem so that we can measure the impact on throughput and latency. We'll also show how some of these algorithm implementations get way more interesting given the new features in Java 8.
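For example (a hedged illustration of two Java 8 additions often used in this space; the talk's own examples may differ), lambdas combined with AtomicLong.getAndUpdate() move the retry loop into the JDK, and LongAdder trades read cost for much higher write throughput under contention:

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public final class Java8Concurrency {

        // The update lambda may be invoked more than once under contention,
        // so it must be side-effect free.
        static long nextSequence(AtomicLong sequence) {
            return sequence.getAndUpdate(current -> current + 1);
        }

        // LongAdder spreads contended increments across striped cells and
        // only sums them on read.
        static final class ThroughputCounter {
            private final LongAdder counter = new LongAdder();

            void record() {
                counter.increment();
            }

            long total() {
                return counter.sum();
            }
        }
    }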

This talk is aimed at programmers interested in advanced concurrency who want to develop algorithms with very predictable response times at all levels of throughput, algorithms that push modern CPUs to their limits.


Martin Thompson 

Martin is a high-performance and low-latency specialist, with experience gained over two decades working on the bleeding edge of large transactional and big-data systems. He believes in Mechanical Sympathy, i.e. that applying an understanding of the hardware to the creation of software is fundamental to delivering elegant high-performance solutions. The Disruptor framework is just one example of what his mechanical sympathy has created.

Martin was the co-founder and CTO of LMAX. He blogs at mechanical-sympathy.blogspot.com, and can be found giving training courses on performance and concurrency when he is not cutting code to make systems better.