Sunday, December 13, 2009

Concurrency Hides Latency

To paraphrase Brian Goetz's excellent talk The Concurrency Revolution: The Hardware Story at Devoxx09: "concurrency hides latency". In other words: use concurrency to combat latency. This seems to be a common theme in current-day software development.

In his talk, Brian explained how CPU speeds have increased at a much faster pace than memory speeds over the last few decades. This has resulted in a situation where going to main memory is now extremely expensive in terms of CPU time: fetching data from main memory can take several hundred clock cycles. Brian captured the problem in another very quotable statement: "memory is the new disk". Hardware designers have tried to minimize the impact of memory latency on performance with clever tricks like speculative execution and extensive caching. However, we've now reached a point where these techniques are breaking down because the gap between CPU and memory speed is simply too big. As a result we're seeing more and more multi-core CPUs: slower in absolute terms, but designed for concurrency so that latency is less of a problem.
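
To get a feel for the numbers, here is a rough, self-contained Java sketch (my own illustration, not from the talk) contrasting a sequential scan with dependent random loads over the same array. On typical hardware the random walk is many times slower, because each load has to wait out a full trip to main memory; exact figures vary by machine and JVM warm-up:

```java
import java.util.Random;

public class MemoryLatencyDemo {
    static final int N = 1 << 24; // 16M ints (~64 MB), far larger than any cache

    public static void main(String[] args) {
        int[] next = new int[N];
        for (int i = 0; i < N; i++) next[i] = i;

        // Sattolo's algorithm: shuffle the array into a single cycle of
        // length N, so chasing it visits every slot in a random order.
        Random rnd = new Random(42);
        for (int i = N - 1; i > 0; i--) {
            int j = rnd.nextInt(i); // 0 <= j < i, never j == i
            int tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        // Sequential pass: the hardware prefetcher keeps the CPU fed.
        long t0 = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < N; i++) sum += next[i];
        long seqNanos = System.nanoTime() - t0;

        // Pointer chase: each access depends on the previous load, so most
        // of them pay the full round-trip to main memory.
        t0 = System.nanoTime();
        int idx = 0;
        for (int i = 0; i < N; i++) idx = next[idx];
        long chaseNanos = System.nanoTime() - t0;

        // Print the results (and sum/idx, to keep the loops from being optimized away).
        System.out.printf("sequential: %d ms, pointer chase: %d ms (sum=%d, idx=%d)%n",
                seqNanos / 1000000, chaseNanos / 1000000, sum, idx);
    }
}
```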

In web applications, a round-trip to the server involves a lot of latency, so web applications increasingly do more work on the client in JavaScript and hide this latency by concurrently doing other things: think AJAX. In a typical back-end business processing system, a round-trip to the database also involves a lot of latency, so we use techniques like event-driven architectures to process several transactions concurrently, again hiding the latency cost.
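
As a minimal back-end sketch, assume two independent "remote calls" of about 200 ms each (the names and delays are invented for illustration). Plain java.util.concurrent machinery is enough to overlap their latencies instead of paying them one after the other:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HideLatencyDemo {
    // Stand-in for a remote call dominated by latency rather than work.
    static Callable<String> slowCall(final String name) {
        return new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(200); // simulate ~200 ms of network/database latency
                return name + "-result";
            }
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long t0 = System.nanoTime();

        // Issue both calls at once; their latencies overlap instead of adding up.
        Future<String> customer = pool.submit(slowCall("customer"));
        Future<String> orders = pool.submit(slowCall("orders"));

        System.out.println(customer.get() + ", " + orders.get());
        System.out.printf("elapsed: ~%d ms (back-to-back would be ~400 ms)%n",
                (System.nanoTime() - t0) / 1000000);
        pool.shutdown();
    }
}
```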

More and more, performance is about data, not code (another quote from Brian's talk)! I've seen several situations where the efficiency of a piece of software was completely determined by its data access strategy. Relatively speaking, fetching data from the database is so expensive that it hardly matters whether you process that data in a naive or unoptimized way. All of this implies that the data access patterns and data structures used by an application are a key design question. If you want a fast application, data is your key concern, and you can fight the latency of getting to that data by using concurrency.
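
One illustrative sketch of that point (again my own example, not from the talk): summing a matrix does exactly the same amount of arithmetic either way, yet traversing it against the memory layout is typically several times slower on real hardware, purely because of the access pattern:

```java
public class AccessPatternDemo {
    static final int N = 4096; // 4096 x 4096 ints, ~64 MB in total

    public static void main(String[] args) {
        int[][] m = new int[N][N]; // in Java, each row is one contiguous int[]

        // Row-major: walks each row sequentially, so caching and
        // prefetching work in our favor.
        long t0 = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];
        long rowMajor = System.nanoTime() - t0;

        // Column-major: the same work, but every access jumps to a
        // different row array, defeating the cache.
        t0 = System.nanoTime();
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];
        long colMajor = System.nanoTime() - t0;

        System.out.printf("row-major: %d ms, column-major: %d ms (sum=%d)%n",
                rowMajor / 1000000, colMajor / 1000000, sum);
    }
}
```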

3 comments:

  1. Latency is the time between stimulus and response. The 150 or so clock cycles spent waiting for a fetch are dead time. Today dead time can be a huge component of response time, and when it happens due to memory bandwidth issues it can be immeasurable. To this end I have been suggesting to people that we need to pay attention to memory bandwidth in order to gain some understanding of when we are being bitten by this particular type of response time rot. For example, on the laptop I'm using to write this response, my memory bandwidth is ~10 gigabits per second.

    Kirk

  2. Another concern is code path length. Our code paths continue to get longer and longer with more and more abstraction, and this erodes the ability of concurrency to alleviate latency.

  3. Could you elaborate a bit on how a long code path influences the ability of concurrency to hide latency?
