
Java Concurrency under the hood by Gleb Smirnov


The second VirtualJUG session in August saw Gleb Smirnov give his first vJUG session, a hardcore technical look at concurrency titled “Java Concurrency Under the Covers”. And yes, we went deep under the covers! In the session, Gleb looks at concurrency issues and why we need a Java Memory Model. Incidentally, the vJUG also had the chance to learn about the Java Memory Model from Aleksey Shipilev earlier in the year; you can catch up with that session, including a write-up and a full replay, on RebelLabs.

Below is the full video of Gleb’s session for your convenience. If you don’t have time to view it all, scroll down and we’ll give you our TL;DR version!


Gleb started with an excellent summary of what’s often referred to as Mechanical Sympathy: the idea that you must understand the underlying technology in order to build on top of it proficiently. Martin Thompson, a big proponent of mechanical sympathy, compares it to a Formula 1 driver understanding their car. They don’t need to know the exact mechanics of the car and precisely how it works, but a reasonable working knowledge lets them get more out of the car when driving it. Similarly, as a developer, understanding the underlying Java Memory Model, and how the hardware uses its cores, gives you a much better picture of how your code is actually going to execute.

Gleb starts with a simple piece of code and asks: can the assertion in this code fail? How about if it were run on the x86 architecture? What if the architecture is different? The code Gleb showed is below.

Example of concurrent code which requires a memory model
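
The slide itself isn’t reproduced here, but the example was along these lines: one thread publishes a value and then raises a flag, while another spins on the flag and asserts it also sees the value. Here is a minimal sketch (class and field names are mine, and the flag is already marked volatile so the reader’s spin loop is guaranteed to terminate):

```java
public class VisibilityTest {
    static volatile boolean finished; // try removing 'volatile' and see what the JMM no longer promises
    static int value;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            value = 42;       // write the data first...
            finished = true;  // ...then publish it via the flag
        });
        writer.start();

        while (!finished) { } // spin until the flag becomes visible

        // With the volatile flag, the volatile write/read pair establishes
        // happens-before, so this check cannot fail:
        if (value != 42) throw new AssertionError("saw finished without value");
        System.out.println("value = " + value);
        writer.join();
    }
}
```

Remove volatile from finished and the JMM no longer guarantees the check holds; in fact the reader might spin forever, since nothing forces the write to the flag to ever become visible.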

The theoretical answer lies in the Java Memory Model. The JMM says you need the volatile keyword on the finished field to be assured the assertion succeeds. But, as we know, theory is the sober relative of the drunken reality that practice lives in. Gleb takes a glimpse into the practical world and, using a tool called jcstress, runs the code an extreme number of times to see how often the assertion fails. Well, actually the code wasn’t exactly the same, but it was very similar: essentially it did some writes and some reads, and the question was whether they would stay in order. Oh heck, I might as well just show you the code!

  int value;
  int finished;

  @Actor
  public void actor1() {
    // writes, in program order
    value = 1;
    value = 2;
    finished = 1;
    value = 3;
  }

  @Actor
  public void actor2(IntResult2 r) {
    // reads, in program order
    int sy = finished;
    int sx = value;
    r.r1 = sy;  // what did we observe for finished?
    r.r2 = sx;  // what did we observe for value?
  }

So in this instance, we would (naively) expect that the actor2 method could never see finished as 1 with value as 0, or finished as 1 with value as 1: if the lines in actor1 were executed sequentially, those combinations would be impossible. Well, given what we’ve already said, you won’t be surprised to hear that Gleb shows a number of runs where finished is 0 and value is 3, as well as finished being 1 and value being 0. It happens only a small number of times, but it is possible.
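
You can check that intuition by brute force: interleave actor1’s four writes with actor2’s two reads in every order that preserves each actor’s own program order, and collect the (finished, value) pairs a sequentially consistent reader could observe. This is a sketch I wrote to verify the claim, not code from the talk:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class SeqConsistency {
    static Set<String> outcomes = new LinkedHashSet<>();

    // Explore every interleaving that keeps each actor's own order.
    // actor1: value=1; value=2; finished=1; value=3;  (w = steps taken)
    // actor2: r1=finished; r2=value;                  (r = reads taken)
    static void explore(int w, int r, int value, int finished, int r1, int r2) {
        if (r == 2) {  // both reads done: record what actor2 observed
            outcomes.add("finished=" + r1 + " value=" + r2);
            return;
        }
        if (w < 4) {   // take actor1's next step
            int nv = value, nf = finished;
            switch (w) {
                case 0: nv = 1; break;
                case 1: nv = 2; break;
                case 2: nf = 1; break;
                case 3: nv = 3; break;
            }
            explore(w + 1, r, nv, nf, r1, r2);
        }
        // take actor2's next step
        if (r == 0) explore(w, 1, value, finished, finished, r2);
        else        explore(w, 2, value, finished, r1, value);
    }

    public static void main(String[] args) {
        explore(0, 0, 0, 0, 0, 0);
        System.out.println(outcomes);
        if (outcomes.contains("finished=1 value=0") || outcomes.contains("finished=1 value=1"))
            throw new AssertionError("impossible outcome under sequential consistency");
        System.out.println("finished=1 with value=0 or value=1 never occurs");
    }
}
```

Under sequential consistency the search finds only (0,0), (0,1), (0,2), (0,3), (1,2) and (1,3), which is exactly why the (1,0) and (1,1) results jcstress observes on real hardware are so interesting.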

Why does this happen? Because of optimizations at every layer, which aren’t intended to change behaviour but can do so. Gleb uses cache coherency as an example of this. He describes a multi-processor scenario in which each processor holds a cached copy of two variables, finished and value, as shown below.

Cache Consistency illustration

Now, if we send an instruction to set the variable called value to 10, it might go to the first CPU. That CPU then sends a message to the second CPU telling it to invalidate its cached copy of value. That communication takes time, so by the time we make the call to set the finished variable to true, we’re already late! The underlying layers might therefore choose to run both writes, to finished and to value, asynchronously. Uh oh, did someone say asynchronous? There be dragons! This uncertainty is exactly why the Java Memory Model exists, and why such an exciting and enticing specification is needed.

A little later, Gleb goes much, much further under the covers. So far, in fact, that it’s hard to look back and even see the covers! He first adds the volatile keyword to the finished variable in the previous code example and compiles it. Looking at the class file with a regular javap invocation, we can see the putfield and getfield operations in the bytecode of each method, as shown below.

Javap output of the code from the post above

Gleb then navigates through the JDK source code to track down how the putfield operation is handled. He finds some interesting code in the GraphBuilder.cpp file which deals with putfield. OMG, .cpp! Yeah, now we’re under the covers alright! Digging deeper through the code, Gleb points out a check for whether the referenced field is volatile, and shows the additional actions that occur as a result, including a call that adds a membar (memory barrier) release. At this stage, it’s better to watch the video and see Gleb navigate the code than to read about it.
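
The membar release emitted for a volatile write corresponds to release semantics, and the matching volatile read gives acquire semantics. Since Java 9 you can request exactly those orderings yourself through VarHandle, without marking the field volatile. This sketch isn’t from the talk, just an illustration of the same barriers surfacing in the API:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class ReleaseAcquire {
    int value;
    int finished; // plain field: the ordering comes from the VarHandle accesses

    static final VarHandle FINISHED;
    static {
        try {
            FINISHED = MethodHandles.lookup()
                    .findVarHandle(ReleaseAcquire.class, "finished", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void writer() {
        value = 42;                   // plain write
        FINISHED.setRelease(this, 1); // release: earlier writes can't be reordered below this
    }

    int reader() {
        if ((int) FINISHED.getAcquire(this) == 1) { // acquire: later reads can't move above this
            return value; // guaranteed to see 42 if the flag was seen
        }
        return -1;
    }

    public static void main(String[] args) {
        ReleaseAcquire ra = new ReleaseAcquire();
        ra.writer();
        System.out.println(ra.reader());
    }
}
```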

Go watch the session; it’s educational, technical and inspiring!

Speaker Interview

As Oleg is still out in the forest learning about natural language processing, I had the pleasure of interviewing Gleb as well this week! Here’s our interview, enjoy!


That was a good write-up, right? Well, you must have enjoyed it, otherwise you wouldn’t have got this far! Why not subscribe to our feed and we’ll send you more great content as quickly as we write it! Enter your email below to join our RebelLabs community.