
Java Concurrency Flavors Followup and Survey Results


About a month ago, we looked at different models of dealing with concurrency and parallel computations in Java. The result was a blog post discussing how the same sample problem can be approached using different tools:

  • Bare threads
  • Executors
  • Parallelisation using ForkJoin pool
  • Akka actors

We didn’t touch on the subject of Fibers (lightweight threads), but you can read about what Fibers are, and how, when and why to use them, in another post.

It turns out that the amount of code you have to produce to solve a simple parallel computation problem is almost the same whichever method you use to organise your code. However, this is clearly not true for larger systems: the effort you have to put into configuration and management overhead plays a much larger role as your system’s complexity grows.

In this post I want to follow up on that discussion, show the data collected by the survey (by the way, you can still respond, as we’re likely to revisit the results in the future: the 1-click survey “what do you use for concurrency in Java?”) and show one more modern way to organise asynchronous code execution for parallelisation purposes.

Let’s get down to numbers

Just to get this out of the way, here’s the data. The original post featured a one-question survey asking readers which technology they usually use when they need to spice up their code with some concurrency. The response choices mostly reflected the options we explored in the original post. While we don’t have a huge sample size, there’s still a lot we can take from the results:


Two thirds of the respondents prefer executor or completion services based models for concurrent computation.

Task-based design is when you submit individual, independent units of work to be processed in a common way. This approach typically suits modern applications well: by limiting the resources that the processing can consume simultaneously, you can easily configure the system for the required throughput without risking the stability of the entire JVM, as you would if you created your own threads.
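To make the task-based style concrete, here is a minimal sketch (not from the original post; the squaring work is a hypothetical stand-in for real tasks) that submits independent units of work to a bounded pool instead of spawning a thread per task:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class TaskSubmission {
    // Submit independent units of work to a fixed-size pool; the pool size
    // caps how many tasks run simultaneously, protecting the JVM.
    static List<Integer> squares(List<Integer> inputs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Integer n : inputs) {
                futures.add(pool.submit(() -> n * n)); // each task is a Callable
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : futures) {
                results.add(f.get()); // blocks until that task completes
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```

The key design point is that the number of tasks is decoupled from the number of threads: you can submit thousands of tasks against a pool of four workers.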

Surprisingly enough, the built-in executor in the form of the ForkJoinPool does not enjoy the same level of popularity, sitting at just 13%. The main arguments against it are the need to upgrade to Java 8 to enjoy the more performant FJP, and the risk of hanging the whole system through careless, over-optimistic use of the common FJP.
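One common way to avoid that risk is to run parallel work inside a dedicated ForkJoinPool rather than the common one. A hedged sketch of the idiom (the sum-of-squares computation is a placeholder for real work):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

class DedicatedPool {
    // A parallel stream executed from inside a ForkJoinPool task uses that
    // pool, so a slow or blocking task cannot stall the JVM-wide common pool.
    static long sumOfSquares(int n) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(4); // our own pool, our own parallelism
        try {
            return pool.submit(() ->
                IntStream.rangeClosed(1, n).parallel()
                         .mapToLong(i -> (long) i * i)
                         .sum()
            ).get();
        } finally {
            pool.shutdown();
        }
    }
}
```

This keeps misbehaving workloads isolated: if these tasks block, only this pool suffers, not every other user of the common FJP.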

The second most popular method is to use bare threads, manually creating them when needed. This approach can be incredibly memory intensive, as well as hard to configure and maintain in larger systems. However, for one-off tasks it is probably the easiest option to implement and forget about. If there are performance problems from having too many threads later, you can always come back and refactor, right?
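The bare-thread, fire-and-forget style can be sketched in a few lines (the 6 * 7 computation is a hypothetical stand-in for real background work):

```java
class BareThread {
    // Create a thread by hand for a one-off task. Easy to write, but each
    // thread costs stack memory and nothing caps how many get created.
    static volatile int answer;

    static int computeInBackground() throws InterruptedException {
        Thread worker = new Thread(() -> answer = 6 * 7);
        worker.start();
        worker.join(); // wait for the result here; a true fire-and-forget never joins
        return answer;
    }
}
```

Note the `volatile` field: without it, the result written by the worker thread is not guaranteed to be visible to the joining thread under the Java memory model (join itself establishes the ordering here, but the qualifier makes the intent explicit).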

Other options, like Akka actors or other actor frameworks, seem less popular. That said, given the size of the respondent base, Akka actors are about as popular as the ForkJoinPool. And since almost any Scala user is probably using Akka actors, I cannot comment on how many Java projects use them.

That’s it for the survey data analysis. It was an ad-hoc one-question survey, created just for the blog post and not promoted in any way, so the results perhaps represent the average RebelLabs reader, but I definitely don’t recommend boldly extrapolating them onto the whole Java community.

CompletableFuture

When discussing parallelisation and hiding execution in the background, one should not miss the recent improvements Java 8 offers. While it was released almost a year ago, software has inertia, and despite the number of projects actively migrating to Java 8, codebases don’t get updated automatically, even with the low-hanging fruit of rewriting everything using lambdas as an incentive.

However, making use of CompletableFuture can make your concurrent code more readable and perhaps more efficient. Let’s implement a CompletableFuture based solution to the same simple problem that we looked at in the previous post.

The task: implement a method that takes a message and a list of strings that correspond to search engine query pages, issues HTTP requests to query the message, and returns the first result, preferably as soon as it is available.

Here’s how to do it leveraging the completion based design of async computations with CompletableFuture:

private static String getFirstResultCompletableFuture(String question, List<String> engines) {

  CompletableFuture<Object> result = CompletableFuture.anyOf(engines.stream()
    .map((base) -> CompletableFuture.supplyAsync(() -> {
      String url = base + question;
      return WS.url(url).get();
    }))
    .collect(Collectors.toList()).toArray(new CompletableFuture[0]));

  try {
    return (String) result.get();
  } catch (InterruptedException | ExecutionException e) {
    return null;
  }
}
See how easy and readable the code is. A couple of things happen here: CompletableFuture.supplyAsync executes each query in the background on the common ForkJoinPool, and CompletableFuture.anyOf returns the first result that becomes available. At the end we wait for the result with a blocking get(), but we could just as well consume the result asynchronously, eliminating the wait entirely.
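To show what the fully asynchronous variant looks like, here is a sketch that replaces the blocking get() with a completion callback. The string suppliers are placeholders for the real HTTP calls, and the latch exists only so the demo can observe completion:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class AsyncConsume {
    // Instead of blocking with get(), attach a callback that runs whenever
    // the first result arrives; the calling thread never waits on the result.
    static String firstResult() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder answer = new StringBuilder();

        CompletableFuture.anyOf(
                CompletableFuture.supplyAsync(() -> "engine-one result"),
                CompletableFuture.supplyAsync(() -> "engine-two result"))
            .thenAccept(result -> {   // invoked when the first future completes
                answer.append(result);
                done.countDown();
            });

        done.await(5, TimeUnit.SECONDS); // demo-only: wait so we can return the value
        return answer.toString();
    }
}
```

In a real application you would chain further stages with thenApply or thenCompose rather than collecting the value into a local variable.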

In addition, JDK 9 is expected to include improvements to the CompletableFuture design. The most obvious missing pieces are probably timeouts and delayed execution; almost every use case for asynchronous computation has some notion of a timeout.
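For what that could look like, here is a sketch of the timeout support that landed in Java 9 as orTimeout and completeOnTimeout (so this example assumes a Java 9+ runtime; the slow supplier is a stand-in for a real remote call):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

class TimeoutDemo {
    // Java 9's completeOnTimeout substitutes a fallback value if the
    // computation does not finish within the given time window.
    static String slowWithFallback() {
        return CompletableFuture.supplyAsync(() -> {
                try { Thread.sleep(5_000); } catch (InterruptedException e) { }
                return "real result";
            })
            .completeOnTimeout("fallback", 100, TimeUnit.MILLISECONDS)
            .join(); // completes with the fallback after roughly 100 ms
    }
}
```

The sibling orTimeout completes the future exceptionally instead of supplying a value, which is the better fit when a missing result should propagate as a failure.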

So this is one of the changes in the upcoming Java 9 that makes me very excited. Not everything should be included in the platform; some things are better implemented in libraries. But given the growing interest in parallelisation and implicit platform-level parallelisation (think parallel streams or automatic parallel processing), these changes are very welcome!

Enterprise concurrency

All in all, doing concurrency in Java is very pleasant. We have the support of the platform: various locks, atomic data structures, parallelisation support and libraries that let us easily leverage the potential of modern machines.

However, as soon as you enter the lovely world of enterprise Java, application servers and multiple deployments on a single JVM, the world becomes much fuzzier.

First of all, problems with overusing the common ForkJoinPool become the server’s problem and are less under your control. Indeed, if you don’t control all the deployments on the server, including admin apps, consoles and so on, then whatever they do to the common pool can affect your application’s performance. Our post about the dangers of parallel streams provides examples of how stalling the FJP can hurt your performance.

The effort to specify a portably configurable ManagedExecutorService lets you configure executor services in Java EE application servers; however, if you pick something more lightweight, like Jetty, you’ll have to port the configuration manually.

In general though, the industry is moving towards microservices and smaller deployments, where you control everything and can use plain Java SE concurrency and parallelisation tools without worrying about cohabitants.

To sum it up

In this post we looked at the numbers obtained in the survey about the different Java concurrency approaches developers use, added a CompletableFuture based flavor to the library of examples started in the previous post, and discussed the problems with parallelising tasks in larger enterprise environments.

So how do you do parallel or background computations? Share your recipe in the comments below or ping me on Twitter: @shelajev.