
10 Reasons Why Java Now Rocks More Than Ever: Part 1 – The Java Compiler

When I was away from Java to work on the software for the Eigenharp instruments, I had to develop cross-platform real-time audio software with massive data throughput and consistently low latency. The development was mainly done in C++ with the Juce library and some glue code in CPython. Given the requirements of musical performance software, these technology choices were inevitable, but they did make me realize how many things I had taken for granted in the Java ecosystem.

But could I think of ten things that make Java rock? It wasn’t terribly hard to prepare a list of 10 aspects of Java that we perhaps take for granted. You’re most certainly aware of all of these, but saying them out loud can serve as a nice feel-good moment about your platform of choice when times get tricky during development. Here are my favorite things about Java:

  1. The Java Compiler
  2. The Core API
  3. Open-Source
  4. The Java Memory Model
  5. High-Performance JVM
  6. Bytecode
  7. Intelligent IDEs
  8. Profiling Tools
  9. Backwards Compatibility
  10. Maturity With Innovation

Let’s now take a look at #1, the Java Compiler, in more detail; we’ll cover the other reasons in follow-up posts.

What rocks about the Java Compiler

The Java compiler is probably the first component of the platform that you encounter as a developer. You use it to compile your first ‘hello world’-style examples, and when you work in the Java language it’s your gateway to turning source code into an executable format.

Obviously, the Java compiler can’t exist without bytecode. Apart from the many benefits of bytecode itself, which we’ll cover in detail later, this intermediate representation is also what allows the JIT (Just-In-Time) compiler to kick in at runtime.
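
If you’re curious what that intermediate representation looks like, the JDK ships with the javap disassembler. A minimal sketch, assuming a trivial HelloWorld class (a hypothetical example, but javac and javap work exactly like this):

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world");
    }
}

Compile it and disassemble the result:

javac HelloWorld.java
javap -c HelloWorld

The output lists the bytecode instructions that the JIT will later turn into optimized native code, on your users’ actual hardware instead of on your build machine.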

The JVM’s JITting like a champ

You might think that the JIT is just a performance improvement to leave interpreters in the dust and rival native performance, but it goes much further than that. When you compile straight to native code, as with C++, you have to deal with static optimization decisions that are locked in at compile time. Having to enable profiling up front drastically differentiates builds with run-time inspection capabilities from those targeted at distribution. Ahead-of-time optimization shifts around many instructions, sometimes moving the compiled form very far away from what you expressed in your source code, and sometimes even introducing new problems that have no clear relation to the logic you’ve written.

In practice, it’s quite common to just step through the optimization levels, manually testing the produced binaries in the hope of finding one that works. Without a deep understanding of the compiler internals, it’s often impossible to predict how the optimizer will change the execution of your code and which optimization levels will work. Sometimes the only way to identify why a feature stopped working is to step backwards in time through changesets, since there’s no longer any relationship between your code and the problems that manifest. As projects grow over time, it’s quite common to be stuck at less aggressive optimization levels because it has become almost impossible to determine why higher ones introduce problems.
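
The JVM turns this on its head: optimization decisions happen at run time, and you can watch them being made on any regular build. A minimal sketch, assuming a hypothetical HotLoop class, using the standard -XX:+PrintCompilation flag:

// HotLoop.java - give the JIT something worth compiling
public class HotLoop {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // call sum() often enough to cross the JIT's compilation threshold
        for (int i = 0; i < 100000; i++) {
            result += sum(1000);
        }
        System.out.println(result);
    }
}

Running it with java -XX:+PrintCompilation HotLoop logs each method as the JIT compiles it, with no special profiling build and no optimization-level guesswork required.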

All this is only worsened by the ‘black box’ nature of the optimization level switches. For example, this is everything you’ll find about optimization in the clang manual; good luck figuring out what exactly is going on:

Code Generation Options
  -O0 -O1 -O2 -Os -Oz -O3 -Ofast -O4
    Specify which optimization level to use.  -O0 means "no
    optimization": this level compiles the fastest and generates the
    most debuggable code.  -O2 is a moderate level of optimization
    which enables most optimizations.  -Os is like -O2 with extra
    optimizations to reduce code size.  -Oz is like -Os (and thus -O2),
    but reduces code size further.  -O3 is like -O2, except that it
    enables optimizations that take longer to perform or that may
    generate larger code (in an attempt to make the program run
    faster).  -Ofast enables all the optimizations from -O3 along with
    other aggressive optimizations that may violate strict compliance
    with language standards. On supported platforms, -O4 enables link-
    time optimization; object files are stored in the LLVM bitcode file
    format and whole program optimization is done at link time. -O1 is
    somewhere between -O0 and -O2.

Focus on your code, not on the compiler architecture

Since the Java compiler only has to transform source code into bytecode, the javac command itself is very simple to operate. Usually, all you have to be concerned about is providing the correct classpath information, deciding on the VM version compatibility, and telling it where you want the class files to be placed: a total of three compiler options, -classpath, -target and -d.
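
For illustration, a complete invocation might look like this (the paths are hypothetical; -source is added alongside -target since javac insists the two stay compatible when targeting an older VM):

javac -classpath lib/deps.jar -source 1.6 -target 1.6 -d build/classes src/com/example/Main.java

That’s it: there are no optimization levels to choose, because all of that is deferred to the JIT at run time.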

With C++ this is vastly more complex. Here’s an example of a relatively simple g++ compiler invocation. Granted, there are some project-specific define flags in there, as well as -I flags for the header inclusion paths, which are roughly comparable to the javac classpath. These are all standard practice in C++ and often the only way to modularize your builds or achieve platform independence.

g++-4.2 -o tmp/obj/eigend-gpl/piagent/src/pia_buffer.os -c -arch i386
  -DDEBUG_DATA_ATOMICITY_DISABLED
  -DPI_PREFIX=\"/usr/pi/Python.framework/Versions/2.5\"
  -mmacosx-version-min=10.6 -ggdb -Werror -Wall -Wno-deprecated-declarations
  -Wno-format -O4 -fmessage-length=0 -falign-loops=16 -msse3 -DALIGN_16
  -DBUILDING_PIA -fvisibility=hidden -fPIC -Isteinberg -Ieigend-gpl/steinberg
  -Ieigend-gpl -I. -I/usr/pi/Python.framework/Versions/2.5/include/python2.5
  -Itmp/exp
  eigend-gpl/piagent/src/pia_buffer.cpp

The main problem, though, is that all of this is different for each compiler you use. G++ is different from Clang, which is different from the Intel C++ compiler, which is again different from the Visual Studio C++ compiler, and so on. They all have their own names for common command-line switches, support a different version or subset of the C++ standard, and have their own flags to tune the compiler’s features.
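
As a small illustration, here’s the same intent, an optimized compile of a single file, expressed for two different compilers (widget.cpp is a hypothetical file; the switches themselves are real):

g++ -O2 -fvisibility=hidden -c widget.cpp
cl.exe /O2 /EHsc /c widget.cpp

Even the switch character differs, and that’s before you get to the flags that exist on one compiler but have no equivalent on another.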

If you really want the best performance, you’ll have to wade through hundreds of options for each compiler, sometimes requiring very low-level knowledge of the target hardware. Worse still, you have to do all this up front and hope that it will work on all the platforms and processor architectures you want to support, all without any real visibility to track down what went wrong when your program fails. I dreaded every release, since figuring out from customer reports why your software crashes is a major undertaking. On a few occasions we literally had to track down the exact same processor architecture to be able to reproduce a problem and experiment with compiler options to make the binary stable.

Neither static nor dynamic linking, just run-time linking

When you produce native binaries, you have to decide how you’re going to link with libraries or modules of your own. Static linking packages everything into a single executable, but prevents you from independently updating the libraries and can generate very large binaries. It has the advantage of being easier to distribute, and it sidesteps the problems associated with dynamic linking. In practice though, shipping a single statically-linked executable is impractical for larger products. Sooner or later you will have to deal with dynamic linking, even if it’s just to split up your build or to use third-party libraries.
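
For a sense of the two modes, here’s roughly what each looks like with g++ (the file names are hypothetical):

g++ -o app main.cpp libfoo.a              # static: libfoo's code is copied into app
g++ -shared -fPIC -o libfoo.so foo.cpp    # dynamic: clients resolve libfoo.so at load time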

Dynamic linking is complex and quite troublesome. Independently of the visibility of code inside your sources (private, public, …), you have to explicitly mark parts of your API so that the appropriate symbols are exported by the dynamic library you create. On the other hand, you’ll have to declare that you want to import those symbols into the code that actually uses the library. If you use the same header files for the library and the client, as most people do, you’ll have to parametrize the declaration of your API based on the compilation phase you’re in.

Additionally, the exact semantics of how all this works differ between Mac OS X, Linux and Windows. You’ll be tearing your hair out and littering your code with macros to handle all of this, certainly if you’re concerned with maintaining a common codebase that compiles on different platforms. In no time, you’ll see that linking, a compiler concern, has bled into every declaration in your source code, and sometimes it even dictates the actual class structure you end up using.
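
To make that concrete, here’s a minimal sketch of the macro dance this forces on you (MYLIB_API and BUILDING_MYLIB are hypothetical names, but the pattern is the customary one):

// mylib_api.h - how symbols get exposed depends on platform and build phase
#if defined(_WIN32)
  #if defined(BUILDING_MYLIB)            // defined by the library's own build
    #define MYLIB_API __declspec(dllexport)
  #else                                  // client code importing the library
    #define MYLIB_API __declspec(dllimport)
  #endif
#else
  #define MYLIB_API __attribute__((visibility("default")))
#endif

// every public class or function now has to carry the macro
class MYLIB_API Engine {
public:
    void start();
};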

Even with all this in place, there are many complications with shared libraries, particularly on Windows, where every user has had to deal with DLL version incompatibilities. While later versions of Windows are less prone to this, your binary is usually bound to a particular version of a DLL, and some seemingly minor changes to an API can make the dynamic linking incompatible. Some ways around this are to privately bundle the DLLs with your application, if you’re allowed to distribute them, or to explicitly load them dynamically at runtime. The latter requires particular attention to linking and symbol resolution inside your application, even though you really shouldn’t have to be concerned with any of this.

Java bypasses all of this by deferring linking to run time. Everything is dynamic, and since the symbols are exported through bytecode while being isolated by packages, you rarely have to be concerned with the linking phase at all. If needed, you can still dynamically load classes or methods, but that’s very uncommon. The bytecode symbol representation is very stable, and as versions of classes evolve, your older code will continue to run without problems unless the library author purposely changes the API.
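
And on the rare occasion that you do want explicit dynamic loading, it’s a few lines of reflection rather than a platform-specific linker dance. A minimal sketch, assuming a hypothetical com.example.Plugin class that implements Runnable:

public class PluginLoader {
    public static void main(String[] args) throws Exception {
        // resolve the class by name at run time; no link step, no exported symbols
        Class<?> clazz = Class.forName("com.example.Plugin");
        Runnable plugin = (Runnable) clazz.newInstance();
        plugin.run();
    }
}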

Next up

This was Java’s Rockin’ Reason One … please leave comments below or tweet @gbevin to connect with me. Next up will be The Core API, which has even more profound implications … stay tuned!