
Sparkjava is an amazing Java web framework. Do you really need it?

From time to time, I try out a new Java web framework to check out the latest developments and to get a feel for what tools I’d use if I needed to create a small web application.

We haven’t talked about web frameworks for a while, not since we processed the results of a web frameworks survey we ran some time ago.


Today, I want to talk to you about Spark, which is, as they themselves put it, a micro framework for creating web applications in Java 8 with minimal effort. A few days ago, I took Spark out for a test drive. I created a small web application that does nothing really functional, but explores the features offered by the Spark framework, so I could get more comfortable with its API and see if Spark fits my style. In this blog post I want to share the takeaways from exploring the Spark framework.

Basic setup

I started off with a fresh Gradle Java project. It was really straightforward to get the project up to the minimal Hello World functionality. If you’re curious about the end result or want to clone the project and try Spark yourself, it’s available on GitHub: spark-intro.

All you need to do to get it up and running is: clone the project, run the Gradle build, and then run the application using the built jar.

git clone https://github.com/shelajev/spark-intro
./gradlew build
java -jar ./build/libs/spark-intro-1.0-SNAPSHOT-all.jar

Now to use Spark, you just need to declare a couple of dependencies, shown below.

dependencies {
   compile 'com.sparkjava:spark-core:2.3'
   compile 'com.sparkjava:spark-template-thymeleaf:2.3'
}

In fact, if you just want a regular web app, you can make do with just the spark-core library. Thymeleaf is a template library, which makes producing HTML output easier than writing HTML by hand to the response object.

Now, to turn your Java application into a web application you just need to register the handlers on some URLs using the Spark methods.

import spark.Request;
import spark.Response;
import static spark.Spark.get;
import static spark.Spark.staticFileLocation;

public class SparkApplication {

  public static void main(String[] args) {
    // serve static files from src/main/resources/public
    staticFileLocation("/public");
    // map GET /hello to the helloWorld method below
    get("/hello", SparkApplication::helloWorld);
  }

  public static String helloWorld(Request req, Response res) {
    return "hello world!";
  }
}

The example above is a fully featured Spark application, which, when run, will start an embedded Jetty server. When you visit localhost:4567/hello, you’ll see the hello world output.
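By default, Spark listens on port 4567. If you need a different port, here’s a quick sketch of changing it with the port() static method, which has to be called before any routes are mapped:

import static spark.Spark.get;
import static spark.Spark.port;

public class CustomPortApplication {

  public static void main(String[] args) {
    port(8080); // must be called before mapping any routes
    get("/hello", (req, res) -> "hello from port 8080");
  }
}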

This is actually pretty amazing, if you ask me. The best part about Spark is that it has a very consistent API, which consists of just calling static methods from within your code. Indeed, it is a framework, in the sense that you specify the code to run and it wraps it in its own functionality, but it really feels more like a library with a touch of magic happening behind the scenes. You control what route mappings you wish to declare, what code is responsible for handling the requests, and how you’d like to handle everything else. So far, Spark seems excellent for tiny applications or API backends.

Let’s look at the other Spark features that are necessary to create a web application. Namely, a web framework should make it easy to specify the routes from URLs to the code, and offer a nice API for request and response handling, sessions, request filters, and output transformers. A pluggable choice of templating library is a great bonus too!

Routes

To specify a mapping between the URLs that your server is handling and the Java code that actually handles the requests, you need to specify the routes. A route consists of the following pieces:

  • A verb (get, post, put, delete, head, trace, connect, options)
  • A path, which can include parameter placeholders or wildcards: /hello, /users/:name, /say/*/to/*
  • A callback, which is just a Java function of the request and response pair (request, response) -> { }

To specify a route, you call the static method whose name coincides with the HTTP verb you want to handle, for instance, spark.Spark.get("/hello", SparkApplication::helloWorld); in the example above.

The routes are matched in the order in which they are specified: the first route that matches the incoming request will be executed. All in all, the route specification in Spark is simple and quite flexible. However, if you’re not careful, you might lose yourself in these definitions as your application grows. I believe that an external file for the routes, like the one the Play framework uses, is a cleaner way to define them. Or you could go full convention over configuration and keep the routes in annotations on the actual classes, like the Spring framework does.
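To make the matching order concrete, here’s a small sketch (with made-up paths) showing parameterized and wildcard routes side by side:

import static spark.Spark.get;

public class RoutesExample {

  public static void main(String[] args) {
    // declared first, so it wins over the wildcard below for this exact URL
    get("/say/hello/to/world", (req, res) -> "the specific route");

    // wildcard segments are available through req.splat()
    get("/say/*/to/*", (req, res) ->
        "say " + req.splat()[0] + " to " + req.splat()[1]);

    // parameter placeholders are read with req.params(":name")
    get("/users/:name", (req, res) -> "Hello, " + req.params(":name"));
  }
}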

Request, Response, and Parameters

Now we’re mostly done with the web-server part. The application is up and running and we can redirect the execution flow to a particular class or method of our choosing. So here comes the most important part of any web framework: working with the request and response objects.

Let’s start with the response, because it’s simpler. Naturally, the response allows you to set the status, the headers, the content of the body, or redirect the browser to another page. However, working with the response objects directly is not the most convenient way of serving the content.
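For illustration, here’s a minimal sketch of a handler poking at the response object directly, using the calls mentioned above:

get("/created", (req, res) -> {
  res.status(201);                 // set the HTTP status code
  res.type("text/plain");          // set the Content-Type header
  res.header("X-Custom", "value"); // set an arbitrary header
  return "resource created";       // the returned string becomes the body
});

get("/old-address", (req, res) -> {
  res.redirect("/hello");          // send the browser to another page
  return null;
});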

That’s why you’ll most probably want either to provide a response transformer, say, to convert the data you want to send into a different format like JSON, or to render templates.

In the sample application I used Thymeleaf templates, because Thymeleaf is amazing. To utilize templates, you need to provide Spark with a template engine; engines are available as libraries for almost any template library imaginable. You’ll also need to rewrite your handlers to return ModelAndView objects. Here’s a snippet from our sample application:

public static void main(String[] args) {
  get("/hello", SparkApplication::helloWorld, new ThymeleafTemplateEngine());
}

public static ModelAndView helloWorld(Request req, Response res) {
  Map<String, Object> params = new HashMap<>();
  params.put("name", req.queryParams("name"));
  return new ModelAndView(params, "hello");
}

The templates for the Thymeleaf template engine are located by default in the resources/templates directory, and the ModelAndView object references the template by its name relative to that directory.

Now, the template itself is just a simple example, but the engine supports all the glorious features that Thymeleaf offers.

<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xmlns:th="http://www.thymeleaf.org">
<head>
   <meta charset="UTF-8"></meta>
   <title>Hello world</title>
</head>
<body>
<p th:inline="text">Hello, [[${name}]]!</p>
</body>
</html>

Most probably, if you intend to use Spark to serve an actual application rather than an API backend, you’ll use some sort of template engine. The request object is not that interesting by itself; it’s not like you can come up with a new way to model an HTTP request. However, in addition to the normal API for accessing the query parameters, the body, the headers, and attributes, which you can see below, Spark has a cool API called query maps.

request.body();             // the request body, as a string
request.attribute("name");  // a server-side attribute set by your own code
request.headers("name");    // the value of the given HTTP header
request.params("name");     // the value of a route parameter, like :name

Query maps take a parameter name and give you a collection of the parameters with that prefix. That way you can group related parameters, say user[name] and user[age], into a single map.

// given a query string like ?user[name]=joe&user[age]=42
request.queryMap("user").get("age").integerValue(); // 42
request.queryMap("user").toMap();                   // a Map<String, String[]> of all user[...] params

This makes handling parameters much easier, since you can always treat them as maps. What I really like is that there are no implicit parameter conversions going on, so you’re fully in charge of how to process the query. However, the downside is that you won’t be adding validation code as easily as you might with alternative approaches.

Static files, Filters, Response transformers and so on

First of all, let’s talk about filters. More often than not, some functionality in a web application cuts across all the entry points. For example, you want to check if the user is logged in, or maybe log the request, or set some data into ThreadLocal storage, so that it’s easily accessible further down the line. Or perhaps you just want to compress the results using gzip. All these cases require implementing horizontal functionality across the whole app. The Spark API for filters is really consistent with the rest of the framework.

Spark offers the before() and after() methods where you can specify the logic for the requests as shown:

before((request, response) -> {
  log.trace("request: {}", request); 
});

after((request, response) -> {
    response.header("Content-Encoding", "gzip");
});

The example above will make Spark log the requests and enable the gzip compression on the output.
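For instance, the logged-in check mentioned above could be another before() filter that stops unauthenticated requests with halt(). Here’s a sketch, where isLoggedIn() is a hypothetical helper you’d implement yourself:

import static spark.Spark.before;
import static spark.Spark.halt;

before("/protected/*", (request, response) -> {
  // isLoggedIn() is a hypothetical helper, not part of the Spark API
  if (!isLoggedIn(request.session())) {
    halt(401, "You are not welcome here");
  }
});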

In general, Spark’s API for various things is pretty straightforward. For example, adding a ResponseTransformer instance to the route method will apply the transformation to the returned object. Here’s an example of transforming the output into a JSON object.

Gson gson = new Gson();
get("/hello", (request, response) -> "Hello World", gson::toJson);

The ResponseTransformer interface has just one method, so you won’t be muddling the code with complicated solutions.

public interface ResponseTransformer {
  String render(Object model);
}
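Since the transformer receives whatever object your handler returns, you can also return plain POJOs and let Gson serialize them. Here’s a sketch with a made-up Message class:

// Message is a made-up POJO for this example
class Message {
  final String text;
  Message(String text) { this.text = text; }
}

Gson gson = new Gson();
get("/message", (request, response) -> {
  response.type("application/json");
  return new Message("Hello World"); // rendered as {"text":"Hello World"}
}, gson::toJson);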

Serving static files is even easier. Usually you’ll have two sources of static files: internal ones, packaged with the app, and external ones, provisioned on the server.

staticFileLocation("/public");
externalStaticFileLocation("/path/to/static/files");

It feels amazing, because when working with Spark your code is really concise, flexible, and easy to understand even at first glance.

What if I need more?

Spark is a tiny web framework, which is both its main strength and its main weakness. It does what it claims to do really well. Spark has an API which is consistent, simple, understandable, and flexible for handling requests, responses, filters, and so on. Spark is amazing for creating small web applications or API backends. It doesn’t add much black magic to your code, so you always know what to expect from the application, without any surprises. At the same time, it’s extensible, and you can plug in any template engine of your liking.

However, if you’re writing a more substantial web application, you’ll most probably want to consider other aspects, including database access, validation, web-service invocations, NoSQL databases, and so on. In that case, I’d prefer something that comes with batteries included, such as the Play framework or the Spring framework.

However, for a simple API backend, Spark really managed to surprise me with how awesome it is. No wonder the 2015 Spark survey showed that over 50% of Spark users use Spark to create REST APIs.

Do you use Sparkjava? What projects do you build with it? Tell us in the comments what you like or dislike about it the most, or what frameworks you prefer for web apps.

