LiveRebel How-to: Taming GlassFish Clusters

UPDATE: Not applicable to LiveRebel version 2.x and later

It’s been a busy summer so far with LiveRebel – previously we looked at LiveRebel’s command line interface (CLI), something we hope more people learn about, and scripting Java EE hot updates using LiveRebel and Bash. You may have already checked out our LiveRebel Amazon AMI for really quick LiveRebel testing, and we’ve whipped up a really tasty Chef LiveRebel Recipe, a single-command LiveRebel installation.

I’ve spent some time recently getting funky with GlassFish clusters using LiveRebel, our recently released Java-Hot-Update-Done-Right tool that lets you perform fully automated hot updates to a running application without downtime.

Most recently, we added support for GlassFish application servers, so I figured it was about time to show how LiveRebel can be used to roll out updates to an application running in a GlassFish cluster. In this post we show how to configure a basic GlassFish cluster, connect the nodes to LiveRebel, and run hot updates for your application.

Configuring a GlassFish Cluster

We will be using GlassFish Server 3.1 Open Source Edition, Full Platform distribution, which is available from java.net. After downloading and extracting the archive, we have everything ready to start our first GlassFish instance. We are going to create a cluster that consists of two instances running on the same node, following a procedure described in a tutorial video by an Oracle employee. Later we will change the configuration of our cluster, but for now this setup is sufficient. To create a cluster using the GlassFish web console, follow these steps (an asadmin equivalent is sketched after the list).

  • Run glassfish3/glassfish/bin/asadmin start-domain --verbose.
  • Navigate to localhost:4848 and log into GlassFish console.
  • Click the Clusters menu item, then New..., specify the cluster name (we will use RebelCluster) and create two new server instances. It is important to select Make a Copy of the Selected Configuration, because we will change that configuration later and don’t want to touch the default configuration file.
  • Start the cluster using the Start Cluster button.
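
If you prefer the command line, roughly the same result can be achieved with asadmin. This is only a sketch: the instance names are our choice, and creating a cluster this way also gives it its own copy of the default configuration (RebelCluster-config):

# Create the cluster and two local instances, then start everything up.
glassfish3/glassfish/bin/asadmin create-cluster RebelCluster
glassfish3/glassfish/bin/asadmin create-local-instance --cluster RebelCluster instance1
glassfish3/glassfish/bin/asadmin create-local-instance --cluster RebelCluster instance2
glassfish3/glassfish/bin/asadmin start-cluster RebelCluster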

Now we can check that the cluster is alive. Select Clusters » RebelCluster in the GlassFish web console and choose the Instances tab. Both instances should have the Running status.
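
The same check can be done from the command line; the output shown here is approximate:

glassfish3/glassfish/bin/asadmin list-instances
# instance1   running
# instance2   running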

Cool. We have a healthy cluster of GlassFish servers up. Now we can proceed with deploying an application there.

Deploying an Application to Your GlassFish Cluster

We need to perform an initial deployment of the application. Before we deploy, we need to supply the application with a liverebel.xml file, which serves as a marker telling LiveRebel that this application can be managed by it.

If you manage your application with Maven, you don’t even need to write this file by hand; see how to generate it with Maven automatically.

If you do not manage your application with Maven, liverebel.xml is a small file that specifies the application name and its version. Here we will use the RebelChat demo application; its liverebel.xml file for version 1.0 looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<application>
  <name>RebelChat</name>
  <version>1.0</version>
</application>

The liverebel.xml file should be put into the WEB-INF/classes directory of your WAR. If you want to run your tests with prepackaged WAR files, consider using our rebel-chat demo versions.
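
You can quickly verify that the file ended up in the right place; the WAR file name below is an assumption based on the demo versions mentioned above:

jar tf rebel-chat-1.0.war | grep liverebel.xml
# should print: WEB-INF/classes/liverebel.xml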

Now we have an application ready to be deployed. Select the Applications tab in your cluster menu. Click the Deploy... button, then specify the path to the WAR file, the context path and the application name. Also, make sure that the Selected Targets list for this deployment contains your cluster.

Click OK and in a few seconds your application is deployed.
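
As an aside, the same deployment can be done with asadmin; a sketch, assuming rebel-chat as the context root and the demo WAR:

glassfish3/glassfish/bin/asadmin deploy --target RebelCluster --contextroot rebel-chat --name RebelChat rebel-chat-1.0.war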

We can check the result by navigating to the application on each instance. Ports 28080 and 28081 are the defaults on which GlassFish cluster instances serve HTTP requests. You can verify that these are correct by opening your cluster’s Instances tab and clicking on an instance name; the HTTP listener port is listed among the instance’s ports (28081 in our case).
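
For example, from the command line, assuming rebel-chat as the context root:

curl -I http://localhost:28080/rebel-chat/
curl -I http://localhost:28081/rebel-chat/
# both instances should answer with HTTP/1.1 200 OK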

Note that in a real-life situation you will have a load balancer in front of the GlassFish cluster, so there will be no need to specify ports or jump from one port to another to check both servers. But for our purposes, this is enough. Now go and check that RebelChat is indeed deployed.

We can see that even though the RebelChat application doesn’t use all the features offered by clustering, both instances in RebelCluster are running the chat.

Now it’s time to get the LiveRebel Command Center running and investigate what features it offers you.

Configuring LiveRebel Command Center

It’s time to set up the LiveRebel Command Center web console, which we will use later to manage the servers in our GlassFish cluster and roll out updates.
Now you need to download the LiveRebel archive if you haven’t already. We are currently providing everyone with a three-month evaluation license, which is more than enough for the purposes of this and all the other LiveRebel How-tos. Unzip the archive and you will see the following directory layout:
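
The exact layout depends on the LiveRebel version; based on what we use below, it contains at least a bin directory with the launch scripts. A sketch:

liverebel/
  bin/
    lr-command-center.sh    (launch script for Linux/Mac)
    lr-command-center.cmd   (launch script for Windows)
  ...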

Start the LiveRebel Command Center by running the lr-command-center.sh script (lr-command-center.cmd on Windows) from the bin folder. The initial configuration of the LiveRebel Command Center includes registering LiveRebel (as a product) and creating a user profile. Rest assured, both processes are trivial form submissions and are described in the LiveRebel walkthrough. By now, you should have the LiveRebel Command Center running and configured. Next, it’s time to configure your servers and applications.

Click on the Add Server button and download lr-agent.jar to some location. This JAR is a Java agent, attached via the -javaagent option, that lets an application server be managed from the LiveRebel Command Center. To attach it to the GlassFish instances that form our cluster, we need to modify their configuration.
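
If your instances run on remote machines, getting the agent there is a one-time copy; for example (host and path here are hypothetical):

scp lr-agent.jar user@node1:/opt/liverebel/lr-agent.jar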

Go back to the GlassFish web console and open the Configurations » YourClusterName-config menu item (RebelCluster-config in our case). This item was created when we chose to copy the default configuration at cluster creation time. We need to add two JVM options, so open the JVM Settings menu item, then the JVM Options tab. Using the Add JVM Option button, add the following options:

  • -Xverify:none
  • -javaagent:path/to/downloaded/lr-agent.jar

The -javaagent option tells the JVM where to find the LiveRebel agent, and -Xverify:none allows the bytecode operations necessary for updating application versions.
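
The same options can be added with asadmin; note that colons inside JVM options must be escaped with a backslash. A sketch; adjust the agent path to wherever you downloaded it:

glassfish3/glassfish/bin/asadmin create-jvm-options --target RebelCluster-config "-Xverify\:none"
glassfish3/glassfish/bin/asadmin create-jvm-options --target RebelCluster-config "-javaagent\:/path/to/downloaded/lr-agent.jar"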

Save this configuration change. Notice how the GlassFish web console notifies us that we need to restart the cluster instances. Stop and start these instances using the buttons in the GlassFish web console.
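
Or restart the whole cluster from the command line:

glassfish3/glassfish/bin/asadmin stop-cluster RebelCluster
glassfish3/glassfish/bin/asadmin start-cluster RebelCluster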

Now we go back to the LiveRebel Command Center and see that LiveRebel has picked everything up: the RebelChat application shows as deployed on two servers.

Next, we show how to update an application version from the LiveRebel Command Center.

Updating an Application with LiveRebel

Updating a clustered application with LiveRebel is done in the same manner as for a regular application. The process is described in detail in the LiveRebel installation and configuration walkthrough, but we also describe it briefly here.

When you open the LiveRebel Command Center, you see a list of all the applications managed by LiveRebel. Currently, we see that RebelChat is deployed on two servers. Click on the application name, RebelChat in our case.

Click the Upload new version button and proceed with uploading rebel-chat-1.1.war. LiveRebel takes us back to the Show all deployments screen. Here, we select both servers, as we want to update the application in both of them, and click Prepare Update.

We proceed with checking compatibility between the currently deployed version 1.0 of RebelChat and version 1.1, to which we want to update. As expected, the versions are compatible with warnings, which means we can proceed with the update.

Click the Proceed with update and Update buttons, as we are quite happy with the default configuration for this update. The LiveRebel Command Center sends a series of commands to both GlassFish servers and shows us a short log of what was done.

Once we see Version updated successfully, we go back to the browser, where the chats are open, to verify.

Yay! Both chats are updated, and both preserved their previous state. It is equally easy to revert the application to version 1.0 from the LiveRebel Command Center or to proceed with an update to version 1.2.

This wraps up our guide on configuring a GlassFish cluster to work with LiveRebel. Happy updating!

  • This sounds nice!

    Have you tried this on multiple machines (as opposed to different instances on the same machine)? Getting lr-agent.jar to the remote machines may be tricky and it would be great to make it a GlassFish module which folks can install using the update center.
    Integrating LiveRebel Command Center in the GlassFish Admin Console would also be very neat.

  • Oleg Shelajev

    Haven’t done clustering with nodes on different machines for the sake of simplicity. But I’m sure it must work as expected. If you have any questions about that, I can check it out. 

    Usually, getting lr-agent.jar to machines is a matter of wget/scp and fortunately must be done only once. Creating a GlassFish module sounds like a nice idea; I haven’t looked at them yet. Can you give any hints or links about developing one?

    As for integration between consoles, we have thought about that, but we aim to support most application servers, and currently console integration with GlassFish doesn’t seem feasible.

  • GlassFish 3.1 comes with SSH-based provisioning to ship the actual appserver bits to the remote machines, you may be able to leverage this infrastructure.

    As for building a GlassFish module, you can start here: http://blogs.oracle.com/alexismp/entry/ips_pkg_5_crash_course
    The only glitch is that you’ll need to run your own update center repo. There is no on-disk format.

    As for integrating in the admin console, it’s not that hard. The Play! Framework folks did that recently – http://www.playframework.org/documentation/1.2/releasenotes-1.1#glassfish

  • Oleg Shelajev

    Thanks for the insight; including LiveRebel as a GlassFish module may be a nice opportunity. I’ll share this idea with the team and we’ll see what we can do about it.

  • Oleg

    What about running LiveRebel with a cluster on different machines? Does it work in version 1.0.1?
    How many agents should be on a server if it has more than one instance?

  • Yes, they can be on multiple machines, but they need to be able to connect to the LiveRebel Command Center. A single JVM managed by LiveRebel needs to have one agent.

  • 555

    good job.

  • eugene

    Hi Oleg,
    GlassFish 3.1 does in-memory replication of server session data. Now, assume I have a cluster of 2 servers like you did, with sessions replicated. I want to do A/B deployment without LiveRebel. I deploy the new version of my application to one machine, say A; during this time server B will take over all sessions from server A. Then I deploy the new app to server B. I have not experimented with it yet, but in this scenario will I accomplish the same goal of uninterrupted deployment as with LiveRebel?
    I think LiveRebel just made it simpler; am I right?

  • Oleg Shelajev

    hi, eugene

    There are other solutions that allow you to update your cluster, ranging from scripts and a cluster dance with a load balancer to session replication mechanisms and A/B deployment techniques. All of them have their own drawbacks: the cluster dance is error-prone, A/B requires doubling the resources you have, and session replication/migration rarely works without issues.

    LiveRebel (version 2.0.2 is out now) offers different strategies for the update. The most impressive is hotpatching: an instant update across your whole cluster, without dropping requests or consuming an enormous amount of additional resources. It can also be rolled back, and it supports third-party updates.

    In the case of incremental updates, although we do not yet support session migration as in the scenario you described, we can drain sessions and pull that off as well. So yeah, in this case LiveRebel provides you a single interface that works with all the application servers in the same easy way.

  • Rvt

    Oleg,

    don’t try to take a pizza apart… just eat it. Pizzas should be researched in the assembly stage, not during disassembly.
    Pizzas also should be homemade (yes, including the dough) and should never, ever be merely ordered, by phone or from a store, frozen or unfrozen.