Server load balancing architectures, Part 2: Application-level load balancing

Operating with application knowledge

By Gregor Roth, JavaWorld.com, 10/21/08
The transport-level server load balancing architectures described in the first half of this article are more than adequate for many Web sites, but more complex and dynamic sites can't depend on them. Applications that rely on cache or session data must be able to handle a sequence of requests from the same client accurately and efficiently, without failing. In this follow-up to his introduction to server load balancing, Gregor Roth discusses various application-level load balancing architectures, helping you decide which one will best meet the business requirements of your Web site.
The first half of this article describes transport-level server load balancing solutions, such as TCP/IP-based load balancers, and analyzes their benefits and disadvantages. Load balancing on the TCP/IP level spreads incoming TCP connections over the real servers in a server farm. It is sufficient in most cases, especially for static Web sites. However, support for dynamic Web sites often requires higher-level load balancing techniques. For instance, if the server-side application must deal with caching or application session data, effective support for client affinity becomes an important consideration. Here in Part 2, I'll discuss techniques for implementing server load balancing at the application level to address the needs of many dynamic Web sites.

Intermediate server load balancers

In contrast to low-level load balancing solutions, application-level server load balancing operates with application knowledge. One popular load-balancing architecture, shown in Figure 1, includes both an application-level load balancer and a transport-level load balancer.

Figure 1. Load balancing on transport and application levels

The application-level load balancer appears to the transport-level load balancer as a normal server. Incoming TCP connections are forwarded to the application-level load balancer. When it retrieves an application-level request, it determines the target server on the basis of the application-level data and forwards the request to that server.
Listing 1 shows an application-level load balancer that uses an HTTP request parameter to decide which back-end server to use. In contrast to the transport-level load balancer, it makes the routing decision based on an application-level HTTP request, and the unit of forwarding is an HTTP request. Similarly to the memcached approach I discussed in Part 1, this solution uses a "hash key"-based partitioning algorithm to determine the server to use. Often, attributes such as user ID or session ID are used as the partitioning key. As a result, the same server instance always handles the same user. The user's client is affine or "sticky" to the server. For this reason the server can make use of the local HttpResponse cache I discussed in Part 1.

Listing 1. Intermediate application-level load balancer

class LoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
   private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
   private HttpClient httpClient;

   /*
    * this class does not implement server monitoring or healthiness checks
    */

   public LoadBalancerHandler(InetSocketAddress... srvs) {
      servers.addAll(Arrays.asList(srvs));
   }

   public void onInit() {
      httpClient = new HttpClient();
      httpClient.setAutoHandleCookies(false);
   }


   public void onDestroy() throws IOException {
      httpClient.close();
   }

   public void onRequest(final IHttpExchange exchange) throws IOException {
      IHttpRequest request = exchange.getRequest();

      // determine the business server based on the id's hashcode
      Integer customerId = request.getRequiredIntParameter("id");
      int idx = customerId.hashCode() % servers.size();
      if (idx < 0) {
         idx *= -1;
      }

      // retrieve the business server address and update the Request-URL of the request
      InetSocketAddress server = servers.get(idx);
      URL url = request.getRequestUrl();
      URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
      request.setRequestUrl(newUrl);

      // proxy header handling (remove hop-by-hop headers, ...)
      // ...


      // create a response handler to forward the response to the caller
      IHttpResponseHandler respHdl = new IHttpResponseHandler() {

         @Execution(Execution.NONTHREADED)
         public void onResponse(IHttpResponse response) throws IOException {
            exchange.send(response);
         }

         @Execution(Execution.NONTHREADED)
         public void onException(IOException ioe) throws IOException {
            exchange.sendError(ioe);
         }
      };

      // forward the request in an asynchronous way by passing over the response handler
      httpClient.send(request, respHdl);
   }
}



class LoadBalancer {

   public static void main(String[] args) throws Exception {
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
      HttpServer loadBalancer = new HttpServer(8080, new LoadBalancerHandler(srvs));
      loadBalancer.run();
   }
}
In Listing 1, the LoadBalancerHandler reads the HTTP id request parameter and computes its hash code. Going beyond this simple example, in some cases load balancers must read (a part of) the HTTP body to retrieve the information the balancing algorithm requires. The request is forwarded to the server selected by the modulo operation. The forwarding is done by the HttpClient object, which also pools and reuses (persistent) connections to the servers for performance reasons. The response is handled in an asynchronous way through the use of an HttpResponseHandler. This non-blocking, asynchronous approach minimizes the load balancer's system requirements. For instance, no outstanding thread is required during a call. For a more detailed explanation of asynchronous, non-blocking HTTP programming, read my article "Asynchronous HTTP and Comet architectures."
Another intermediate application-level server load balancing technique is cookie injection. In this case the load balancer checks if the request contains a specific load balancing cookie. If the cookie is not found, a server is selected using a distribution algorithm such as round-robin, and a load balancing session cookie is added to the response before it is sent. When the browser receives the session cookie, it stores the cookie in temporary memory; the cookie is not retained after the browser is closed. The browser adds the cookie to all subsequent requests in that session, which are sent to the load balancer. By storing the server slot as the cookie value, the load balancer can determine the server that is responsible for this request (in this browser session). Listing 2 implements a load balancer based on cookie injection.

Listing 2. Cookie-injection based application-level load balancer

class CookieBasedLoadBalancerHandler implements IHttpRequestHandler, ILifeCycle {
   private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
   private int serverIdx = 0;
   private HttpClient httpClient;

   /*
    * this class does not implement server monitoring or healthiness checks
    */

   public CookieBasedLoadBalancerHandler(InetSocketAddress... realServers) {
      servers.addAll(Arrays.asList(realServers));
   }

   public void onInit() {
      httpClient = new HttpClient();
      httpClient.setAutoHandleCookies(false);
   }

   public void onDestroy() throws IOException {
      httpClient.close();
   }

   public void onRequest(final IHttpExchange exchange) throws IOException {
      IHttpRequest request = exchange.getRequest();


      IHttpResponseHandler respHdl = null;
      InetSocketAddress serverAddr = null;

      // check if the request contains the LB_SLOT cookie
      cl : for (String cookieHeader : request.getHeaderList("Cookie")) {
         for (String cookie : cookieHeader.split(";")) {
            String[] kvp = cookie.split("=");
            if (kvp[0].startsWith("LB_SLOT")) {
               int slot = Integer.parseInt(kvp[1]);
               serverAddr = servers.get(slot);
               break cl;
            }
         }
      }

      // request does not contain the LB_SLOT -> select a server
      if (serverAddr == null) {
         final int slot = nextServerSlot();
         serverAddr = servers.get(slot);

         respHdl = new IHttpResponseHandler() {

            @Execution(Execution.NONTHREADED)
            public void onResponse(IHttpResponse response) throws IOException {
               // set the LB_SLOT cookie
               response.setHeader("Set-Cookie", "LB_SLOT=" + slot + ";Path=/");
               exchange.send(response);
            }

            @Execution(Execution.NONTHREADED)
            public void onException(IOException ioe) throws IOException {
               exchange.sendError(ioe);
            }
         };

      } else {
         respHdl = new IHttpResponseHandler() {

            @Execution(Execution.NONTHREADED)
            public void onResponse(IHttpResponse response) throws IOException {
               exchange.send(response);
            }

            @Execution(Execution.NONTHREADED)
            public void onException(IOException ioe) throws IOException {
               exchange.sendError(ioe);
            }
         };
      }

      // update the Request-URL of the request
      URL url = request.getRequestUrl();
      URL newUrl = new URL(url.getProtocol(), serverAddr.getHostName(), serverAddr.getPort(), url.getFile());
      request.setRequestUrl(newUrl);

      // proxy header handling (remove hop-by-hop headers, ...)
      // ...

      // forward the request
      httpClient.send(request, respHdl);
   }

   // get the next slot, using a round-robin approach
   private synchronized int nextServerSlot() {
      serverIdx++;
      if (serverIdx >= servers.size()) {
         serverIdx = 0;
      }
      return serverIdx;
   }
}


class LoadBalancer {

   public static void main(String[] args) throws Exception {
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
      CookieBasedLoadBalancerHandler hdl = new CookieBasedLoadBalancerHandler(srvs);
      HttpServer loadBalancer = new HttpServer(8080, hdl);
      loadBalancer.run();
   }
}
Unfortunately, the cookie-injection approach only works if the browser accepts cookies. If the user deactivates cookies, the client loses stickiness.
In general, the drawback of intermediate application-level load balancer solutions is that they require an additional node or process. Solutions that integrate a transport-level and an application-level server load balancer solve this problem but are often very expensive, and the flexibility gained by accessing application-level data is limited.

HTTP redirect-based server load balancer

One way to avoid additional network hops is to make use of the HTTP redirect directive. With the help of the redirect directive, the server reroutes a client to another location. Instead of returning the requested object, the server returns a redirect response such as 303 See Other. The client recognizes the new location and reissues the request. Figure 2 shows this architecture.

Figure 2. HTTP redirect-based application-level load balancing

Listing 3 implements an HTTP redirect-based application-level load balancer. The load balancer in Listing 3 doesn't forward the request. Instead, it sends a redirect status code, which contains an alternate location. According to the HTTP specification, the client repeats the request by using the alternate location. If the client uses the alternate location for further requests, the traffic goes to that server directly. No extra network hops are required.

Listing 3. HTTP redirect-based application-level load balancer

class RedirectLoadBalancerHandler implements IHttpRequestHandler {
   private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();

   /*
    * this class does not implement server monitoring or healthiness checks
    */

   public RedirectLoadBalancerHandler(InetSocketAddress... realServers) {
      servers.addAll(Arrays.asList(realServers));
   }

   @Execution(Execution.NONTHREADED)
   public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
      IHttpRequest request = exchange.getRequest();

      // determine the business server based on the id's hashcode
      Integer customerId = request.getRequiredIntParameter("id");
      int idx = customerId.hashCode() % servers.size();
      if (idx < 0) {
         idx *= -1;
      }

      // create a redirect response -> status 303
      HttpResponse redirectResponse = new HttpResponse(303, "text/html", "....");

      // ... and add the location header
      InetSocketAddress server = servers.get(idx);
      URL url = request.getRequestUrl();
      URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
      redirectResponse.setHeader("Location", newUrl.toString());

      // send the redirect response
      exchange.send(redirectResponse);
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
      RedirectLoadBalancerHandler hdl = new RedirectLoadBalancerHandler(srvs);
      HttpServer loadBalancer = new HttpServer(8080, hdl);
      loadBalancer.run();
   }
}
The HTTP redirect approach has two weaknesses. First, the whole server infrastructure becomes visible to the client. This could be a security problem if the client is an anonymous client on the Internet. Providers often try to minimize the attack surface by hiding their server infrastructure. Second, this approach does little for high availability. Similarly to DNS-based load balancing (discussed in Part 1), clients do not switch to another server if the selected server fails. The client has no easy way to recognize the dead server and keeps trying to reach it. And if the client uses the original request for further calls, the number of network hops stays the same, because each request goes to the load balancer first and is then redirected to the server.

Server-side server load balancer interceptor

Another way to avoid additional network hops is to move the application-level server load balancer logic to the server side. As shown in Figure 3, the load balancer becomes an interceptor.

Figure 3. Server-side load balancer interceptor

Listing 4 implements a server-side application-level load balancer interceptor. The code is almost the same as for Listing 1's LoadBalancerHandler. The difference is that if the request target is identified as the local server, the request is forwarded locally instead of using the HttpClient.

Listing 4. Server-side application-level load balancer interceptor

class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
   private final List<InetSocketAddress> servers = new ArrayList<InetSocketAddress>();
   private InetSocketAddress localServer;
   private HttpClient httpClient;

   /*
    * this class does not implement server monitoring or healthiness checks
    */

   public LoadBalancerRequestInterceptor(InetSocketAddress localServer, InetSocketAddress... srvs) {
      this.localServer = localServer;
      servers.addAll(Arrays.asList(srvs));
   }

   public void onInit() {
      httpClient = new HttpClient();
      httpClient.setAutoHandleCookies(false);
   }


   public void onDestroy() throws IOException {
      httpClient.close();
   }


   public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
      IHttpRequest request = exchange.getRequest();

      Integer customerId = request.getRequiredIntParameter("id");

      int idx = customerId.hashCode() % servers.size();
      if (idx < 0) {
         idx *= -1;
      }

      InetSocketAddress server = servers.get(idx);

      // local server?
      if (server.equals(localServer)) {
         exchange.forward(request);

      // .. no
      } else {
         URL url = request.getRequestUrl();
         URL newUrl = new URL(url.getProtocol(), server.getHostName(), server.getPort(), url.getFile());
         request.setRequestUrl(newUrl);

         // proxy header handling (remove hop-by-hop headers, ...)
         // ...

         IHttpResponseHandler respHdl = new IHttpResponseHandler() {

            @Execution(Execution.NONTHREADED)
            public void onResponse(IHttpResponse response) throws IOException {
               exchange.send(response);
            }

            @Execution(Execution.NONTHREADED)
            public void onException(IOException ioe) throws IOException {
               exchange.sendError(ioe);
            }
         };
         httpClient.send(request, respHdl);
      }
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      RequestHandlerChain handlerChain = new RequestHandlerChain();
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("srv1", 8030), new InetSocketAddress("srv2", 8030)};
      handlerChain.addLast(new LoadBalancerRequestInterceptor(new InetSocketAddress("srv1", 8030), srvs));
      handlerChain.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
      handlerChain.addLast(new MyRequestHandler());

      HttpServer httpServer = new HttpServer(8030, handlerChain);
      httpServer.run();
   }
}
This approach reduces additional network hops. On average, the percentage of requests handled locally equals 100 divided by the number of servers; with four servers, for example, only 25 percent of requests avoid the extra hop. Unfortunately, this means the approach helps only when you have a small number of servers.

Client-side server load balancer interceptor

Load balancing logic equivalent to that of a server-side load balancer interceptor can be implemented as an interceptor on the client side. In this case no transport-level load balancer is required. Figure 4 illustrates this architecture.

Figure 4. Client-side load balancer interceptor

Listing 5 adds an interceptor to the HttpClient. Because the load balancing code is written as an interceptor, the load balancing is invisible to the client application.

Listing 5. Client-side application-level load balancer interceptor

class LoadBalancerRequestInterceptor implements IHttpRequestHandler, ILifeCycle {
   private final Map<String, List<InetSocketAddress>> serverClusters = new HashMap<String, List<InetSocketAddress>>();
   private HttpClient httpClient;

   /*
    * this class does not implement server monitoring or healthiness checks
    */

   public void addVirtualServer(String virtualServer, InetSocketAddress... realServers) {
      serverClusters.put(virtualServer, Arrays.asList(realServers));
   }

   public void onInit() {
      httpClient = new HttpClient();
      httpClient.setAutoHandleCookies(false);
   }

   public void onDestroy() throws IOException {
      httpClient.close();
   }

   public void onRequest(final IHttpExchange exchange) throws IOException, BadMessageException {
      IHttpRequest request = exchange.getRequest();

      URL requestUrl = request.getRequestUrl();
      String targetServer = requestUrl.getHost() + ":" + requestUrl.getPort();

      // handle a virtual address
      for (Entry<String, List<InetSocketAddress>> serverCluster : serverClusters.entrySet()) {
         if (targetServer.equals(serverCluster.getKey())) {
            String id = request.getRequiredStringParameter("id");

            int idx = id.hashCode() % serverCluster.getValue().size();
            if (idx < 0) {
               idx *= -1;
            }

            InetSocketAddress realServer = serverCluster.getValue().get(idx);
            URL newUrl = new URL(requestUrl.getProtocol(), realServer.getHostName(), realServer.getPort(), requestUrl.getFile());
            request.setRequestUrl(newUrl);

            // proxy header handling (remove hop-by-hop headers, ...)
            // ...

            IHttpResponseHandler respHdl = new IHttpResponseHandler() {

               @Execution(Execution.NONTHREADED)
               public void onResponse(IHttpResponse response) throws IOException {
                  exchange.send(response);
               }

               @Execution(Execution.NONTHREADED)
               public void onException(IOException ioe) throws IOException {
                  exchange.sendError(ioe);
               }
            };

            httpClient.send(request, respHdl);
            return;
         }
      }

      // the request address is not a virtual one -> forward the request for standard handling
      exchange.forward(request);
   }
}



class SimpleTest {

   public static void main(String[] args) throws Exception {

      // start the servers
      RequestHandlerChain handlerChain1 = new RequestHandlerChain();
      handlerChain1.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
      handlerChain1.addLast(new MyRequestHandler());

      HttpServer httpServer1 = new HttpServer(8040, handlerChain1);
      httpServer1.start();


      RequestHandlerChain handlerChain2 = new RequestHandlerChain();
      handlerChain2.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
      handlerChain2.addLast(new MyRequestHandler());

      HttpServer httpServer2 = new HttpServer(8030, handlerChain2);
      httpServer2.start();


      // create the client
      HttpClient httpClient = new HttpClient();

      // ... and add the load balancer interceptor
      LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("localhost", 8030), new InetSocketAddress("localhost", 8030) };
      lbInterceptor.addVirtualServer("customerService:8080", srvs);
      httpClient.addInterceptor(lbInterceptor);

      // run some tests
      GetRequest request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
      IHttpResponse response = httpClient.call(request);
      assert (response.getHeader("X-Cached") == null);

      request = new GetRequest("http://customerService:8080/price?id=2336&amount=5656");
      response = httpClient.call(request);
      assert (response.getHeader("X-Cached").equals("true"));

      request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
      response = httpClient.call(request);
      assert (response.getHeader("X-Cached") == null);

      request = new GetRequest("http://customerService:8080/price?id=2337&amount=5656");
      response = httpClient.call(request);
      assert (response.getHeader("X-Cached").equals("true"));

      // ...
   }
}
The client-side approach is highly efficient, highly available, and highly scalable. Unfortunately, some serious disadvantages exist for Internet-based clients. Similarly to the HTTP redirect-based load balancer, the whole server infrastructure becomes visible to the client. Furthermore, this approach often forces client-side Web applications to perform cross-domain calls. For security reasons, Web browsers and browser-based containers such as a Flash runtime or a JavaScript runtime will block calls to different domains. This means some workarounds must be implemented on the client side. (See Resources for a link to an article describing some strategies that address this issue.)
The client-side load balancing approach is not restricted to HTTP-based applications. For instance, JBoss supports smart stubs. A stub is an object that is generated by the server and implements a remote service's business interface. The client makes local calls against the stub object. In a load-balanced environment, the server-generated stub object also acts as an interceptor that understands how to route calls to the appropriate server.

Application session data support

As I discussed in Part 1, application session data represents the state of a user-specific application session. For classic ("WEB 1.0") Web applications, application session data is stored on the server side, as shown in Listing 6.

Listing 6. Session-based server

class MySessionBasedRequestHandler implements IHttpRequestHandler {

   @SynchronizedOn(SynchronizedOn.SESSION)
   public void onRequest(IHttpExchange exchange) throws IOException {
      IHttpRequest request = exchange.getRequest();
      IHttpSession session = exchange.getSession(true);

      //..

      Integer countRequests = (Integer) session.getAttribute("count");
      if (countRequests == null) {
         countRequests = 1;
      } else {
         countRequests++;
      }

      session.setAttribute("count", countRequests);

      // and return the response
      exchange.send(new HttpResponse(200, "text/plain", "count=" + countRequests));
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());
      httpServer.run();
   }
}
In Listing 6, the application session data (container) is accessed by the getSession(...) method. When true is passed as an argument, a new session is created if one doesn't exist already. According to the Servlet API, a cookie named JSESSIONID is sent to the client. The value of the JSESSIONID cookie is the unique session ID. This ID is used to identify the session object, which is stored on the server side. When it receives subsequent client requests, the server can fetch the associated session object based on the client request's cookie header. To support clients that do not accept cookies, URL rewriting can be used for session tracking. With URL rewriting, every local URL of the response page is dynamically rewritten to include the session ID.
In contrast to cached data, application session data is not redundant by definition. If the server crashes, the application session data will be lost and in most cases will be unrecoverable. As a consequence, application session data must either be stored in a global place or be replicated between the involved servers.
If the data is replicated, normally all the servers involved hold the application data of all sessions. For this reason this approach scales only for a small group of servers. The server memory is limited, and updates must be replicated to all involved servers. To support larger numbers of servers, the servers must be partitioned into several smaller server groups. In contrast to the full-replication approach, the global-store approach uses a database, a file system, or in-memory session servers to store the session data in a global place.
In general, application session data handling does not force you to make the clients affine to the server. If the replication approach is used, normally all servers hold the application session data; if session data is modified, the changes must be replicated to all servers. In the case of a global-store approach, the application data is fetched before the request is handled, and sending the response writes the session data changes back to the global store. The store must be highly available and represents one of the total system's hot spots: if the store is unavailable, the servers can't handle requests.
However, the locality caused by client affinity makes it easier to synchronize concurrent requests for the same session. For a more detailed explanation of threading issues with session state management, read "Java theory and practice: Are all stateful Web applications broken?" (see Resources). Furthermore, if clients are affine to the server, more-efficient techniques can be implemented. For instance, if session servers are used, the session server's responsibility can be reduced to a backup role. Figure 5 illustrates this architecture. Often the session ID is used as the load balancing key for such architectures.

Figure 5. Backup session server based application session data support

When the response is written, modifications to the application session data are written to the session server. In contrast to the non-affine case, the servers read application session data only in the event of a failover.
Listing 7 defines a custom ISessionManager based on the xLightweb HTTP library (see Resources) to implement this behavior.

Listing 7. Session management

class BackupBasedSessionManager implements ISessionManager {

   private ISessionManager delegee = null;
   private HttpClient httpClient = null;

   public BackupBasedSessionManager(HttpClient httpClient, ISessionManager delegee) {
      this.httpClient = httpClient;
      this.delegee = delegee;
   }


   public boolean isEmtpy() {
      return delegee.isEmtpy();
   }

   public String newSession(String idPrefix) throws IOException {
      return delegee.newSession(idPrefix);
   }


   public void registerSession(HttpSession session) throws IOException {
      delegee.registerSession(session);
   }

   public HttpSession getSession(String sessionId) throws IOException {
      HttpSession session = delegee.getSession(sessionId);

      // session not available? -> try to get it from the backup location
      if (session == null) {
         String id = URLEncoder.encode(sessionId);
         IHttpResponse response = httpClient.call(new GetRequest("http://sessionservice:8080/?id=" + id));
         if (response.getStatus() == 200) {
            try {
               byte[] serialized = response.getBlockingBody().readBytes();
               ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(serialized));
               session = (HttpSession) in.readObject();
               registerSession(session);
            } catch (ClassNotFoundException cnfe) {
               throw new IOException(cnfe);
            }
         }
      }

      return session;
   }

   public void saveSession(String sessionId) throws IOException {
      delegee.saveSession(sessionId);

      HttpSession session = delegee.getSession(sessionId);

      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      ObjectOutputStream out = new ObjectOutputStream(bos);
      out.writeObject(session);
      out.close();
      byte[] serialized = bos.toByteArray();

      String id = URLEncoder.encode(session.getId());
      PostRequest storeRequest = new PostRequest("http://sessionservice:8080/?id=" + id + "&ttl=600", "application/octet-stream", serialized);
      httpClient.send(storeRequest, null);  // send the store request asynchronously and ignore the result
   }

   public void removeSession(String sessionId) throws IOException {
      delegee.removeSession(sessionId);
      String id = URLEncoder.encode(sessionId);
      httpClient.call(new DeleteRequest("http://sessionservice:8080/?id=" + id));
   }

   public void close() throws IOException {
      delegee.close();
   }
}


class Server {

   public static void main(String[] args) throws Exception {

      // set the server's handler
      HttpServer httpServer = new HttpServer(8030, new MySessionBasedRequestHandler());

      // create a load balanced http client instance
      HttpClient sessionServerHttpClient = new HttpClient();
      LoadBalancerRequestInterceptor lbInterceptor = new LoadBalancerRequestInterceptor();
      InetSocketAddress[] srvs = new InetSocketAddress[] { new InetSocketAddress("sessionSrv1", 5010), new InetSocketAddress("sessionSrv2", 5010)};
      lbInterceptor.addVirtualServer("sessionservice:8080", srvs);
      sessionServerHttpClient.addInterceptor(lbInterceptor);

      // wrap the local built-in session manager by backup aware session manager
      ISessionManager nativeSessionManager = httpServer.getSessionManager();
      BackupBasedSessionManager sessionManager = new BackupBasedSessionManager(sessionServerHttpClient, nativeSessionManager);
      httpServer.setSessionManager(sessionManager);

      // start the server
      httpServer.start();
    }
}
In Listing 7, the BackupBasedSessionManager is responsible for managing the sessions on the server side. The BackupBasedSessionManager implements the ISessionManager interface to intercept the container's session management. If the session is not found locally, the BackupBasedSessionManager tries to retrieve the session from the session server. This should only occur after a server failover. If the session state is changed, the BackupBasedSessionManager's saveSession() method is called to store the session on the backup session server. A client-side server load balancing approach is used to access the session servers.

Apache Tomcat load balancing architectures

Why haven't I used the current Java Servlet API for the preceding examples? The answer is simple. In contrast to HTTP libraries such as xLightweb, the Servlet API is designed as a purely synchronous, blocking API. Its insufficient asynchronous, non-blocking support makes load balancer implementations based on the Servlet API inefficient. This is true for both the intermediate load balancer approach and the server-side load balancer approach. Client-side interceptor-based load balancing is outside the scope of the Servlet API, which is a server-side-only API.
What you can do is implement an HTTP redirect-based server load balancer on top of the Servlet API. Tomcat 5 ships with such an application, named balancer. (The balancer application is not included in the Tomcat 6 distribution.)
A popular load balancing approach for Tomcat is to run Apache HTTP Server as a Web server and forward requests to one of the Tomcat instances over the Apache JServ Protocol (AJP). Figure 6 illustrates this approach.

Figure 6. Popular Apache Tomcat infrastructure

The Web server acts as an application-level server load balancer by using the Apache mod_proxy_balancer module. Client affinity is implemented based on the JSESSIONID cookie or path parameter. As I discussed earlier, the JSESSIONID cookie is created implicitly by retrieving the HttpSession within a servlet.
The server's response is modified by appending the necessary routing information to the JSESSIONID value. When the client sends a subsequent request, the load balancer extracts this routing information from the request's JSESSIONID value and forwards the request to the target server.
To make the application session data highly available, a Tomcat cluster must be set up. Tomcat provides two basic paths for doing this: saving the session to a shared file system or database, or using in-memory replication. In-memory replication is the more popular Tomcat clustering approach.
As an alternative, you are also free to write your own Apache application-level load balancer module to distribute the load over the Tomcat instances. Or, you can use other hardware/software-based load balancing solutions like the ones shown in the preceding portions of this article.

In conclusion

Client-side server load balancing is a simple and powerful technique. No intermediate server load balancers are required. The client communicates with the servers in a direct way. However, the scope of client-side server load balancing is limited. Cross-domain calls must be supported for Internet clients, which introduces complexity and restrictions.
As you learned in Part 1, pure transport-level server load balancer architectures are simple, flexible, and highly efficient. In contrast to client-side server load balancing, no restrictions exist for the client side. Often such architectures are combined with distributed cache or session servers to handle application-level caching and session data issues. However, if the overhead caused by moving data from and to the cache or session servers grows, such architectures become increasingly inefficient. By implementing client affinity based on an application-level server load balancer, you can avoid copying large datasets between servers. This is not the only use case for application-level server load balancing. For instance, requests from specific premium users can be forwarded to dedicated servers that support high quality of service. Or specific business-function groups can be forwarded to specialized servers.
Although commercial and hardware-based solutions have not been discussed in this article, they should also be considered when you design a server load balancing architecture. As always, the concrete server load balancing architecture you choose depends on your infrastructure's specific business requirements and restrictions.

About the author

Gregor Roth, creator of the xLightweb HTTP library, works as a software architect at United Internet group, a leading European Internet service provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and system architecture, enterprise architecture management, object-oriented design, distributed computing, and development methodologies.

Server load balancing architectures, Part 1: Transport-level load balancing

High scalability and availability for server farms

By Gregor Roth, JavaWorld.com, 10/21/08
Server farms achieve high scalability and high availability through server load balancing, a technique that makes the server farm appear to clients as a single server. In this two-part article, Gregor Roth explores server load balancing architectures, with a focus on open source solutions. Part 1 covers server load balancing basics and discusses the pros and cons of transport-level server load balancing. Part 2 covers application-level server load balancing architectures, which address some of the limitations of the architectures discussed in Part 1.
The barrier to entry for many Internet companies is low. Anyone with a good idea can develop a small application, purchase a domain name, and set up a few PC-based servers to handle incoming traffic. The initial investment is small, so the start-up risk is minimal. But a successful low-cost infrastructure can become a serious problem quickly. A single server that handles all the incoming requests may not have the capacity to handle high traffic volumes once the business becomes popular. In such situations, companies often start to scale up: they upgrade the existing infrastructure by buying a larger box with more processors or add more memory to run the applications.
Scaling up, though, is only a short-term solution. And it's a limited approach because the cost of upgrading is disproportionately high relative to the gains in server capability. For these reasons most successful Internet companies follow a scale-out approach. Application components are processed as multiple instances on server farms, which are based on low-cost hardware and operating systems. As traffic increases, servers are added.
The server-farm approach has its own unique demands. On the software side, you must design applications so that they can run as multiple instances on different servers. You do this by splitting the application into smaller components that can be deployed independently. This is trivial if the application components are stateless. Because the components don't retain any transactional state, any of them can handle the same requests equally. If more processing power is required, you just add more servers and install the application components.
A more challenging problem arises when the application components are stateful. For instance, if the application component holds shopping-cart data, an incoming request must be routed to an application component instance that holds that requester's shopping-cart data. Later in this article, I'll discuss how to handle such application-session data in a distributed environment. However, to reduce complexity, most successful Internet-based application systems try to avoid stateful application components whenever possible.
On the infrastructure side, the processing load must be distributed among the group of servers. This is known as server load balancing. Load balancing technologies also pertain to other domains, for instance spreading work among components such as network links, CPUs, or hard drives. This article focuses on server load balancing.

Availability and scalability

Server load balancing distributes service requests across a group of real servers and makes those servers look like a single big server to the clients. Often dozens of real servers are behind a URL that implements a single virtual service.
How does this work? In a widely used server load balancing architecture, the incoming request is directed to a dedicated server load balancer that is transparent to the client. Based on parameters such as availability or current server load, the load balancer decides which server should handle the request and forwards it to the selected server. To provide the load balancing algorithm with the required input data, the load balancer also retrieves information about the servers' health and load to verify that they can respond to traffic. Figure 1 illustrates this classic load balancer architecture.

Figure 1. Classic load balancer architecture (load dispatcher)

The load-dispatcher architecture illustrated in Figure 1 is just one of several approaches. To decide which load balancing solution is the best for your infrastructure, you need to consider availability and scalability.
Availability is defined by uptime -- the time between failures. (Downtime is the time to detect the failure, repair it, perform required recovery, and restart tasks.) During uptime the system must respond to each request within a predetermined, well-defined time. If this time is exceeded, the client sees this as a server malfunction. High availability, basically, is redundancy in the system: if one server fails, the others take over the failed server's load transparently. The failure of an individual server is invisible to the client.
Scalability means that the system can serve a single client, as well as thousands of simultaneous clients, while meeting quality-of-service requirements such as response time. Under an increased load, a highly scalable system can increase its throughput almost linearly in proportion to the power of added hardware resources.
In the scenario in Figure 1, high scalability is reached by distributing the incoming requests over the servers. If the load increases, additional servers can be added, as long as the load balancer does not become the bottleneck. To reach high availability, the load balancer must monitor the servers to avoid forwarding requests to overloaded or dead servers. Furthermore, the load balancer itself must be redundant too. I'll discuss this point later in this article.

Server load balancing techniques

In general, server load balancing solutions are of two main types:
  • Transport-level load balancing -- such as the DNS-based approach or TCP/IP-level load balancing -- acts independently of the application payload.
  • Application-level load balancing uses the application payload to make load balancing decisions.
Load balancing solutions can be further classified into software-based load balancers and hardware-based load balancers. Hardware-based load balancers are specialized hardware boxes that include application-specific integrated circuits (ASICs) customized for a particular use. ASICs enable high-speed forwarding of network traffic without the overhead of a general-purpose operating system. Hardware-based load balancers are often used for transport-level load balancing. In general, hardware-based load balancers are faster than software-based solutions. Their drawback is their cost.
In contrast to hardware load balancers, software-based load balancers run on standard operating systems and standard hardware components such as PCs. Software-based solutions run either within a dedicated load balancer hardware node, as in Figure 1, or directly in the application.

DNS-based load balancing

DNS-based load balancing represents one of the early server load balancing approaches. The Internet's domain name system (DNS) associates IP addresses with a host name. If you type a host name (as part of the URL) into your browser, the browser requests that the DNS server resolve the host name to an IP address.
The DNS-based approach is based on the fact that DNS allows multiple IP addresses (real servers) to be assigned to one host name, as shown in the DNS lookup example in Listing 1.

Listing 1. Example DNS lookup

>nslookup amazon.com
Server:   ns.box
Address:  192.168.1.1

Name:       amazon.com
Addresses:  72.21.203.1, 72.21.210.11, 72.21.206.5
If the DNS server implements a round-robin approach, the order of the IP addresses for a given host changes after each DNS response. Usually clients such as browsers try to connect to the first address returned from a DNS query. The result is that requests from multiple clients are distributed among the servers. In contrast to the server load balancing architecture in Figure 1, no intermediate load balancer hardware node is required.
DNS is an efficient solution for global server load balancing, where load must be distributed between data centers at different locations. Often the DNS-based global server load balancing is combined with other server load balancing solutions to distribute the load within a dedicated data center.
Although easy to implement, the DNS approach has serious drawbacks. To reduce DNS queries, clients tend to cache the query results. If a server becomes unavailable, the client cache as well as the DNS server will continue to hold the dead server's address. For this reason, the DNS approach does little to implement high availability.

TCP/IP server load balancing

TCP/IP server load balancers operate on low-level layer switching. A popular software-based low-level server load balancer is the Linux Virtual Server (LVS). The real servers appear to the outside world as a single "virtual" server. The incoming requests on a TCP connection are forwarded to the real servers by the load balancer, which runs a Linux kernel patched to include IP Virtual Server (IPVS) code.
To ensure high availability, in most cases a pair of load balancer nodes are set up, with one load balancer node in passive mode. If a load balancer fails, the heartbeat program that runs on both load balancers activates the passive load balancer node and initiates the takeover of the Virtual IP address (VIP). While the heartbeat is responsible for managing the failover between the load balancers, simple send/expect scripts are used to monitor the health of the real servers.
Transparency to the client is achieved by using a VIP that is assigned to the load balancer. If the client issues a request, first the requested host name is translated into the VIP. When it receives the request packet, the load balancer decides which real server should handle the request packet. The target IP address of the request packet is rewritten into the Real IP (RIP) of the real server. LVS supports several scheduling algorithms for distributing requests to the real servers. It is often set up to use round-robin scheduling, similar to DNS-based load balancing. With LVS, the load balancing decision is made on the TCP level (Layer 4 of the OSI Reference Model).
After receiving the request packet, the real server handles it and returns the response packet. To force the response packet to be returned through the load balancer, the real server uses the VIP as its default response route. If the load balancer receives the response packet, the source IP of the response packet is rewritten with the VIP (OSI Model Layer 3). This LVS routing mode is called Network Address Translation (NAT) routing. Figure 2 shows an LVS implementation that uses NAT routing.

Figure 2. LVS implemented with NAT routing

LVS also supports other routing modes such as Direct Server Return. In this case the response packet is sent directly to the client by the real server. To do this, the VIP must be assigned to all real servers, too. It is important to make the server's VIP unresolvable to the network; otherwise, the load balancer becomes unreachable. If the load balancer receives a request packet, the MAC address (OSI Model Layer 2) of the request is rewritten instead of the IP address. The real server receives the request packet and processes it. Based on the source IP address, the response packet is sent to the client directly, bypassing the load balancer. For Web traffic this approach can reduce the balancer workload dramatically. Typically, many more response packets are transferred than request packets. For instance, if you request a Web page, often only one IP packet is sent. If a larger Web page is requested, several response IP packets are required to transfer the requested page.

Caching

Low-level server load balancer solutions such as LVS reach their limit if application-level caching or application-session support is required. Caching is an important scalability principle for avoiding expensive operations that fetch the same data repeatedly. A cache is a temporary store that holds redundant data resulting from a previous data-fetch operation. The value of a cache depends on the cost to retrieve the data versus the hit rate and required cache size.
Based on the load balancer's scheduling algorithm, the requests of a user session are handled by different servers. If a cache is used on the server side, straying requests become a problem: a request may land on a server whose cache does not hold the data fetched for a previous request. One approach to handle this is to place the cache in a global space. memcached is a popular distributed cache solution that provides a large cache across multiple machines. It is a partitioned, distributed cache that uses consistent hashing to determine the cache server (daemon) for a given cache entry. Based on the cache key's hash code, the client library always maps the same hash code to the same cache server address. This address is then used to store the cache entry. Figure 3 illustrates this caching approach.

Figure 3. Load balancer architecture enhanced by a partitioned, distributed cache

Listing 2 uses spymemcached, a memcached client written in Java, to cache HttpResponse messages across multiple machines. The spymemcached library implements the required client logic I just described.

Listing 2. memcached-based HttpResponse cache

interface IHttpResponseCache {

   IHttpResponse put(String key, IHttpResponse response) throws IOException;

   void remove(String key) throws IOException;

   IHttpResponse get(String key) throws IOException;
}



class RemoteHttpResponseCache implements IHttpResponseCache {

   private MemcachedClient memCachedClient;

   public RemoteHttpResponseCache(InetSocketAddress... cacheServers) throws IOException {
      memCachedClient = new MemcachedClient(Arrays.asList(cacheServers));
   }

   public IHttpResponse put(String key, IHttpResponse response) throws IOException {
      byte[] bodyData = response.getBlockingBody().readBytes();

      memCachedClient.set(key, 3600, bodyData);
      return null;
   }


   public IHttpResponse get(String key) throws IOException {
      byte[] bodyData = (byte[]) memCachedClient.get(key);
      if (bodyData != null) {
         return new HttpResponse(200, "text/plain", bodyData);
      } else {
         return null;
      }
   }


   public void remove(String key) throws IOException {
      memCachedClient.delete(key);
   }
}
Listing 2 and the rest of this article's example code also use the xLightweb HTTP library. Listing 3 shows an example business service implementation. The onRequest(...) method -- similar to the Servlet API's doGet(...) or doPost(...) method -- is called each time a request header is received. The exchange.send() method sends the response.

Listing 3. Example business service implementation

class MyRequestHandler implements IHttpRequestHandler {

   public void onRequest(IHttpExchange exchange) throws IOException {

      IHttpRequest request = exchange.getRequest();

      int customerId = request.getRequiredIntParameter("id");
      long amount = request.getRequiredLongParameter("amount");
      //...


      // perform some operations
      //..
      String response = ...

      // and return the response
      exchange.send(new HttpResponse(200, "text/plain", response));
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      HttpServer httpServer = new HttpServer(8180, new MyRequestHandler());
      httpServer.run();
   }
}
Based on the HttpResponse cache, a simple caching solution can be implemented that caches the HTTP response for an HTTP request. If the same request is received twice, the corresponding response can be taken from the cache, without calling the business service. This requires intercepting the request-handling flow. This can be done by the interceptor shown in Listing 4.

Listing 4. Cache-supported business service example

class CacheInterceptor implements IHttpRequestHandler {

   private IHttpResponseCache cache;

   public CacheInterceptor(IHttpResponseCache cache) {
      this.cache = cache;
   }


   public void onRequest(final IHttpExchange exchange) throws IOException {

      IHttpRequest request = exchange.getRequest();

      // check if request is cacheable (Cache-Control header, ...)
      // ...
      boolean isCacheable = ...


      // if request is not cacheable forward it to the next handler of the chain
      if (!isCacheable) {
         exchange.forward(request);
         return;
      }

      // create the cache key
      StringBuilder sb = new StringBuilder(request.getRequestURI());
      TreeSet<String> sortedParamNames = new TreeSet<String>(request.getParameterNameSet());
      for (String paramName : sortedParamNames) {
         sb.append(URLEncoder.encode(paramName) + "=");

         List<String> paramValues = Arrays.asList(request.getParameterValues(paramName));
         Collections.sort(paramValues);
         for (String paramValue : paramValues) {
            sb.append(URLEncoder.encode(paramValue) + ", ");
         }
      }
      final String cacheKey = URLEncoder.encode(sb.toString());

      // is request in cache?
      IHttpResponse cachedResponse = cache.get(cacheKey);
      if (cachedResponse != null) {
         IHttpResponse response = HttpUtils.copy(cachedResponse);
         response.setHeader("X-Cached", "true");
         exchange.send(response);

      // .. no -> forward it to the next handler of the chain
      } else {

         // define an intermediate response handler to intercept and copy the response
         IHttpResponseHandler respHdl = new IHttpResponseHandler() {

            @InvokeOn(InvokeOn.MESSAGE_RECEIVED)
            public void onResponse(IHttpResponse response) throws IOException {
               cache.put(cacheKey, HttpUtils.copy(response));
               exchange.send(response);  // forward the response to the client
            }

            public void onException(IOException ioe) throws IOException {
               exchange.sendError(ioe);  // forward the error to the client
            }
         };

         // forward the request to the next handler of the chain
         exchange.forward(request, respHdl);
      }
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      RequestHandlerChain handlerChain = new RequestHandlerChain();
      handlerChain.addLast(new CacheInterceptor(new RemoteHttpResponseCache(new InetSocketAddress("cacheSrv1", 11211), new InetSocketAddress("cacheSrv2", 11211))));
      handlerChain.addLast(new MyRequestHandler());

      HttpServer httpServer = new HttpServer(8180, handlerChain);
      httpServer.run();
   }
}
The CacheInterceptor in Listing 4 uses the memcached-based implementation to cache responses, based on a cache key built from the request URI and its sorted parameters. If the cache contains a response for this key, the request is not forwarded to the business-service handler. Instead, the response is returned from the cache. If the cache does not contain a response, the request is forwarded by adding a response handler to intercept the response flow. When a response is received from the business-service handler, it is added to the cache. (Note that Listing 4 does not show cache invalidation. Often dedicated business operations require cache entries to be invalidated.)
The consistent-hashing approach leads to high scalability. Based on consistent hashing, the memcached client implements a failover strategy to support high availability. But if a memcached daemon crashes, its cache data is lost. This is a minor problem, because cache data is redundant by definition.
A simple approach to making the memcached architecture fail-safe is to store each cache entry on both a primary and a secondary cache server. If the primary cache server goes down, the secondary server probably still contains the entry. If not, the required data must be recovered from the underlying data source.
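Such a primary/secondary scheme could be hidden behind the same cache interface used above. The following is a minimal sketch only, assuming the IHttpResponseCache interface from the listings; the class name and the wiring are made up for the example.

class ReplicatedHttpResponseCache implements IHttpResponseCache {

   private final IHttpResponseCache primary;
   private final IHttpResponseCache secondary;

   public ReplicatedHttpResponseCache(IHttpResponseCache primary, IHttpResponseCache secondary) {
      this.primary = primary;
      this.secondary = secondary;
   }

   public IHttpResponse put(String key, IHttpResponse value) {
      secondary.put(key, value);   // write the redundant copy first
      return primary.put(key, value);
   }

   public IHttpResponse get(String key) {
      // fall back to the secondary server if the primary misses or is down
      IHttpResponse response = primary.get(key);
      return (response != null) ? response : secondary.get(key);
   }

   public void remove(String key) {
      primary.remove(key);
      secondary.remove(key);
   }
}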

Application session data support

Supporting application session data in a fail-safe way is more problematic. Application session data represents the state of a user-specific application session. Examples include the ID of a selected folder or the articles in a user's shopping cart. The application session data must be maintained across requests. In classic ("Web 1.0") Web applications, such session data must be held on the server side. Storing it on the client by using cookies or hidden fields has two major weaknesses. First, it exposes internal session data, such as the price fields of shopping-cart data, to manipulation on the client side, so this security risk must be addressed. Second, the approach works only for small amounts of data, limited by the maximum size of the HTTP cookie header and by the overhead of transferring the session data to and from the client with every request.
Similarly to the memcached architecture, session servers can be used to store application session data on the server side. In contrast to cached data, however, application session data is not redundant by definition. For this reason, session data must not be removed to make room for new data when the maximum memory size is reached. A cache, on the other hand, is free to remove entries for memory-management reasons at any time; caching algorithms such as least recently used (LRU) evict entries once the maximum cache size is reached.
If the session server crashes, the application session data is lost, and in contrast to cached data, it usually cannot be recovered. For this reason it is important that a failover solution keeps application session data available in a fail-safe way.
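To make the contrast concrete, a session store's contract differs from a cache's in that the store itself never evicts entries. The interface below is a hypothetical sketch; the names are illustrative and not part of any API discussed in this article.

import java.io.Serializable;

// Hypothetical sketch of a server-side session store. Unlike IHttpResponseCache,
// the store must never evict entries on its own; a session disappears only
// through explicit invalidation (such as a logout) or a session timeout.
interface ISessionStore {

   void put(String sessionId, Serializable sessionData);

   Serializable get(String sessionId);    // returns null if the session is unknown or expired

   void invalidate(String sessionId);     // removes the session explicitly
}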

Client affinity

The disadvantage of the cache and session server approach is that each request leads to an additional network call from the business server to the cache or session server. In most cases the call latency is not a problem, because the cache or session servers and the business servers are placed in the same fast network segment. But latency can become problematic as the size of the data entries grows. To avoid moving large sets of data between the business servers and the cache or session servers again and again, a given client's requests must always be forwarded to the same server, so that all of a user session's requests are handled by the same server instance.
In the case of caching, this makes it possible to use a local cache instead of the distributed memcached infrastructure, eliminating the cache servers entirely. This approach, known as client affinity, always directs a client to "its" particular server.
The example in Listing 5 implements a local cache and requires client affinity.

Listing 5. Local cached-based example requiring client affinity

class LocalHttpResponseCache extends LinkedHashMap<String, IHttpResponse> implements IHttpResponseCache {

   public LocalHttpResponseCache() {
      super(16, 0.75f, true);   // access order, so the eldest entry is the least recently used one
   }

   public synchronized IHttpResponse put(String key, IHttpResponse value) {
      return super.put(key, value);
   }

   public synchronized void remove(String key) {
      super.remove(key);
   }

   public synchronized IHttpResponse get(String key) {
      return super.get(key);
   }

   protected boolean removeEldestEntry(Map.Entry<String, IHttpResponse> eldest) {
      return size() > 1000;   // cache up to 1,000 entries
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      RequestHandlerChain handlerChain = new RequestHandlerChain();
      handlerChain.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
      handlerChain.addLast(new MyRequestHandler());

      HttpServer httpServer = new HttpServer(8080, handlerChain);
      httpServer.run();
   }
}
LVS supports affinity by enabling persistence: it remembers the last connection for a predefined period of time, so that a particular client is directed to the same real server across different TCP connections. But persistence doesn't really help with clients behind provider proxies: requests belonging to the same user session can arrive over different TCP connections and from different proxy addresses, so connection-based persistence may send them to different real servers.
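For reference, persistence in LVS is configured per virtual service. The ipvsadm call below is a sketch only; the virtual IP address and the timeout are made up for the example.

# add a round-robin virtual HTTP service whose clients stick to the
# same real server for 300 seconds across TCP connections
ipvsadm -A -t 192.168.0.100:80 -s rr -p 300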

Conclusion to Part 1

Infrastructures based on pure transport-level server load balancers are common. They are simple, flexible, and highly efficient, and they impose no restrictions on the client side. Such architectures are often combined with distributed cache or session servers to handle application-level caching and session data. However, as the overhead of moving data to and from the cache or session servers grows, these architectures become increasingly inefficient. By implementing client affinity based on an application-level server load balancer, you can avoid copying large datasets between servers. Read Server load balancing architectures, Part 2 for a discussion of application-level load balancing.

About the author

Gregor Roth, creator of the xLightweb HTTP library, works as a software architect at United Internet group, a leading European Internet service provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and system architecture, enterprise architecture management, object-oriented design, distributed computing, and development methodologies.
Read more about Enterprise Java in JavaWorld's Enterprise Java section.

All contents copyright 1995-2013 Java World, Inc. http://www.javaworld.com

The awesomeness behind LESS

If you have never heard of LESS before, let me introduce it to you. LESS is a preprocessor for CSS; basically, it is a dynamic stylesheet language that helps you write CSS more efficiently.
You can also think of it as an extension to CSS, because it adds new behaviors such as variables and mixins, which are the driving forces that make writing CSS through LESS that much simpler and therefore better.

Why you ought to use it

I wouldn't be writing about LESS if it did not have plenty of advantages. Its two biggest selling points, if you will, are that it makes CSS programming easier and faster. Let's explore those two thoughts a little bit more.

Easier and Faster

CSS is fairly simple to begin with because it doesn't require a lot of logic; it just has a bunch of rules to define. However, defining those rules can become tricky and tedious at times. You then run into numerous limitations and obstacles that make your life much harder, and that is no fun at all. LESS features such as variables and functions help you get a hold on your CSS with ease, just like they do in other languages such as JavaScript.
For instance, variables help with extensive style sheets, where replacing a single hex colour with another could take hours because it appears all over the place. With a variable in place, you change the hex value once, for the variable, and every rule that uses it is adjusted automatically. That takes only a few seconds in comparison, and an example follows below. You can imagine how such efficiency and ease of use lead to faster development.
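Here is a minimal LESS sketch of that idea; the variable name and the colour values are made up for the example:

  @brand-colour: #4d926f;   // change this one value to re-theme every rule below

  h1         { color: @brand-colour; }
  a:hover    { color: @brand-colour; }
  .btn-brand { border: 1px solid @brand-colour; }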

Similarity in syntax

One other thing that makes LESS great is its similarity in syntax to CSS. This is especially helpful to people who are new to CSS or have a hard time grasping other languages' syntax. I think something this obvious is amazing: why create a new syntax for something that is supposed to be a helpful add-on, right? It also means that once LESS is installed, you are ready to go, because there is no need to figure out how it works. As long as you know CSS, you know LESS too.

Installing this bad boy

Before I explain the installation process, I need you to understand something. LESS runs on both the client side and the server side. The difference between the two is that on the client side, the code runs on the user's computer, like a typical style sheet does; on the server side, the code runs on the web server first and then arrives at the user's computer all nice and prepared. All this means is that there are two ways to install LESS.

Client-side

If you've ever linked a style sheet, you are already familiar with how to make LESS work through the client-side method. However, it is a bit more complicated than just linking a CSS style sheet. Let me explain.
First, you need to make sure that all of your LESS files are saved as .less, like so: example.less. It is just like a CSS file, but with .less instead of .css.

  styles.less

Link the LESS file like you typically would a CSS file, but in the link make sure you specify the relation to be "stylesheet/less":

  <link rel="stylesheet/less" type="text/css" href="styles.less" />

When you download LESS from their website, you are given a JavaScript file. It is the file that actually makes LESS work, so it is crucial that you also include it in your HTML file like any other JavaScript file.
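Assuming the downloaded file is named less.js and sits next to your HTML file, the include could look like this. Note that the script should come after your .less stylesheet links, so that LESS can find them when it runs:

  <script src="less.js" type="text/javascript"></script>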

Code Coverage Tools (JaCoCo, Cobertura, Emma) Comparison in Sonar

For those who are not familiar with Sonar (I hope this post will make you at least try it, or see it in action at http://nemo.sonarsource.org), you can take a look at an earlier post I wrote some time ago. In one sentence, Sonar is an open source platform that allows you to track and improve the quality of your source code. One of the key aspects of software quality is test coverage, or code coverage: how much of your source code is exercised by unit tests. Sonar integrates with the most popular open source code coverage tools (JaCoCo, Cobertura, Emma) and the well-known commercial Clover by Atlassian. By default it uses the JaCoCo (Java Code Coverage) engine, and you'll shortly find out why :)
Before we move on, I'd like to give many kudos to Evgeny Mandrikov. This article is inspired by one of his older posts, and its intention is to present an updated comparison of the code coverage tools supported by Sonar and to point out some differences in their results and the way they work. Sonar recently changed its default code coverage tool to JaCoCo, and this post tries to explain the reasons behind that decision. Some of the information is borrowed from Evgeny's post, and the image is taken from his presentation about JaCoCo. So thanks a lot, Evgeny!
Now let's get to the meat. For the comparison you'll see, I used the latest available Sonar version 3.3, Maven 2.2.1, and Java 1.6, and all analyses were launched on a Windows 7 machine (Intel Core i3-2120 CPU @ 3.30GHz) with 8GB RAM. The projects were carefully selected: a small, a medium-sized, and a large one (not that large as Java code bases go, but large enough to extract some results). I ran five analyses for each open source code coverage tool (I excluded the commercial Clover from my comparison) and another five with the code coverage mechanism disabled, for a total of 60 analyses. In the following tables you can find some information about the code coverage tools and some basic metrics for the selected projects. Pay attention to the date of the latest stable release: Emma hasn't been updated since the dinosaur era, and Cobertura has been inactive for almost three years. One might think this isn't an issue if they are stable and don't need a new release. Well, the truth is that both of them have bugs that frustrate end users, and there's no one to fix them. JaCoCo, on the other hand, is continuously evolving and improving...
[Table 1: the compared code coverage tools, including their latest stable release dates]

[Table 2: basic metrics for the selected projects]

The results of the analyses are displayed next. Some important notes: Emma doesn't support branch coverage, which is why you don't see any such metrics for it. Furthermore, there are differences in the line and branch coverage results, and they are more pronounced for larger projects. For instance, in the Sonar Jira plugin all three tools produce the same results, whereas in the Sonar analysis and Commons Lang projects you can see that the numbers differ.

Now take a look at a graph that illustrates in a more readable way which tool is the fastest.
[Graph: analysis duration per project and code coverage tool]
It seems that Emma and JaCoCo need the same amount of time to compute their metrics... but, as already mentioned, there's a huge difference: there's no branch coverage in Emma's reports. Cobertura is always slower than JaCoCo, so again the winner is JaCoCo. Of course, you can get even faster results by running a Sonar analysis without computing code coverage metrics at all :)
One last thing: as the following figure shows, JaCoCo is the only tool that instruments bytecode on the fly. Cobertura and Emma run an offline analysis and rely on a class loader, whereas JaCoCo ships its own Java agent for instrumenting code. This design makes JaCoCo very flexible: it can be integrated with many other tools and frameworks and can be used with any language that runs on the JVM.
[Figure: JaCoCo's on-the-fly bytecode instrumentation, compared with the offline approach of Cobertura and Emma]
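As a taste of the agent-based approach, attaching JaCoCo to a JVM process is a one-liner; the file paths below are made up for the example.

# run an application with the JaCoCo agent, writing coverage data to target/jacoco.exec
java -javaagent:/path/to/jacocoagent.jar=destfile=target/jacoco.exec -jar myapp.jar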

So, to sum up: if you're using Sonar (and if you're not, you SHOULD), it is strongly advisable to keep the default code coverage engine (JaCoCo), unless you have really important reasons not to.
Finally, don't forget to check out Sonar's unofficial 2013 community survey and the upcoming book about Sonar by Manning Publications. The release date is in about three to four months, but you can get an early access version here.

As always, feel free to comment or suggest improvements to the article and its content.