Java 2 Ada

Rest API Benchmark comparison between Ada and Java

By stephane.carrez

Arcadius Ahouansou made an interesting benchmark to compare several Java Web servers: Java REST API Benchmark: Tomcat vs Jetty vs Grizzly vs Undertow, Round 3. His benchmark is not as broad as the TechEmpower Benchmark, but it has the merit of being simple to understand and easy for anyone to run. I decided to make a similar benchmark for Ada Web servers with the same REST API so that it would be possible to compare the Ada and Java implementations.

The goal is to benchmark the following servers and get an idea of how they compare with each other:

The first three are implemented in Ada and the last one in Java.

REST Server Implementation

The implementation is different for each server but they all implement the same REST GET operation accessible from the /api base URL. They return the same JSON content:

{"greeting":"Hello World!"}

Below is an extract of the server implementation for each server.

AWS Rest API Server

function Get_Api (Request : in AWS.Status.Data) return AWS.Response.Data is
begin
   return AWS.Response.Build ("application/json", "{""greeting"":""Hello World!""}");
end Get_Api;

ASF Rest API Server

procedure Get (Req    : in out ASF.Rest.Request'Class;
               Reply  : in out ASF.Rest.Response'Class;
               Stream : in out ASF.Rest.Output_Stream'Class) is
begin
   Stream.Write_Entity ("greeting", "Hello World!");
end Get;

EWS Rest API Server

function Get (Request : EWS.HTTP.Request_P) return EWS.Dynamic.Dynamic_Response'Class is
   Result : EWS.Dynamic.Dynamic_Response (Request);
begin
   EWS.Dynamic.Set_Content_Type (Result, To => EWS.Types.JSON);
   EWS.Dynamic.Set_Content (Result, "{""greeting"":""Hello World!""}");
   return Result;
end Get;

Java Rest API Server

public class ApiResource {
  public static final String RESPONSE = "{\"greeting\":\"Hello World!\"}";

  public Response test() {
      return ok(RESPONSE).build();
  }
}

Benchmark Strategy and Results

The Ada and Java servers are started on the same host (one at a time), a Linux Ubuntu 14.04 64-bit machine powered by an Intel i7-3770S CPU @ 3.10GHz with 8 logical cores. The benchmark is made by using Siege executed on a second computer running Linux Ubuntu 15.04 64-bit, powered by an Intel i7-4720HQ CPU @ 2.60GHz with 8 logical cores. Client and server hosts are connected through a Gigabit Ethernet link.

Siege makes an intensive use of network connections, which results in exhaustion of TCP/IP ports to connect to the server. This is due to the TCP TIME_WAIT state that prevents a TCP/IP port from being re-used for future connections. To avoid such exhaustion, the network stack is tuned on both the server and the client hosts with the sysctl commands:

sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

The benchmark tests are executed by running the script and then making GNUplot graphs using the plot-perf.gpi script. The benchmark gives the number of REST requests made per second for different levels of concurrency.

  • The Embedded Web Server targets embedded platforms and uses only one task to serve requests. Despite this simple configuration, it achieves honorable results, reaching 8000 requests per second.
  • The Ada Server Faces provides an Ada implementation of Java Server Faces. It uses the Ada Web Server. The benchmark shows a small overhead (around 4%).
  • The Ada Web Server is the fastest server in this configuration. As for Ada Server Faces, it is configured with only 8 tasks serving requests. Increasing the number of tasks does not bring better performance.
  • The Java Grizzly server is the fastest Java server reported by Arcadius's benchmark. It uses 62 threads. It appears to serve 7% fewer requests than the Ada Web Server.


On the memory side, the process Resident Set Size (RSS) is measured once the benchmark test ends and graphed below. The Java Grizzly server uses around 580 MB, followed by Ada Server Faces at 5.6 MB, the Ada Web Server at 3.6 MB, and EWS at only 1 MB.


Conclusion and References

The Ada Web Server has performance comparable to the Java Grizzly server (it is even a little bit faster). But as far as memory is concerned, Ada has a serious advantage since it cuts the memory size by a factor of 100. Ada has other advantages that make it an alternative choice for web development (safety, security, real-time capabilities, ...).

Sources of the benchmarks are available in the following two GitHub repositories:


Process creation in Java and Ada

By stephane.carrez

When developing and integrating applications, it is often useful to launch an external program. By doing this, many integration and implementation details are simplified (this integration technique also avoids license issues when integrating some open source software).

This article explains how to launch an external program in Java and Ada and be able to read the process output to get the result.

Java Process Creation

The process creation is managed by the ProcessBuilder Java class. An instance of this class holds the necessary information to create and launch a new process. This includes the command line and its arguments, the standard input and outputs, the environment variables and the working directory.

The process builder instance is created by specifying the command and its arguments. For this the constructor accepts a variable list of parameters of type String.

import java.lang.ProcessBuilder;
  final String cmd = "ls";
  final String arg1 = "-l";
  final ProcessBuilder pb = new ProcessBuilder(cmd, arg1);

When the process builder is initialized, we can invoke the start method to create a new process. Each process is then represented by an instance of the Process class. It is possible to invoke start several times, and each call creates a new process. It is necessary to catch the IOException which can be raised if the process cannot be created.

import java.lang.Process;
  try {
    final Process p = pb.start();

  } catch (final IOException ex) {
    System.err.println("IO error: " + ex.getLocalizedMessage());
  }

The Process class gives access to the process output through an input stream represented by the InputStream class. With this input stream, we can read what the process writes on its output. We will use a BufferedReader class to read that output line by line.

  final InputStream is = p.getInputStream();
  final BufferedReader reader = new BufferedReader(new InputStreamReader(is));

By using the readLine method, we can read a new line after each call. Once the whole stream is read, we have to close it. Closing the BufferedReader will close the InputStream associated with it.

  String line;
  while ((line = reader.readLine()) != null) {
      System.out.println(line);
  }
  reader.close();

Last step is to wait for the process termination and get the exit status: we can use the waitFor method. Since this method can be interrupted, we have to catch the InterruptedException.

  try {
    final int exit = p.waitFor();
    if (exit != 0) {
      System.err.printf("Command exited with status %d\n", exit);
    }
  } catch (final InterruptedException ex) {
    System.err.println("Launch was interrupted...");
  }
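Putting these pieces together, here is a minimal, self-contained sketch. The Launch class name and the use of echo (chosen because its output is deterministic) are illustrative choices of mine, not part of the original example:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Launch {
    public static void main(String[] args) {
        // Build the command; echo gives a small, predictable output.
        final ProcessBuilder pb = new ProcessBuilder("echo", "Hello");
        try {
            final Process p = pb.start();

            // Read the process output line by line.
            final BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            reader.close();

            // Wait for the process termination and check the exit status.
            final int exit = p.waitFor();
            if (exit != 0) {
                System.err.printf("Command exited with status %d%n", exit);
            }
        } catch (final IOException ex) {
            System.err.println("IO error: " + ex.getLocalizedMessage());
        } catch (final InterruptedException ex) {
            System.err.println("Launch was interrupted...");
        }
    }
}
```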

You can get the complete source from the file:

Ada Process Creation

For the Ada example, we will create an application that invokes the nslookup utility to resolve a set of host names. The list of host names is provided to nslookup by writing on its standard input and the result is collected by reading the output.

We will use the Pipe_Stream to launch the process, write on its input and read its output at the same time. The process is launched by calling the Open procedure and specifying the pipe redirection modes: READ is for reading the process output, WRITE is for writing to its input and READ_WRITE is for both.

with Util.Processes;
with Util.Streams.Pipes;
   Pipe    : aliased Util.Streams.Pipes.Pipe_Stream;

   Pipe.Open ("nslookup", Util.Processes.READ_WRITE);

We can read or write on the pipe directly but using a Print_Stream to write the text and the Buffered_Stream to read the result simplifies the implementation. Both of them are connected to the pipe: the Print_Stream will use the pipe output stream and the Buffered_Stream will use the pipe input stream.

with Util.Streams.Buffered;
with Util.Streams.Texts;
   Buffer  : Util.Streams.Buffered.Buffered_Stream;
   Print   : Util.Streams.Texts.Print_Stream;

   --  Read from the process output stream
   Buffer.Initialize (null, Pipe'Unchecked_Access, 1024);
   --  Write on the process input stream
   Print.Initialize (Pipe'Unchecked_Access);

Before reading the process output, we send it the input data: the host names that the process must resolve. By closing the print stream, we also close the pipe output stream, thus closing the process standard input.

   Print.Write ("" & ASCII.LF);
   Print.Write ("set type=NS" & ASCII.LF);
   Print.Write ("" & ASCII.LF);
   Print.Write ("set type=MX" & ASCII.LF);
   Print.Write ("" & ASCII.LF);

We can now read the program output by using the Read procedure and get the result in the Content string. The Close procedure is invoked on the pipe to close the pipe (input and output) and wait for the application termination.

   Content : Unbounded_String;

   --  Read the 'nslookup' output.
   Buffer.Read (Content);

Once the process has terminated, we can get the exit status by using the Get_Exit_Status function.

   Ada.Text_IO.Put_Line ("Exit status: "
       & Integer'Image (Pipe.Get_Exit_Status));




Thread safe object pool to manage scarce resource in application servers

By stephane.carrez

Problem Description

In application servers some resources are expensive and they must be shared. This is the case for a database connection, a frame buffer used for image processing, a connection to a remote server, and so on. The problem is to make these scarce resources available in such a way that:

  • a resource is used by only one thread at a time,
  • we can control the maximum number of resources used at a time,
  • we have some flexibility to define such a maximum when configuring the application server,
  • and of course the final solution is thread safe.

The common pattern used in such situation is to use a thread-safe pool of objects. Objects are picked from the pool when needed and restored back to the pool when they are no longer used.

Java thread safe object pool

Let's see how to implement our object pool in Java. We will use a generic class declaration into which we define a fixed array of objects. The pool array is allocated by the constructor and we will assume it will never change (hence the final keyword).

public class Pool<T> {
  private final T[] objects;

  @SuppressWarnings("unchecked")
  public Pool(int size) {
     objects = (T[]) new Object[size];  // Java cannot create a generic array directly
  }
}

First, we need a getInstance method that picks an object from the pool. The method must be thread safe and is protected by the synchronized keyword. If there is no object, it has to wait until an object is available. For this, the method invokes wait to sleep until another thread releases an object. To keep track of the number of available objects, we use an available counter that is decremented each time an object is used.

  private int available = 0;
  private int waiting = 0;
  public synchronized T getInstance() throws InterruptedException {
     while (available == 0) {
        waiting++;
        wait();
        waiting--;
     }
     return objects[--available];
  }

To know when to wake up a thread, we keep track of the number of waiters in the waiting counter. A loop is also necessary to make sure we have an available object after the thread wakes up. Indeed, there is no guarantee that after being notified, we have an available object to return. The call to wait releases the lock on the pool and puts the thread in wait mode.

Releasing the object is provided by release. The object is put back in the pool array and the available counter is incremented. If some threads are waiting, one of them is awakened by calling notify.

  public synchronized void release(T obj) {
     objects[available++] = obj;
     if (waiting > 0) {
        notify();
     }
  }

When the application is started, the pool is initialized and some pre-defined objects are inserted.

   class Item { ... }
   Pool<Item> pool = new Pool<Item>(10);
   for (int i = 0; i < 10; i++) {
       pool.release(new Item());
   }
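For reference, the fragments above assemble into one compilable class. This is only a sketch following the snippets; the unchecked cast in the constructor is the standard Java workaround for generic array creation:

```java
public class Pool<T> {
    private final T[] objects;
    private int available = 0;
    private int waiting = 0;

    @SuppressWarnings("unchecked")
    public Pool(int size) {
        // Java cannot allocate a T[] directly; allocate Object[] and cast.
        objects = (T[]) new Object[size];
    }

    // Pick an object from the pool, waiting until one is available.
    public synchronized T getInstance() throws InterruptedException {
        while (available == 0) {
            waiting++;
            wait();
            waiting--;
        }
        return objects[--available];
    }

    // Put the object back in the pool and wake up one waiting thread.
    public synchronized void release(T obj) {
        objects[available++] = obj;
        if (waiting > 0) {
            notify();
        }
    }
}
```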

Ada thread safe pool

The Ada object pool will be defined in a generic package and we will use a protected type. The protected type will guarantee the thread safe behavior of the implementation by making sure that only one thread executes the procedures.

generic
   type Element_Type is private;
package Util.Concurrent.Pools is
   type Element_Array_Access is private;
   Null_Element_Array : constant Element_Array_Access;
private
   type Element_Array is array (Positive range <>) of Element_Type;
   type Element_Array_Access is access all Element_Array;
   Null_Element_Array : constant Element_Array_Access := null;
end Util.Concurrent.Pools;

The Ada protected type is simple, with three operations: Get_Instance and Release as in the Java implementation, and Set_Size which takes care of allocating the pool array (a job done by the Java pool constructor).

protected type Pool is
  entry Get_Instance (Item : out Element_Type);
  procedure Release (Item : in Element_Type);
  procedure Set_Size (Capacity : in Positive);
private
  Available : Natural := 0;
  Elements  : Element_Array_Access := Null_Element_Array;
end Pool;

First, the Get_Instance operation is defined as an entry so that we can attach a condition to it: we need at least one object in the pool. Since we keep track of the number of available objects, we use that count as the entry condition. Thanks to this entry barrier, the Ada implementation is a lot simpler.

protected body Pool is
  entry Get_Instance (Item : out Element_Type) when Available > 0 is
  begin
     Item := Elements (Available);
     Available := Available - 1;
  end Get_Instance;
end Pool;

The Release operation is also easier as there is no need to wakeup any thread: the Ada runtime will do that for us.

protected body Pool is
   procedure Release (Item : in Element_Type) is
   begin
      Available := Available + 1;
      Elements (Available) := Item;
   end Release;
end Pool;

The pool is instantiated:

type Connection is ...;
package Connection_Pool is new Util.Concurrent.Pools (Connection);

And a pool object can be declared and initialized with some default object:

P : Connection_Pool.Pool;
C : Connection;
begin
   P.Set_Size (Capacity => 10);
   for I in 1 .. 10 loop
      P.Release (C);
   end loop;




Thread safe cache updates in Java and Ada

By stephane.carrez

Problem Description

The problem is to update a cache that is almost never modified and only read in multi-threaded context. The read performance is critical and the goal is to reduce the thread contention as much as possible to obtain a fast and non-blocking path when reading the cache.

Cache Declaration

Java Implementation

Let's define the cache using the HashMap class.

public class Cache {
   private HashMap<String, String> map = new HashMap<String, String>();
}

Ada Implementation

In Ada, let's instantiate the Indefinite_Hashed_Maps package for the cache.

with Ada.Strings.Hash;
with Ada.Containers.Indefinite_Hashed_Maps;
  package Hash_Map is
    new Ada.Containers.Indefinite_Hashed_Maps (Key_Type        => String,
                                               Element_Type    => String,
                                               Hash            => Ada.Strings.Hash,
                                               Equivalent_Keys => "=");

  Map : Hash_Map.Map;

Solution 1: safe and concurrent implementation

This solution is a straightforward one using the language's thread safe constructs. In Java, this solution does not allow several threads to read the cache at the same time: cache accesses are serialized. This is not a problem in Ada, where multiple concurrent readers are allowed; only writing locks the cache object.

Java Implementation

The thread safe implementation is protected by the synchronized keyword, which guarantees mutual exclusion of the threads invoking the getCache and addCache methods.

   public synchronized String getCache(String key) {
      return map.get(key);
   }
   public synchronized void addCache(String key, String value) {
      map.put(key, value);
   }

Ada Implementation

In Ada, we can use the protected type. The cache could be declared as follows:

  protected type Cache is
    function Get (Key : in String) return String;
    procedure Put (Key, Value : in String);
  private
    Map : Hash_Map.Map;
  end Cache;

and the implementation is straightforward:

  protected body Cache is
    function Get (Key : in String) return String is
    begin
       return Map.Element (Key);
    end Get;
    procedure Put (Key, Value : in String) is
    begin
       Map.Insert (Key, Value);
    end Put;
  end Cache;

Pros and Cons

+: This implementation is thread safe.

-: In Java, thread contention is high as only one thread can look in the cache at a time.

-: In Ada, thread contention occurs only when another thread updates the cache (which is far better than in Java, but could still hurt real-time performance if the Put operation takes time).


Solution 2: weak but efficient implementation

Solution 1 does not allow multiple threads to access the cache at the same time, thus creating a contention point. The second solution proposed here removes this contention point by relaxing some thread-safety guarantees at the expense of cache behavior.

In this second solution, several threads can read the cache at the same time. The cache can be updated by one or several threads, but the update does not guarantee that all entries added will be present in the cache. In other words, if two threads update the cache at the same time, the updated cache will contain only one of the new entries. This behavior is acceptable in some cases but it may not fit all uses. It must be used with great care.

Java Implementation

A cache entry can be added in a thread-safe manner using the following code:

   private volatile HashMap<String, String> map = new HashMap<String, String>();
   public String getCache(String key) {
      return map.get(key);
   }
   public void addCache(String key, String value) {
      HashMap<String, String> newMap = new HashMap<String, String>(map);
      newMap.put(key, value);
      map = newMap;
   }

This implementation is thread safe because an installed hash map is never modified: modifications are made on a separate hash map object, and the new hash map is then installed by the map = newMap assignment, which is atomic. Again, this code extract does not guarantee that all the cache entries added will end up in the cache.
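The lost-update behavior can be shown deterministically by simulating the race: two updaters copy the same base map, and the second installation overwrites the first. This is a sketch of mine; the simulateRace method exists only for illustration and is not part of the original code:

```java
import java.util.HashMap;

public class CopyOnWriteCache {
    // The installed map is never modified; volatile makes each
    // installation visible to all reader threads.
    private volatile HashMap<String, String> map = new HashMap<String, String>();

    public String getCache(String key) {
        return map.get(key);
    }

    public void addCache(String key, String value) {
        // Copy, modify the copy, then atomically install the new map.
        HashMap<String, String> newMap = new HashMap<String, String>(map);
        newMap.put(key, value);
        map = newMap;
    }

    // Simulate two threads that both copied the same base map before
    // either installed its update: the second install wins, "a" is lost.
    public static CopyOnWriteCache simulateRace() {
        CopyOnWriteCache cache = new CopyOnWriteCache();
        HashMap<String, String> copy1 = new HashMap<String, String>(cache.map);
        HashMap<String, String> copy2 = new HashMap<String, String>(cache.map);
        copy1.put("a", "1");
        copy2.put("b", "2");
        cache.map = copy1;   // first update installs its map
        cache.map = copy2;   // second update overwrites it
        return cache;
    }
}
```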

Ada Implementation

The Ada implementation is slightly more complex basically because there is no garbage collector. If we allocate a new hash map and update the access pointer, we still have to free the old hash map when no other thread is accessing it.

The first step is to use a reference counter to automatically release the hash table when the last thread finishes its work. The reference counter handles the memory management issues for us. An implementation of a thread-safe reference counter is provided by Ada Util. In this implementation, counters are updated using atomic instructions (see Showing multiprocessor issue when updating a shared counter).

with Util.Refs;
   type Cache is new Util.Refs.Ref_Entity with record
      Map : Hash_Map.Map;
   end record;
   type Cache_Access is access all Cache;

   package Cache_Ref is new Util.Refs.References (Element_Type => Cache,
                Element_Access => Cache_Access);

  C : Cache_Ref.Atomic_Ref;

Source: Util.Refs.adb

The References package defines a Ref type representing a reference to a Cache instance. To be able to replace a reference by another one in an atomic manner, it is necessary to use the Atomic_Ref type. This is necessary because the Ada assignment of a Ref type is not atomic (the assignment copy and the call to the Adjust operation which updates the reference counter are not atomic). The Atomic_Ref type is a protected type that provides a getter and a setter. Their use guarantees the atomicity.

    function Get (Key : in String) return String is
       R : constant Cache_Ref.Ref := C.Get;
    begin
       return R.Value.Map.Element (Key); -- concurrent access
    end Get;

    procedure Put (Key, Value : in String) is
       R : constant Cache_Ref.Ref := C.Get;
       N : constant Cache_Ref.Ref := Cache_Ref.Create;
    begin
       N.Value.all.Map := R.Value.Map;
       N.Value.all.Map.Insert (Key, Value);
       C.Set (N); -- install the new map atomically
    end Put;

Pros and Cons

+: high performance in SMP environments

+: no thread contention in Java

-: cache updates can lose some entries

-: still some thread contention in Ada but limited to copying a reference (C.Set)


Fault tolerant EJB interceptor: a solution to optimistic locking errors and other transient faults

By stephane.carrez

Fault tolerance is often necessary in application servers. The J2EE standard defines an interceptor mechanism that can be used to implement the first steps of fault tolerance. The pattern that I present in this article is the solution that I implemented for the Planzone service; it has been used with success for the last two years.

Identify the Fault to recover

The first step is to distinguish the faults that can be recovered from those that cannot. Our application uses MySQL and Hibernate, and we have identified the following three transient (or recoverable) faults.

StaleObjectStateException (Optimistic Locking)

Optimistic locking is a pattern used to optimize database transactions. Instead of locking the database tables and rows when values are updated, we allow other transactions to access these values. Concurrent writes are possible and they must be detected. For this, optimistic locking uses a version counter, a timestamp or a state comparison to detect concurrent writes.

When a concurrent write is detected, Hibernate raises a StaleObjectStateException exception. When such exception occurs, the state of objects associated with the current hibernate session is unknown. (See Transactions and Concurrency)
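The version check behind StaleObjectStateException can be sketched independently of Hibernate. In this hypothetical in-memory example (OptimisticRow is a name of mine, not a Hibernate class), an update succeeds only if the version the caller read is still the current one:

```java
public class OptimisticRow {
    // Version counter used to detect concurrent writes, as in
    // Hibernate's optimistic locking.
    private long version = 0;
    private String value = "initial";

    public synchronized long readVersion() {
        return version;
    }

    // Returns false when a concurrent write is detected; Hibernate
    // would raise StaleObjectStateException in that case.
    public synchronized boolean update(long expectedVersion, String newValue) {
        if (version != expectedVersion) {
            return false;            // someone else wrote in between
        }
        value = newValue;
        version++;                   // bump the version on every write
        return true;
    }
}
```

Re-reading the current version and retrying, as the interceptor below does, is how the caller recovers from a failed update.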

As far as Planzone is concerned, we get 3 exceptions per 10000 calls.

LockAcquisitionException (Database deadlocks)

On the database side, the server can detect deadlock situation and report an error. When a deadlock is detected between two clients, the server generates an error for one client and the second one can proceed. When such error is reported, the client can retry the operation (See InnoDB Lock Modes).

As far as Planzone is concerned, we get 1 or 2 exceptions per 10000 calls.

JDBCConnectionException (Connection failure)

Sometimes the connection to the database is lost, either because the database server crashed or because it was restarted for maintenance reasons. A server crash is rare but it can occur. For Planzone, we had 3 crashes during the last 2 years (one crash every 240 days). During the same period we also had to stop and restart the server 2 times for a server upgrade.

Restarting the call after a database connection failure is a little bit more complex. It is necessary to sleep some time before retrying.

EJB Interceptor

To create our fault tolerant mechanism we use an EJB interceptor which is invoked for each EJB method call. For this, the interceptor defines a method marked with the @AroundInvoke annotation. Its role is to catch the transient faults and retry the call. The example below retries the call at most 10 times.

The EJB interceptor method receives an InvocationContext parameter which gives access to the target object, the parameters and the method to invoke. The proceed method transfers control to the next interceptor and then to the EJB method. The real implementation is a little bit more complex due to logging, but the overall idea is here.

class RetryInterceptor {
  @AroundInvoke
  public Object retry(InvocationContext context) throws Exception {
    for (int retry = 0; ; retry++) {
      try {
        return context.proceed();

      } catch (LockAcquisitionException ex) {
        if (retry > 10) {
          throw ex;
        }
      } catch (StaleObjectStateException ex) {
        if (retry > 10) {
          throw ex;
        }
      } catch (final JDBCConnectionException ex) {
        if (retry > 10) {
          throw ex;
        }
        Thread.sleep(500L + retry * 1000L);
      }
    }
  }
}

EJB Interface

For the purpose of this article, the EJB interface is declared as follows. Our choice was to define an ILocal and an IRemote interface to allow the creation of local and remote services.

public interface Service {
    interface ILocal extends Service {
    }

    interface IRemote extends Service {
    }
}

EJB Declaration

The interceptor is associated with the EJB implementation class by using the @Interceptors annotation. The same interceptor class can be associated with several EJBs.

@Stateless(name = "Service")
@Interceptors(RetryInterceptor.class)
public class ServiceBean
  implements Service.ILocal, Service.IRemote {
  ...
}


To test the solution, I recommend writing a unit test. The unit test I wrote does the following:

  • A first thread executes the EJB method call.
  • The transaction commit operation is overridden by the unit test.
  • When the commit is called, a second thread is activated to simulate the concurrent call before committing.
  • The second thread performs the EJB method call in such a way that it will trigger the StaleObjectStateException when the first thread resumes.
  • When the second thread finishes, the first thread performs the real commit and the StaleObjectStateException is raised by Hibernate because the object was modified.
  • The interceptor catches the exception and retries the call, which succeeds.

The full design of such a test is outside the scope of this article. It is also specific to each application.
