Tag - Java

Ada, Java and Python database access

By Stephane Carrez

How do Ada, Java and Python compare with each other when they are used to connect to a database? This was the main motivation for me to write the SQL Benchmark and this article.


REST API Benchmark comparison between Ada and Java

By Stephane Carrez

Arcadius Ahouansou from Menelic.com made an interesting benchmark to compare several Java Web servers: Java REST API Benchmark: Tomcat vs Jetty vs Grizzly vs Undertow, Round 3. His benchmark is not as broad as the TechEmpower Benchmark, but it has the merit of being simple to understand and easy for anyone to run. I decided to make a similar benchmark for Ada Web servers with the same REST API so that it would be possible to compare the Ada and Java implementations.


Process creation in Java and Ada

By Stephane Carrez

When developing and integrating applications, it is often useful to launch an external program. Doing so simplifies many integration and implementation details (this integration technique also avoids license issues with some open source software).

This article explains how to launch an external program in Java and Ada and be able to read the process output to get the result.

Java Process Creation

The process creation is managed by the ProcessBuilder Java class. An instance of this class holds the necessary information to create and launch a new process. This includes the command line and its arguments, the standard input and outputs, the environment variables and the working directory.

The process builder instance is created by specifying the command and its arguments. For this the constructor accepts a variable list of parameters of type String.

import java.lang.ProcessBuilder;
...
  final String cmd = "ls";
  final String arg1 = "-l";
  final ProcessBuilder pb = new ProcessBuilder(cmd, arg1);

When the process builder is initialized, we can invoke the start method to create a new process. Each process is then represented by an instance of the Process class. It is possible to invoke start several times and each call creates a new process. It is necessary to catch the IOException which can be raised if the process cannot be created.

import java.lang.Process;
...
  try {
    final Process p = pb.start();
    ...

  } catch (final IOException ex) {
    System.err.println("IO error: " + ex.getLocalizedMessage());
  }

The Process class gives access to the process output through an input stream represented by the InputStream class. With this input stream, we can read what the process writes on its output. We will use a BufferedReader class to read that output line by line.

import java.io.*;
...
  final InputStream is = p.getInputStream();
  final BufferedReader reader = new BufferedReader(new InputStreamReader(is));

By using the readLine method, we can read a new line after each call. Once the whole stream is read, we have to close it. Closing the BufferedReader will close the InputStream associated with it.

  String line;
  while ((line = reader.readLine()) != null) {
    System.out.println(line);
  }
  reader.close();

The last step is to wait for the process termination and get the exit status: we can use the waitFor method. Since this method can be interrupted, we have to catch the InterruptedException.

  try {
    ...
    final int exit = p.waitFor();
    if (exit != 0) {
      System.err.printf("Command exited with status %d\n", exit);
    }
  }  catch (final InterruptedException ex) {
    System.err.println("Launch was interrupted...");
  }

You can get the complete source from the file: Launch.java
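
As an illustration only (this is not the original Launch.java), here is a minimal, self-contained program that puts the snippets above together: it runs ls -l, prints the command output and reports a non-zero exit status.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LaunchExample {
  public static void main(String[] args) {
    // Build the command: list the current directory in long format.
    final ProcessBuilder pb = new ProcessBuilder("ls", "-l");
    try {
      final Process p = pb.start();

      // Read the process output line by line.
      final BufferedReader reader =
          new BufferedReader(new InputStreamReader(p.getInputStream()));
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
      reader.close();

      // Wait for the process termination and check the exit status.
      final int exit = p.waitFor();
      if (exit != 0) {
        System.err.printf("Command exited with status %d%n", exit);
      }
    } catch (final IOException ex) {
      System.err.println("IO error: " + ex.getLocalizedMessage());
    } catch (final InterruptedException ex) {
      System.err.println("Launch was interrupted...");
    }
  }
}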

Ada Process Creation

For the Ada example, we will create an application that invokes the nslookup utility to resolve a set of host names. The list of host names is provided to nslookup by writing on its standard input and the result is collected by reading the output.

We will use the Pipe_Stream to launch the process, write on its input and read its output at the same time. The process is launched by calling the Open procedure and specifying the pipe redirection modes: READ is for reading the process output, WRITE is for writing to its input and READ_WRITE is for both.

with Util.Processes;
with Util.Streams.Pipes;
...
   Pipe    : aliased Util.Streams.Pipes.Pipe_Stream;

   Pipe.Open ("nslookup", Util.Processes.READ_WRITE);

We can read or write on the pipe directly but using a Print_Stream to write the text and the Buffered_Stream to read the result simplifies the implementation. Both of them are connected to the pipe: the Print_Stream will use the pipe output stream and the Buffered_Stream will use the pipe input stream.

with Util.Streams.Buffered;
with Util.Streams.Texts;
...
   Buffer  : Util.Streams.Buffered.Buffered_Stream;
   Print   : Util.Streams.Texts.Print_Stream;
begin
      --  Write on the process input stream
   Buffer.Initialize (null, Pipe'Unchecked_Access, 1024);
   Print.Initialize (Pipe'Unchecked_Access);

Before reading the process output, we send the input data to be resolved by the process. By closing the print stream, we also close the pipe output stream, thus closing the process standard input.

   Print.Write ("www.google.com" & ASCII.LF);
   Print.Write ("set type=NS" & ASCII.LF);
   Print.Write ("www.google.com" & ASCII.LF);
   Print.Write ("set type=MX" & ASCII.LF);
   Print.Write ("www.google.com" & ASCII.LF);
   Print.Close;

We can now read the program output by using the Read procedure and get the result in the Content string. The Close procedure is invoked on the pipe to close the pipe (input and output) and wait for the application termination.

   Content : Unbounded_String;

   --  Read the 'nslookup' output.
   Buffer.Read (Content);
   Pipe.Close;

Once the process has terminated, we can get the exit status by using the Get_Exit_Status function.

   Ada.Text_IO.Put_Line ("Exit status: "
       & Integer'Image (Pipe.Get_Exit_Status));


References

launch.adb
util-streams-pipes.ads
util-streams-buffered.ads
util-streams-texts.ads


Thread safe object pool to manage scarce resource in application servers

By Stephane Carrez

Problem Description

In application servers some resources are expensive and must be shared. This is the case for a database connection, a frame buffer used for image processing, a connection to a remote server, and so on. The problem is to make these scarce resources available in such a way that:

  • a resource is used by only one thread at a time,
  • we can control the maximum number of resources used at a time,
  • we have some flexibility to define such maximum when configuring the application server,
  • and of course the final solution is thread safe.

The common pattern used in such a situation is a thread-safe pool of objects. Objects are picked from the pool when needed and returned to the pool when they are no longer used.

Java thread safe object pool

Let's see how to implement our object pool in Java. We will use a generic class declaration into which we define a fixed array of objects. The pool array is allocated by the constructor and we will assume it will never change (hence the final keyword).

public class Pool<T> {
  private final T[] objects;

  @SuppressWarnings("unchecked")
  public Pool(int size) {
     // Java cannot create an array of a type parameter directly,
     // so we allocate an Object[] and cast it.
     objects = (T[]) new Object[size];
  }
  ...
}

First, we need a getInstance method that picks an object from the pool. The method must be thread safe and is protected by the synchronized keyword. If there is no object, it has to wait until an object is available. For this, the method invokes wait to sleep until another thread releases an object. To keep track of the number of available objects, we use an available counter that is decremented each time an object is used.

  private int available = 0;
  private int waiting = 0;

  // wait() can throw the checked InterruptedException, hence the throws clause.
  public synchronized T getInstance() throws InterruptedException {
     while (available == 0) {
        waiting++;
        wait();
        waiting--;
     }
     available--;
     return objects[available];
  }

To know when to wake up a thread, we keep track of the number of waiters in the waiting counter. A loop is also necessary to make sure we really have an available object after being woken up: there is no guarantee that an object is available after a notification. The call to wait releases the lock on the pool and puts the thread in wait mode.

Releasing an object is done with the release method. The object is put back in the pool array and the available counter is incremented. If some threads are waiting, one of them is woken up by calling notify.

  public synchronized void release(T obj) {
     objects[available] = obj;
     available++;
     if (waiting > 0) {
        notify();
     }
  }

When the application is started, the pool is initialized and some pre-defined objects are inserted.

   class Item { ... };
...
   Pool<Item> pool = new Pool<Item>(10);
   for (int i = 0; i < 10; i++) {
       pool.release(new Item());
   }
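
In application code, the item picked from the pool should be given back even when the work fails, otherwise the pool slowly empties. Here is a small sketch (assuming the Pool and Item classes above, with getInstance declared as throwing InterruptedException):

  try {
     // Pick an item, use it, and always return it to the pool.
     final Item item = pool.getInstance();
     try {
        // ... use the item ...
     } finally {
        pool.release(item);
     }
  } catch (final InterruptedException ex) {
     // getInstance() blocks in wait() and can be interrupted.
     Thread.currentThread().interrupt();
  }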

Ada thread safe pool

The Ada object pool will be defined in a generic package and we will use a protected type. The protected type will guarantee the thread safe behavior of the implementation by making sure that only one thread executes the procedures.

generic
   type Element_Type is private;
package Util.Concurrent.Pools is
     type Element_Array_Access is private;
     Null_Element_Array : constant Element_Array_Access;
  ...
private
    type Element_Array is array (Positive range <>) of Element_Type;
    type Element_Array_Access is access all Element_Array;
     Null_Element_Array : constant Element_Array_Access := null;
end Util.Concurrent.Pools;   

The Ada protected type is simple, with three operations: Get_Instance and Release as in the Java implementation, plus Set_Size which takes care of allocating the pool array (a job done by the Java pool constructor).

protected type Pool is
  entry Get_Instance (Item : out Element_Type);
  procedure Release (Item : in Element_Type);
  procedure Set_Size (Capacity : in Positive);
private
  Available     : Natural := 0;
  Elements      : Element_Array_Access := Null_Element_Array;
end Pool;

First, the Get_Instance operation is defined as an entry so that we can attach a condition to it: we need at least one object in the pool. Since we keep track of the number of available objects, we use that counter as the entry condition. Thanks to this entry barrier, the Ada implementation is a lot easier.

protected body Pool is
  entry Get_Instance (Item : out Element_Type) when Available > 0 is
  begin
     Item := Elements (Available);
     Available := Available - 1;
  end Get_Instance;
...
end Pool;

The Release operation is also easier as there is no need to wake up any thread: the Ada runtime will do that for us.

protected body Pool is
   procedure Release (Item : in Element_Type) is
   begin
      Available := Available + 1;
      Elements (Available) := Item;
   end Release;
end Pool;

The generic package is instantiated with the element type:

type Connection is ...;
package Connection_Pool is new Util.Concurrent.Pools (Connection);

And a pool object can be declared and initialized with some default object:

P : Connection_Pool.Pool;
C : Connection;
...
   P.Set_Size (Capacity => 10);
   for I in 1 .. 10 loop
         ...
         P.Release (C);
   end loop;

References

util-concurrent-pools.ads
util-concurrent-pools.adb


Thread safe cache updates in Java and Ada

By Stephane Carrez

Problem Description

The problem is to update a cache that is almost never modified and mostly read in a multi-threaded context. Read performance is critical and the goal is to reduce thread contention as much as possible to obtain a fast, non-blocking path when reading the cache.

Cache Declaration

Java Implementation

Let's define the cache using the HashMap class.

public class Cache {
   private HashMap<String,String> map = new HashMap<String, String>();
}

Ada Implementation

In Ada, let's instantiate the Indefinite_Hashed_Maps package for the cache.

with Ada.Strings.Hash;
with Ada.Containers.Indefinite_Hashed_Maps;
...
  package Hash_Map is
    new Ada.Containers.Indefinite_Hashed_Maps (Key_Type        => String,
                                               Element_Type    => String,
                                               Hash            => Ada.Strings.Hash,
                                               Equivalent_Keys => "=");

  Map : Hash_Map.Map;

Solution 1: safe and concurrent implementation

This solution is a straightforward one that uses the language's thread safe constructs. In Java it does not allow several threads to look at the cache at the same time: cache accesses are serialized. This is not a problem in Ada, where multiple concurrent readers are allowed; only writing locks the cache object.

Java Implementation

The thread safe implementation is protected by the synchronized keyword. It guarantees mutual exclusion between threads invoking the getCache and addCache methods.

   public synchronized String getCache(String key) {
      return map.get(key);
   }
   public synchronized void addCache(String key, String value) {
      map.put(key, value);
   }

Ada Implementation

In Ada, we can use the protected type. The cache could be declared as follows:

  protected type Cache is
    function Get(Key : in String) return String;
    procedure Put(Key, Value: in String);
  private
    Map : Hash_Map.Map;
  end Cache;

and the implementation is straightforward:

  protected body Cache is
    function Get(Key : in String) return String is
    begin
       return Map.Element (Key);
    end Get;
    procedure Put(Key, Value: in String) is
    begin
       Map.Insert (Key, Value);
    end Put;
  end Cache;

Pros and Cons

+: This implementation is thread safe.

-: In Java, thread contention is high as only one thread can look in the cache at a time.

-: In Ada, thread contention occurs only if another thread updates the cache (which is far better than Java but could be annoying for realtime performance if the Put operation takes time).

Solution 2: weak but efficient implementation

Solution 1 does not allow multiple threads to access the cache at the same time, which creates a contention point. The second solution removes this contention point by relaxing the thread safety guarantees at the expense of the cache behavior.

In this second solution, several threads can read the cache at the same time. The cache can be updated by one or several threads, but an update does not guarantee that every added entry will be present in the cache. In other words, if two threads update the cache at the same time, the updated cache may contain only one of the new entries. This behavior is acceptable in some cases, but it does not fit all uses and must be applied with great care.

Java Implementation

A cache entry can be added in a thread-safe manner using the following code:

   private volatile HashMap<String, String> map = new HashMap<String, String>();

   public String getCache(String key) {
      return map.get(key);
   }

   public void addCache(String key, String value) {
      // Copy the current map, modify the copy, then publish it atomically.
      HashMap<String, String> newMap = new HashMap<String, String>(map);
      newMap.put(key, value);
      map = newMap;
   }

This implementation is thread safe because a published hash map is never modified: a modification is made on a separate hash map object, which is then installed by the map = newMap assignment, an atomic operation on a volatile field. Again, this code extract does not guarantee that all the cache entries added will be part of the cache.
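
To make the caveat concrete, here is a small hypothetical scenario (not part of the original code, and assuming the getCache and addCache methods above are placed in the Cache class): two writers update the cache concurrently, both copy the same original map, and the last map = newMap assignment wins, so one of the two entries can be lost while readers never block.

  public static void main(String[] args) throws InterruptedException {
    final Cache cache = new Cache();

    // Two concurrent writers copying from the same original map.
    Thread w1 = new Thread(() -> cache.addCache("a", "1"));
    Thread w2 = new Thread(() -> cache.addCache("b", "2"));
    w1.start();
    w2.start();
    w1.join();
    w2.join();

    // Depending on the interleaving, one of the two entries may be missing.
    System.out.println(cache.getCache("a") + " / " + cache.getCache("b"));
  }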

Ada Implementation

The Ada implementation is slightly more complex basically because there is no garbage collector. If we allocate a new hash map and update the access pointer, we still have to free the old hash map when no other thread is accessing it.

The first step is to use a reference counter to automatically release the hash table when the last thread finishes using it. The reference counter handles the memory management issues for us. An implementation of thread-safe reference counters is provided by Ada Util. In this implementation, counters are updated using specific instructions (see Showing multiprocessor issue when updating a shared counter).

with Util.Refs;
...
   type Cache is new Util.Refs.Ref_Entity with record
      Map : Hash_Map.Map;
   end record;
   type Cache_Access is access all Cache;

   package Cache_Ref is new Util.Refs.References (Element_Type => Cache,
                Element_Access => Cache_Access);

  C : Cache_Ref.Atomic_Ref;

Source: util-refs.ads, util-refs.adb

The References package defines a Ref type representing a reference to a Cache instance. To be able to replace one reference by another atomically, it is necessary to use the Atomic_Ref type. This is needed because the Ada assignment of a Ref type is not atomic (the assignment copy and the call to the Adjust operation which updates the reference counter are not atomic). The Atomic_Ref type is a protected type that provides a getter and a setter; using them guarantees the atomicity.

    function Get (Key : in String) return String is
       R : constant Cache_Ref.Ref := C.Get;
    begin
       return R.Value.Map.Element (Key); -- concurrent access
    end Get;

    procedure Put (Key, Value : in String) is
       R : constant Cache_Ref.Ref := C.Get;
       N : constant Cache_Ref.Ref := Cache_Ref.Create;
    begin
       N.Value.all.Map := R.Value.Map;
       N.Value.all.Map.Insert (Key, Value);
       C.Set (N); -- install the new map atomically
    end Put;

Pros and Cons

+: high performance in SMP environments
+: no thread contention in Java
-: a cache update can lose some entries
-: still some thread contention in Ada, but limited to copying a reference (C.Set)


Fault tolerant EJB interceptor: a solution to optimistic locking errors and other transient faults

By Stephane Carrez

Fault tolerance is often necessary in application servers. The J2EE standard defines an interceptor mechanism that can be used to implement the first steps of fault tolerance. The pattern I present in this article is the solution I implemented for the Planzone service, where it has been used with success for the last two years.

Identify the Fault to recover

The first step is to identify the faults that can be recovered from and separate them from the others. Our application uses MySQL and Hibernate, and we identified the following three transient (or recoverable) faults.

StaleObjectStateException (Optimistic Locking)

Optimistic locking is a pattern used to optimize database transactions. Instead of locking the database tables and rows when values are updated, other transactions are allowed to access these values. Concurrent writes are therefore possible and they must be detected. For this, optimistic locking uses a version counter, a timestamp or a state comparison to detect concurrent writes.

When a concurrent write is detected, Hibernate raises a StaleObjectStateException. When such an exception occurs, the state of the objects associated with the current Hibernate session is unknown (see Transactions and Concurrency).

As far as Planzone is concerned, we get 3 exceptions per 10000 calls.
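
With Hibernate (or JPA annotations), the version counter mentioned above is usually declared directly in the mapped entity. The sketch below is illustrative only; the entity and field names are made up for the example:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class TaskEntity {
  @Id
  private Long id;

  // Hibernate increments this counter on every update and checks it in the
  // UPDATE ... WHERE version = ? clause; a mismatch means another transaction
  // modified the row and leads to a StaleObjectStateException.
  @Version
  private int version;

  // ... other persistent fields ...
}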

LockAcquisitionException (Database deadlocks)

On the database side, the server can detect a deadlock situation and report an error. When a deadlock is detected between two clients, the server generates an error for one client and the second one can proceed. When such an error is reported, the client can retry the operation (see InnoDB Lock Modes).

As far as Planzone is concerned, we get 1 or 2 exceptions per 10000 calls.

JDBCConnectionException (Connection failure)

Sometimes the connection to the database is lost, either because the database server crashed or because it was restarted for maintenance. A server crash is rare but it can occur: for Planzone, we had 3 crashes during the last 2 years (one crash every 240 days). During the same period we also had to stop and restart the server 2 times for an upgrade.

Restarting the call after a database connection failure is a little bit more complex. It is necessary to sleep some time before retrying.

EJB Interceptor

To create our fault tolerant mechanism we use an EJB interceptor which is invoked for each EJB method call. For this, the interceptor defines a method marked with the @AroundInvoke annotation. Its role is to catch the transient faults and retry the call. The example below retries the call at most 10 times.

The EJB interceptor method receives an InvocationContext parameter which gives access to the target object, the parameters and the method to invoke. The proceed method transfers control to the next interceptor and then to the EJB method. The real implementation is a little more complex due to logging, but the overall idea is here.

class RetryInterceptor {

  @AroundInvoke
  public Object retry(InvocationContext context) throws Exception {
    for (int retry = 0; ; retry++) {
      try {
        return context.proceed();

      } catch (final LockAcquisitionException ex) {
        if (retry > 10) {
          throw ex;
        }

      } catch (final StaleObjectStateException ex) {
        if (retry > 10) {
          throw ex;
        }

      } catch (final JDBCConnectionException ex) {
        if (retry > 10) {
          throw ex;
        }
        // Wait a bit before retrying after a connection failure.
        Thread.sleep(500L + retry * 1000L);
      }
    }
  }
}

EJB Interface

For the purpose of this article, the EJB interface is declared as follows. Our choice was to define an ILocal and an IRemote interface to allow the creation of local and remote services.

public interface Service {
    ...
    @Local
    interface ILocal extends Service {
    }

    @Remote
    interface IRemote extends Service {
    }
}

EJB Declaration

The interceptor is associated with the EJB implementation class by using the @Interceptors annotation. The same interceptor class can be associated with several EJBs.

@Stateless(name = "Service")
@Interceptors(RetryInterceptor.class)
public class ServiceBean
  implements Service.ILocal, Service.IRemote {
  ...
}

Testing

To test the solution, I recommend writing a unit test. The unit test I wrote does the following:

  • A first thread executes the EJB method call.
  • The transaction commit operation is overridden by the unit test.
  • When the commit is called, a second thread is activated to simulate a concurrent call before committing.
  • The second thread performs the EJB method call in such a way that it will trigger the StaleObjectStateException when the first thread resumes.
  • When the second thread has finished, the first thread performs the real commit and the StaleObjectStateException is raised by Hibernate because the object was modified.
  • The interceptor catches the exception and retries the call, which then succeeds.

The full design of such a test is outside the scope of this article; it is also specific to each application.
