Java 2 Ada

Generating a REST Ada client with OpenAPI and Swagger Codegen

By stephane.carrez

The OpenAPI initiative aims at defining a standard for the specification of REST APIs. The OpenAPI Specification (OAS) defines a programming language-agnostic interface to describe a REST API. The Swagger Codegen generator supports more than 28 languages (including Ada): it reads an OpenAPI document and generates either the documentation or the client and server REST code for several target languages.

swagger-ada-generator.png

Writing an OpenAPI document

The OpenAPI document is either a JSON or a YAML file that describes the REST API operations. The document can be used both for the documentation of the API and for code generation in several programming languages. We will see briefly through the Petstore example how the OpenAPI document is organized. The full OpenAPI document is available in petstore.yaml.

General description

A first part of the OpenAPI document provides a general description of the API. This includes the general description, the terms of service, the license and some contact information.

swagger: '2.0'
info:
  description: 'This is a sample server Petstore server.  You can find out more about Swagger at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/).  For this sample, you can use the api key `special-key` to test the authorization filters.'
  version: 1.0.0
  title: Swagger Petstore
  termsOfService: 'http://swagger.io/terms/'
  contact:
    email: apiteam@swagger.io
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'
host: petstore.swagger.io
basePath: /v2

Type description

The OpenAPI document can also describe types which are used by the REST operations. These types provide a description of how the data is organized and passed through the API operations.

It is possible to describe almost any type, from simple properties and groups of properties up to complex types including arrays. For example, the Pet type is made of several properties, each of them having a name, a type and other information describing how the value is serialized.

definitions:
  Pet:
    title: a Pet
    description: A pet for sale in the pet store
    type: object
    required:
      - name
      - photoUrls
    properties:
      id:
        type: integer
        format: int64
      category:
        $ref: '#/definitions/Category'
      name:
        type: string
        example: doggie
      photoUrls:
        type: array
        xml:
          name: photoUrl
          wrapped: true
        items:
          type: string
      tags:
        type: array
        xml:
          name: tag
          wrapped: true
        items:
          $ref: '#/definitions/Tag'
      status:
        type: string
        description: pet status in the store
        enum:
          - available
          - pending
          - sold
    xml:
      name: Pet

In this example, the Pet type contains six properties (id, category, name, photoUrls, tags, status) and refers to two other types, Category and Tag.

Operation description

Operations are introduced by the paths object in the OpenAPI document. This section describes the possible URL paths and their associated operations. Some operations receive their parameters within the path, which is represented by the {name} notation.

The operation description indicates the HTTP method that is used: get, post, put or delete.

The following definition describes the getPetById operation.

paths:
  '/pet/{petId}':
    get:
      tags:
        - pet
      summary: Find pet by ID
      description: Returns a single pet
      operationId: getPetById
      produces:
        - application/xml
        - application/json
      parameters:
        - name: petId
          in: path
          description: ID of pet to return
          required: true
          type: integer
          format: int64
      responses:
        '200':
          description: successful operation
          schema:
            $ref: '#/definitions/Pet'
        '400':
          description: Invalid ID supplied
        '404':
          description: Pet not found
      security:
        - api_key: []

The summary and description are used for documentation purposes. The operationId is used by code generators to provide an operation name that a target programming language can use. The produces section indicates the media types supported by the operation and generated for the response. The parameters section represents all the operation parameters. Some parameters are extracted from the path (which is the case for the petId parameter) and others can be passed as query parameters.

The responses section describes the possible responses for the operation as well as the format used by the response. In this example, the operation returns an object described by the Pet type.

Using Swagger Codegen

The documentation and the Ada client are generated from the OpenAPI document by using the Swagger Codegen generator. The generator is a Java program packaged as a jar file; it must be run with a Java 7 or Java 8 runtime.

Generating the documentation

The HTML documentation is generated from the OpenAPI document by using the following command:

 java -jar swagger-codegen-cli.jar generate -l html -i petstore.yaml -o doc

Generating the Ada client

To generate the Ada client, you will use the -l ada option to select the Ada code generator. The OpenAPI document is passed with the -i option.

 java -jar swagger-codegen-cli.jar generate -l ada -i petstore.yaml -o client \
       -DprojectName=Petstore --model-package Samples.Petstore

The Ada generator uses two options to control the generation. The -DprojectName=Petstore option sets the name of the generated GNAT project and the --model-package option sets the name of the Ada package for the generated code.

The Ada generator will create the following Ada packages:

  • Samples.Petstore.Models is the package that contains all the types described in the OpenAPI document. Each OpenAPI type is represented by an Ada record and it is also completed by an instantiation of the Ada.Containers.Vectors package for the representation of arrays of the given type. The Models package also provides Serialize and Deserialize procedures for the serialization and deserialization of the data over JSON or XML streams.
  • Samples.Petstore.Clients is the package that declares the Client_Type tagged record which provides all the operations for the OpenAPI document.

For the Pet type described previously, the Ada generator produces the following code extract:

package Samples.Petstore.Models is
   ...
   type Pet_Type is
     record
       Id : Swagger.Long;
       Category : Samples.Petstore.Models.Category_Type;
       Name : Swagger.UString;
       Photo_Urls : Swagger.UString_Vectors.Vector;
       Tags : Samples.Petstore.Models.Tag_Type_Vectors.Vector;
       Status : Swagger.UString;
     end record;
     ...
end Samples.Petstore.Models;

and for the operation it generates the following code:

package Samples.Petstore.Clients is
   ...
   type Client_Type is new Swagger.Clients.Client_Type with null record;
   procedure Get_Pet_By_Id
      (Client : in out Client_Type;
       Pet_Id : in Swagger.Long;
       Result : out Samples.Petstore.Models.Pet_Type);
   ...
end Samples.Petstore.Clients;

Using the REST Ada client

Initialization

The HTTP/REST support is provided by Ada Util and encapsulated by Swagger Ada. The Ada Util library also takes care of the JSON and XML serialization and deserialization. If you want to use Curl, you should initialize with the following:

with Util.Http.Clients.Curl;
...
   Util.Http.Clients.Curl.Register;

But if you want to use AWS, you will initialize with:

with Util.Http.Clients.Web;
...
   Util.Http.Clients.Web.Register;

After the initialization is done, you will declare a client instance to access the API operations:

with Samples.Petstore.Clients;
...
   C : Samples.Petstore.Clients.Client_Type;

You should then set the base URL of the server you want to connect to. To use the live Swagger Petstore service, you can set the server base URL as follows:

  C.Set_Server ("http://petstore.swagger.io/v2");

At this stage, you can use the generated API by calling operations on the client instance.

Calling a REST operation

Let's retrieve some pet information by calling the Get_Pet_By_Id operation described previously. This operation takes an integer as input parameter and returns a Pet_Type object that contains all the pet information. You will first declare the pet instance as follows:

with Samples.Petstore.Models;
...
  Pet  : Samples.Petstore.Models.Pet_Type;

And then call the Get_Pet_By_Id operation:

  C.Get_Pet_By_Id (768, Pet);

At this stage, you can access information from the Pet instance:

with Ada.Text_IO;
...
  Ada.Text_IO.Put_Line ("Id      : " & Swagger.Long'Image (Pet.Id));
  Ada.Text_IO.Put_Line ("Name    : " & Swagger.To_String (Pet.Name));
  Ada.Text_IO.Put_Line ("Status  : " & Swagger.To_String (Pet.Status));

The Swagger Ada Petstore sample illustrates other uses of the generated operations: it shows how to list the inventory, list the pets with a given status, add a pet, and so on.
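
Putting the snippets above together, a minimal complete client program could look like the following sketch (the procedure name Show_Pet is arbitrary and error handling is omitted):

with Ada.Text_IO;
with Swagger;
with Util.Http.Clients.Curl;
with Samples.Petstore.Models;
with Samples.Petstore.Clients;
procedure Show_Pet is
   C   : Samples.Petstore.Clients.Client_Type;
   Pet : Samples.Petstore.Models.Pet_Type;
begin
   --  Select the Curl HTTP backend (use Util.Http.Clients.Web.Register for AWS).
   Util.Http.Clients.Curl.Register;
   C.Set_Server ("http://petstore.swagger.io/v2");
   C.Get_Pet_By_Id (768, Pet);
   Ada.Text_IO.Put_Line ("Id      : " & Swagger.Long'Image (Pet.Id));
   Ada.Text_IO.Put_Line ("Name    : " & Swagger.To_String (Pet.Name));
   Ada.Text_IO.Put_Line ("Status  : " & Swagger.To_String (Pet.Status));
end Show_Pet;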

Conclusion and references

The OpenAPI Specification provides a standard way to describe REST operations. Swagger Codegen is the generator to use to simplify the implementation of REST clients in many programming languages and to generate the documentation of the API. The Ada code generator currently supports only the client side; the server code generation is in progress.

The sources of the petstore samples are available:

The APIs.guru directory lists more than 550 API descriptions from various providers such as Amazon, Google, Microsoft and many other online services. They are now available to the Ada community!


Using the Gnome and KDE Secret Service API in Ada

By stephane.carrez

The Gnome and KDE desktop environments have designed a shared service API to allow applications to protect, retrieve and manage their secret data such as passwords and private keys. The Secret Service API defines the mechanisms and operations that can be used by applications to use the service.

libsecret is the C library that gives access to the Secret Service API, and Ada Libsecret is an Ada binding for it. The Ada binding does not expose all of the functionality implemented by the C library, but it implements the most useful operations for storing, retrieving and deleting application secret data.

Understanding the Secret Service API

At first glance, the Secret Service API is not easy to use. Each secret is stored together with lookup attributes and a label. Lookup attributes are formed of key/value pairs. The label is the user-friendly name that the desktop key manager will use to display information to the end user.

ada-libsecret-dbus.png

The Secret Service API is implemented by a keyring manager such as gnome-keyring-daemon or kwalletd. This is a daemon that is started when a user opens a desktop session. It manages the application secrets and protects their access. The secret database can be locked, in which case access to the secrets is forbidden. Unlocking is possible but requires authentication by the user (in most cases a dialog popup window opens and asks the user to unlock the keyring).

When a client application wishes to retrieve one of its secrets, it builds the lookup attributes that correspond to the secret to retrieve. The lookup attributes are not encrypted and they are not part of the secret. The client application uses the D-Bus IPC mechanism to ask the keyring manager for the secret. The keyring manager takes care of unlocking the database by asking the user to confirm the access. It then looks in its database for the secret associated with the lookup attributes.

Note that the label cannot be used as a key to retrieve the secret since the same label can be associated with different lookup attributes.

Using the Ada Secret Service API

Setting up the project

After building and installing the Ada Libsecret library you will add the following line to your GNAT project file:

with "secret";

This definition will give you access to the Secret package and will handle the build and link support to use the libsecret C library.
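
For example, a minimal GNAT project for a small tool using the library could look like this (the project and main file names are placeholders):

with "secret";
project Secret_Tool is
   for Main use ("secret_tool.adb");
end Secret_Tool;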

Setting the lookup attributes

Attributes are defined by the Secret.Attributes package which provides the Map type that represents the lookup attributes. First, you will add the following with clause:

with Secret.Attributes;

to make available the operations and types provided by the package. Then, you will declare the attributes instance by using:

   List : Secret.Attributes.Map;

At this stage, the lookup attributes are empty, which you can check with the Is_Null function that returns True in that case. You must now add at least one key/value pair to the attributes by using the Insert procedure:

   List.Insert ("secret-tool", "key-password");
   List.Insert ("user", "joe");

Applications are free to choose the attributes they use, but the attribute set has to uniquely identify a secret so that the application can retrieve it later. For example, the svn command uses two attributes to store the password used to authenticate to remote svn repositories: domain and user. The domain represents the server URL and the user represents the user name to use for the connection. By using these two attributes, it is possible to store several passwords for different svn accounts, as the snippet below illustrates.
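
A similar attribute scheme could be set up like this (the server URL and user name are of course illustrative):

   List.Insert ("domain", "https://svn.example.com/repos/project");
   List.Insert ("user", "joe");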

Storing a secret

To store a secret, we will use the operations and types from the Secret.Services and Secret.Values packages. The following definitions:

with Secret.Services;
with Secret.Values;

will bring such definitions to the program. The secret service is represented by the Service_Type type and we will declare an instance of it as follows:

   Service : Secret.Services.Service_Type;

This service instance is a proxy to the Secret Service API and it communicates with the gnome-keyring-daemon by using the D-Bus protocol.

The secret value itself is represented by the Secret_Type and we can create such a secret by using the Create function as follows:

   Value : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");

Storing the secret is done by the Store operation, which associates the secret value with the lookup attributes and a label. As explained before, the lookup attributes represent the unique key to identify the secret. The label gives a user-friendly name to the association; this label is used by the desktop password and key manager to give information to the user.

   Service.Store (List, "Secret tool password", Value);

Retrieving a secret

Retrieving a secret follows the same steps but uses the Lookup function, which returns the secret value matching the lookup attributes. Care must be taken to provide the same lookup attributes that were used during the store phase.

   Value : Secret.Values.Secret_Type := Service.Lookup (List);

The secret value should be checked by using the Is_Null function to verify that the value was found. The secret value itself is accessed by using the Get_Value function.

   if not Value.Is_Null then
      Ada.Text_IO.Put_Line (Value.Get_Value);
   end if;
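
Putting the pieces together, a complete store-and-retrieve program could look like the following sketch (the procedure name is arbitrary and error handling is omitted):

with Ada.Text_IO;
with Secret.Attributes;
with Secret.Services;
with Secret.Values;
procedure Store_And_Get is
   Service : Secret.Services.Service_Type;
   List    : Secret.Attributes.Map;
   Value   : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");
begin
   --  The attribute set uniquely identifies the secret.
   List.Insert ("secret-tool", "key-password");
   List.Insert ("user", "joe");
   Service.Store (List, "Secret tool password", Value);
   --  Retrieve the secret back with the same lookup attributes.
   declare
      Result : Secret.Values.Secret_Type := Service.Lookup (List);
   begin
      if not Result.Is_Null then
         Ada.Text_IO.Put_Line (Result.Get_Value);
      end if;
   end;
end Store_And_Get;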

Conclusion

By using the Ada Secret Service API, Ada applications can now securely store private information and protect resources for their users. The API is fairly simple and can be used to store OAuth access tokens, database passwords, and more...

Read the Ada Libsecret Documentation to learn more about the API.


Rest API Benchmark comparison between Ada and Java

By stephane.carrez

Arcadius Ahouansou from Menelic.com made an interesting benchmark to compare several Java Web servers: Java REST API Benchmark: Tomcat vs Jetty vs Grizzly vs Undertow, Round 3. His benchmark is not as broad as the TechEmpower Benchmark, but it has the merit of being simple to understand and easy for anyone to run. I decided to make a similar benchmark for Ada Web servers with the same REST API so that it would be possible to compare the Ada and Java implementations.

The goal is to benchmark the following servers and have an idea of how they compare with each other:

  • the Ada Web Server (AWS),
  • Ada Server Faces (ASF),
  • the Embedded Web Server (EWS),
  • the Java Grizzly server.

The first three are implemented in Ada and the last one in Java.

REST Server Implementation

The implementation is different for each server but they all implement the same REST GET operation accessible from the /api base URL. They return the same JSON content:

{"greeting":"Hello World!"}

Below is an extract of the server implementation for each server.

AWS Rest API Server

function Get_Api (Request : in AWS.Status.Data) return AWS.Response.Data is
begin
   return AWS.Response.Build ("application/json", "{""greeting"":""Hello World!""}");
end Get_Api;

ASF Rest API Server

procedure Get (Req    : in out ASF.Rest.Request'Class;
               Reply  : in out ASF.Rest.Response'Class;
               Stream : in out ASF.Rest.Output_Stream'Class) is
begin
   Stream.Start_Document;
   Stream.Write_Entity ("greeting", "Hello World!");
   Stream.End_Document;
end Get;

EWS Rest API Server

function Get (Request : EWS.HTTP.Request_P) return EWS.Dynamic.Dynamic_Response'Class is
   Result : EWS.Dynamic.Dynamic_Response (Request);
begin
   EWS.Dynamic.Set_Content_Type (Result, To => EWS.Types.JSON);
   EWS.Dynamic.Set_Content (Result, "{""greeting"":""Hello World!""}");
   return Result;
end Get;

Java Rest API Server

@Produces(APPLICATION_JSON_UTF8_VALUE)
@Path("/api")
@Component
public class ApiResource {
  public static final String RESPONSE = "{\"greeting\":\"Hello World!\"}";
  
  @GET
  public Response test() {
      return ok(RESPONSE).build();
  }
}

Benchmark Strategy and Results

The Ada and Java servers are started on the same host (one at a time), a Linux Ubuntu 14.04 64-bit machine powered by an Intel i7-3770S CPU @ 3.10GHz with 8 cores. The benchmark is made by using Siege executed on a second computer running Linux Ubuntu 15.04 64-bit, powered by an Intel i7-4720HQ CPU @ 2.60GHz with 8 cores. The client and server hosts are connected through a Gigabit Ethernet link.

Siege makes intensive use of network connections, which results in the exhaustion of TCP/IP ports used to connect to the server. This is due to the TCP TIME_WAIT state that prevents a port from being re-used for new connections. To avoid such exhaustion, the network stack is tuned on both the server and the client hosts with the following sysctl commands:

sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_tw_reuse=1

The benchmark tests are executed by running the run-load-test.sh script and then making GNUplot graphs using the plot-perf.gpi script. The benchmark gives the number of REST requests made per second for different levels of concurrency.

  • The Embedded Web Server targets embedded platforms and uses only one task to serve requests. Despite this simple configuration, it achieves honorable results, reaching 8000 requests per second.
  • Ada Server Faces provides an Ada implementation of Java Server Faces. It uses the Ada Web Server. The benchmark shows a small overhead (around 4%).
  • The Ada Web Server is the fastest server in this configuration. As for Ada Server Faces, it is configured with only 8 tasks serving requests. Increasing the number of tasks does not bring better performance.
  • The Java Grizzly server is the fastest Java server reported by Arcadius's benchmark. It uses 62 threads. It serves about 7% fewer requests than the Ada Web Server.

ada-rest-api-benchmark.png

On the memory side, the process Resident Set Size (RSS) is measured once the benchmark test ends and graphed below. The Java Grizzly server uses around 580 MB, followed by Ada Server Faces with 5.6 MB, Ada Web Server with 3.6 MB and EWS with only 1 MB.

ada-rest-api-memory.png

Conclusion and References

The Ada Web Server has comparable performance with the Java Grizzly server (it is even a little bit faster). But as far as memory is concerned, Ada has a serious advantage since it cuts the memory size by a factor of 100. Ada has other advantages that make it an alternative choice for web development (safety, security, realtime capabilities, ...).

Sources of the benchmarks are available in the following two GitHub repositories:


Atlas 1.0.0 the Ada Web Application demonstrator available as Docker image

By stephane.carrez

Atlas is a small application intended to show various features provided by the Ada Web Application framework. The application features:

  • A small blogging system,
  • A question and answer area,
  • A complete wiki system,
  • A document and image storage space,
  • Authentication with Google+ or Facebook.

atlas-mashup.png

Atlas is now available as a Docker image so that you can easily try it.

What is Docker ?

Docker is a container platform that runs applications on the host within an isolated environment. A container has its own libraries, its own network and its own root file system, but it shares the running Linux kernel with the host. Docker is based on Linux containers, which provide kernel namespaces and cgroups, and it adds a lot of abstractions that simplify the creation, startup and management of containers.

To learn more about Docker, you may have a look at the Get started with Docker documentation.

Using the Atlas Docker image

The Atlas Docker image is available at the Docker Hub cloud-based registry service. This registry allows you to get and synchronize your local Docker images easily by pulling them from the cloud.

Assuming that you have installed Docker, you can pull the Atlas Docker image by using the following command:

  sudo docker pull ciceron/atlas

Beware that the Docker image is a 64-bit image so it runs only on Linux x86_64 hosts. Once you have obtained the image, you can create the container and start it as follows:

  sudo docker run --name atlas -p 8080:8080 ciceron/atlas

and then point your browser to http://localhost:8080/atlas/index.html. The -p 8080:8080 option tells Docker to expose the TCP/IP port 8080 of the container on the host so that you can access the web application.

The application will first display an installation page that allows you to choose the database, configure the mail server and the Google and Facebook connections (most of the default values should be correct).

To stop and cleanup the docker container, you can use the following commands:

  sudo docker stop atlas
  sudo docker rm atlas

Learning more about Ada Web Application

You may read the following tutorials to learn more about the technical details of setting up and building an Ada Web Application:


Simple UDP Echo Server on STM32F746

By stephane.carrez

Writing a simple UDP server in Ada for an STM32F746 ARM controller is now easy with the Ada Embedded Network stack. This article describes, through a simple UDP echo server, the different steps in the implementation of a UDP server.

Overview

The echo server listens to the UDP port 7 on the Ethernet network and sends back each received packet to the sender: this is the RFC 862 Echo protocol. Our application follows that RFC but it also maintains a list of the last 10 messages that have been received. The list is displayed on the STM32 display so that we get visual feedback of the received messages.

The echo server uses the DHCP client to get an IPv4 address and the default gateway. We will see how that DHCP client is integrated in the application.

The application has two tasks. The main task loops to manage the refresh of the STM32 display and to perform some network housekeeping such as the DHCP client management and the ARP table management. The second task is responsible for waiting for Ethernet packets and analyzing them to handle ARP, ICMP and UDP packets.

Through this article, you will see:

  1. How the STM32 board and network stack are initialized,
  2. How the board gets an IPv4 address using DHCP,
  3. How to implement the UDP echo server,
  4. How to build and test the echo server.

Initialization

STM32 Board Initialization

First of all, the STM32 board must be initialized. There is no random generator available in the Ada Ravenscar profile, and we need one for the XID generation of the DHCP protocol. The STM32 provides a hardware random generator that we are going to use. The Initialize_RNG procedure must be called once during startup, before any network operation.

We will use the display to list the messages that we have received. The Display instance must be initialized and the layer configured.

with HAL.Bitmap;
with STM32.RNG.Interrupts;
with STM32.Board;
...
   STM32.RNG.Interrupts.Initialize_RNG;
   STM32.Board.Display.Initialize;
   STM32.Board.Display.Initialize_Layer (1, HAL.Bitmap.ARGB_1555);

Network stack initialization

The network stack will need some memory to receive and send network packets. As described in Using the Ada Embedded Network STM32 Ethernet Driver, we allocate the memory by using the SDRAM.Reserve function and the Add_Region procedure to configure the network buffers that will be available.

An instance of the STM32 Ethernet driver must be declared in a package. The instance must be aliased because the network stack will need to get an access to it.

with Interfaces;
with Net.Buffers;
with Net.Interfaces.STM32;
with STM32.SDRAM;
...
   NET_BUFFER_SIZE : constant Interfaces.Unsigned_32 := Net.Buffers.NET_ALLOC_SIZE * 256;
   Ifnet : aliased Net.Interfaces.STM32.STM32_Ifnet;

The Ethernet driver is initialized by calling the Initialize procedure. By doing so, the Ethernet receive and transmit rings are configured and we are ready to receive and transmit packets. On its side, the Ethernet driver also reserves some memory by using the Reserve and Add_Region operations. The allocated buffers will be used for the Ethernet receive ring.

   Net.Buffers.Add_Region (STM32.SDRAM.Reserve (Amount => NET_BUFFER_SIZE), NET_BUFFER_SIZE);
   Ifnet.Initialize;

The Ethernet driver configures the MII transceiver and enables interrupts for the receive and transmit rings.

Getting the IPv4 address with DHCP

At this stage, the network stack is almost ready but it does not have any IPv4 address. We are going to use the DHCP protocol to automatically get an IPv4 address, the default gateway and other network configuration such as the DNS server. The DHCP client uses a UDP socket on port 68 to send and receive DHCP messages. Such a DHCP client is provided by the Net.DHCP package and we need to declare an instance of it. The DHCP client is based on the UDP socket support that we are going to use for the echo server. The DHCP client instance must be declared aliased because the UDP socket layer needs an access to it to propagate the DHCP packets that are received.

with Net.DHCP;
...
   Dhcp : aliased Net.DHCP.Client;

The DHCP client instance must be initialized and the Ethernet driver interface must be passed as parameter to correctly configure and bind the UDP socket. After the Initialize procedure is called, the DHCP state machine is ready to enter into action. Note that we don't have an IPv4 address yet when the procedure returns.

   Dhcp.Initialize (Ifnet'Access);

The DHCP client is using an asynchronous implementation to maintain the client state according to RFC 2131. For this it has two important operations that are called by tasks in different contexts. First the Process procedure is responsible for sending requests to the DHCP server and to manage the timeouts used for the retransmissions, renewal and lease expiration. The Process procedure sends the DHCPDISCOVER and DHCPREQUEST messages. On the other hand, the Receive procedure is called by the network stack to handle the DHCP packets sent by the DHCP server. The Receive procedure gets the DHCPOFFER and DHCPACK messages.

Getting an IPv4 address with the DHCP protocol can take some time and must be repeated continuously due to the DHCP lease expiration. This is why the DHCP client must not be stopped and should run forever.

Refer to the DHCP documentation to learn more about this process.

UDP Echo Server

Logger protected type

The echo server records the messages that are received. A message is inserted in the list by the receive task and read by the main task. We use an Ada protected type to protect the list from concurrent accesses.

Each message is represented by the Message record, which has a unique identifier that is incremented each time a message is received. To avoid dynamic memory allocation, the list of messages has a fixed size and is represented by the Message_List array. The list itself is managed by the Logger protected type.

type Message is record
   Id      : Natural := 0;
   Content : String (1 .. 80) := (others => ' ');
end record;
type Message_List is array (1 .. 10) of Message;

protected type Logger is
   procedure Echo (Content : in Message);
   function Get return Message_List;
private
   Id   : Natural := 0;
   List : Message_List;
end Logger;

The Logger protected type provides the Echo procedure to insert a message into the list and the Get function to retrieve the list of messages.
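
A straightforward body for this protected type could be the following sketch (one possible implementation that keeps the most recent message first):

protected body Logger is
   procedure Echo (Content : in Message) is
   begin
      --  Assign a unique, increasing identifier to the new message.
      Id := Id + 1;
      --  Shift the previous messages and store the new one first.
      List (List'First + 1 .. List'Last) := List (List'First .. List'Last - 1);
      List (List'First) := Content;
      List (List'First).Id := Id;
   end Echo;

   function Get return Message_List is
   begin
      return List;
   end Get;
end Logger;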

Server Declaration

The UDP Echo Server uses the UDP socket support provided by the Net.Sockets.UDP package. The UDP package defines the Socket abstract type which represents the UDP endpoint. The Socket type is abstract because it defines the Receive procedure that must be implemented. The Receive procedure will be called by the network stack when a UDP packet for the socket is received.

The declaration of our echo server is the following:

with Net.Buffers;
with Net.Sockets;
...
   type Echo_Server is new Net.Sockets.UDP.Socket with record
      Count    : Natural := 0;
      Messages : Logger;
   end record;

It holds a counter of messages as well as the messages in the Logger protected type.

The echo server must implement the Receive procedure:

overriding
procedure Receive (Endpoint : in out Echo_Server;
                   From     : in Net.Sockets.Sockaddr_In;
                   Packet   : in out Net.Buffers.Buffer_Type);

The network stack will call the Receive procedure each time a UDP packet for the socket is received. The From parameter will contain the IPv4 address and UDP port of the client that sent the UDP packet. The Packet parameter contains the received UDP packet.

Server Implementation

Implementing the server is very easy because we only have to implement the Receive procedure (a possible implementation of the Logger protected type was sketched above).

First we use the Get_Data_Size function to get the size of our packet. The function is able to return different sizes to take into account one or several protocol headers. We want the size of our UDP packet, excluding the UDP header, so we tell Get_Data_Size that we want the UDP_PACKET size. This size represents the size of the echo message sent by the client.

   Msg    : Message;
   Size   : constant Net.Uint16 := Packet.Get_Data_Size (Net.Buffers.UDP_PACKET);
   Len    : constant Natural
        := (if Size > Msg.Content'Length then Msg.Content'Length else Natural (Size));

Having the size, we truncate it so that we get a string that fits in our message. We then use the Get_String procedure to retrieve the echo message in a string. This procedure gets from the packet a number of characters that corresponds to the string length passed as parameter.

   Packet.Get_String (Msg.Content (1 .. Len));

The Buffer_Type provides other Get operations to extract data from the packet. It maintains a position in the buffer that tells the Get operations the location to read in the packet, and each Get updates the position according to what was actually read. There are also several Put operations intended to be used to write and build the packet before sending it. We are not going to use them because the echo server has to return the original packet as is. Instead, we only have to set the size of the packet that we are going to send. This is done by the Set_Data_Size procedure:

   Packet.Set_Data_Size (Size);

Here we give the original size so that we return the full packet.

Now we can use the Send procedure to send the packet back to the client. We use the client IPv4 address and UDP port represented by From as the destination address. The Send procedure returns a status that tells whether the packet was successfully sent or queued.

Status : Net.Error_Code;
...
   Endpoint.Send (To => From, Packet => Packet, Status => Status);

Server Initialization

Now that the Echo_Server type is implemented, we have to create a global instance of it and bind it to the UDP port 7 that corresponds to the UDP echo protocol. The port number must be given in network byte order (as in the Unix socket API), which is why it is converted using the To_Network function. We don't know our IPv4 address, and by using 0 we tell the UDP stack to use the IPv4 address that is configured on the Ethernet interface.

Server : aliased Echo_Server;
...
   Server.Bind (Ifnet'Access, (Port => Net.Headers.To_Network (7),
                               Addr => (others => 0)));

Main loop and receive task

As explained in the overview, we need several tasks to handle the display, the network housekeeping and the reception of Ethernet packets. To keep it simple, the display, the ARP table management and the DHCP client management are handled by the main task. The reception of Ethernet packets is handled by a second task. It would be possible to use a specific task for the ARP management and another one for the DHCP, but there is no real benefit in doing so for our simple echo server.

The main loop repeats calls to the ARP Timeout procedure and the DHCP Process procedure. The Process procedure returns a delay that we are supposed to wait for, but we are not going to use it in this example. The main loop simply looks as follows:

Dhcp_Timeout : Ada.Real_Time.Time_Span;
...
   loop
      Net.Protos.Arp.Timeout (Ifnet);
      Dhcp.Process (Dhcp_Timeout);
      ...
      delay until Ada.Real_Time.Clock + Ada.Real_Time.Milliseconds (500);
   end loop;

The receive task was described in the previous article Using the Ada Embedded Network STM32 Ethernet Driver. The task is declared at package level as follows:

   task Controller with
     Storage_Size => (16 * 1024),
     Priority => System.Default_Priority;

And the implementation loops to receive packets from the Ethernet driver and calls either the ARP Receive procedure, the ICMP Receive procedure or the UDP Input procedure. The complete implementation can be found in the receive.adb file.

Building and testing the server

Building the UDP echo server and running it on the STM32 board is a three-step process:

  1. First, you will use the arm-eabi-gnatmake command with the echo GNAT project. After a successful build, you will get the echo ELF binary image in obj/stm32f746disco/echo.
  2. Then, the ELF image must be converted to binary by extracting the ELF sections that must be put on the flash. This is done by running the arm-eabi-objcopy command.
  3. Finally, the binary image produced by arm-eabi-objcopy must be put on the flash using the st-util utility. You may have to press the reset button on the board so that st-util is able to take control of the board; then release the reset button to let st-util flash the image.

To help in this process, you can use the Makefile and simply run the following make targets:

make echo
make flash-echo

Once the echo application is running, it displays a banner with information from the DHCP state machine. Once the IPv4 address is obtained, it is displayed together with the gateway and the DNS. Take that IPv4 address and use the following command to send messages and have them written on the display:

echo -n 'Hello! Ada is great!' | socat - UDP:192.168.1.156:7

(replace 192.168.1.156 with the IPv4 address displayed on the board).

ada-enet-hello.png

The above message was printed with the following script:

#!/bin/sh
IP=$1
FILE=$2
while IFS='' read -r line ; do
  echo -n "$line" | socat - UDP:$IP:7
done < $FILE

and a text file generated with the UNIX System V banner utility.

References

Sources of the article are available in Github https://github.com/stcarrez/ada-enet and you may browse the following files and documentation:


Ethernet Traffic Monitor on a STM32F746

By stephane.carrez

EtherScope is a monitoring tool that analyzes Ethernet traffic. It runs on an STM32F746 board, reads the Ethernet packets, does some real-time analysis and displays the results on the 480x272 touch panel. The application is completely written in Ada 2012 with:

  • The GNAT ARM embedded runtime is the Ada 2012 Ravenscar runtime that provides support for interrupts, tasks, protected objects and other Ada features.
  • The Ada Embedded Network Stack is the small network library that provides network buffer management with an Ethernet driver for the STM32F746 board.
  • The EtherScope application which performs the analysis and displays the information.

Traffic Analyzer

The traffic analyzer inspects the received packets and tries to find interesting information about them. The analyzer is able to recognize several protocols, and new protocols may easily be added in the future. The first version supports:

  • Analysis of Ethernet frames to identify the devices that are part of the network with their associated IP addresses and network utilization.
  • Analysis of IPv4 packets to identify the main IPv4 protocols including ICMP, IGMP, UDP and TCP.
  • Analysis of IGMP with discovery of subscribed multicast groups and monitoring of the associated UDP traffic.
  • Analysis of TCP with the identification of some well known protocols such as http, https, ssh and others.

Each analyzer collects information and is able to report the number of bytes, the number of packets and the network bandwidth utilization. Some information is also collected in graph tables so that we can display visual graphs of the network bandwidth usage.

Network setup to use EtherScope

To use EtherScope, you will connect the STM32F746 board to an Ethernet switch that you insert or have on your network. By default, the switch isolates the different ports (as opposed to a hub) and unicast traffic is directed only to the concerned port. In other words, EtherScope will only see broadcast and multicast traffic. In order to see the interesting traffic (TCP for example), you will need to configure the switch to do port mirroring. By doing so, you tell the switch to mirror all the traffic of a selected port to the mirror port. You will connect EtherScope to that mirror port and it will see all the mirrored traffic.

net-monitoring.png

EtherScope in action

The following 4-minute video shows EtherScope in action.

EtherScope Internal Design

The EtherScope has several functional layers:

  • The display layer manages the user interaction through the touch panel. It displays the information that was analyzed and manages the refresh of the display with its graphs.
  • The packet analyzer inspects the traffic.
  • The Ethernet network driver configures the Ethernet receive ring, handles interrupts and manages the reception of packets (the transmission part is not used for this project).
  • The Ada Drivers Library provides a number of utility packages from its samples to manage the display and draw text as well as geometric forms.
  • The GNAT ARM Ravenscar runtime provides low-level support for the STM32 board configuration, interrupt and task management. It also brings a number of important drivers to control the touch panel, the button, SPI, I2C and other hardware components.

etheroscope-design.png

The EtherScope.Receiver package contains the receiver task that loops to receive packets from the Ethernet driver and run them through the analyzer. Because the result of the analysis is shared between two tasks, it is protected by the DB protected object.

The EtherScope.Display package provides several operations to display the analysis in various forms depending on the user selection. Its operations are called repeatedly by the etherscope main loop. The display operations fetch the analysis from the DB protected object and format the result through UI.Graphs or text presentations.

Conclusion

You can get the EtherScope sources at https://github.com/stcarrez/etherscope. Feel free to fork EtherScope, hack it and add new protocol analyzers.

The following analyzers could be implemented in the future:

  • A DNS analyzer that shows which DNS requests are made,
  • A DHCP analyzer to track and show IP allocation,
  • A FTP analyzer to reconcile the ftp-data stream to the ftp flow,
  • An IPv6 analyzer.

Using the Ada Embedded Network STM32 Ethernet Driver

By stephane.carrez

The Ada Embedded Network is a small IPv4 network stack intended to run on STM32F746 or equivalent devices. This network stack is implemented in Ada 2012 and its architecture has been inspired by the BSD network architecture described in the book "TCP/IP Illustrated, Volume 2, The Implementation" by Gary R. Wright and W. Richard Stevens.

This article discusses the Ethernet driver design and implementation. The IP protocol layer will be explained in a future article.

In any network stack, buffer management is key to obtaining good performance. Let's see how it is modeled.

Net.Buffers

The Net.Buffers package provides support for network buffer management. A network buffer can hold a single packet frame so that it is limited to 1500 bytes of payload with 14 or 16 bytes for the Ethernet header. The network buffers are allocated by the Ethernet driver during the initialization to setup the Ethernet receive queue. The allocation of network buffers for the transmission is under the responsibility of the application.

Before receiving a packet, the application also has to allocate a network buffer. Upon successful reception of a packet by the Receive procedure, the allocated network buffer will be given to the Ethernet receive queue and the application will get back the received buffer. There is no memory copy.

The package defines two important types: Buffer_Type and Buffer_List. These two types are limited types to forbid copies and force a strict design in applications. The Buffer_Type describes the packet frame and provides various operations to access the buffer. The Buffer_List defines a list of buffers.

The network buffers are kept within a single linked list managed by a protected object. Because interrupt handlers can release a buffer, that protected object has the priority System.Max_Interrupt_Priority. The protected operations are very basic and are in O(1) complexity so that their execution is bounded in time whatever the arguments.

Before anything, the network buffers have to be allocated. The application can do this by reserving some memory region (using STM32.SDRAM.Reserve) and adding the region with the Add_Region procedure. The region must be a multiple of NET_ALLOC_SIZE constant. To allocate 32 buffers, you can do the following:

 NET_BUFFER_SIZE  : constant Interfaces.Unsigned_32 := Net.Buffers.NET_ALLOC_SIZE * 32;
 ...
 Net.Buffers.Add_Region (STM32.SDRAM.Reserve (Amount => NET_BUFFER_SIZE), NET_BUFFER_SIZE);

An application will allocate a buffer by using the Allocate operation and this is as easy as:

 Packet : Net.Buffers.Buffer_Type;
 ...
 Net.Buffers.Allocate (Packet);

What happens if there is no available buffer? No exception is raised because the network stack is intended to be used in embedded systems where exceptions are not available. You have to check whether the allocation succeeded by using the Is_Null function:

 if Packet.Is_Null then
   null; --  Oops
 end if;

Net.Interfaces

The Net.Interfaces package represents the low-level network driver that is capable of sending and receiving packets. The package defines the Ifnet_Type abstract type which defines three important operations, sketched below:

  • Initialize to configure and set up the network interface,
  • Send to send a packet on the network,
  • Receive to wait for a packet and get it from the network.
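
In outline, the abstract interface looks like the following sketch (the parameter names and exact profiles are assumptions; refer to the Net.Interfaces package spec for the real declarations):

   type Ifnet_Type is abstract tagged limited private;

   --  Configure and set up the network interface.
   procedure Initialize (Ifnet : in out Ifnet_Type) is abstract;

   --  Send a packet; the buffer ownership moves to the driver.
   procedure Send (Ifnet : in out Ifnet_Type;
                   Buf   : in out Net.Buffers.Buffer_Type) is abstract;

   --  Wait for a packet and get it from the network.
   procedure Receive (Ifnet : in out Ifnet_Type;
                      Buf   : in out Net.Buffers.Buffer_Type) is abstract;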

STM32 Ethernet Driver

The STM32 Ethernet driver implements the three important operations required by the Ifnet_Type abstraction. The Initialize procedure performs the STM32 Ethernet initialization, configures the receive and transmit rings and sets up the interrupts. This operation must be called prior to any other.

Sending a packet

The STM32 Ethernet driver has a transmit queue to manage the Ethernet hardware transmit ring and send packets over the network. The transmit queue is a protected object so that concurrent accesses between application tasks and the Ethernet interrupt are safe. To transmit a packet, the driver adds the packet to the next available transmit descriptor. The packet buffer ownership is transferred to the transmit ring so that there is no memory copy. Once the packet is queued, the application has lost the buffer ownership. The buffer, being owned by the DMA, will be released by the transmit interrupt as soon as the packet is sent (3).

ada-driver-send.png

When the transmit queue is full, the application is blocked until a transmit descriptor becomes available.
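
A typical transmit sequence therefore looks like the following sketch (the payload filling is elided and the exact Send profile is an assumption based on the description above):

   Packet : Net.Buffers.Buffer_Type;
   ...
   Net.Buffers.Allocate (Packet);
   if not Packet.Is_Null then
      --  Fill the frame with the Put operations and set the data size,
      --  then hand the buffer over to the transmit ring.
      Ifnet.Send (Packet);
      --  The buffer is now owned by the DMA; the transmit interrupt
      --  will release it once the packet has been sent.
   end if;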

Receiving a packet

The STM32 Ethernet driver has a receive queue which is a second protected object, separate from the transmit queue. The receive queue is used by the Ethernet hardware to control the Ethernet receive ring and by the application to pick up received packets. Each receive descriptor is assigned a packet buffer that is owned by default by the DMA. When a packet is available and the application calls the Wait_Packet operation, the packet buffer ownership is transferred to the application to avoid any memory copy. To avoid having a ring descriptor losing its buffer, the application gives a new buffer that is used for the ring descriptor. This is why the application has first to allocate the buffer (1), then call the Receive operation (2) to get back the packet in a new buffer, and finally release the buffer when it is done with it (3).

ada-driver-receive.png

Receive loop example

Below is an example of a task that loops to receive Ethernet packets and process them. This is the main receiver task used by the EtherScope monitoring tool.

The Ifnet driver initialization is done in the main EtherScope task. We must not use the driver before it is fully initialized. This is why the task starts by looping until the Ifnet driver is ready.

  task body Controller is
     use type Ada.Real_Time.Time;
  
     Packet  : Net.Buffers.Buffer_Type;
  begin
     while not Ifnet.Is_Ready loop
        delay until Ada.Real_Time.Clock + Ada.Real_Time.Seconds (1);
     end loop;
     Net.Buffers.Allocate (Packet);
     loop
        Ifnet.Receive (Packet);
        EtherScope.Analyzer.Base.Analyze (Packet);
     end loop;
  end Controller;

Then, we allocate a packet buffer and enter the main loop to continuously receive a packet and do some processing. The careful reader will note that there is no buffer release. We don't need one because the Receive driver operation picks our buffer for its ring and gives us back a buffer that holds the received packet. We give that buffer back at the next iteration. In this application, the number of buffers needed by the buffer pool is the size of the Ethernet Rx ring plus one.

The complete source is available in etherscope-receiver.adb.

Using this design and implementation, the EtherScope application has shown that it can sustain more than 95 Mb/s of traffic for analysis. Quite nice for a 216 MHz ARM Cortex-M7!


Using the Ada Wiki Engine

By stephane.carrez

The Ada Wiki Engine is a small Ada library that parses a Wiki text in several Wiki syntaxes such as MediaWiki, Creole and Markdown, and renders the result either in HTML, text or into another Wiki format. The Ada Wiki Engine is used in two steps:

  1. The Wiki text is parsed according to its syntax to produce a Wiki Document instance.
  2. The Wiki document is then rendered by a renderer to produce the final HTML or text.

The Ada Wiki Engine does not manage any storage for the wiki content so that it only focuses on the parsing and rendering aspects.

Overview

The Ada Wiki engine is organized in several packages:

  • Several Wiki stream packages define the interface, types and operations for the Wiki engine to read the Wiki or HTML content and for the Wiki renderer to generate the HTML or text outputs.
  • The Wiki parser is responsible for parsing HTML or Wiki content according to a selected Wiki syntax. It builds the final Wiki document through filters and plugins.

ada-wiki.png

  • The Wiki filters provide a simple filter framework that allows plugging specific filters in when a Wiki document is parsed and processed. Filters are used for the table of content generation, for the HTML filtering, to collect words or links and so on.
  • The Wiki plugins define the plugin interface that is used by the Wiki engine to provide pluggable extensions in the Wiki. Plugins are used for the Wiki template support, to hide some Wiki text content when it is rendered or to interact with other systems.
  • The Wiki documents and attributes are used for the representation of the Wiki document after the Wiki content is parsed.
  • The Wiki renderers are the last packages which are used for the rendering of the Wiki document to produce the final HTML or text.

Building Ada Wiki Engine

Download the ada-wiki-1.0.1.tar.gz or get the sources from GitHub:

git clone git@github.com:stcarrez/ada-wiki.git ada-wiki

If you are using Ada Utility Library then you can configure with:

./configure

Otherwise, you should configure with:

./configure --with-ada-util=no

Then, build the library:

make

Once complete, you can install it:

make install

To use the library in your Ada project, add the following line in your GNAT project file:

with "wiki";

Rendering example

The rendering example described in this article generates HTML or text content from a Wiki source file. The example reads the file in one of the supported Wiki syntaxes and produces the HTML or text. You will find the source file on GitHub in render.adb. The example has the following usage:

Render a wiki text file into HTML (default) or text
Usage: render [-t] [-m] [-M] [-d] [-c] [-s style] {wiki-file}
  -t        Render to text only
  -m        Render a Markdown wiki content
  -M        Render a Mediawiki wiki content
  -d        Render a Dotclear wiki content
  -g        Render a Google wiki content
  -c        Render a Creole wiki content
  -s style  Use the CSS style file

Parsing a Wiki Text

To render a Wiki text you will first need to parse the Wiki text and produce a Wiki document instance. For this you will need to declare the Wiki document instance and the Wiki parser instance:

with Wiki.Documents;
with Wiki.Parsers;
...
   Doc      : Wiki.Documents.Document;
   Engine   : Wiki.Parsers.Parser;

The Ada Wiki Engine has a filter mechanism that is used while parsing the input and before building the target wiki document instance. Filters are chained together and a filter can do some work on the content it sees such as blocking some content (filtering), collecting some data and doing some transformation on the content. When you want to use a filter, you have to declare an instance of the corresponding filter type.

with Wiki.Filters.Html;
with Wiki.Filters.Autolink;
with Wiki.Filters.TOC;
...
   Filter   : aliased Wiki.Filters.Html.Html_Filter_Type;
   Autolink : aliased Wiki.Filters.Autolink.Autolink_Filter;
   TOC      : aliased Wiki.Filters.TOC.TOC_Filter;

We use the Autolink filter that detects links in the text and transforms them into real links. The TOC filter is used to collect header sections in the Wiki text and build a table of contents. The Html filter is used to filter HTML content that could be contained in a Wiki text. By default it ignores several HTML tags such as html, head, body, title, meta (these tags are silently discarded). Furthermore it has the ability to hide several elements such as style and script (the tag and its content are discarded).

You will then configure the Wiki engine to build the filter chain and then define the Wiki syntax that the parser must use:

Engine.Add_Filter (TOC'Unchecked_Access);
Engine.Add_Filter (Autolink'Unchecked_Access);
Engine.Add_Filter (Filter'Unchecked_Access);
Engine.Set_Syntax (Syntax);

The Wiki engine gets its input from an Input_Stream interface that only defines a Read procedure. The Ada Wiki Engine provides several implementations of that interface, one of them is based on the Ada Text_IO package. This is what we are going to use:

with Wiki.Streams.Text_IO;
...
   Input    : aliased Wiki.Streams.Text_IO.File_Input_Stream;

You will then open the input file. If the file contains UTF-8 characters, you may open it as follows:

Input.Open (File_Path, "WCEM=8");

where File_Path is a string that represents the file's path.

Once the Wiki engine is setup and the input file opened, you can parse the Wiki text and build the Wiki document:

Engine.Parse (Input'Unchecked_Access, Doc);

Rendering a Wiki Document

After parsing a Wiki text you get a Wiki.Documents.Document instance that you can use as many times as you want. To render the Wiki document, you will first choose a renderer according to the target format that you need. The Ada Wiki Engine provides three renderers:

  • A Text renderer that produces text outputs,
  • An HTML renderer that generates an HTML presentation for the document,
  • A Wiki renderer that generates various Wiki syntaxes.

The renderer needs an output stream instance. We are using the Text_IO implementation:

with Wiki.Streams.Html.Text_IO;
with Wiki.Render.Html;
...
   Output   : aliased Wiki.Streams.Html.Text_IO.Html_File_Output_Stream;
   Renderer : aliased Wiki.Render.Html.Html_Renderer;

You will then configure the renderer to tell it the output stream to use. You may enable or disable the rendering of the table of contents, and then use the Render procedure to render the document.

Renderer.Set_Output_Stream (Output'Unchecked_Access);
Renderer.Set_Render_TOC (True);
Renderer.Render (Doc);

By default the output stream is configured to write on the standard output. This means that when Render is called, the output will be written to the standard output. You can choose another output stream or open the output stream to a file according to your needs.
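
As a summary, a complete program that converts a Markdown file to HTML could look like the following sketch (the procedure name and input file are placeholders, and the SYNTAX_MARKDOWN literal is an assumption about the library's syntax type):

with Wiki;
with Wiki.Documents;
with Wiki.Parsers;
with Wiki.Streams.Text_IO;
with Wiki.Streams.Html.Text_IO;
with Wiki.Render.Html;
procedure To_Html is
   Doc      : Wiki.Documents.Document;
   Engine   : Wiki.Parsers.Parser;
   Input    : aliased Wiki.Streams.Text_IO.File_Input_Stream;
   Output   : aliased Wiki.Streams.Html.Text_IO.Html_File_Output_Stream;
   Renderer : aliased Wiki.Render.Html.Html_Renderer;
begin
   Input.Open ("page.md", "WCEM=8");            --  hypothetical input file
   Engine.Set_Syntax (Wiki.SYNTAX_MARKDOWN);    --  assumed enumeration literal
   Engine.Parse (Input'Unchecked_Access, Doc);
   Renderer.Set_Output_Stream (Output'Unchecked_Access);
   Renderer.Set_Render_TOC (True);
   Renderer.Render (Doc);  --  writes the HTML to the standard output
end To_Html;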

Conclusion

The Ada Wiki Engine can be used to parse HTML content, sanitize the result through the HTML filter and convert it to text or to some Wiki syntax (have a look at the import.adb example). The engine can be extended through filters or plugins, thus providing a flexible architecture. The library does not impose any storage mechanism. The Ada Wiki Engine is the core engine used by the AWA Blogs and AWA Wiki web applications. You may have a look at some online Wiki pages in the Atlas Wiki demonstrator.


GCC 6.1 Ada Compiler From Scratch

By stephane.carrez

The GCC 6.1 release has been announced recently by Jakub Jelinek and it is now time to build a new Ada compiler with it. The process is not complex but contains a few pitfalls. We will do the following tasks:

  1. The binutils build and installation,
  2. The gcc build and installation,
  3. Setting up a default configuration for gprbuild,
  4. The XML/Ada build and installation,
  5. The gprbuild build and installation.

Pre-requisites

First, prepare three distinct directories for the sources, the build materials and the installation. Make sure you have more than 1.5 GB for the source directory, 7.0 GB for the build directory and around 1.5 GB for the installation directory.

To simplify the commands, define the following shell variables:

BUILD_DIR=<Path of build directory>
INSTALL_DIR=<Path of installation directory>
SRC_DIR=<Path of directory containing the extracted sources>

Also, check that:

  • You have a GNAT Ada compiler installed (at least a 4.9 I guess).
  • You have the gprbuild tool installed and configured for the Ada compiler.
  • You have libmpfr-dev, libgmp3-dev and libgmp-dev installed (otherwise this is far more complex).
  • You have some time and can wait for gcc's compilation (it took more than 2h for me).

Create the directories:

mkdir -p $BUILD_DIR
mkdir -p $INSTALL_DIR/bin
mkdir -p $SRC_DIR

And set up your PATH so that you will use the new binutils and gcc commands while building everything:

export PATH=$INSTALL_DIR/bin:/usr/bin:/bin

Binutils

Download binutils 2.26 and extract the tar.bz2 in the source directory $SRC_DIR.

cd $SRC_DIR
tar xf binutils-2.26.tar.bz2

Never build the binutils within their sources; you must use $BUILD_DIR for that. Define the installation prefix and configure binutils as follows:

mkdir $BUILD_DIR/binutils
cd $BUILD_DIR/binutils
$SRC_DIR/binutils-2.26/configure --prefix=$INSTALL_DIR

And proceed with the build in the same directory:

make

Once the compilation is complete, you can install the package:

make install

Gcc

Download gcc 6.1.0 and extract the tar.bz2 in the source directory $SRC_DIR.

cd $SRC_DIR
tar xf gcc-6.1.0.tar.bz2

Again, don't build gcc within its sources; use the $BUILD_DIR directory. At this stage, it is important that your PATH environment variable lists $INSTALL_DIR/bin first, to make sure you will use the newly installed binutils tools. You may add the --disable-bootstrap option to speed up the build process.

mkdir $BUILD_DIR/gcc
cd $BUILD_DIR/gcc
$SRC_DIR/gcc-6.1.0/configure --prefix=$INSTALL_DIR --enable-languages=c,c++,ada

And proceed with the build in the same directory (go to the restaurant or drink a couple of beers while it builds):

make

Once the compilation is complete, you can install the package:

make install

The Ada compiler installation does not create two symbolic links which are required during the link phase of Ada libraries and programs. You must create them manually after the install step:

ln -s libgnarl-6.so $INSTALL_DIR/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/libgnarl-6.1.so
ln -s libgnat-6.so $INSTALL_DIR/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/libgnat-6.1.so

Setup the default.cgpr file

The gnatmake command has been deprecated and it is now using gprbuild internally. This means we need a version of gprbuild that uses the new compiler. One way to achieve that is by setting up a gprbuild configuration file:

cd $BUILD_DIR
gprconfig

Select the Ada and C compiler and then edit the default.cgpr file that was generated to change the Toolchain_Version, Runtime_Library_Dir, Runtime_Source_Dir, Driver to indicate the new gcc 6.1 installation paths (replace <INSTALL_DIR> with your installation directory):

configuration project Default is
   ...
   for Toolchain_Version     ("Ada") use "GNAT 6.1";
   for Runtime_Library_Dir   ("Ada") use "<INSTALL_DIR>/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adalib/";
   for Runtime_Source_Dir    ("Ada") use "<INSTALL_DIR>/lib/gcc/x86_64-pc-linux-gnu/6.1.0/adainclude/";
   package Compiler is
      for Driver ("C") use "<INSTALL_DIR>/bin/gcc";
      for Driver ("Ada") use "<INSTALL_DIR>/bin/gcc";
      ...
   end Compiler;
   ...
end Default;

This is the tricky part: if you miss it, you may end up using the old Ada compiler. Make sure the Runtime_Library_Dir and Runtime_Source_Dir are correct, otherwise you'll have problems during builds. In my case, the gcc target triplet also changed from x86_64-linux-gnu to x86_64-pc-linux-gnu. Hopefully, once we have built a new gprbuild, everything will be easier. The next step is to build XML/Ada, which is used by gprbuild.

XML/Ada

Download and extract the XML/Ada sources. Using the git repository works pretty well:

cd $BUILD_DIR
git clone https://github.com/AdaCore/xmlada.git xmlada

This time we must build within the sources. Before running the configure script, copy the default.cgpr file into the source tree so that the new Ada compiler is used:

cp $BUILD_DIR/default.cgpr $BUILD_DIR/xmlada/
cd $BUILD_DIR/xmlada
./configure --prefix=$INSTALL_DIR

And proceed with the build in the same directory:

make static shared

Once the compilation is complete, you can install the package:

make install-static install-relocatable

gprbuild

Get the gprbuild sources from the git repository:

cd $BUILD_DIR
git clone https://github.com/AdaCore/gprbuild.git gprbuild

Copy the default.cgpr file to the gprbuild source tree and run the configure script:

cp $BUILD_DIR/default.cgpr $BUILD_DIR/gprbuild/
cd $BUILD_DIR/gprbuild
./configure --prefix=$INSTALL_DIR

Set up the ADA_PROJECT_PATH environment variable to use the XML/Ada library that was just compiled. If you miss this step, you'll get a "file dom.ali is incorrectly formatted" error during the bind process.

export ADA_PROJECT_PATH=$INSTALL_DIR/lib/gnat

And proceed with the build in the same directory:

make

Once the compilation is complete, you can install the package:

make install

Using the compiler

Now you can remove the build directory to reclaim some space. You will not need the default.cgpr file anymore, nor the ADA_PROJECT_PATH environment variable (unless you have other uses for it). To use the new Ada compiler you only need to set up your PATH:

export PATH=$INSTALL_DIR/bin:/usr/bin:/bin

You're now ready to play with and use the GCC 6.1 Ada Compiler.
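
As a quick sanity check, you can build a tiny program that prints the version of the compiler that produced it, using the standard GNAT.Compiler_Version package:

with Ada.Text_IO;
with GNAT.Compiler_Version;

procedure Check_Gcc is
   --  GNAT.Compiler_Version is a generic package; instantiate it
   --  to get the Version function of the compiler in use.
   package V is new GNAT.Compiler_Version;
begin
   Ada.Text_IO.Put_Line ("Compiled by GNAT " & V.Version);
end Check_Gcc;

Build it with gnatmake check_gcc.adb; the reported version should match the new 6.1.0 installation.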

New releases for Ada Util, Ada EL, Ada Security, Ada Database Objects, Ada Server Faces, Dynamo

By stephane.carrez

A new release is available for several Ada projects.

Ada Utility Library, Version 1.8.0

  • Added support for immediate flush and file appending to the file logger
  • Added support for RFC7231/RFC2616 date conversion
  • Improvement of configure and installation process with gprinstall (if available)
  • Added file system stat/fstat support
  • Use gcc intrinsics for atomic counters (Intel, Arm)

Download: http://download.vacs.fr/ada-util/ada-util-1.8.0.tar.gz

GitHub: https://github.com/stcarrez/ada-util

Ada EL, Version 1.6.0

  • Added support for thread local EL context
  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-el/ada-el-1.6.0.tar.gz

GitHub: https://github.com/stcarrez/ada-el

Ada Security, Version 1.1.2

  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-security/ada-security-1.1.2.tar.gz

GitHub: https://github.com/stcarrez/ada-security

Ada Database Objects, Version 1.1.0

  • Fix link issue on Fedora
  • Detect MariaDB as a replacement for MySQL
  • Improvement of configure and installation process with gprinstall (if available)

Download: http://download.vacs.fr/ada-ado/ada-ado-1.1.0.tar.gz

GitHub: https://github.com/stcarrez/ada-ado

Ada Server Faces, Version 1.1.0

  • New EL function util:formatDate
  • New request route mapping with support for URL component extraction and parameter injection in Ada beans
  • Improvement of configure, build and installation with gprinstall when available
  • Integrate jQuery 1.11.3 and jQuery UI 1.11.4
  • Integrate jQuery Chosen 1.4.2
  • New component <w:chosen> for the Chosen support
  • Added a servlet cache control filter

Download: http://download.vacs.fr/ada-asf/ada-asf-1.1.0.tar.gz

GitHub: https://github.com/stcarrez/ada-asf

Dynamo, Version 0.8.0

  • Support to generate Markdown documentation
  • Support to generate query Ada bean operations
  • Better code generation and support for UML Ada beans

Download: http://download.vacs.fr/dynamo/dynamo-0.8.0.tar.gz

GitHub: https://github.com/stcarrez/dynamo

Using Ada LZMA to compress and decompress LZMA files

By stephane.carrez

liblzma is a public domain general-purpose data compression library with a zlib-like API. liblzma is part of XZ Utils, which includes a gzip-like command line tool named xz and some other tools. XZ Utils is developed and maintained by Lasse Collin. Major parts of liblzma are based on Igor Pavlov's public domain LZMA SDK. The Ada LZMA library provides an Ada05 thin binding for the liblzma library and gives access to all the operations provided by the compression and decompression library.

Setup of Ada LZMA binding

First download the Ada LZMA binding at http://download.vacs.fr/ada-lzma/ada-lzma-1.0.0.tar.gz or at git@github.com:stcarrez/ada-lzma.git, then configure, build and install the library with the following commands:

./configure
make
make install

After these steps, you are ready to use the binding and you can add the following line at the beginning of your GNAT project file:

with "lzma";

Import Declaration

To use the Ada LZMA packages, you will first import the following packages in your Ada source code:

with Lzma.Base;
with Lzma.Container;
with Lzma.Check;

LZMA Stream Declaration and Initialization

The liblzma library uses the lzma_stream type to hold and control the data for the lzma operations. The lzma_stream must be initialized at the beginning of the compression or decompression and must be kept until the compression or decompression is finished. To use it, you declare the LZMA stream as follows:

Stream  : aliased Lzma.Base.lzma_stream := Lzma.Base.LZMA_STREAM_INIT;

Most of the liblzma functions return a status code of type lzma_ret, so you may declare a result variable like this:

Result : Lzma.Base.lzma_ret;

Initialization of the lzma_stream

After the lzma_stream is declared, you must configure it either for compression or for decompression.

Initialize for compression

To configure the lzma_stream for compression, you will use the lzma_easy_encoder function. The Preset parameter controls the compression level: higher values provide better compression but are slower and require more memory.

Result := Lzma.Container.lzma_easy_encoder (Stream'Unchecked_Access, Lzma.Container.LZMA_PRESET_DEFAULT,
                                            Lzma.Check.LZMA_CHECK_CRC64);
if Result /= Lzma.Base.LZMA_OK then
  Ada.Text_IO.Put_Line ("Error initializing the encoder");
end if;

Initialize for decompression

For the decompression, you will use the lzma_stream_decoder:

Result := Lzma.Container.lzma_stream_decoder (Stream'Unchecked_Access,
                                              Long_Long_Integer'Last,
                                              Lzma.Container.LZMA_CONCATENATED);

Compress or decompress the data

The compression and decompression are done by the lzma_code function, which is called several times until it returns the LZMA_STREAM_END code. Set up the stream's next_out, avail_out, next_in and avail_in members, then call the lzma_code operation with the action to perform (Lzma.Base.LZMA_RUN or Lzma.Base.LZMA_FINISH):

Result := Lzma.Base.lzma_code (Stream'Unchecked_Access, Action);
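
To make that loop concrete, here is a minimal sketch building on the Stream and Result declarations above. The next_in/avail_in/next_out/avail_out members and the two actions come from the binding as described, but the buffer declarations and the Read_Input/Write_Output helpers are hypothetical; the real code is in the compress_easy.adb example:

declare
   use type Interfaces.C.size_t;
   use type Lzma.Base.lzma_ret;
   use type Lzma.Base.lzma_action;

   --  Buffer types are an assumption; the binding works with C buffers.
   Input  : array (1 .. 4096) of aliased Interfaces.C.unsigned_char;
   Output : array (1 .. 4096) of aliased Interfaces.C.unsigned_char;
   Action : Lzma.Base.lzma_action := Lzma.Base.LZMA_RUN;
begin
   Stream.next_out  := Output (Output'First)'Unchecked_Access;
   Stream.avail_out := Output'Length;
   loop
      --  Refill the input buffer when it is consumed; switch to
      --  LZMA_FINISH once the input source is exhausted.
      if Stream.avail_in = 0 and then Action = Lzma.Base.LZMA_RUN then
         Stream.next_in  := Input (Input'First)'Unchecked_Access;
         Stream.avail_in := Read_Input (Input);  --  hypothetical helper
         if Stream.avail_in = 0 then
            Action := Lzma.Base.LZMA_FINISH;
         end if;
      end if;

      Result := Lzma.Base.lzma_code (Stream'Unchecked_Access, Action);

      --  Flush the output buffer when it is full or at the end of stream.
      if Stream.avail_out = 0 or else Result = Lzma.Base.LZMA_STREAM_END then
         Write_Output (Output, Output'Length - Natural (Stream.avail_out));  --  hypothetical helper
         Stream.next_out  := Output (Output'First)'Unchecked_Access;
         Stream.avail_out := Output'Length;
      end if;

      exit when Result /= Lzma.Base.LZMA_OK;
   end loop;
end;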

Release the LZMA stream

Close the LZMA stream:

Lzma.Base.lzma_end (Stream'Unchecked_Access);

Sources

To better understand and use the library, use the source, Luke:

compress_easy.adb

decompress.adb

Download

https://github.com/stcarrez/ada-lzma

http://download.vacs.fr/ada-lzma/ada-lzma-1.0.0.tar.gz

Using MAT the Memory Analysis Tool

By stephane.carrez

MAT is a memory analysis tool that monitors calls to malloc, realloc and free. It works with a small shared library, libmat.so, that is loaded into the program with the LD_PRELOAD dynamic linker feature (see the ld.so(8) man page). The library overrides the malloc, realloc and free functions to monitor calls to them. It then writes or sends probe events which contain enough information for mat to tell what, when, where and by whom the memory allocation was done.

mat assigns a unique number to each event that is collected. The tool reconciles the events to find those that are related, based on the allocation address, so that it becomes possible to navigate forward and backward and see who allocates or releases the memory. When started, the tool provides a set of interactive commands that you can enter with readline editing capabilities.

Instrumenting your application: the file event mode

The first method to instrument your application is by running your program and collecting the information into a file. Later, when the program is finished, use the mat tool to analyze the results.

mat-analyse.png

To instrument a program and save the results into a file, you can use the matl launcher as follows:

matl -o name command

where command is the command to instrument and name is the prefix file name. The generated file will have the process ID in its name with the .mat extension.

To start the analysis, you have to launch mat with the name of the generated file:

mat name-xxx.mat

Instrumenting live application: the TCP/IP socket mode

You can also instrument your application and do some analysis while your program is running. For this, you will use the TCP/IP socket mode provided by libmat.so and mat. You must first start the mat tool so that the TCP/IP server provided by mat is started before the program connects to it through the libmat.so shared library.

mat-analyse-tcp.png

To use this mode, start mat in a first terminal console with either the -s or the -b option to start the TCP/IP server and wait for a program to connect. For example:

mat -b 192.168.0.17:4606

Then, in a second terminal console start your program through the matl launcher as follows:

matl -s 192.168.0.17:4606 command

Here you give the IP address (you may use localhost or any IP) and the TCP/IP port (the default port being 4606). The mat server may run on a different host with a different architecture.

General information

When mat reads the events, it detects the endianness and the size of pointers so that it is able to analyze 32-bit as well as 64-bit applications with little endian or big endian formats. The info command gives information about the program and the events that have been collected.

The output presented below comes from the analysis of gdb 7.9.1 that was debugging mat.

mat>info
Pid              : 5291
Path             : /data/ext/gnu/i386/gdb-7.9.1/gdb/gdb
Endianness       : LITTLE_ENDIAN
Memory regions   : 24
Events           : 0..586514
Duration         : 41.26s
Stack frames     :  0
Malloc count     :  279768
Realloc count    :  36272
Free count       :  270462
Memory allocated : 8341613
Memory slots     :  10484

The number of collected events can be quite high, 586514 in the above example, and it can easily exceed several millions.

Timeline

With so many events, you may not know where to start. You can use the timeline command to analyse the events and find interesting groups and report information about them. The command takes a duration parameter to control the groups by defining the maximum duration in seconds of a group. For each group, the command indicates the event ID range, the number of malloc, realloc and free calls as well as the memory growth or shrink during the period.

mat>timeline 5
Start     End time  Duration  Event range         # malloc  # realloc # free    Memory
0us       4.32s     4.32s     0..533645           261393    20423     251923    +4618592
15.60s    20.57s    4.96s     533646..542619      4495      102       4379      +438803
20.62s    21.05s    432.68ms  542620..581167      11555     15663     11338     +3198744
26.16s    28.86s    2.70s     581168..582388      539       4         678       -9425
31.40s    32.41s    1.01s     582389..583133      288       11        446       +19246

In this sample, the first group corresponds to gdb's startup during which it loads the symbol table. The second group corresponds to the command b main followed by run in gdb. The third group is when the program reaches the breakpoint and gdb handles it and reads the DWARF2 debugging information. The last two groups correspond to the step, where, cont and quit commands.

Looking at allocation sizes

The sizes command also helps when there are many events: it counts and groups the events by their allocation size. Say we want to look further into the second group identified by the timeline 5 command; we can use its event ID range to filter the events so that the sizes command only takes these events into account. With the -c option, only a summary is printed:

mat>sizes -c 542620..581167
Found 9550 different sizes, +3198744 bytes, with 11555 malloc, 15663 realloc, 11338 free

There are still many allocations, so we can refine the filter to report only the memory allocations greater than 75000 bytes. This time we want the full report, so the -c option is omitted:

mat>sizes 542620..581167 and size > 75000
Event Id range                Time      Event               Size        Count   Total size  Memory
554191..554243                20.75s    realloc             75248        4      300992      -75240
565067                        20.88s    malloc              161323       1      161323      +161323
562816                        20.85s    malloc              179035       1      179035      +179035
564877                        20.87s    malloc              213328       1      213328      +213328
Found 7 different sizes, +730796 bytes, with 7 malloc, 1 realloc, 2 free

Looking at the event

When you identify an interesting event, you can use the event command and give it the event ID to dump the information with the complete stack frame. This is probably the most useful command, as the stack frame points out where in the code and through which call path the allocation was made.

mat>event 554191
75248 bytes reallocated after 20.75s, freed 817us after by event 554243 +8 bytes
Id Frame Address         Function
 1 0x0000000000408898    _start
 2 0x00007F688D18AEC5    __libc_start_main (libc-start.c:321)
 3 0x0000000000408855    main (gdb.c:33)
 4 0x000000000054999B    gdb_main (main.c:1161)
 5 0x00000000005459F5    catch_errors (exceptions.c:237)
 6 0x0000000000549506    captured_main (main.c:1150)
 7 0x00000000005459F5    catch_errors (exceptions.c:237)
 8 0x00000000005484C3    captured_command_loop (main.c:329)
 9 0x000000000054EABE    start_event_loop (event-loop.c:334)
10 0x000000000054EA25    gdb_do_one_event (event-loop.c:296)
11 0x000000000054E755    gdb_wait_for_event (event-loop.c:773)
12 0x0000000000550482    inferior_event_handler (inf-loop.c:57)
13 0x000000000053A218    fetch_inferior_event (infrun.c:3273)
14 0x0000000000537E22    handle_signal_stop (infrun.c:4264)
15 0x000000000061B750    get_current_frame (frame.c:1486)
16 0x000000000054582C    catch_exceptions_with_msg (exceptions.c:189)
17 0x000000000061E36C    unwind_to_current_frame (frame.c:1451)
18 0x000000000061E081    get_prev_frame (frame.c:2212)
19 0x000000000061D949    get_prev_frame_always_1 (frame.c:1954)
20 0x000000000061B63B    compute_frame_id (frame.c:454)
21 0x000000000061EFAF    frame_unwind_find_by_frame (frame-unwind.c:157)
22 0x000000000061EBF6    frame_unwind_try_unwinder (frame-unwind.c:106)
23 0x00000000005CCEB4    dwarf2_frame_sniffer (dwarf2-frame.c:1405)
24 0x00000000005CCB38    dwarf2_frame_find_fde (dwarf2-frame.c:1772)
25 0x00000000005CC8E9    dwarf2_build_frame_info (dwarf2-frame.c:2313)
26 0x00000000005CAC4B    add_fde (dwarf2-frame.c:1812)

This event is a realloc call that reallocates a memory slot to 75248 bytes; the previous slot size was 75240 bytes. The function that makes the call is add_fde in gdb.

Looking at frames

The frames command analyzes all the event stack frames and reports the functions with the number of calls and the memory growth they created. The command takes a level parameter that indicates the stack frame level to take into account. The level counts stack frames starting from the bottom, so level 1 gives you the functions that directly call the malloc, realloc and free functions.

A filter can be defined to control the allocation events to take into account. For example if we want to look at functions that allocate or free large memory blocks and which are in the second group reported by the timeline command, we can use the following command:

mat>frames 1 542620..581167 and size > 50000
Level Size      Count   Function
 1    -53328    1       bfd_elf64_slurp_symbol_table (elfcode.h:1172)
 1    +743794   8       __GI__obstack_newchunk (obstack.c:269)
 1    -187272   3       dwarf2_build_frame_info (dwarf2-frame.c:2451)
 1    +53328    1       bfd_elf_get_elf_syms (elf.c:419)
 1    -71104    1       _bfd_elf_canonicalize_dynamic_symtab (elf.c:7215)
 1    +461056   4       bfd_alloc (opncls.c:956)
 1    +25248    3156    add_fde (dwarf2-frame.c:1812)
 1    =75248    2       dwarf2_build_frame_info (dwarf2-frame.c:2422)
 1    -75248    1       dwarf2_frame_find_fde (dwarf2-frame.c:1772)
 1    +71104    1       bfd_elf_get_elf_syms (elf.c:453)

Download

You can download the first release of MAT and get mat sources at http://download.vacs.fr/mat/mat-1.0.0.tar.gz and follow the Build instructions.

Ubuntu 14.04 packages for 64-bit platforms are available:

http://download.vacs.fr/mat/mat_1.0.0_amd64.deb and http://download.vacs.fr/mat/libmat_1.0.0_amd64.deb

Ubuntu 14.04 packages for 32-bit platforms are available:

http://download.vacs.fr/mat/mat_1.0.0_i386.deb and http://download.vacs.fr/mat/libmat_1.0.0_i386.deb

Conclusion

Unlike valgrind, mat does not instrument the program. Instead, it overrides the malloc, realloc and free calls and only monitors these events. This makes the implementation easier to port and makes it possible to use mat on some embedded systems where valgrind is more difficult to use (due to portability and memory resource constraints).

I've used mat on several MIPS boards (BCM6362, BCM63168, Vox185) and it was very useful for understanding the memory allocations and reducing the memory used by several programs. It does not yet provide a graphical front end, but this will come one day.

Give it a try, you may find some interesting features in it!

Ada BFD 1.1.0 is available

By stephane.carrez

Ada BFD is an Ada binding for the GNU Binutils BFD library. It lets you read ELF and COFF binary files through the GNU BFD and allows your program to read ELF sections, access the symbol table and use the disassembler.

I've added support for the GNU demangler, so it is now possible to demangle C++, Java and Ada symbols. To use the demangler you need a BFD file, which is a limited type:

with Bfd.Files;
with Bfd.Symbols;
  ...
  File : Bfd.Files.File_Type;

The BFD file is opened as follows:

  Bfd.Files.Open (File, Path, "");

Then, you may convert any symbol name using the Demangle function:

  Name : String := Bfd.Symbols.Demangle (File, "bfd__symbols__get_name",
                                         Constants.DMGL_GNAT);
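
A complete sketch tying these calls together could look like the following; the Bfd.Constants package name and the Close call are assumptions to verify against the library:

with Ada.Text_IO;
with Bfd.Files;
with Bfd.Symbols;
with Bfd.Constants;  --  assumed home of the DMGL_* demangling flags

procedure Demangle_Example is
   File : Bfd.Files.File_Type;
begin
   --  Open a binary file and demangle an Ada symbol found in it.
   Bfd.Files.Open (File, "demangle_example", "");
   Ada.Text_IO.Put_Line
     (Bfd.Symbols.Demangle (File, "bfd__symbols__get_name",
                            Bfd.Constants.DMGL_GNAT));
   Bfd.Files.Close (File);  --  assumed counterpart of Open
end Demangle_Example;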

Sources are now moved to GitHub: https://github.com/stcarrez/ada-bfd

You can download the release at: http://download.vacs.fr/ada-bfd/ada-bfd-1.1.0.tar.gz

NetBSD 6.1.5 upgrade

By stephane.carrez

Once every year I try to upgrade one of my virtual machines which runs NetBSD. This description is a short reminder of the major steps of the upgrade process.

System upgrade

The NetBSD system is upgraded by using the following command:

poseidon$ sudo sysupgrade auto ftp://ftp.NetBSD.org/pub/NetBSD/NetBSD-6.1/i386

During the upgrade it will ask whether some system files have to be replaced, merged or kept unmodified.

GCC Ada Package Upgrade

The GCC Ada compiler is now based on GCC 4.9. I did the upgrade by using the following command:

poseidon$ sudo pkg_add -uu gcc-aux-20140422nb3
pkg_add: Warning: package `gcc-aux-20140422nb3' was built for a platform:
pkg_add: NetBSD/i386 6.0 (pkg) vs. NetBSD/i386 6.1.4 (this host)

The gprbuild package must also be upgraded:

poseidon$ sudo pkg_add -u gprbuild-aux-20130416
pkg_add: Warning: package `gprbuild-aux-20130416' was built for a platform:
pkg_add: NetBSD/i386 6.0 (pkg) vs. NetBSD/i386 6.1.4 (this host)
pkg_add: Warning: package `gnat_util-20140422' was built for a platform:
pkg_add: NetBSD/i386 6.0 (pkg) vs. NetBSD/i386 6.1.4 (this host)

And because I also use some other packages such as xmlada, the following package is also upgraded:

poseidon$ sudo pkg_add -u xmlada-4.4.0.0nb1
pkg_add: Warning: package `xmlada-4.4.0.0nb1' was built for a platform:
pkg_add: NetBSD/i386 6.0 (pkg) vs. NetBSD/i386 6.1.4 (this host)

Before running an Ada program compiled by GCC 4.9

The GCC 4.9 Ada compiler works very well but it comes with a specific libgcc_s.so file installed in /usr/pkg/gcc-aux/lib. By default, libgcc_s.so is installed on the system in /usr/lib/libgcc_s.so or /usr/pkg/lib/libgcc_s.so, and those were compiled by GCC 4.5.3 or GCC 4.6.4.

If you use the wrong libgcc_s.so, the program will almost work, except when an exception is raised: none of the exceptions can be caught and the program terminates as though there was no exception handler.

What happens is that the GCC 4.6 frame unwinder is unable to correctly identify the frames generated by GCC 4.9. The solution is of course to use the correct library and we can do this by setting the following environment variable before starting any program:

poseidon$ export LD_LIBRARY_PATH=/usr/pkg/gcc-aux/lib

Extending an ext4 LVM partition

By stephane.carrez

From time to time a disk partition fills up and it becomes desirable to grow it. Since I often don't remember how to do this, I wrote this short description to keep track of the steps.

Extending the LVM partition

The first step is to extend the size of the LVM partition. This is done easily with the lvextend (8) command; you just need to specify the amount and the LVM partition. In my case, the vg02-ext volume was using 60G and the +40G argument grows its size to 100G. Note that you can run this command while your file system is mounted (as long as you are growing the LVM partition).

$ sudo lvextend --size +40G /dev/mapper/vg02-ext
  Extending logical volume ext to 100.00 GiB
  Logical volume ext successfully resized

Preparing the ext4 filesystem for resize

Before resizing the ext4 filesystem, you must make sure it is not mounted:

$ sudo umount /ext

The file system must be clean and you should run the e2fsck (8) command to check it:

$ sudo e2fsck -f /dev/mapper/vg02-ext
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg02-ext: 1974269/3932160 files (0.1% non-contiguous), 14942044/15728640 blocks
5.392u 1.476s 0:48.25 14.2% 0+0k 3208184+48io 2pf+0w

Resizing the ext4 filesystem

The last step is to resize the ext4 file system by using the resize2fs (8) command. The command can enlarge or shrink an unmounted file system of type ext2, ext3 or ext4. The command only needs the block device path to operate.

$ sudo resize2fs /dev/mapper/vg02-ext
resize2fs 1.42 (29-Nov-2011)
Resizing the filesystem on /dev/mapper/vg02-ext to 26214400 (4k) blocks.
The filesystem on /dev/mapper/vg02-ext is now 26214400 blocks long.

After the resize, we can re-mount the ext4 partition:

$ sudo mount -a

Ada BFD 1.0.1 is available

By stephane.carrez

Ada BFD is an Ada binding for the GNU Binutils BFD library.

It lets you read ELF and COFF binary files through the GNU BFD and allows your program to read ELF sections, access the symbol table and use the disassembler.

The new version fixes build and compilation issues with recent releases of GNU Binutils and it also provides support to build Debian packages.

http://download.vacs.fr/ada-bfd/ada-bfd-1.0.1.tar.gz

Ada Web Application 1.0.0 is available

By stephane.carrez

Ada Web Application is a framework to build web applications.

The new version of AWA provides:

  • New countries plugin to provide country/region/city data models
  • New settings plugin to control application user settings
  • New tags plugin to easily add tags in applications
  • New <awa:tagList> and <awa:tagCloud> components for tag display
  • Add tags to the question and blog plugins
  • Add comments to the blog post

AWA can be downloaded at http://blog.vacs.fr/vacs/download.html

A live demonstration of various features provided by AWA is available at http://demo.vacs.fr/atlas

A small tutorial explains how you can easily set up a project, design the UML model, and use the features provided by the Ada Web Application framework.

New releases available for Ada Utility, Ada EL, Ada Security, Ada Server Faces, ADO, Dynamo

By stephane.carrez

A maintenance release is available for the following Ada packages:

Ada Utility Library: Version 1.7.1

  • Support XmlAda 2014
  • Fixed Get_Week_Start/Get_Week_End when the system timezone is different than the asked timezone

Download: http://download.vacs.fr/ada-util/ada-util-1.7.1.tar.gz

Ada EL: Version 1.5.1

  • Fix minor configuration issue with GNAT 2014

Download: http://download.vacs.fr/ada-el/ada-el-1.5.1.tar.gz

Ada Security: Version 1.1.1

  • Fix minor configuration issue with GNAT 2014

Download: http://download.vacs.fr/ada-security/ada-security-1.1.1.tar.gz

Ada Server Faces: Version 1.0.1

  • Fix minor configuration issue with GNAT 2014
  • Fix concurrent issues in facelet and session cache implementation

Download: http://download.vacs.fr/ada-asf/ada-asf-1.0.1.tar.gz

Ada Database Objects: Version 1.0.1

  • Fix minor configuration issue with GNAT 2014

Download: http://download.vacs.fr/ada-ado/ada-ado-1.0.1.tar.gz

Dynamo: Version 0.7.1

  • Fix minor configuration issue with GNAT 2014

Download: http://download.vacs.fr/dynamo/dynamo-0.7.1.tar.gz

Ubuntu 14.04 LTS Ada build node installation

By stephane.carrez

This short article is a reminder of the steps needed to add an Ubuntu 14.04 build machine for Jenkins.

The steps are very similar to what I've described in Installation of FreeBSD for a jenkins build node. The virtual machine setup is the same (20G LVM partition, x86_64 CPU, 1Gb memory) and Ubuntu is installed from the ubuntu-14.04.1-server-i386.iso image.

Packages to build Ada software

The following commands install the GNAT Ada compiler with the libraries and packages to build various Ada libraries and projects including AWA.

# GNAT Compiler Installation
sudo apt-get install gnat-4.6 libaws2.10.2-dev libxmlada4.1-dev gprbuild gdb

# Packages to build Ada Utility Library
sudo apt-get install libcurl4-openssl-dev libssl-dev

# Packages to build Ada Database Objects
sudo apt-get install sqlite libsqlite3-dev
sudo apt-get install libmysqlclient-dev
sudo apt-get install mysql-server mysql-client

# Packages to build libaws2-2-10
sudo apt-get install libasis2010-dev libtemplates-parser11.6-dev
sudo apt-get install texinfo texlive-latex-base \
 texlive-generic-recommended texlive-fonts-recommended 

The libaws2-2-10 package was not functional for me (see bug 1348902) so I had to rebuild the Debian package from the sources and install it.

Packages to create Debian packages

When the Ada build node is intended to create Debian packages, the following steps are necessary:

sudo apt-get install dpkg-dev gnupg reprepro pbuilder debhelper quilt chrpath
sudo apt-get install autoconf automake autotools-dev

Packages and setup for Jenkins

Before adding the build node in Jenkins, the JRE must be installed and a jenkins user must exist:

sudo apt-get install openjdk-7-jre subversion
sudo useradd -m -s /bin/bash jenkins

Jenkins uses ssh to connect to the build node, so it is good practice to set up a public/private key pair so that the Jenkins master node can connect to the slave. On the master, copy the jenkins user's key:

ssh-copy-id target-host

The Ada build node is then added through the Jenkins UI in Manage Jenkins/Manage Nodes.

Jenkins jobs

The jenkins master is now building 7 projects automatically for Ubuntu 14.04: Trusty Ada Jobs

Review Web Application: Listing the reviews

By stephane.carrez

After the creation and setup of the AWA project and the UML model design, we have seen how to create a review for the review web application. In this new tutorial, you will learn the details of listing the reviews that have been created and published. This tutorial has three steps:

  • First the definition of the database query,
  • The implementation of the Ada review list bean,
  • The writing of the XHTML facelet presentation file.

Step 1: Database query to list the reviews

Let's start with the database query that we will use to retrieve the reviews.

Since we need to access the list of reviews from the XHTML files, we will map the SQL query result to a list of Ada Beans objects. For this, an XML query mapping is created to tell how to map the SQL query result into some Ada record. The XML query mapping is then processed by Dynamo to generate the Ada Beans implementation. The XML query mapping is also read by AWA to get the SQL query to execute.

A template of the XML query mapping can be added to a project by using the dynamo add-query command. The first parameter is the module name (reviews) and the second parameter the name of the query (list). The command will generate the file db/reviews-list.xml.

dynamo add-query reviews list

The generated XML query mapping is an example of a query. You can replace it or update it according to your needs. The first part of the XML query mapping is a class declaration that describes the type representing each row returned by our query. Within the class, a set of property definitions describes the class attributes with their type and name.

<query-mapping package='Atlas.Reviews.Models'>
    <class name="Atlas.Reviews.Models.List_Info" bean="yes">
        <comment>The list of reviews.</comment>
        <property type='Identifier' name="id">
            <comment>the review identifier.</comment>
        </property>
        <property type='String' name="title">
            <comment>the review title.</comment>
        </property>
        ...
    </class>
</query-mapping>

Following the class declaration, the query declaration describes a query by giving it a name and describing the SQL statement to execute. By having the SQL statement separate and external to the application, we can update, fix and tune the SQL without rebuilding the application. The Dynamo code generator will use the query declaration to generate a query definition that can be referenced and used from the Ada code.

The SQL statement is defined within the sql XML entity. The optional sql-count XML entity is used to associate a count query that can be used for the pagination.

We want to display the reviews with the author's name and email address. The list will be sorted by date to show the newest reviews first. The SQL to execute is the following:

<query-mapping package='Atlas.Reviews.Models'>
   ...
    <query name='list'>
       <comment>Get the list of reviews</comment>
       <sql>
SELECT
      r.id,
      r.title,
      r.site,
      r.create_date,
      r.allow_comments,
      r.reviewer_id,
      a.name,
      e.email,
      r.text
FROM atlas_review AS r
INNER JOIN awa_user AS a ON r.reviewer_id = a.id
INNER JOIN awa_email AS e ON a.email_id = e.id
ORDER BY r.create_date DESC
    LIMIT :first, :last
       </sql>
       <sql-count>
    SELECT
      count(r.id)
    FROM atlas_review AS r
       </sql-count>
    </query>
</query-mapping>

The query has two named parameters represented by :first and :last. These parameters make it possible to paginate the list of reviews.

The complete source can be seen in the file: db/reviews-list.xml.

Once the XML query is written, the Ada code is generated by Dynamo, which reads the UML model and all the XML query mappings defined for the application. Dynamo merges all the definitions into the target Ada packages and generates the Ada code in the src/model directory. You can use the generate make target:

make generate

or run the following command manually:

dynamo generate db uml/atlas.zargo

From the List_Info class definition, Dynamo generates the List_Info tagged record. The record contains all the data members described in the class XML entity description. The List_Info represents one row returned by the SQL query. The attributes of the List_Info can be accessed from the XHTML files by using a UEL expression with the property name defined for each attribute.

To describe the list of rows, Dynamo generates the List_Info_Beans package which instantiates the Util.Beans.Basic.Lists generic package. This provides an Ada vector for the List_Info type and an Ada bean that gives access to the list.

package Atlas.Reviews.Models is
  ...
  type List_Info is new Util.Beans.Basic.Readonly_Bean with record
  ...
   package List_Info_Beans is
      new Util.Beans.Basic.Lists (Element_Type => List_Info);
   package List_Info_Vectors renames List_Info_Beans.Vectors;
   subtype List_Info_List_Bean is List_Info_Beans.List_Bean;
   subtype List_Info_Vector is List_Info_Vectors.Vector;
   Query_List : constant ADO.Queries.Query_Definition_Access;
   ...
end Atlas.Reviews.Models;

The generated code can be seen in src/model/atlas-reviews-models.ads.

Step 2: The review list bean

In order to access the list of reviews from the XHTML facelet file, we must create an Ada bean that provides the list of reviews. This Ada bean is modeled in the UML model, where we define:

  • A set of attributes to manage the review list pagination (page, page_size, count)
  • An Ada bean action that can be called from the XHTML facelet file (load)

The Review_List_Bean tagged record will hold the list of reviews for us:

package Atlas.Reviews.Beans is
  ...
   type Review_List_Bean is new Atlas.Reviews.Models.Review_List_Bean with record
      Module       : Atlas.Reviews.Modules.Review_Module_Access := null;
      Reviews      : aliased Atlas.Reviews.Models.List_Info_List_Bean;
      Reviews_Bean : Atlas.Reviews.Models.List_Info_List_Bean_Access;
   end record;
   type Review_List_Bean_Access is access all Review_List_Bean'Class;
end Atlas.Reviews.Beans;

We must now implement the Load operation that was described in the UML model, and we are going to use our list query. For this, we use the ADO.Queries.Context to set up the query to retrieve the list of reviews. A call to Set_Query indicates the query that will be used. Since that query needs two parameters (first and last), we use the Bind_Param operation to give the two values. The list of reviews is then retrieved easily by calling the Atlas.Reviews.Models.List operation that was generated by Dynamo.

package body Atlas.Reviews.Beans is
...
   overriding
   procedure Load (Into    : in out Review_List_Bean;
                   Outcome : in out Ada.Strings.Unbounded.Unbounded_String) is
      Session     : ADO.Sessions.Session := Into.Module.Get_Session;
      Query       : ADO.Queries.Context;
      Count_Query : ADO.Queries.Context;
      First       : constant Natural  := (Into.Page - 1) * Into.Page_Size;
      Last        : constant Positive := First + Into.Page_Size;
   begin
      Query.Set_Query (Atlas.Reviews.Models.Query_List);
      Count_Query.Set_Count_Query (Atlas.Reviews.Models.Query_List);
      Query.Bind_Param (Name => "first", Value => First);
      Query.Bind_Param (Name => "last", Value => Last);
      Atlas.Reviews.Models.List (Into.Reviews, Session, Query);
      Into.Count := ADO.Datasets.Get_Count (Session, Count_Query);
   end Load;
end Atlas.Reviews.Beans;

Review list bean creation

The AWA framework must be able to create an instance of the Review_List_Bean type. For this, we have to declare and implement a constructor function that allocates an instance of the Review_List_Bean type and sets up some predefined values. When the instance is returned, the list of reviews is not loaded.

package body Atlas.Reviews.Beans is
   ...
   function Create_Review_List_Bean (Module : in Atlas.Reviews.Modules.Review_Module_Access)
                                     return Util.Beans.Basic.Readonly_Bean_Access is
      Object  : constant Review_List_Bean_Access := new Review_List_Bean;
   begin
      Object.Module       := Module;
      Object.Reviews_Bean := Object.Reviews'Access;
      Object.Page_Size    := 20;
      Object.Page         := 1;
      Object.Count        := 0;
      return Object.all'Access;
   end Create_Review_List_Bean;
end Atlas.Reviews.Beans;

The constructor function is then registered in the Atlas.Reviews.Modules package within the Initialize procedure. This registration gives a name to the constructor function so that it can be referenced in the managed-bean declaration.

package body Atlas.Reviews.Modules is
   ...
   overriding
   procedure Initialize (Plugin : in out Review_Module;
                         App    : in AWA.Modules.Application_Access;
                         Props  : in ASF.Applications.Config) is
   begin
      ...
      Register.Register (Plugin => Plugin,
                         Name   => "Atlas.Reviews.Beans.Review_List_Bean",
                         Handler => Atlas.Reviews.Beans.Create_Review_List_Bean'Access);
   end Initialize;
end Atlas.Reviews.Modules;

Review list bean declaration

The managed-bean XML declaration associates a name with a constructor function that will be called when the name is needed. The scope of the Ada bean is set to request so that a new instance is created for each HTTP GET request.

  <managed-bean>
    <description>The list of reviews</description>
    <managed-bean-name>reviewList</managed-bean-name>
    <managed-bean-class>Atlas.Reviews.Beans.Review_List_Bean</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
  </managed-bean>

Step 3: Listing the reviews: the XHTML facelet presentation file

To load the reviews to be displayed, we will use a JSF 2.2 view action. The review list page has a parameter page that indicates the page number to be displayed. The f:viewParam component retrieves that parameter and configures the reviewList Ada bean with it. Then, the f:viewAction defines the action that will be executed after the view parameters are extracted, validated and passed to the Ada bean. In our case, we will call the load operation on our reviewList Ada bean.

<f:metadata>
    <f:viewParam id='page' value='#{reviewList.page}' required="false"/>
    <f:viewAction action="#{reviewList.load}"/>
</f:metadata>

To summarize, the reviewList Ada bean is created, then configured for the pagination and filled with the current page content by running our SQL query.

The easy part is now to render the list of reviews. The XHTML file uses the <h:list> component to iterate over the list items and render each of them. At each iteration, the <h:list> component initializes the Ada bean review to refer to the current row in the review list. We can then access each attribute defined in the XML query mapping by using the property name of that attribute. For example review.title returns the title property.

<h:list var="review" value="#{reviewList.reviews}">
    <div class='review' id="p_#{review.id}">
        <div class='review-title'>
            <h2><a href="#{review.site}">#{review.title}</a></h2>
            <ul class='review-info'>
                <li><span>By #{review.reviewer_name}</span></li>
                <li>
                    <h:outputText styleClass='review-date' value="#{review.date}" converter="dateConverter"/>
                </li>
                <h:panelGroup rendered="#{review.reviewer_id == user.id}">
                    <li>
                        <a href="#{contextPath}/reviews/edit-review.html?id=#{review.id}">#{reviewMsg.review_edit_label}</a>
                    </li>
                    <li>
                        <a href="#"
                           onclick="return ASF.OpenDialog(this, 'deleteDialog', '#{contextPath}/reviews/forms/delete-review.html?id=#{review.id}');">
                           #{reviewMsg.review_delete_label}
                        </a>
                    </li>
                </h:panelGroup>
            </ul>
        </div>
        <awa:wiki styleClass='review-text post-text' value="#{review.text}" format="dotclear"/>
    </div>
</h:list>

Understanding the request flow

Let's see the whole request flow to better understand what happens.

To display the list of reviews, the user's browser makes an HTTP GET request to the page /reviews/list.html. This page maps to the XHTML file web/reviews/list.xhtml that we created in the previous tutorial.

The Ada Server Faces framework handles the request by first reading the XHTML file and building a tree of components that represents the view to render. Within that component tree, the <f:metadata> component allows pre-initialization of components and beans before the tree is rendered.

For the pre-initialization, the reviewList Ada bean is created because it is referenced in an EL expression used by the <f:viewParam> component or by the <f:viewAction>. For this creation, the Create_Review_List_Bean constructor that we registered is called. The page attribute is set on the reviewList Ada bean if it was passed as a URL request parameter.

The load action is then called by Ada Server Faces and the current review list page is retrieved by executing the SQL query.

As soon as the load action terminates, the rendering of the component tree can proceed. The reviewList Ada bean contains the information to display, and the <h:list> component iterates over the list and renders each row in turn.

Conclusion

After the previous tutorial we were able to create a review and populate our database with one or several reviews. We are now able to display the list of reviews to our users.

The next tutorial will focus on using the Votes module to bring some voting capabilities in the review web application. Meanwhile, you may browse and study the sources:

db/reviews-list.xml

atlas-reviews-beans.ads

atlas-reviews-beans.adb

review-list.xhtml
