Java 2 Ada

Upgrading FreeBSD for a GCC 4.9 Ada compiler

By Stephane Carrez

After the recent announcement by John Marino of the GCC 4.9 Ada compiler availability on FreeBSD, I decided to do the upgrade and give it a try.

After a quick investigation, I've performed the following two simple steps on my FreeBSD host:

sudo pkg update
sudo pkg upgrade

Among several upgrade notifications, I've noted the following messages. The gcc-aux package corresponds to the GCC 4.9 compiler and the gnat-aux package contains the GCC 4.6.4 compiler.

Upgrading gcc-aux: 20130411_3 -> 20140416
Upgrading gnat-aux: 20130412_1 -> 20130412_2
Upgrading aws: ->

The GCC 4.9 Ada compiler is located in /usr/local/gcc-aux/bin and the GCC 4.6.4 Ada compiler is located in /usr/local/bin.

Once the upgrade was finished, I rebuilt all my FreeBSD Jenkins projects and... it's done.

It worked so well that I wasn't sure whether the right compiler was used. Looking at a generated ALI file, the V "GNAT Lib v4.9" tag identified the new compiler.
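The check can be scripted: the V record of an ALI file names the GNAT library version that built the unit. The sketch below fabricates a sample ALI file so it is self-contained; in practice you would grep the ALI files in your own object directory.

```shell
# Create a sample ALI file standing in for a real build artifact
# (real ALI files sit next to your object files).
mkdir -p /tmp/alidemo
printf 'V "GNAT Lib v4.9"\n' > /tmp/alidemo/demo.ali

# The V record identifies the compiler that produced the unit.
grep -h '^V ' /tmp/alidemo/*.ali
```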

Next step is to perform a similar upgrade on NetBSD...


New debian repository with Ada packages

By Stephane Carrez

I've created and set up a Debian repository giving access to Debian packages for several Ada projects that I manage. The goal is to provide ready-to-use packages that simplify the installation of various Ada libraries. The Debian repository includes the binary and development packages for Ada Utility Library, Ada EL, Ada Security, and Ada Server Faces.

Access to the repository

The repository packages are signed with PGP. To get the verification key and set up the apt-get tool, run the following command:

wget -O - | sudo apt-key add -

Ubuntu 13.04 Raring

A first repository provides Debian packages targeted at Ubuntu 13.04 raring. They are built with the gnat-4.6 package and depend on libaws-2.10.2-4 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb raring main

Ubuntu 12.04 LTS Precise

A second repository contains the Debian packages for Ubuntu 12.04 precise. They are built with the gnat-4.6 package and depend on libaws-2.10.2-1 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb precise main


Once you've added the configuration line, you can install the packages:

sudo apt-get update
sudo apt-get install libada-asf1.0

For the curious, you may browse the repository here.

Ada Server Faces 1.0.0 is available

By Stephane Carrez

Ada Server Faces is a framework that allows creating Web applications using the same design patterns as Java Server Faces (see JSR 252, JSR 314, or JSR 344). The presentation pages benefit from the Facelets Web template system and the runtime takes advantage of the Ada language's safety and performance.

A new release is available with several features that help writing online applications:

  • Add support for Facebook and Google+ login
  • Javascript support for popup and editable fields
  • Added support to enable/disable mouseover effect in lists
  • New EL function util:iso8601
  • New component <w:autocomplete> for input text with autocompletion
  • New component <w:gravatar> to render a gravatar image
  • New component <w:like> to render a Facebook, Twitter or Google+ like button
  • New component <w:panel> to provide collapsible div panels
  • New components <w:tabView> and <w:tab> for tabs display
  • New component <w:accordion> to display accordion tabs
  • Add support for JSF <f:facet>, <f:convertDateTime>, <h:doctype>
  • Support for the creation of Debian packages

You can try the online demonstration of the new widget components and download this new release at

Ada Security 1.1.0 is available

By Stephane Carrez

The Ada Security library provides a security framework which allows applications to define and enforce security policies. This framework allows users to authenticate by using OpenID Authentication 2.0, OAuth 2.0 or OpenID Connect protocols.

The new version brings the following improvements:

  • New authentication framework that supports OpenID, OpenID Connect, OAuth, Facebook login
  • AWS demo for a Google, Yahoo!, Facebook, Google+ authentication
  • Support to extract JSON Web Token (JWT)
  • Support for the creation of Debian packages

The library can be downloaded at

Ada EL 1.5.0 is available

By Stephane Carrez

Ada EL is a library that implements an expression language similar to JSP and JSF Unified Expression Languages (EL). The expression language is the foundation used by Java Server Faces and Ada Server Faces to make the necessary binding between presentation pages in XML/HTML and the application code written in Java or Ada.

The presentation page uses a UEL expression to retrieve the value provided by some application object (Java or Ada). In the following expression:

#{questionInfo.question.rating}

the EL runtime will first retrieve the object registered under the name questionInfo, look up its question and then rating data members, and convert the resulting value to a string.

The new release is available for download at

This version brings the following improvements:

  • EL parser optimization (20% to 30% speed up)
  • Support for the creation of Debian packages

Ada Utility Library 1.7.0 is available

By Stephane Carrez

Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

  • Added a text and string builder
  • Added date helper operations to get the start of day, week or month time
  • Support XmlAda 2013
  • Added Objects.Datasets to provide list beans (lists of row/column objects)
  • Added support for shared library loading
  • Support for the creation of Debian packages
  • Update Ahven integration to 2.3
  • New option -r <test> for the unit test harness to execute a single test
  • Port on FreeBSD

It has been compiled and ported on Linux, Windows, NetBSD, FreeBSD (gcc 4.6, GNAT 2013, gcc 4.7.3). You can download this new version at

Migrating a virtual machine from one server to another

By Stephane Carrez

OVH is providing new offers that are cheaper and give more CPU power, so it was time for me to migrate to another server and reduce the cost by 30%. I'm using 7 virtual machines that run NetBSD, OpenBSD, FreeBSD, Ubuntu or Debian. Most are Intel based, but some of them are Sparc or Arm virtual machines. I've collected below the main steps of the migration.

LVM volume creation on the new server

The first step is to create the LVM volume on the new server. The volume should have the same size as the original. The following command creates a 20G volume labeled netbsd.

$ sudo lvcreate -L 20G -Z n -n netbsd vg01
  WARNING: "netbsd" not zeroed
  Logical volume "netbsd" created

Copying the VM image

After stopping the VM, we can copy the system image from one server to another server by using a combination of dd and ssh. The command must be executed as root otherwise some temporary file and additional copy steps could be necessary.

$ sudo dd if=/dev/vg01/netbsd bs=8192 |
  ssh dd bs=8192 of=/dev/vg01/netbsd
's password: 
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 1858.33 s, 11.6 MB/s
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 1848.62 s, 11.6 MB/s

By compressing the image on the fly, the remote copy is about four times faster. The following command does this:

$ sudo dd if=/dev/vg01/netbsd bs=8192 |
gzip -c | ssh \
'gzip -c -d | dd bs=8192 of=/dev/vg01/netbsd'
's password: 
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 427.313 s, 50.3 MB/s
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 436.128 s, 49.2 MB/s
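The compressed pipeline can be sanity-checked locally before involving ssh. The sketch below uses a small scratch file under /tmp as a stand-in for the LVM volume; it pipes the image through gzip and back and verifies the round trip is lossless (on real hosts the 'gzip -c -d | dd' side runs on the destination server).

```shell
# Build a small scratch image standing in for the LVM volume.
dd if=/dev/urandom of=/tmp/vm.img bs=8192 count=16 2>/dev/null

# Same pipeline as above, minus ssh: compress on the fly,
# decompress, and write the copy.
dd if=/tmp/vm.img bs=8192 2>/dev/null | gzip -c | gzip -c -d \
  | dd of=/tmp/vm-copy.img bs=8192 2>/dev/null

# The copy must be byte-for-byte identical.
cmp /tmp/vm.img /tmp/vm-copy.img && echo "copy is identical"
```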

Once the copy is done, it's good to verify the integrity of the copy. For this, we can run the sha1sum on the source image and on the destination image and compare the SHA1 checksum: they must match.

$ sudo sha1sum /dev/vg01/netbsd
04e23ccc1d22cb1de439b43535855b2d1331da6a  /dev/vg01/netbsd

(run this command on both servers and compare the result).
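The comparison itself can be scripted so a mismatch is caught immediately. A sketch with scratch files standing in for the two volumes (on real servers, the second sha1sum would run on the destination host, for example over ssh):

```shell
# Two copies of a scratch file stand in for the source and
# destination LVM volumes.
dd if=/dev/zero of=/tmp/src.img bs=1024 count=64 2>/dev/null
cp /tmp/src.img /tmp/dst.img

# Compare only the checksum field of the sha1sum output.
a=$(sha1sum /tmp/src.img | cut -d' ' -f1)
b=$(sha1sum /tmp/dst.img | cut -d' ' -f1)
if [ "$a" = "$b" ]; then echo "images match"; else echo "MISMATCH"; fi
```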

Importing the virtual machine definition

The last step is to copy the virtual machine definition from one server to the other. The definition is an XML file located in the /etc/libvirt/qemu directory. Once copied, run the virsh command on the target server and import the definition:

$ sudo virsh
virsh# define netbsd.xml
virsh# start netbsd

That's it, the virtual machine was migrated at a reasonably small cost: the whole process took less than one hour!


Installation of FreeBSD for a jenkins build node

By Stephane Carrez

A few days ago, I did a fresh installation of my Jenkins build environment for my Ada projects (this was necessary after a disk crash on my OVH server). I took this opportunity to set up a FreeBSD build node. This article is probably incomplete but collects a number of tips for the installation.

Virtual machine setup

The FreeBSD build node is running within a QEMU virtual machine. The choice of the host turns out to be important since not all versions of QEMU are able to run a FreeBSD, NetBSD or OpenBSD system. There is a bug in QEMU PCI emulation that prevents the NetBSD network driver from recognizing the emulated network cards (see qemu-kvm 1.0 breaks openbsd, netbsd, freebsd). Ubuntu 12.04 and 12.10 provide a version of QEMU that has the problem. This is solved in Ubuntu 13.04, so this is the host Linux distribution that I've installed.

For the virtual machine disk, I've setup some LVM partition on the host as follows:

sudo lvcreate -Z n -L 20G -n freebsd vg01

This creates a disk volume of 20G and labels it freebsd.

The next step is to download the FreeBSD Installation CD (I've installed the FreeBSD-10.0-RC2). To manage the virtual machines, one can use the virsh command but the virt-manager graphical front-end provides an easier setup.

sudo virt-manager

The virtual machine is configured with:

    • CPU: x86_64
    • Memory: 1048576
    • Disk type: raw, source: /dev/vg01/freebsd
    • Network card model: e1000
    • Boot on the CD image

    After the virtual machine starts, the FreeBSD installation proceeds (it was so simple that I took no screenshot at all).

    Post installation

    After the FreeBSD system is installed, it is almost ready to be used. Some additional packages are added by using the pkg install command (which is very close to the Debian apt-get command).

    pkg install jed
    pkg install sudo bash tcpdump

    By default the /proc filesystem is not set up and some applications like OpenJDK need to access it. Edit the file /etc/fstab and add the following lines:

    fdesc   /dev/fd         fdescfs         rw      0       0
    proc    /proc           procfs          rw      0       0

    and mount the new partitions with:

    mount -a

    GNAT installation

    The FreeBSD repository provides some packages for Ada development. They are easily installed as follows:

    pkg install gmake
    pkg install gnat-aux-20130412_1 gprbuild-20120510
    pkg install xmlada- zip-ada-45
    pkg install aws-
    pkg install gdb-7.6.1_1

    After the installation, update the PATH and set the ADA_PROJECT_PATH variable to be able to use gnatmake:

    export PATH=/usr/local/gcc-aux/bin:$PATH
    export ADA_PROJECT_PATH=/usr/local/lib/gnat

    Jenkins slave node installation

    Jenkins uses a Java application that runs on each build node, so it is necessary to install a Java JRE. To use Subversion on the build node, make sure to install a 1.6 version, since the 1.7 and 1.8 versions have incompatibilities with the Jenkins master. The following packages are necessary:

    pkg install openjdk6-jre-b28_7
    pkg install subversion-1.6.23_2

    Jenkins needs a user to connect to the build node. The user is created by the adduser command. The Jenkins user does not need any privilege.

    The Jenkins master will use SSH to connect to the slave node. During the first connection, it installs the slave.jar file which manages the launch of remote builds on the slave. Password authentication is possible for the SSH connection, but I've set up public key authentication on the FreeBSD node by using ssh-copy-id.

    At this stage, the FreeBSD build node is ready to be added to the Jenkins master node (through the Jenkins UI Manage Jenkins/Manage Nodes).

    MySQL Installation

    The MySQL installation is necessary for some of my projects. This is easily done as follows:

    pkg install mysql55-server-5.5.35 mysql55-client-5.5.35

    Then add the following line to /etc/rc.conf:

    mysql_enable="YES"
    and start the server manually:

    /usr/local/etc/rc.d/mysql-server onestart

    The database tables are setup during the first start.

    Other packages

    Some packages that are necessary for some projects:

    pkg install autoconf-2.69 curl-7.33.0_1
    pkg install ImageMagick-nox11-

    Jenkins jobs

    The Jenkins master is now building 7 projects automatically for FreeBSD 10: FreeBSD Ada Jobs


    World IPv6 Day

    By Stephane Carrez

    Today, June 8th 2011, is the World IPv6 day. Major organisations such as Google, Facebook and Yahoo! will offer native IPv6 connectivity.

    To check your IPv6 connectivity, you can run a test from your browser: Test your IPv6 connectivity.

    If you install the ShowIP Firefox plugin, you will know the IP address of web sites while you browse and therefore quickly know whether you navigate using IPv4 or IPv6.

    Below are some basic performance results comparing IPv4 and IPv6. Since most routers are tuned for IPv4, the IPv6 path is not yet as fast as IPv4. The (small) performance degradation has nothing to do with the IPv6 protocol itself.

    Google IPv4 vs IPv6 ping

    $ ping -n
    PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=55 time=9.63 ms
    $ ping6 -n
    PING 56 data bytes
    64 bytes from 2a00:1450:400c:c00::67: icmp_seq=1 ttl=56 time=11.6 ms

    Yahoo IPv4 vs IPv6 ping

    $ ping -n
    PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=58 time=25.7 ms
    $ ping6 -n
    PING 56 data bytes
    64 bytes from 2a00:1288:f00e:1fe::3000: icmp_seq=1 ttl=60 time=31.3 ms

    Facebook IPv4 vs IPv6 ping

    $ ping -n
    PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=247 time=80.6 ms
    $ ping6 -n
    PING 56 data bytes
    64 bytes from 2620:0:1c18:0:face:b00c:0:1: icmp_seq=1 ttl=38 time=98.6 ms

    Monitoring electricity consumption with the ADTEK Teleinfo USB key

    By Stephane Carrez

    Recent EDF meters include a module that periodically emits information about the electricity consumption. The meter uses a 1200 baud serial protocol, with the signal modulated by a 50 kHz carrier (see the EDF teleinformation page for the details as well as the EDF Technical Specification). This article explains how to collect this information and make it visible through several graphs. In short, the principle is to collect the EDF information, send it to a server, and display all the graphs and results through a Web interface accessible from the Internet.


    Teleinformation with the ADTEK USB key

    The Adtek company sells a small Teleinfo USB module that collects the teleinformation through a serial port. The communication is done at 9600 baud, 8 bits, no parity. Under Linux, the two modules usbserial and ftdi_sio must be loaded. Depending on the version of the ftdi driver, the USB key may not be recognized; in that case the vendor and product identifiers must be given when loading the driver.

    insmod usbserial.ko
    insmod ftdi_sio.ko vendor=0x0403 product=0x6015

    If everything goes well, the driver creates the device /dev/ttyUSB0 when the key is plugged in:

    usbserial: USB Serial Driver core
    USB Serial support registered for FTDI USB Serial Device
    ftdi_sio 2-2:1.0: FTDI USB Serial Device converter detected
    usb 2-2: Detected FT232RL
    usb 2-2: FTDI USB Serial Device converter now attached to ttyUSB0
    usbcore: registered new interface driver ftdi_sio
    ftdi_sio: v1.4.3:USB FTDI Serial Converters Driver

    A small monitoring agent

    A small monitoring agent continuously reads the EDF teleinformation frames from the serial port. It collects the data and sends the results every 5 minutes with an HTTP POST to the server given at startup.

    edf-teleinfo /dev/ttyUSB0 http://server/teleinfo.php &

    This agent can run on a Raspberry Pi or a BeagleBone Black. In my case, I run it on my Bbox Sensation ADSL router. Failing that, a standard PC can be used, but that is not optimal for power consumption. Agent source: edf-teleinfo.c

    The agent is compiled with one of the following commands:

    gcc -o edf-teleinfo -Wall -O2 edf-teleinfo.c
    arm-angstrom-linux-gnueabi-gcc -o edf-teleinfo-arm -Wall -O2 edf-teleinfo.c

    Creating the RRDtool files

    The EDF meter sends a measurement every 2 seconds (the -s option of rrdtool). The electricity consumption is recorded under two data sources: hc (off-peak hours) and hp (peak hours). The min, max and average are computed for periods of 1 minute (30 measurements), 5 minutes (150 measurements) and 15 minutes (450 measurements).

    rrdtool create teleinfo-home.rrd -s 2 \
       DS:hc:COUNTER:300:0:4294967295 \
       DS:hp:COUNTER:300:0:4294967295 \
       RRA:AVERAGE:0.1:30:1800 \
       RRA:MIN:0.1:30:1800 \
       RRA:MAX:0.1:30:1800 \
       RRA:AVERAGE:0.1:150:1800 \
       RRA:MIN:0.1:150:1800 \
       RRA:MAX:0.1:150:1800 \
       RRA:AVERAGE:0.1:450:1800 \
       RRA:MIN:0.1:450:1800 \
       RRA:MAX:0.1:450:1800

    While the off-peak and peak hour counters are defined as COUNTER, the instantaneous current and the apparent power are represented with gauges ranging from 0 to 70A and 0 to 15000W.

    rrdtool create teleinfo_power-home.rrd -s 2 \
       DS:ic:GAUGE:300:0:70 \
       DS:pap:GAUGE:300:0:15000 \
       RRA:AVERAGE:0.1:30:1800 \
       RRA:MIN:0.1:30:1800 \
       RRA:MAX:0.1:30:1800 \
       RRA:AVERAGE:0.1:150:1800 \
       RRA:MIN:0.1:150:1800 \
       RRA:MAX:0.1:150:1800 \
       RRA:AVERAGE:0.1:450:1800 \
       RRA:MIN:0.1:450:1800 \
       RRA:MAX:0.1:450:1800

    The files are created once on the server. If they are created in a /var/lib/collectd/rrd directory, then Collectd Graph Panel can easily be used to display the graphs.
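    The RRA arguments of the two create commands above map to the consolidation periods as follows; with the 2-second step, a quick check:

```shell
# step is 2 seconds; 30, 150 and 450 primary data points per RRA row
# give the 1, 5 and 15 minute consolidation periods.
step=2
for n in 30 150 450; do
  echo "$n samples = $(( n * step / 60 )) min"
done
```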

    Collecting the information

    On the server, a page extracts the parameters of the POST request and fills the RRDtool databases.

    The agent sends the following information:

    • date: the Unix time of the first measurement,
    • end: the Unix time of the last measurement,
    • hc: the counter value for off-peak hours,
    • hp: the counter value for peak hours,
    • ic: the instantaneous current,
    • pap: the apparent power.

    Since the agent sends the data in batches of 150 values (or more if there were connection problems), the update inserts several values at once. In this case, rrdupdate expects the Unix timestamp followed by the values of the two data sources (current and power). Here is an excerpt of the command:

    rrdupdate \
      /var/lib/collectd/rrd/home/teleinfo/teleinfo_power-home.rrd \
      1379885272:4:1040 1379885274:4:1040 1379885276:4:1040 \
      1379885278:4:1040 1379885280:4:1040 1379885282:4:1040 \
      1379885284:4:1040 1379885286:4:1040 1379885288:4:1040 ...

    To install the collecting page, copy the edf-collect.php file to the server and make the page accessible through the web server. Source: edf-collect.php.txt

    Displaying the information

    Collectd Graph Panel is a web application written in PHP and Javascript that displays the graphs collected by collectd. If the graphs are created in the right place, the application recognizes them and displays them. For this, add the teleinfo.php plugin to the plugin directory. Source: teleinfo.php.txt

    cp teleinfo.php.txt CGP-0.4.1/plugin/teleinfo.php

    And now?

    Watching your own electricity consumption has a playful side. It is sometimes surprising to see that the consumption never drops below 200W. That said, it is expected with all these boxes, decoders, switches and other devices that consume a few watts even in standby.


    Planete Domotique


    Integration of Ada Web Server behind an Apache Server

    By Stephane Carrez

    When you run several web applications implemented in various languages (PHP, Java, Ada), you end up with an integration issue. The PHP application runs within an Apache server, the Java application must run in a Java web server (Tomcat, Jetty), and the Ada application executes within the Ada Web Server. Each of these web servers needs a distinct listening port or a distinct IP address. Integrating several web servers on the same host is often done by using a front-end server that handles all incoming requests and dispatches them, if necessary, to other web servers.

    In this article I describe the way I have integrated the Ada Web Server. The Apache Server is the front-end server that serves the PHP files as well as the static files and it redirects some requests to the Ada Web Server.

    Virtual host definition

    The Apache server can run more than one web site on a single machine. Virtual hosts can be IP-based or name-based. We will use the latter because it provides greater scalability. The virtual host definition is bound to the server IP address and the listening port.

    <VirtualHost *:80>
      ServerAdmin webmaster@localhost
      LogLevel warn
      ErrorLog /var/log/apache2/demo-error.log
      CustomLog /var/log/apache2/demo-access.log combined

    The ServerName part is matched against the Host: request header that is received by the Apache server.

    The ErrorLog and CustomLog directives are not strictly part of the virtual host definition, but they provide dedicated logs, which is useful for troubleshooting issues.

    Setting up the proxy

    The Apache mod_proxy module must be enabled. This is the module that will redirect the incoming requests to the Ada Web Server.

      <Proxy *>
        AddDefaultCharset off
        Order allow,deny
        Allow from all

    Redirection rules

    The Apache mod_rewrite module must be enabled.

      RewriteEngine On

    A first set of rewriting rules redirects requests for dynamic pages to the Ada Web Server. The [P] flag activates the proxy and redirects the request. The Ada Web Server runs on the same host and uses port 8080.

      # Let AWS serve the dynamic HTML pages.
      RewriteRule ^/demo/(.*).html$ http://localhost:8080/demo/$1.html [P]
      RewriteRule ^/demo/auth/(.*)$ http://localhost:8080/demo/auth/$1 [P]
      RewriteRule ^/demo/statistics.xml$ http://localhost:8080/demo/statistics.xml [P]

    When the request is redirected, mod_proxy adds a set of headers that can be used within AWS if necessary.

    Via: 1.1

    The X-Forwarded-For: header indicates the IP address of the client.

    Static files

    Static files like images, CSS and Javascript files can be served by the Apache front-end server. This is faster than proxying these requests to the Ada Web Server. At the same time we can set up some expiration and cache headers sent in the response (Expires: and Cache-Control: respectively). The definition below only deals with images that are accessed from the /demo/images/ URL component. The Alias directive maps the URL to the directory on the file system that holds the files.

      Alias /demo/images/ "/home/htdocs.demo/web/images/"
      <Directory "/home/htdocs.demo/web/images/">
        Options -Indexes +FollowSymLinks
        # Do not check for .htaccess (perf. improvement)
        AllowOverride None
        Order allow,deny
        allow from all
        # enable expirations
        ExpiresActive On
        # Activate the browser caching
        # (CSS, images and scripts should not change)
        ExpiresByType image/png A1296000
        ExpiresByType image/gif A1296000
        ExpiresByType image/jpg A1296000

    This kind of definition is repeated for each set of static files (javascript and css).
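    The ExpiresByType values use Apache's access-time syntax: A1296000 means "expire 1296000 seconds after the client's access". A quick check of what that amounts to:

```shell
# 1296000 seconds after access, converted to days.
echo "$(( 1296000 / 86400 )) days"
```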

    Proxy Overhead

    The proxy adds a small overhead that you can measure by using the Apache Benchmark tool. A first run is done on AWS and another on Apache.

    ab -n 1000 http://localhost:8080/demo/compute.html
    ab -n 1000

    The overhead will depend on the application and the page being served. On this machine, the AWS server can process around 720 requests/sec and this is reduced to 550 requests/sec through the Apache front-end (23% decrease).
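    The 23% figure comes straight from the two measurements:

```shell
# requests/sec measured directly on AWS and through the Apache proxy.
aws=720
apache=550
echo "overhead: $(( (aws - apache) * 100 / aws ))%"
```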

    Bacula database cleanup

    By Stephane Carrez

    Bacula maintains a catalog of files in a database. Over time, the database grows and despite some automatic purge and job cleanup, some information remains that is no longer necessary. This article explains how to remove some dead records from the Bacula catalog.

    Bacula maintains a list of backup jobs that have been executed in the job table. For each job, it keeps the list of files that have been saved in the file table. When you do a restore, you select the job to restore and pick files from that job. There should not be any file entry associated with a non-existing job. Unfortunately this is not the case. I've found that some files (more than 2 million entries) were pointing to a job that did not exist.

    Discovering dead jobs still referenced

    The first step is to find out which job has been deleted and is still referenced by the file table. First, let's create a temporary table that will hold the job ids associated with the files.

    mysql> create temporary table job_files (id bigint);

    The use of a temporary table was necessary in my case because the file table is so big and the ReadyNAS so slow that scanning the database takes too much time.

    Now, we can populate the temporary table with the job ids:

    mysql> insert into job_files select distinct file.jobid from file;
    Query OK, 350 rows affected (8 min 53.26 sec)
    Records: 350  Duplicates: 0  Warnings: 0

    The list of jobs that have been removed but are still referenced by a file is obtained by:

    mysql> select job_files.id from job_files
     left join job on job_files.id = job.jobid
     where job.jobid is null;
    +------+
    | id   |
    +------+
    | 2254 |
    | 2806 |
    +------+
    2 rows in set (0.05 sec)

    Deleting Dead Files

    Deleting all the file records in one blow was not possible because there were too many files to delete and the mysql server did not have enough resources on the ReadyNAS. I had to delete these records in batches of 100000 files, and the process was repeated several times (each delete query took more than 2 minutes!).

    mysql> delete from file where jobid = 2254 limit 100000;
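    Repeating the bounded delete until nothing remains can be scripted. The loop below only sketches the shape of that batch cleanup: the mysql call is replaced by plain arithmetic so the loop can run anywhere, and the row count (350000) is purely illustrative.

```shell
# Illustrative batch delete: at most 100000 rows per pass, until
# none remain. In the real loop the body would run something like:
#   mysql bacula -e "delete from file where jobid = 2254 limit 100000"
remaining=350000
passes=0
while [ "$remaining" -gt 0 ]; do
  if [ "$remaining" -gt 100000 ]; then
    deleted=100000
  else
    deleted=$remaining
  fi
  remaining=$(( remaining - deleted ))
  passes=$(( passes + 1 ))
done
echo "$passes passes"
```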


    This cleanup process allowed me to reduce the size of the file table from 10 million entries to 7 million. This improves the database performance and speeds up the Bacula catalog backup process.

    Optimization with Valgrind Massif and Cachegrind

    By Stephane Carrez

    Memory optimization sometimes reveals nice surprises. I was interested in analyzing the memory used by the Ada Server Faces framework. For this I've profiled the unit tests program. This includes 130 tests that cover almost all the features of the framework.

    Memory analysis with Valgrind Massif

    Massif is a Valgrind tool that is used for heap analysis. It does not require the application to be re-compiled and can be used easily. The application is executed by using Valgrind and its tool Massif. The command that I've used was:

    valgrind --tool=massif --threshold=0.1 \
       --detailed-freq=1 --alloc-fn=__gnat_malloc \
       bin/asf_harness -config

    The valgrind tool creates a file massif.out.NNN which contains the analysis. The massif-visualizer is a graphical tool that reads the file and allows you to analyze the results. It is launched as follows:

    massif-visualizer massif.out.19813

    (the number is the pid of the process that was running, replace it accordingly).

    The tool provides a graphical representation of memory use over time. It allows you to highlight a given memory snapshot and understand roughly where the memory is used.

    Memory consumption with Massif [before]

    While looking at the result, I was intrigued by a 1MB allocation that was made several times and then released (it creates these visual spikes and corresponds to the big red horizontal bar). It was within the sax-utils.adb file that is part of the XML/Ada library. Looking at the implementation, it turns out that it allocates a hash table with 65536 entries. This allocation is done each time the sax parser is created. I've reduced the size of this hash table to 1024 entries. If you want to do it, change the following line in sax/ (line 99):

       Hash_Num : constant := 2**16;

    into:

       Hash_Num : constant := 2**10;
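    The size of that bucket array is easy to estimate. Assuming each bucket holds at least one pointer (8 bytes on a 64-bit host, an assumption made for this sketch; the real entries are larger), the table shrinks from roughly half a megabyte of buckets to a few kilobytes:

```shell
# Bucket array size before (2**16 entries) and after (2**10 entries),
# at an assumed 8 bytes per bucket.
echo "before: $(( (1 << 16) * 8 )) bytes"
echo "after:  $(( (1 << 10) * 8 )) bytes"
```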

    After building, checking there is no regression (yes, it works), I've re-run the Massif tool and here are the results.

    Memory consumption with Massif [after]

    The peak memory was reduced from 2.7Mb to 2.0Mb. The memory usage is now easier to understand and analyse because the 1Mb allocation is gone. Other memory allocations have more importance now. But wait, there is more: my program is now faster!

    Cache analysis with cachegrind

    To understand why the program is now faster, I've used Cachegrind that measures processor cache performance. Cachegrind is a cache and branch-prediction profiler provided by Valgrind as another tool. I've executed the tool with the following command:

    valgrind --tool=cachegrind \
        bin/asf_harness -config

    I've launched it once before the hash table correction and once after. Similar to Massif, Cachegrind generates a file cachegrind.out.NNN that contains the analysis. You analyze the result by using either cg_annotate or kcachegrind. Having two Cachegrind files, I've used cg_diff to get a diff between the two executions.

    cg_diff cachegrind.out.24198 cachegrind.out.23286 > cg.out.1
    cg_annotate cg.out.1

    Before the fix, we can see in the Cachegrind report that the most intensive memory operations are performed by the Sax.Htable.Reset operation and by the GNAT operation that initializes the Sax.Symbols.Symbol_Table_Record type which contains the big hash table. Dr is the number of data reads, D1mr the L1 cache read misses, and Dw is the number of writes with D1mw representing the L1 cache write misses. Having a lot of cache misses will slow down the execution: an L1 cache access requires a few cycles while a main memory access could cost several hundreds of them.

             Dr      D1mr          Dw      D1mw 
    212,746,571 2,787,355 144,880,212 2,469,782  PROGRAM TOTALS
            Dr      D1mr         Dw      D1mw  file:function
    25,000,929 2,081,943     27,672       244  sax/sax-htable.adb:sax__symbols__string_htable__reset
           508       127 33,293,050 2,080,768  sax/sax-htable.adb:sax__symbols__symbol_table_recordIP
    43,894,931   129,786  7,532,775     8,677  ???:???
    15,021,128     4,140  5,632,923         0  pthread_getspecific
     7,510,564     2,995  7,510,564    10,673  ???:system__task_primitives__operations__specific__selfXnn
     6,134,652    41,357  4,320,817    49,207  _int_malloc
     4,774,547    22,969  1,956,568     4,392  _int_free
     3,753,930         0  5,630,895     5,039  ???:system__task_primitives__operations(short,...)(long, float)

    With a smaller hash table, the Cachegrind report indicates a reduction of 24,543,482 data reads and 32,765,323 data writes. The cache read misses were reduced by 2,086,579 (74%) and the cache write misses by 2,056,247 (an 83% reduction!).

    With a small hash table, the Sax.Symbols.Symbol_Table_Record is initialized more quickly and its cleanup needs fewer memory accesses, hence fewer CPU cycles. A smaller hash table also causes fewer cache misses: sweeping over a 1Mb hash table flushes a big part of the data cache.

             Dr    D1mr          Dw    D1mw 
    188,203,089 700,776 112,114,889 413,535  PROGRAM TOTALS
            Dr    D1mr        Dw   D1mw  file:function
    43,904,760 120,883 7,532,577  8,407  ???:???
    15,028,328      98 5,635,623      0  pthread_getspecific
     7,514,164     288 7,514,164  9,929  ???:system__task_primitives__operations__specific__selfXnn
     6,129,019  39,636 4,305,043 48,446  _int_malloc
     4,784,026  18,626 1,959,387  3,261  _int_free
     3,755,730       0 5,633,595  4,390  ???:system__task_primitives__operations(short,...)(long, float)
     2,418,778      65 2,705,140     14  ???:system__tasking__initialization__abort_undefer
     3,839,603   2,605 1,283,289      0  malloc
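    The figures above can be put in perspective with some simple cache arithmetic. The sizes below are typical values assumed for illustration, not measured on the machine used for these tests:

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void) {
        /* Assumed, typical sizes -- not measured on the test machine. */
        const unsigned table_bytes = 1u << 20;  /* the 1Mb hash table */
        const unsigned l1d_bytes   = 32u << 10; /* a common L1 data cache size */
        const unsigned line_bytes  = 64;        /* a common cache line size */

        printf("hash table spans %u cache lines\n", table_bytes / line_bytes);
        printf("L1d cache holds  %u cache lines\n", l1d_bytes / line_bytes);
        printf("one Reset sweep can evict the whole L1d %u times over\n",
               table_bytes / l1d_bytes);
        return 0;
    }
    ```

    In other words, a single Reset over such a table can evict the entire L1 data cache dozens of times, so the rest of the program restarts with a cold cache.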


    Running Massif and Cachegrind is very easy, but it may take some time to learn how to interpret and use their results. A big hash table is not always a good thing for an application: by creating cache misses, it may in fact slow the application down. To learn more about this subject, I recommend the excellent paper What Every Programmer Should Know About Memory by Ulrich Drepper.
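    To make the cache-miss effect concrete, here is a small standalone C sketch (not taken from the XML/Ada code discussed above) that performs the same number of additions with a cache-friendly and a cache-hostile access pattern:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 22)   /* 4M ints = 16 MB, much larger than L1/L2 */
    #define STRIDE 1024   /* 4 KB step: a new cache line on every access */

    /* Sequential traversal: consecutive addresses, prefetch-friendly. */
    long sum_sequential(const int *data) {
        long sum = 0;
        for (int i = 0; i < N; i++)
            sum += data[i];
        return sum;
    }

    /* Strided traversal: same number of additions, but each access lands
       on a different cache line, so nearly every read is a cache miss. */
    long sum_strided(const int *data) {
        long sum = 0;
        for (int j = 0; j < STRIDE; j++)
            for (int i = j; i < N; i += STRIDE)
                sum += data[i];
        return sum;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        for (int i = 0; i < N; i++)
            data[i] = 1;

        /* Both traversals compute the same result; on real hardware the
           strided one is typically several times slower, which is exactly
           the effect the D1mr/D1mw columns measure. */
        long seq = sum_sequential(data);
        long strided = sum_strided(data);
        printf("sequential=%ld strided=%ld\n", seq, strided);
        assert(seq == strided && seq == N);
        free(data);
        return 0;
    }
    ```

    Running both loops under Cachegrind shows the difference directly in the D1mr column, even though the loops are arithmetically identical.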


    Ada Web Application 0.3.0 is available

    By Stephane Carrez

    Ada Web Application is a framework to build web applications.

    • AWA uses Ada Server Faces for the web framework. This framework uses several patterns from the Java world such as Java Server Faces and Java Servlets.
    • AWA provides a set of ready-to-use and extendable modules that are common to many web applications: login, authentication, user and permission management.
    • AWA uses an Object Relational Mapping that helps in writing Ada applications on top of MySQL or SQLite databases. The ADO framework maps database objects into Ada records and gives easy access to them.
    • AWA is a model-driven engineering framework that lets you design the application data model in UML and generate the corresponding Ada code.

    Ada Web Application Architecture

    The new version of AWA provides:

    • New jobs plugin to manage asynchronous jobs,
    • New storage plugin to manage a storage space for application documents,
    • New votes plugin to allow voting on items,
    • New question plugin to provide a general purpose Q&A.

    AWA can be downloaded at

    A live demonstration of various features provided by AWA is available at


    Dynamo 0.6.0 is available

    By Stephane Carrez

    Dynamo is a tool that helps developers write certain types of Ada applications built with the Ada Server Faces or Ada Database Objects frameworks. Dynamo provides several commands, each performing one specific task in the development process: creation of an application, generation of the database model, generation of the Ada model, creation of the database.

    The new version of Dynamo provides:

    • A new command build-doc to extract some documentation from the sources,
    • The generation of MySQL and SQLite schemas from UML models,
    • The generation of Ada database mappings from UML models,
    • The generation of Ada beans from the UML models,
    • A new project template for command line tools using ADO,
    • A new distribution command to merge the resource bundles.

    The most important feature is probably the Ada code generation from a UML class diagram. With it, you can design the data model of an application in ArgoUML and generate the Ada model files that access the database through the Ada Database Objects library. The tool also generates the SQL database schema, so that everything is consistent from the UML model to the Ada implementation and the database tables.

    The short tutorial below indicates how to design a UML model with ArgoUML, generate the Ada model files, the SQL files and create the MySQL database.

    The Dynamo tool is available at

    To build Dynamo, you will need:


    Ada Database Objects 0.4.0 is available

    By Stephane Carrez

    Ada Database Objects is an Object Relational Mapping for the Ada05 programming language. It maps database objects into Ada records and gives easy access to databases. Most of the concepts developed for ADO come from the Java Hibernate ORM. ADO supports MySQL and SQLite databases.

    The new version brings:

    • Support for reloading query definitions,
    • An optimized session factory implementation,
    • Customization of the MySQL database connection by using MySQL SET

    This version can be downloaded at


    Ada Server Faces 0.5.0 is available

    By Stephane Carrez

    Ada Server Faces is an Ada implementation of several Java standard web frameworks.

    • The Java Servlet (JSR 315) defines the basis for a Java application to be plugged into Web servers. It standardizes the way an HTTP request and HTTP response are represented. It defines the mechanisms by which the requests and responses are passed from the Web server to the application, possibly through some additional filters.
    • The Java Unified Expression Language (JSR 245) is a small expression language intended to be used in Web pages. Through expressions, functions and methods, it creates the link between the Web page template and the application data exposed as beans.
    • The Java Server Faces (JSR 314 and JSR 344) is a component-driven framework which provides a powerful mechanism for Web applications. Web pages are represented by facelet views (XHTML files) whose elements are modeled as components when a request comes in. A lifecycle mechanism drives the request through converters and validators, triggering events that are received by the application. Navigation rules define which result view must be rendered and returned.
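    As a small illustration of how these three pieces fit together, a facelet view might look like the sketch below. The bean name user and its name and save members are hypothetical examples, not part of any shipped demo:

    ```xml
    <!-- Hypothetical facelet view: the 'user' bean and its 'name'
         and 'save' members are made-up examples. -->
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:h="http://java.sun.com/jsf/html">
      <body>
        <h:form>
          <!-- EL expression binding the input field to a bean property -->
          <h:inputText value="#{user.name}"/>
          <!-- action method resolved through the EL during the lifecycle -->
          <h:commandButton value="Save" action="#{user.save}"/>
        </h:form>
      </body>
    </html>
    ```

    The servlet layer receives the HTTP request, the EL expressions resolve the bean and its members, and the JSF lifecycle validates the input, invokes the action and picks the next view from the navigation rules.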

    Ada Server Faces gives Ada developers a strong web framework that is frequently used in Java Web applications. For their part, Java developers can benefit from the high performance that Ada brings: apart from the language, they use the same design patterns.

    Ada Server Faces

    The new version of Ada Server Faces is available and brings the following changes:

    • The security packages were moved to a separate project: Ada Security,
    • New demo to show OAuth and Facebook API integration,
    • Integrated jQuery 1.8.3 and jQuery UI 1.9.2,
    • New converter to display file sizes,
    • Javascript support was added for click-to-edit behavior,
    • Added support for JSF session beans,
    • Added support for servlet error page customization,
    • Allow navigation rules to handle exceptions raised by Ada bean actions,
    • Support the JSF 2.2 conditional navigation,
    • New functions fn:escapeXml and fn:replace.

    The new version can be downloaded on the Ada Server Faces project page. A live demo is available at


    Ada Utility Library 1.6.1 is available

    By Stephane Carrez

    Ada Utility Library is a collection of utility packages for Ada 2005. This release is a workaround release for gcc 4.7 bug 53737; it makes it possible to build the Ada Utility Library with gcc 4.7.2.


    Ada Security 1.0.0 is available

    By Stephane Carrez

    Ada Security is a security framework which allows web applications to define and enforce security policies. The framework allows users to authenticate by using OpenID Authentication 2.0. Ada Security also defines a set of client methods for using the OAuth 2.0 protocol.

    Ada Security Framework

    • A security policy manager defines and implements the set of security rules that specify how to protect the system or the resources.
    • A user is authenticated in the application. Authentication can be based on OpenID or another system.
    • A security context holds the contextual information that allows the security policy manager to verify that the user is allowed to access the protected resource according to the policy rules.

    The Ada Security framework can be downloaded at the Ada Security project page.

    The framework is the core security framework used by Ada Server Faces and Ada Web Application to protect access to resources.


    Ada Utility Library 1.6.0 is available

    By Stephane Carrez

    Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

    • Support for HTTP clients (curl, AWS, ...)
    • Support for REST APIs using JSON
    • New operations To_JSON and From_JSON for easy object map serialization
    • Added listeners to help implement the observer/listener design pattern
    • Added support for wildcard mapping in serialization framework
    • New option -d <dir> for the unit test harness to change the working directory,
    • New example facebook.adb to show the REST support.

    It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.3). You can download this new version at
