Java 2 Ada

New debian repository with Ada packages

By Stephane Carrez 2014-04-06 15:54:30

I've created and set up a Debian repository giving access to several Debian packages for the Ada projects that I manage. The goal is to provide easy, ready-to-use packages that simplify the installation of various Ada libraries. The Debian repository includes the binary and development packages for Ada Utility Library, Ada EL, Ada Security, and Ada Server Faces.

Access to the repository

The repository packages are signed with PGP. To get the verification key and set up the apt-get tool, run the following command:

wget -O - http://apt.vacs.fr/apt.vacs.fr.gpg.key | sudo apt-key add -

Ubuntu 13.04 Raring

A first repository provides Debian packages targeted at Ubuntu 13.04 raring. They are built with the gnat-4.6 package and depend on libaws-2.10.2-4 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb http://apt.vacs.fr/ubuntu-raring raring main

Ubuntu 12.04 LTS Precise

A second repository contains the Debian packages for Ubuntu 12.04 precise. They are built with the gnat-4.6 package and depend on libaws-2.10.2-1 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb http://apt.vacs.fr/ubuntu-precise precise main

Installation

Once you've added the configuration line, you can install the packages:

sudo apt-get update
sudo apt-get install libada-asf1.0

For the curious, you may browse the repository at http://apt.vacs.fr/.

Ada Server Faces 1.0.0 is available

By Stephane Carrez 2014-04-05 19:05:21

Ada Server Faces is a framework for creating Web applications using the same design patterns as Java Server Faces (see JSR 252, JSR 314, or JSR 344). The presentation pages benefit from the Facelets Web template system and the runtime takes advantage of the Ada language's safety and performance.

A new release is available with several features that help writing online applications:

  • Add support for Facebook and Google+ login
  • Javascript support for popup and editable fields
  • Added support to enable/disable mouseover effect in lists
  • New EL function util:iso8601
  • New component <w:autocomplete> for input text with autocompletion
  • New component <w:gravatar> to render a gravatar image
  • New component <w:like> to render a Facebook, Twitter or Google+ like button
  • New component <w:panel> to provide collapsible div panels
  • New components <w:tabView> and <w:tab> for tabs display
  • New component <w:accordion> to display accordion tabs
  • Add support for JSF <f:facet>, <f:convertDateTime>, <h:doctype>
  • Support for the creation of Debian packages

You can try the online demonstration of the new widget components and download this new release at http://download.vacs.fr/ada-asf/ada-asf-1.0.0.tar.gz

Ada Security 1.1.0 is available

By Stephane Carrez 2014-03-22 14:05:36

The Ada Security library provides a security framework which allows applications to define and enforce security policies. This framework allows users to authenticate by using OpenID Authentication 2.0, OAuth 2.0 or OpenID Connect protocols.

The new version brings the following improvements:

  • New authentication framework that supports OpenID, OpenID Connect, OAuth, Facebook login
  • AWS demo for a Google, Yahoo!, Facebook, Google+ authentication
  • Support to extract JSON Web Token (JWT)
  • Support for the creation of Debian packages

The library can be downloaded at http://download.vacs.fr/ada-security/ada-security-1.1.0.tar.gz

Ada EL 1.5.0 is available

By Stephane Carrez 2014-03-19 21:49:33

Ada EL is a library that implements an expression language similar to JSP and JSF Unified Expression Languages (EL). The expression language is the foundation used by Java Server Faces and Ada Server Faces to make the necessary binding between presentation pages in XML/HTML and the application code written in Java or Ada.

The presentation page uses a UEL expression to retrieve the value provided by some application object (Java or Ada). In the following expression:

#{questionInfo.question.rating}

the EL runtime first retrieves the object registered under the name questionInfo, then looks up the question and rating data members. The data value is then converted to a string.
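
To make this concrete, here is a minimal sketch of evaluating such an expression with the Ada EL library. The names below follow the Ada EL examples (Create_Expression, Get_Value, the default context); registering the questionInfo bean in the context resolver is omitted:

with EL.Expressions;
with EL.Contexts.Default;
with Util.Beans.Objects;
...
   Ctx   : EL.Contexts.Default.Default_Context;
   --  Parse the expression once; it can then be evaluated several times.
   Expr  : constant EL.Expressions.Expression
      := EL.Expressions.Create_Expression ("#{questionInfo.question.rating}", Ctx);
   --  Evaluate: the resolver finds questionInfo, then the question and
   --  rating members, and returns the value as a generic object.
   Value : constant Util.Beans.Objects.Object := Expr.Get_Value (Ctx);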

The new release is available for download at http://download.vacs.fr/ada-el/ada-el-1.5.0.tar.gz

This version brings the following improvements:

  • EL parser optimization (20% to 30% speed up)
  • Support for the creation of Debian packages

Ada Utility Library 1.7.0 is available

By Stephane Carrez 2014-02-09 11:06:03

Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

  • Added a text and string builder
  • Added date helper operations to get the start of day, week or month time
  • Support XmlAda 2013
  • Added Objects.Datasets to provide list beans (lists of row/column objects)
  • Added support for shared library loading
  • Support for the creation of Debian packages
  • Update Ahven integration to 2.3
  • New option -r <test> for the unit test harness to execute a single test
  • Port on FreeBSD

It has been compiled and ported on Linux, Windows, NetBSD and FreeBSD (gcc 4.6, GNAT 2013, gcc 4.7.3). You can download this new version at http://download.vacs.fr/ada-util/ada-util-1.7.0.tar.gz.

Migrating a virtual machine from one server to another

By Stephane Carrez 2014-01-24 23:05:37

OVH is providing new offers that are cheaper and give more CPU power, so it was time for me to pick another server and reduce the cost by 30%. I'm using 7 virtual machines that run either NetBSD, OpenBSD, FreeBSD or Debian. Most are Intel based, but some of them are Sparc or Arm virtual machines. I've collected below the main steps of the migration.

LVM volume creation on the new server

The first step is to create the LVM volume on the new server. The volume should have the same size as the original. The following command creates a 20G volume labeled netbsd.

$ sudo lvcreate -L 20G -Z n -n netbsd vg01
WARNING: "netbsd" not zeroed
Logical volume "netbsd" created

Copying the VM image

After stopping the VM, we can copy the system image from one server to the other by using a combination of dd and ssh. The command must be executed as root, otherwise temporary files and additional copy steps could be necessary.

$ sudo dd if=/dev/vg01/netbsd bs=8192 |
ssh root@master.vacs.fr dd bs=8192 of=/dev/vg01/netbsd

root@master.vacs.fr's password:
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 1858.33 s, 11.6 MB/s
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 1848.62 s, 11.6 MB/s

By compressing the image on the fly, the remote copy is about 4 times faster. The following command does this:

$ sudo dd if=/dev/vg01/netbsd bs=8192 |
gzip -c | ssh root@master.vacs.fr \
'gzip -c -d | dd bs=8192 of=/dev/vg01/netbsd'

root@master.vacs.fr's password:
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 427.313 s, 50.3 MB/s
2621440+0 records in
2621440+0 records out
21474836480 bytes (21 GB) copied, 436.128 s, 49.2 MB/s

Once the copy is done, it's good to verify its integrity. For this, we can run sha1sum on the source image and on the destination image and compare the SHA1 checksums: they must match.

$ sudo sha1sum /dev/vg01/netbsd
04e23ccc1d22cb1de439b43535855b2d1331da6a /dev/vg01/netbsd

(run this command on both servers and compare the result).
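
Alternatively, both checksums can be obtained from the source server in one step (assuming the same volume path on both sides, as in the copy above):

$ sudo sha1sum /dev/vg01/netbsd
$ ssh root@master.vacs.fr sha1sum /dev/vg01/netbsd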

Importing the virtual machine definition

The last step is to copy the virtual machine definition from one server to the other. The definition is an XML file located in the /etc/libvirt/qemu directory. Once copied, run the virsh command on the target server and import the definition:

$ sudo virsh
virsh# define netbsd.xml
virsh# start netbsd

That's it, the virtual machine was migrated at a reasonably small cost: the whole process took less than one hour!

Installation of FreeBSD for a jenkins build node

By Stephane Carrez 2013-12-31 10:59:44

A few days ago, I did a fresh installation of my Jenkins build environment for my Ada projects (this was necessary after a disk crash on my OVH server). I took this opportunity to set up a FreeBSD build node. This article is probably incomplete but it collects a number of tips for such an installation.

Virtual machine setup

The FreeBSD build node is running within a QEMU virtual machine. The choice of the host turns out to be important since not all versions of QEMU are able to run a FreeBSD, NetBSD or OpenBSD system. There is a bug in the QEMU PCI emulation that prevents the NetBSD network driver from recognizing the emulated network cards (see qemu-kvm 1.0 breaks openbsd, netbsd, freebsd). Ubuntu 12.04 and 12.10 provide a version of QEMU that has the problem. This is solved in Ubuntu 13.04, so this is the host Linux distribution that I've installed.

For the virtual machine disk, I've setup some LVM partition on the host as follows:

sudo lvcreate -Z n -L 20G -n freebsd vg01

this creates a disk volume of 20G and labels it freebsd.

The next step is to download the FreeBSD Installation CD (I've installed the FreeBSD-10.0-RC2). To manage the virtual machines, one can use the virsh command but the virt-manager graphical front-end provides an easier setup.

sudo virt-manager

The virtual machine is configured with:

  • CPU: x86_64
  • Memory: 1048576
  • Disk type: raw, source: /dev/vg01/freebsd
  • Network card model: e1000
  • Boot on the CD image

After the virtual machine starts, the FreeBSD installation proceeds (it was so simple that I took no screenshot at all).

Post installation

After the FreeBSD system is installed, it is almost ready to be used. Some additional packages are added by using the pkg install command (which is very close to the Debian apt-get command).

pkg install jed
pkg install sudo bash tcpdump

By default /proc is not set up and some applications like OpenJDK need to access it. Edit the file /etc/fstab and add the following lines:

fdesc   /dev/fd         fdescfs         rw      0       0
proc    /proc           procfs          rw      0       0

and mount the new partitions with:

mount -a

GNAT installation

The FreeBSD repository provides some packages for Ada development. They are easily installed as follows:

pkg install gmake
pkg install gnat-aux-20130412_1 gprbuild-20120510
pkg install xmlada-4.4.0.0_1 zip-ada-45
pkg install aws-3.1.0.0
pkg install gdb-7.6.1_1

After the installation, change the PATH and set the ADA_PROJECT_PATH variable to be able to use gnatmake:

export PATH=/usr/local/gcc-aux/bin:$PATH
export ADA_PROJECT_PATH=/usr/local/lib/gnat

Jenkins slave node installation

Jenkins uses a Java application that runs on each build node, so a Java JRE must be installed. To use Subversion on the build node, make sure to install a 1.6 version since the 1.7 and 1.8 versions have incompatibilities with the Jenkins master. The following packages are necessary:

pkg install openjdk6-jre-b28_7
pkg install subversion-1.6.23_2

Jenkins needs a user to connect to the build node. The user is created with the adduser command; the Jenkins user does not need any privileges.
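
For instance, a non-interactive way to create the account on FreeBSD (the jenkins user name is just a convention; adapt it to your setup):

pw useradd jenkins -m

The -m option creates the home directory that the build jobs will use.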

The Jenkins master will use SSH to connect to the slave node. During the first connection, it installs the slave.jar file which manages the launch of remote builds on the slave. For the SSH connection, password authentication is possible but I've set up public key authentication on the FreeBSD node by using ssh-copy-id.
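
For example, the key pair can be created on the Jenkins master and installed on the node as follows (the user and host names are illustrative):

ssh-keygen -t rsa
ssh-copy-id jenkins@freebsd-node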

At this stage, the FreeBSD build node is ready to be added on the Jenkins master node (through the Jenkins UI Manage Jenkins/Manage Nodes).

MySQL Installation

The MySQL installation is necessary for some of my projects. This is easily done as follows:

pkg install mysql55-server-5.5.35 mysql55-client-5.5.35

Then add the following line to /etc/rc.conf

mysql_enable="YES"

and start the server manually:

/usr/local/etc/rc.d/mysql-server onestart

The database tables are setup during the first start.

Other packages

Some packages that are necessary for some projects:

pkg install autoconf-2.69 curl-7.33.0_1
pkg install ImageMagick-nox11-6.8.0.7_3

Jenkins jobs

The Jenkins master is now building 7 projects automatically for FreeBSD 10: FreeBSD Ada Jobs

Monitoring electric power consumption with the ADTEK Teleinfo USB stick

By Stephane Carrez 2013-09-22 08:57:37

Recent EDF meters have a module that periodically emits information about the electric power consumption. The meter uses a 1200 baud serial protocol and the signal is modulated by a 50 kHz carrier (see téléinformation EDF for the details as well as the EDF Technical Specification). This article explains how to retrieve this information and make it visible through several graphs. In short, the idea is to collect the EDF information, send it to a server, and display all the graphs and results through a Web interface accessible from the Internet.

bbox-teleinfo.png

Teleinfo with the ADTEK USB stick

The Adtek company sells a small Téléinfo USB module that retrieves the teleinfo stream through a serial port. The communication is done at 9600 baud, 8 bits, no parity. On Linux, the two modules usbserial and ftdi_sio must be loaded. Depending on the version of the ftdi driver, the USB stick may not be recognized, in which case the vendor and product identifiers must be given when loading the driver.

insmod usbserial.ko
insmod ftdi_sio.ko vendor=0x0403 product=0x6015

If everything goes well, the driver creates the /dev/ttyUSB0 device when the stick is plugged in:

usbserial: USB Serial Driver core
USB Serial support registered for FTDI USB Serial Device
ftdi_sio 2-2:1.0: FTDI USB Serial Device converter detected
usb 2-2: Detected FT232RL
usb 2-2: FTDI USB Serial Device converter now attached to ttyUSB0
usbcore: registered new interface driver ftdi_sio
ftdi_sio: v1.4.3:USB FTDI Serial Converters Driver

A small monitoring agent

A small monitoring agent continuously reads the EDF teleinfo frames from the serial port. It collects the data and sends the results every 5 minutes, using an HTTP POST, to the server given at startup.

edf-teleinfo /dev/ttyUSB0 http://server/teleinfo.php &

This agent can run on a Raspberry Pi or a BeagleBone Black. In my case, I run it on my Bbox Sensation ADSL. Failing that, a standard PC can be used, but that is not optimal as far as power consumption is concerned. Agent source: edf-teleinfo.c

The agent is compiled with one of the following commands:

gcc -o edf-teleinfo -Wall -O2 edf-teleinfo.c
arm-angstrom-linux-gnueabi-gcc -o edf-teleinfo-arm -Wall -O2 edf-teleinfo.c

Creating the RRDtool files

The EDF meter sends a measure every 2 seconds (the -s option of rrdtool). The power consumption is recorded under two data sources: hc (off-peak hours) and hp (peak hours). The min, max and average are computed over periods of 1 minute (30 measures), 5 minutes (150 measures) and 15 minutes (450 measures).

rrdtool create teleinfo-home.rrd -s 2 \
   DS:hc:COUNTER:300:0:4294967295 \
   DS:hp:COUNTER:300:0:4294967295 \
   RRA:AVERAGE:0.1:30:1800 \
   RRA:MIN:0.1:30:1800 \
   RRA:MAX:0.1:30:1800 \
   RRA:AVERAGE:0.1:150:1800 \
   RRA:MIN:0.1:150:1800 \
   RRA:MAX:0.1:150:1800 \
   RRA:AVERAGE:0.1:450:1800 \
   RRA:MIN:0.1:450:1800 \
   RRA:MAX:0.1:450:1800

While the off-peak and peak hour counters are defined as COUNTER data sources, the instantaneous current and the apparent power are represented as gauges ranging from 0 to 70A and from 0 to 15000W respectively.

rrdtool create teleinfo_power-home.rrd -s 2 \
   DS:ic:GAUGE:300:0:70 \
   DS:pap:GAUGE:300:0:15000 \
   RRA:AVERAGE:0.1:30:1800 \
   RRA:MIN:0.1:30:1800 \
   RRA:MAX:0.1:30:1800 \
   RRA:AVERAGE:0.1:150:1800 \
   RRA:MIN:0.1:150:1800 \
   RRA:MAX:0.1:150:1800 \
   RRA:AVERAGE:0.1:450:1800 \
   RRA:MIN:0.1:450:1800 \
   RRA:MAX:0.1:450:1800

The files need to be created only once, on the server. If they are created in a directory such as /var/lib/collectd/rrd, then Collectd Graph Panel can easily be used to display the graphs.
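
For example, to match the file layout used by the rrdupdate command shown below:

mkdir -p /var/lib/collectd/rrd/home/teleinfo
cd /var/lib/collectd/rrd/home/teleinfo

and run the two rrdtool create commands above from that directory.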

Collecting the information

On the server, a page extracts the parameters from the POST request and fills the RRDtool databases.

The agent sends the following information:

  • date: the Unix time of the first measure,
  • end: the Unix time of the last measure,
  • hc: the counter value for the off-peak hours,
  • hp: the counter value for the peak hours,
  • ic: the instantaneous current,
  • pap: the apparent power.

Since the agent sends the data in batches of 150 values (or more if there were connection problems), the update inserts several values at once. In that case, rrdupdate expects the Unix timestamp followed by the values of the two data sources (current and power). Here is an excerpt of the command:

rrdupdate \
  /var/lib/collectd/rrd/home/teleinfo/teleinfo_power-home.rrd \
  1379885272:4:1040 1379885274:4:1040 1379885276:4:1040 \
  1379885278:4:1040 1379885280:4:1040 1379885282:4:1040 \
  1379885284:4:1040 1379885286:4:1040 1379885288:4:1040 ...

To install the collecting part, copy the edf-collect.php file to the server and make the page reachable through the web server. Source: edf-collect.php.txt

Displaying the information

Collectd Graph Panel is a web application written in PHP and Javascript that displays the graphs collected by collectd. If the graphs are created in the right place, this application will recognize them and display them. For that, add the teleinfo.php plugin in the plugin directory. Source: teleinfo.php.txt

unzip CGP-0.4.1.zip
cp teleinfo.php.txt CGP-0.4.1/plugin/teleinfo.php

Here is the result (it can clearly be improved, but it's a first step)...

And now

Watching your electric power consumption has a somewhat playful side. Sometimes it's surprising to see that the consumption never drops below 200W. That said, this is normal with all those boxes, decoders, switches and other appliances that consume a few watts even in standby.

See also: Suiviconso, Planete Domotique

Integration of Ada Web Server behind an Apache Server

By Stephane Carrez 2013-07-07 07:54:25

When you run several web applications implemented in various languages (PHP, Java, Ada), you end up with integration issues. The PHP application runs within an Apache server, the Java application must run in a Java web server (Tomcat, Jetty), and the Ada application executes within the Ada Web Server. Each of these web servers needs a distinct listening port or distinct IP address. Integration of several web servers on the same host is often done by using a front-end server that handles all incoming requests and dispatches them, if necessary, to other web servers.

In this article I describe the way I have integrated the Ada Web Server. The Apache Server is the front-end server that serves the PHP files as well as the static files and it redirects some requests to the Ada Web Server.

Virtual host definition

The Apache server can run more than one web site on a single machine. Virtual hosts can be IP-based or name-based; we will use the latter because it provides greater scalability. The virtual host definition is bound to the server IP address and the listening port.

<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  ServerAlias demo.vacs.fr
  ServerName demo.vacs.fr
...
  LogLevel warn
  ErrorLog /var/log/apache2/demo-error.log
  CustomLog /var/log/apache2/demo-access.log combined
</VirtualHost>

The ServerName part is matched against the Host: request header that is received by the Apache server.
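
For example, a request reaching this virtual host carries the host name in its headers:

GET /demo/index.html HTTP/1.1
Host: demo.vacs.fr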

The ErrorLog and CustomLog directives are not a required part of the virtual host definition, but they provide dedicated logs, which is useful for troubleshooting issues.

Setting up the proxy

The Apache mod_proxy module must be enabled. This is the module that will redirect the incoming requests to the Ada Web Server.
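
On a Debian or Ubuntu system, this module (together with mod_rewrite, used below) is enabled with a2enmod:

sudo a2enmod proxy proxy_http rewrite
sudo service apache2 restart

The proxy access policy is then defined with a <Proxy> section: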

  <Proxy *>
    AddDefaultCharset off
    Order allow,deny
    Allow from all
  </Proxy>

Redirection rules

The Apache mod_rewrite module must be enabled.

  RewriteEngine On

A first set of rewriting rules redirects the requests for dynamic pages to the Ada Web Server. The [P] flag activates the proxy and redirects the request. The Ada Web Server runs on the same host but uses port 8080.

  # Let AWS serve the dynamic HTML pages.
  RewriteRule ^/demo/(.*).html$ http://localhost:8080/demo/$1.html [P]
  RewriteRule ^/demo/auth/(.*)$ http://localhost:8080/demo/auth/$1 [P]
  RewriteRule ^/demo/statistics.xml$ http://localhost:8080/demo/statistics.xml [P]

When the request is redirected, mod_proxy adds a set of headers that can be used within AWS if necessary.

Via: 1.1 demo.vacs.fr
X-Forwarded-For: 31.39.214.181
X-Forwarded-Host: demo.vacs.fr
X-Forwarded-Server: demo.vacs.fr

The X-Forwarded-For: header indicates the IP address of the client.

Static files

Static files like images, CSS and Javascript files can be served by the Apache front-end server. This is faster than proxying these requests to the Ada Web Server. At the same time, we can set up the expiration and cache headers sent in the response (Expires: and Cache-Control: respectively). The definition below only deals with images that are accessed from the /demo/images/ URL component. The Alias directive maps the URL to the file system directory that holds the files.

  Alias /demo/images/ "/home/htdocs.demo/web/images/"
  <Directory "/home/htdocs.demo/web/images/">
    Options -Indexes +FollowSymLinks

    # Do not check for .htaccess (perf. improvement)
    AllowOverride None
    Order allow,deny
    allow from all
                                                 
    # enable expirations
    ExpiresActive On
                                  
    # Activate the browser caching
    # (CSS, images and scripts should not change)
    ExpiresByType image/png A1296000
    ExpiresByType image/gif A1296000
    ExpiresByType image/jpeg A1296000
  </Directory>

This kind of definition is repeated for each set of static files (javascript and css).

Proxy Overhead

The proxy adds a small overhead that you can measure by using the Apache Benchmark tool. A first run is done on AWS and another on Apache.

ab -n 1000 http://localhost:8080/demo/compute.html
ab -n 1000 http://demo.vacs.fr/demo/compute.html

The overhead depends on the application and the page being served. On this machine, the AWS server can process around 720 requests/sec, and this drops to 550 requests/sec through the Apache front-end (a 23% decrease).

Bacula database cleanup

By Stephane Carrez 2013-06-30 07:41:49

Bacula maintains a catalog of files in a database. Over time, the database grows and despite some automatic purge and job cleanup, some information remains that is no longer necessary. This article explains how to remove some dead records from the Bacula catalog.

Bacula maintains a list of the backup jobs that have been executed in the job table. For each job, it keeps the list of files that have been saved in the file table. When you do a restore, you select the job to restore and pick files from that job. No file entry should be associated with a non-existing job, but unfortunately this is not always the case: I've found that some file entries (more than 2 million) were pointing to jobs that no longer existed.

Discovering dead jobs still referenced

The first step is to find out which job has been deleted and is still referenced by the file table. First, let's create a temporary table that will hold the job ids associated with the files.

mysql> create temporary table job_files (id bigint);

The use of a temporary table was necessary in my case because the file table is so big and the ReadyNAS so slow that scanning the database takes too much time.

Now, we can populate the temporary table with the job ids:

mysql> insert into job_files select distinct file.jobid from file;
Query OK, 350 rows affected (8 min 53.26 sec)
Records: 350  Duplicates: 0  Warnings: 0

The list of jobs that have been removed but are still referenced by a file is obtained by:

mysql> select job_files.id from job_files
 left join job on job_files.id = job.jobid
 where job.jobid is null;
+------+
| id   |
+------+
| 2254 | 
| 2806 | 
+------+
2 rows in set (0.05 sec)

Deleting Dead Files

Deleting all the file records in one go was not possible because there were too many files to delete and the MySQL server did not have enough resources on the ReadyNAS. I had to delete these records in batches of 100,000 files and repeat the process several times (each delete query took more than 2 minutes!).

mysql> delete from file where jobid = 2254 limit 100000;
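
Since the batches must be repeated until no file is left, a small shell loop helps; a sketch, assuming the catalog database is named bacula and the credentials come from ~/.my.cnf:

# Repeat the batched delete until no file references the dead job.
while [ $(mysql -N bacula -e "select count(*) from file where jobid = 2254") -gt 0 ]; do
    mysql bacula -e "delete from file where jobid = 2254 limit 100000"
done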

Conclusion

This cleanup process allowed me to reduce the size of the file table from 10 million entries to 7 million. This improves the database performance and speeds up the Bacula catalog backup process.

Optimization with Valgrind Massif and Cachegrind

By Stephane Carrez 2013-03-02 16:19:53

Memory optimization sometimes reveals nice surprises. I was interested in analyzing the memory used by the Ada Server Faces framework. For this I've profiled the unit test program, which includes 130 tests that cover almost all the features of the framework.

Memory analysis with Valgrind Massif

Massif is a Valgrind tool that is used for heap analysis. It does not require the application to be recompiled and is easy to use: the application is simply executed under Valgrind with the Massif tool. The command that I've used was:

valgrind --tool=massif --threshold=0.1 \
   --detailed-freq=1 --alloc-fn=__gnat_malloc \
   bin/asf_harness -config test.properties

The valgrind tool creates a file massif.out.NNN which contains the analysis. massif-visualizer is a graphical tool that reads the file and lets you analyze the results. It is launched as follows:

massif-visualizer massif.out.19813

(the number is the pid of the process that was running, replace it accordingly).

The tool provides a graphical representation of the memory used over time. It lets you highlight a given memory snapshot and understand roughly where the memory is used.

Memory consumption with Massif [before]

While looking at the result, I was intrigued by a 1MB allocation that was made several times and then released (it creates the visual spikes and corresponds to the big red horizontal bar). It came from the sax-utils.adb file that is part of the XML/Ada library. Looking at the implementation, it turns out that it allocates a hash table with 65536 entries, and this allocation is done each time the SAX parser is created. I've reduced the size of this hash table to 1024 entries. If you want to do it, change the following line in sax/sax-symbols.ads (line 99):

   Hash_Num : constant := 2**16;

by:

   Hash_Num : constant := 2**10;

After rebuilding and checking that there is no regression (yes, it still works), I've re-run the Massif tool; here are the results.

Memory consumption with Massif [after]

The peak memory was reduced from 2.7MB to 2.0MB. The memory usage is now easier to understand and analyse because the 1MB allocation is gone, and other memory allocations now stand out. But wait, there is more: my program is now faster!

Cache analysis with cachegrind

To understand why the program is now faster, I've used Cachegrind, a cache and branch-prediction profiler provided by Valgrind as another tool. I've executed it with the following command:

valgrind --tool=cachegrind \
    bin/asf_harness -config test.properties

I've launched it once before the hash table correction and once after. Similar to Massif, Cachegrind generates a file cachegrind.out.NNN that contains the analysis. You analyze the result by using either cg_annotate or kcachegrind. Having two Cachegrind files, I've used cg_diff to get a diff between the two executions.

cg_diff cachegrind.out.24198 cachegrind.out.23286 > cg.out.1
cg_annotate cg.out.1

Before the fix, the Cachegrind report shows that the most intensive memory operations are performed by the Sax.Htable.Reset operation and by the GNAT code that initializes the Sax.Symbols.Symbol_Table_Record type which contains the big hash table. Dr is the number of data reads, D1mr the L1 cache read misses, Dw the number of data writes and D1mw the L1 cache write misses. A lot of cache misses will slow down the execution: an L1 cache access takes a few cycles while a main memory access can cost several hundred.

--------------------------------------------------------------------------------
         Dr      D1mr          Dw      D1mw 
--------------------------------------------------------------------------------
212,746,571 2,787,355 144,880,212 2,469,782  PROGRAM TOTALS

--------------------------------------------------------------------------------
        Dr      D1mr         Dw      D1mw  file:function
--------------------------------------------------------------------------------
25,000,929 2,081,943     27,672       244  sax/sax-htable.adb:sax__symbols__string_htable__reset
       508       127 33,293,050 2,080,768  sax/sax-htable.adb:sax__symbols__symbol_table_recordIP
43,894,931   129,786  7,532,775     8,677  ???:???
15,021,128     4,140  5,632,923         0  pthread_getspecific
 7,510,564     2,995  7,510,564    10,673  ???:system__task_primitives__operations__specific__selfXnn
 6,134,652    41,357  4,320,817    49,207  _int_malloc
 4,774,547    22,969  1,956,568     4,392  _int_free
 3,753,930         0  5,630,895     5,039  ???:system__task_primitives__operations(short,...)(long, float)

With a smaller hash table, the Cachegrind report indicates a reduction of 24,543,482 data reads and 32,765,323 data writes. The read misses were reduced by 2,086,579 (74%) and the write misses by 2,056,247 (an 83% reduction!).

With a small hash table, the Sax.Symbols.Symbol_Table_Record gets initialized more quickly and its cleanup needs fewer memory accesses, hence fewer CPU cycles. A smaller hash table also means fewer cache misses: a 1MB hash table flushes a big part of the data cache.

--------------------------------------------------------------------------------
         Dr    D1mr          Dw    D1mw 
--------------------------------------------------------------------------------
188,203,089 700,776 112,114,889 413,535  PROGRAM TOTALS

--------------------------------------------------------------------------------
        Dr    D1mr        Dw   D1mw  file:function
--------------------------------------------------------------------------------
43,904,760 120,883 7,532,577  8,407  ???:???
15,028,328      98 5,635,623      0  pthread_getspecific
 7,514,164     288 7,514,164  9,929  ???:system__task_primitives__operations__specific__selfXnn
 6,129,019  39,636 4,305,043 48,446  _int_malloc
 4,784,026  18,626 1,959,387  3,261  _int_free
 3,755,730       0 5,633,595  4,390  ???:system__task_primitives__operations(short,...)(long, float)
 2,418,778      65 2,705,140     14  ???:system__tasking__initialization__abort_undefer
 3,839,603   2,605 1,283,289      0  malloc

Conclusion

Running Massif and Cachegrind is very easy, but it may take some time to figure out how to interpret and use the results. A big hash table is not always a good thing for an application: by creating cache misses it may in fact slow the application down. To learn more about this subject, I recommend the excellent paper What Every Programmer Should Know About Memory by Ulrich Drepper.

Ada Web Application 0.3.0 is available

By Stephane Carrez 2013-02-15 20:45:46

Ada Web Application is a framework to build web applications.

  • AWA uses Ada Server Faces for the web framework. This framework uses several patterns from the Java world such as Java Server Faces and Java Servlets.
  • AWA provides a set of ready-to-use and extendable modules that are common to many web applications. This includes managing the login, authentication, users and permissions.
  • AWA uses an Object Relational Mapping that helps in writing Ada applications on top of MySQL or SQLite databases. The ADO framework maps database objects into Ada records and gives easy access to them.
  • AWA is a model driven engineering framework that lets you design the application data model in UML and generate the corresponding Ada code.

Ada Web Application Architecture

The new version of AWA provides:

  • New jobs plugin to manage asynchronous jobs,
  • New storage plugin to manage a storage space for application documents,
  • New votes plugin to allow voting on items,
  • New question plugin to provide a general purpose Q&A.

AWA can be downloaded at http://code.google.com/p/ada-awa/downloads/list

A live demonstration of various features provided by AWA is available at http://demo.vacs.fr/atlas

Dynamo 0.6.0 is available

By Stephane Carrez 2013-02-10 21:57:02

Dynamo is a tool that helps developers write certain types of Ada applications which use the Ada Server Faces or Ada Database Objects frameworks. Dynamo provides several commands, each performing one specific task in the development process: creation of an application, generation of the database model, generation of the Ada model, creation of the database.

The new version of Dynamo provides:

  • A new command build-doc to extract some documentation from the sources,
  • The generation of MySQL and SQLite schemas from UML models,
  • The generation of Ada database mappings from UML models,
  • The generation of Ada beans from the UML models,
  • A new project template for command line tools using ADO,
  • A new distribution command to merge the resource bundles.

The most important feature is probably the Ada code generation from a UML class diagram. With this, you can design the data model of an application using ArgoUML and generate the Ada model files that will be used to access the database easily through the Ada Database Objects library. The tool also generates the SQL database schema, so that everything is consistent, from your UML model to the Ada implementation and the database tables.

The short tutorial below indicates how to design a UML model with ArgoUML, generate the Ada model files, the SQL files and create the MySQL database.

The Dynamo tool is available at http://code.google.com/p/ada-gen.

To build Dynamo, you will need:

Ada Database Objects 0.4.0 is available

By Stephane Carrez 2013-02-10 21:38:42

Ada Database Objects is an Object Relational Mapping library for the Ada 2005 programming language. It maps database objects into Ada records and gives easy access to databases. Most of the concepts developed for ADO come from the Java Hibernate ORM. ADO supports MySQL and SQLite databases.

The new version brings:

  • Support to reload query definitions,
  • An optimized session factory implementation,
  • Support for customizing the MySQL database connection with MySQL SET statements.

This version can be downloaded at http://code.google.com/p/ada-ado/downloads/list.

Ada Server Faces 0.5.0 is available

By Stephane Carrez 2013-02-05 22:29:27

Ada Server Faces is an Ada implementation of several Java standard web frameworks.

  • The Java Servlet standard (JSR 315) defines the basis for a Java application to be plugged into Web servers. It standardizes the way an HTTP request and HTTP response are represented. It defines the mechanisms by which the requests and responses are passed from the Web server to the application, possibly through additional filters.
  • The Java Unified Expression Language (JSR 245) is a small expression language intended to be used in Web pages. Through the expressions, functions and methods it creates the link between the Web page template and the application data identified as beans.
  • The Java Server Faces standard (JSR 314 and JSR 344) is a component driven framework which provides a powerful mechanism for Web applications. Web pages are represented by facelet views (XHTML files) that are modeled as components when a request comes in. A lifecycle mechanism drives the request through converters and validators, triggering events that are received by the application. Navigation rules define which result view must be rendered and returned.

Ada Server Faces gives Ada developers a strong web framework of the kind frequently used in Java Web applications. For their part, Java developers could benefit from the high performance that Ada brings: apart from the language, they will use the same design patterns.

Ada Server Faces

The new version of Ada Server Faces is available and brings the following changes:

  • The security packages were moved into a separate project: Ada Security,
  • New demo to show OAuth and Facebook API integration,
  • Integrated jQuery 1.8.3 and jQuery UI 1.9.2,
  • New converter to display file sizes,
  • Javascript support was added for click-to-edit behavior,
  • Add support for JSF session beans,
  • Add support for servlet error page customization,
  • Allow navigation rules to handle exceptions raised by Ada bean actions,
  • Support the JSF 2.2 conditional navigation,
  • New functions fn:escapeXml and fn:replace.

The new version can be downloaded on the Ada Server Faces project page. A live demo is available at http://demo.vacs.fr/demo.

Ada Utility Library 1.6.1 is available

By Stephane Carrez 2013-02-03 22:06:08

Ada Utility Library is a collection of utility packages for Ada 2005. This release is provided as a workaround for gcc 4.7 bug 53737: it allows building the Ada Utility Library with gcc 4.7.2.

Ada Security 1.0.0 is available

By Stephane Carrez 2013-01-20 17:54:13

Ada Security is a security framework which allows web applications to define and enforce security policies. The framework allows users to authenticate by using OpenID Authentication 2.0. Ada Security also defines a set of client methods for using the OAuth 2.0 protocol.

Ada Security Framework

  • A security policy manager defines and implements the set of security rules that specify how to protect the system or the resources.
  • A user is authenticated in the application. Authentication can be based on OpenID or another system.
  • A security context holds the contextual information that allows the security policy manager to verify that the user is allowed to access the protected resource according to the policy rules.

The Ada Security framework can be downloaded at Ada Security project page.

The framework is the core security framework used by Ada Server Faces and Ada Web Application to protect access to resources.

Ada Utility Library 1.6.0 is available

By Stephane Carrez 2012-12-26 21:16:23

Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

  • Support for HTTP clients (curl, AWS, ...)
  • Support for REST APIs using JSON
  • New operations To_JSON and From_JSON for easy object map serialization
  • Added listeners to help implement the observer/listener design patterns
  • Added support for wildcard mapping in serialization framework
  • New option -d <dir> for the unit test harness to change the working directory,
  • New example facebook.adb to show the REST support.

It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.3). You can download this new version at http://code.google.com/p/ada-util/downloads/list.

Disabling overlay scrollbar fixes the Thunderbird scrollbar position issue on Ubuntu 12.04

By Stephane Carrez 2012-12-09 08:55:11

Since the Ubuntu 12.04 upgrade, the Thunderbird scrollbar position is not visible anymore. The scrollbar works but you have no visual feedback to know where you are in your long lists. Annoying!!! It turns out that this is a feature of the overlay scrollbar.

To restore the Thunderbird scrollbar, remove the following packages which are causing the trouble:

sudo apt-get remove overlay-scrollbar liboverlay-scrollbar-0.2-0 liboverlay-scrollbar3-0.2-0

Since I disabled it, I've realized many others have the same issue. The How do I disable overlay scrollbars? Q&A gives other hints.

Ada BFD 1.0 is available

By Stephane Carrez 2012-11-11 09:29:47

Ada BFD is an Ada binding for the GNU Binutils BFD library.

It makes it possible to read binary ELF and COFF files by using the GNU BFD. The Ada BFD library lets you:

  • list and scan the ELF sections of an executable or object file,
  • get the content of the ELF sections,
  • get access to the symbol table,
  • use the BFD disassembler

Version 1.0 of this Ada binding library is now available on Ada BFD. This new release brings the following changes:

  • Fixed installation of library
  • Added examples for BfdAda
  • Add support to use the disassembler

Reading a program symbol table with Ada BFD

By Stephane Carrez 2012-11-03 14:55:50

The GNU Binutils provides support for reading and writing program files in various formats such as ELF and COFF. This support is known as BFD, the Binary File Descriptor library (or the Big F*cking Deal). This article illustrates how an Ada application can use the BFD library to access a program's symbol table.

Declarations

The Ada BFD library provides a set of Ada bindings that give access to the BFD library. A binary file such as an object file, an executable or an archive is represented by the Bfd.Files.File_Type limited type. The symbol table is represented by the Bfd.Symbols.Symbol_Table limited type. These two types hold internal data used and managed by the BFD library.

with Bfd.Files;
with Bfd.Sections;
with Bfd.Symbols;
...
  File    : Bfd.Files.File_Type;
  Symbols : Bfd.Symbols.Symbol_Table;

Opening the BFD file

The Open procedure must be called to read the object or executable file whose path is given as argument. The File_Type parameter will be initialized to get access to the binary file information. The Check_Format function must then be called to let the BFD library gather the file format information and verify that it is an object file or an executable.

Bfd.Files.Open (File, Path, "");
if Bfd.Files.Check_Format (File, Bfd.Files.OBJECT) then
    ...
end if;

Loading the symbol table

The symbol table is loaded by using the Read_Symbols procedure.

   Bfd.Symbols.Read_Symbols (File, Symbols);

The resources used by the symbol table will be freed when the symbol table instance is finalized.

Looking for a symbol

Once the symbol table is loaded, we can use the Get_Symbol function to find a symbol knowing its name. If the symbol is not found, a Null_Symbol is returned.

Sym  : Bfd.Symbols.Symbol := Bfd.Symbols.Get_Symbol (Symbols, "_main");
...
if Sym /= Bfd.Symbols.Null_Symbol then
   --  Symbol found
end if;

Each symbol has the following set of information:

  • A name (it may not be unique),
  • A set of flags that describe the symbol (global, local, weak, constructor, TLS, ...),
  • A symbol value,
  • A section to which the symbol belongs.

Sec   : constant Bfd.Sections.Section := Bfd.Symbols.Get_Section (Sym);
Flags : constant Bfd.Symbol_Flags     := Bfd.Symbols.Get_Flags (Sym);
Value : constant Bfd.Symbol_Value     := Bfd.Symbols.Get_Value (Sym);

Before interpreting and using the symbol value returned by Get_Value, you must look at the section to check for an undefined symbol. Indeed, undefined symbols are not yet resolved by the linker, so they have no value and no section. You can check this by using the Is_Undefined_Section function:

if Bfd.Sections.Is_Undefined_Section (Sec) then
   Ada.Text_IO.Put_Line ("undefined symbol");
end if;

When the symbol is defined, you must look at the flags and also the section information to know more about it.

if (Flags and Bfd.Symbols.BSF_GLOBAL) /= 0 then
  Ada.Text_IO.Put_Line ("global symbol in section "
               & Bfd.Sections.Get_Name (Sec)
               & " := " & Bfd.Symbol_Value'Image (Value));

elsif (Flags and Bfd.Symbols.BSF_LOCAL) /= 0 then
  Ada.Text_IO.Put_Line ("local symbol in section "
               & Bfd.Sections.Get_Name (Sec)
               & " := " & Bfd.Symbol_Value'Image (Value));

else
  Ada.Text_IO.Put_Line ("other symbol in section "
                                     & Bfd.Sections.Get_Name (Sec)
                                     & " := " & Bfd.Symbol_Value'Image (Value));
end if;

Conclusion and references

Reading an executable's symbol table is made fairly simple by the Ada BFD library. Furthermore, the library lets you scan the sections, read their content and even use the BFD disassembler.

symbol.adb

BFD Documentation

Fixing the mysql_thread_end cleanup bug in libmysql

By Stephane Carrez 2012-09-07 20:59:50

If you see the following message when an application stops, the fix is for you.

Error in my_thread_global_end(): 5 threads didn't exit

The message comes from the MySQL C connector when the library cleans up before the program exits.

The MySQL C client connector has had a bug for many years in multi-threaded applications: it does not correctly clean up the threads that use the library. Interestingly, the MySQL documentation says you must call mysql_thread_end yourself before a thread stops. This is a shame as it forces developers to find impossible and dirty workarounds, one of them nicely explained in Bug 846602 - MySQL C API missing THR_KEY_mysys.

Root cause

The origin of the bug is a wrong use, or a misunderstanding, of POSIX threads. The library uses thread-specific data management through pthread_getspecific and pthread_setspecific to store a dynamically allocated object associated with each thread. The POSIX 1003.1c standard defines a cleanup mechanism that can be used to automatically reclaim the storage that was allocated for thread-specific data.

pthread_key_create is the operation called to create a new thread-specific data key. When the key is created, a cleanup handler can be registered; it will be called with the thread-specific data before the thread exits. A cleanup handler is as simple as the following function:

static void thread_cleanup(void* data)
{
    free(data);
}

With such cleanup handler, the pthread data key must be created as follows:

static pthread_key_t  THR_KEY_mysys;
...
  pthread_key_create(&THR_KEY_mysys, thread_cleanup);

The MySQL C connector does not specify any cleanup handler when creating the pthread data key. The patch attached to this post defines the cleanup handler and installs it as described above.

Fixing MySQL C connector

To fix the MySQL connector, you need to get the sources, apply the patch and rebuild. Download the Connector/C sources from the MySQL site. The patch is against 6.0.2, but you should be able to apply it to other versions easily (it patches the file mysys/my_thr_init.c, which has not changed since 2009). Extract the sources and apply the patch:

tar xzf mysql-connector-c-6.0.2.tar.gz
patch < fix-mysql-connector.diff

To build the connector, you will need CMake. The build process is explained in the sources (read BUILD.unix).

cmake -G "Unix Makefiles"
make
make install

Beware that before building you may have to change some configuration setup according to your system (check include/mysql_version.h).

Patch

fix-mysql-connector.diff

Using the Facebook API

By Stephane Carrez 2012-06-03 21:15:36

Through this article you will learn how to use the OAuth 2.0 framework to let an application access service provider APIs such as the Facebook API, the Google+ API and others. Although this article uses Ada as the programming language and Facebook as the service provider, most of it also applies to other programming languages and other service providers.

Overview

OAuth 2.0 is an open standard for authorization. It is used by service providers as the authorization mechanism for most of their APIs. The authorization workflow is depicted below:

  • [1], first the user goes to the application, which displays a link to the OAuth API provider asking the user to grant access to his data for the application,
  • [2], the user clicks on the authenticate link and grants access to the application,
  • [3.1], the OAuth server redirects the user to a callback URL and provides an application grant code,
  • [3.3], the application asks the API provider to transform the grant code into an access token,
  • [4], the application invokes the API provider with the access token.

OAuth Workflow

Registering the application

The first step is to register the application in the service provider (Facebook, Google+, Twitter, ...). This registration process is specific to the provider. From this registration, several elements will be defined:

  • An application id is allocated. This identifier is public. This is the client_id parameter in OAuth 2.0.
  • An application secret is defined. It must be kept private to the application. This is the secret parameter in OAuth 2.0.
  • A callback URL or domain name is registered in the service provider. As far as I'm concerned, I had to register the domain demo.vacs.fr.

Facebook OAuth

For the OAuth authorization process, we will use the Ada Security library and its Application type. We will extend that type to expose some EL variables and an EL method that will be used in the authorization process. The Ada Server Faces Application Example part 3: the action bean explains how to do that, so many details will not be covered by this article.

type Facebook_Auth is new Security.OAuth.Clients.Application
     and Util.Beans.Basic.Readonly_Bean
     and Util.Beans.Methods.Method_Bean with private;

FB_Auth      : aliased Facebook.Facebook_Auth;

Before anything we have to initialize the Application type to set up the application identifier, the application secret, the provider URL and a callback URL.

FB_Auth.Set_Application_Identifier ("116337738505130");
FB_Auth.Set_Application_Secret ("xxxxxxxxxxx");
FB_Auth.Set_Application_Callback ("http://demo.vacs.fr/oauth/localhost:8080/demo/oauth_callback.html");

The first step in the OAuth process is to build the authorization page with the link that redirects the user to the service provider OAuth authorization process. The link must be created by using the Get_State and Get_Auth_Params functions provided by the Application type. The first one generates a secure unique key that will be returned back by the service provider. The second one builds a list of request parameters that are necessary for the service provider to identify the application and redirect the user back to the application once the authentication is done.

Id : constant String := "...";
State  : constant String := FB_Auth.Get_State (Id);
Params : constant String := FB_Auth.Get_Auth_Params (State, "read_stream");

For a Facebook authorization process, the URI would be created as follows:

URI : constant String := "https://www.facebook.com/dialog/oauth?" & Params;

For another service provider, the process is similar but the URL is different.
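
For example, with Google the same parameters would be appended to Google's OAuth 2.0 endpoint (as documented at the time of writing):

URI : constant String := "https://accounts.google.com/o/oauth2/auth?" & Params;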

OAuth callback

When the user has granted access to his data, he is redirected to the callback defined by the application. Most service providers require the OAuth callback to be a public URL. If you want to run your application on localhost (which is the case when you are developing), you will need a second redirection. If you are using the Apache server, you can easily set up a rewrite rule:

RewriteRule ^/oauth/localhost:([0-9]+)/(.*) http://localhost:$1/$2 [L,R=302]

With the above rewrite rule, the callback given to the OAuth provider would look like:

http://demo.vacs.fr/oauth/localhost:8080/demo/oauth_callback.html

The OAuth provider will first redirect to the public internet site which will redirect again to localhost and port 8080.

Getting the OAuth access token

The next step is to receive the code parameter from the callback, which grants the application access to the service provider API. For this, we will use an XHTML view file and a view action that will be executed when the page is displayed. When this happens, the EL method authenticate is called on the facebook bean (i.e., our FB_Auth instance).

<f:view xmlns:f="http://java.sun.com/jsf/core">
    <f:metadata>
        <f:viewAction action="#{facebook.authenticate}"/>
    </f:metadata>
</f:view>

The Authenticate procedure extracts the OAuth state and code parameters from the request. It verifies that the state parameter is a valid key that we submitted, and it makes an HTTP POST request on the OAuth service provider to transform the code into an access token. This step is handled by the Ada Security library through the Request_Access_Token operation.

procedure Authenticate (From    : in out Facebook_Auth;
                        Outcome : in out Ada.Strings.Unbounded.Unbounded_String) is
   use type Security.OAuth.Clients.Access_Token_Access;

   F       : constant ASF.Contexts.Faces.Faces_Context_Access := ASF.Contexts.Faces.Current;
   State : constant String := F.Get_Parameter (Security.OAuth.State);
   Code  : constant String := F.Get_Parameter (Security.OAuth.Code);
   Session : ASF.Sessions.Session := F.Get_Session;
begin
   Log.Info ("Auth code {0} for state {1}", Code, State);
   if Session.Is_Valid then
      if From.Is_Valid_State (Session.Get_Id, State) then
         declare
            Acc : Security.OAuth.Clients.Access_Token_Access
              := From.Request_Access_Token (Code);
         begin
            if Acc /= null then
               Log.Info ("Access token is {0}", Acc.Get_Name);
               Session.Set_Attribute ("access_token",
                                      Util.Beans.Objects.To_Object (Acc.Get_Name));
            end if;
         end;
      end if;
   end if;
end Authenticate;

The access token must be saved in the user session or some other per-user safe storage so that it can be retrieved later. The access token can expire; when this happens, a fresh access token must be obtained.
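
Later requests can read the token back from the session before invoking the API; a sketch based on the Set_Attribute call above:

--  Retrieve the access token saved by the Authenticate procedure.
Token : constant Util.Beans.Objects.Object := Session.Get_Attribute ("access_token");
Name  : constant String := Util.Beans.Objects.To_String (Token);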

Getting the Facebook friends

Until now we have dealt with the authorization process. Let's look at using the service provider API and see how the Ada Utility Library will help in this task.

Defining the Ada beans

To represent the API result, we will use an Ada bean object that can easily be used from a presentation page. For the Facebook friend, a name and an identifier are necessary:

type Friend_Info is new Util.Beans.Basic.Readonly_Bean with record
   Name : Util.Beans.Objects.Object;
   Id   : Util.Beans.Objects.Object;
end record;
type Friend_Info_Access is access all Friend_Info;

Having a bean type to represent each friend, we will get a list of friends by instantiating the Ada bean Lists package:

package Friend_List is new Util.Beans.Basic.Lists (Element_Type => Friend_Info);

Mapping JSON or XML to Ada

The Ada Utility library provides a mechanism that parses JSON or XML and maps the result into Ada objects. To be able to read the Facebook friend definition, we have to define an enum and implement a Set_Member procedure. This procedure is called by the JSON/XML parser when a given data field is recognized and extracted.

type Friend_Field_Type is (FIELD_NAME, FIELD_ID);

procedure Set_Member (Into : in out Friend_Info;
                      Field : in Friend_Field_Type;
                      Value : in Util.Beans.Objects.Object);

The Set_Member procedure is rather simple as it just populates the data record with the value.

procedure Set_Member (Into : in out Friend_Info;
                      Field : in Friend_Field_Type;
                      Value : in Util.Beans.Objects.Object) is
begin
   case Field is
      when FIELD_ID =>
         Into.Id := Value;

      when FIELD_NAME =>
         Into.Name := Value;

   end case;
end Set_Member;

The mapper is a package that defines and controls how to map the JSON/XML data fields into the Ada record by using the Set_Member operation. We just have to instantiate the package. The Record_Mapper generic package will map JSON/XML into the Ada record and the Vector_Mapper will map a list of JSON/XML elements following a given structure into an Ada vector.

package Friend_Mapper is
  new Util.Serialize.Mappers.Record_Mapper (Element_Type        => Friend_Info,
                                            Element_Type_Access => Friend_Info_Access,
                                            Fields              => Friend_Field_Type,
                                            Set_Member          => Set_Member);

package Friend_Vector_Mapper is
   new Util.Serialize.Mappers.Vector_Mapper (Vectors        => Friend_List.Vectors,
                                             Element_Mapper => Friend_Mapper);

Now we need to control how the JSON/XML fields are mapped to our Ada fields. For this we have to set up the mapping. The Facebook JSON structure is so simple that we can use the default mapping provided by the mapper; for this we use the Add_Default_Mapping procedure. We also have to tell the friend vector mapper which JSON mapping to use.

Friend_Map        : aliased Friend_Mapper.Mapper;
Friend_Vector_Map : aliased Friend_Vector_Mapper.Mapper;
...
   Friend_Map.Add_Default_Mapping;
   Friend_Vector_Map.Set_Mapping (Friend_Map'Access);
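Here the default mapping works because the JSON field names (name, id) match our enumeration literals. If the JSON fields were named differently, an explicit mapping would be needed. As a hypothetical sketch, assuming the Record_Mapper exposes an Add_Mapping operation taking a JSON path and a field (check the Util.Serialize.Mappers API):

   --  Assumed API: Add_Mapping binds a JSON field name to an enum field.
   Friend_Map.Add_Mapping ("name", FIELD_NAME);
   Friend_Map.Add_Mapping ("id", FIELD_ID);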

Creating the REST client

Now it would be nice if we could get an operation that invokes the service provider API through an HTTP GET operation and put the result in our Ada object. The Facebook friends API returns a list of friends which correspond to our Friend_List.Vectors. To get our operation, we just have to instantiate the Rest_Get_Vector operation with our vector mapper (the generic parameter is a package name).

procedure Get_Friends is
  new Util.Http.Rest.Rest_Get_Vector (Vector_Mapper => Friend_Vector_Mapper);

Calling the REST client

Invoking the service provider API is now as simple as calling a procedure. The URI must include the access token as a parameter. The HTTP GET operation must be made using SSL/TLS since this is required by OAuth 2.0.

 List : Friend_List.List_Bean;
 ...
    Get_Friends ("https://graph.facebook.com/me/friends?access_token=" & Token,
                 Friend_Vector_Map'Access,
                 "/data",
                 List.List'Access);

Now you are ready to use and access the user's data as easily as other information...

References

facebook.ads

facebook.adb

Facebook API

AWA 0.2 is available

By Stephane Carrez 2012-05-24 20:03:25

Ada Web Application is a framework to build web applications easily on top of Ada Server Faces, Ada Database Objects and Ada Web Server.

The new version of the framework provides:

  • A new event framework with configurable action listeners,
  • Persistent event queues for the event framework,
  • A new blog module and wiki engine supporting Google Wiki, Creole, MediaWiki, phpBB and Dotclear syntaxes,
  • New mail UI components that make it easy to generate and send email from the ASF presentation pages,
  • A new Javascript plugin Markedit with jQuery Markedit (MIT License)

This new version can be downloaded at http://code.google.com/p/ada-awa/downloads/list (downloading the awa-all package is recommended to get the project and its dependencies).

A demo of an AWA application is available at http://demo.vacs.fr/atlas/

Atlas, the Ada Web Application demonstrator

By Stephane Carrez 2012-05-22 21:12:41

AWA is a framework to build web applications on top of various robust components.

  • AWA uses Ada Server Faces for the web framework. This framework uses several patterns from the Java world such as Java Server Faces and Java Servlets.
  • AWA is architected around modules and plugins that allow building, re-using and extending modules made of Ada code, Web pages or Javascript.
  • AWA provides a set of ready-to-use and extendable plugins that are common to many web applications. This includes login and authentication management, users, permissions, a mail plugin, a blog plugin and a lightweight Javascript editor.
  • AWA uses an Object Relational Mapping that helps in writing Ada applications on top of MySQL or SQLite databases. The ADO framework maps database objects into Ada records and makes them easy to access.
  • AWA integrates a configurable event service which allows plugins to interact and connect with each other easily (either synchronously or asynchronously). The event service provided by AWA is heavily inspired by the Java Message Service.

To learn more about how to easily create a web application using AWA, look at the 4-minute video.

The Atlas Web Application Demonstrator is a demonstration of an application using this AWA framework.

Dynamo 0.5.0 is available

By Stephane Carrez 2012-05-20 19:48:13

Dynamo is a tool to help developers write an Ada Web Application using the Ada Server Faces and the Ada Database Objects frameworks. Dynamo provides several commands, each performing one specific task in the development process: creating an application, generating the database model, generating the Ada model, creating the database.

The new version of Dynamo provides:

  • Support multi-line comments in XML mappings,
  • Generate List_Bean types for the XML mapped queries,
  • Add support for Ada enum generation,
  • Add test template generation,
  • Add AWA service template generation,
  • Add support for blob model mapping,
  • New commands 'add-ajax-form', 'add-query', 'dist', 'create-plugin'

The Dynamo tool is available at http://code.google.com/p/ada-gen. To build Dynamo, you will need:

Ada Database Objects 0.3.0 is available

By Stephane Carrez 2012-05-12 20:17:07

Ada Database Objects is an Object Relational Mapping library for the Ada05 programming language. It maps database objects into Ada records and makes databases easy to access. Most of the concepts developed for ADO come from the Java Hibernate ORM. ADO supports MySQL and SQLite databases.

The new version brings:

  • Support to update database records when a field is really modified,
  • Customization of the SQLite database connection by using SQLite PRAGMAs,
  • Escape of MySQL or SQLite reserved keywords,
  • Support for blob type.

This version can be downloaded at http://code.google.com/p/ada-ado/downloads/list.

Ada Server Faces 0.4.0 is available

By Stephane Carrez 2012-05-11 21:12:36

Ada Server Faces is a web framework which uses the Java Server Faces design patterns (see JSR 252, JSR 314 or the latest one, JSR 344).

This new version brings a serious step ahead towards JSF compatibility. This new version provides:

  • Support for shared or static build configuration,
  • Support for file upload,
  • New components <h:inputFile>, <f:metadata>, <f:viewParam>, <f:viewAction>,
  • New EL function util:hasMessage,
  • ASF now implements the JSF phase events and phase listeners,
  • Implements the JSF/Ruby on Rails flash context,
  • Adds the pre-defined JSF beans: initParam, flash,
  • Support for locales and honors the Accept-Language,
  • New demos are available in French and English

It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.2). You can download this new version at http://code.google.com/p/ada-asf/downloads/list.

A live demo is available at: http://demo.vacs.fr.

Feel free to play with the OpenID stuff!!!

Ada Utility Library 1.5.0 is available

By Stephane Carrez 2012-05-09 20:30:36

Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

  • Concurrent fifo queues and arrays
  • Changed Objects.Maps to use a String instead of an Unbounded_String as the key
  • Support for shared or static build configuration
  • Implementation of input/output/error redirection to a file for process launch

It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.2). You can download this new version at http://code.google.com/p/ada-util/downloads/list.

Process creation in Java and Ada

By Stephane Carrez 2012-03-16 20:06:26

When developing and integrating applications together, it is often useful to launch an external program. By doing this, many integration and implementation details are simplified (this integration technique also avoids license issues with some open source software).

This article explains how to launch an external program in Java and Ada and be able to read the process output to get the result.

Java Process Creation

The process creation is managed by the ProcessBuilder Java class. An instance of this class holds the necessary information to create and launch a new process. This includes the command line and its arguments, the standard input and outputs, the environment variables and the working directory.

The process builder instance is created by specifying the command and its arguments. For this the constructor accepts a variable list of parameters of type String.

import java.lang.ProcessBuilder;
...
  final String cmd = "ls";
  final String arg1 = "-l";
  final ProcessBuilder pb = new ProcessBuilder(cmd, arg1);

When the process builder is initialized, we can invoke the start method to create a new process. Each process is then represented by an instance of the Process class. It is possible to invoke start several times and each call creates a new process. It is necessary to catch the IOException which can be raised if the process cannot be created.

import java.lang.Process;
...
  try {
    final Process p = pb.start();
    ...

  } catch (final IOException ex) {
    System.err.println("IO error: " + ex.getLocalizedMessage());
  }

The Process class gives access to the process output through an input stream represented by the InputStream class. With this input stream, we can read what the process writes on its output. We will use a BufferedReader class to read that output line by line.

import java.io.*;
...
  final InputStream is = p.getInputStream();
  final BufferedReader reader = new BufferedReader(new InputStreamReader(is));

By using the readLine method, we can read a new line after each call. Once the whole stream is read, we have to close it. Closing the BufferedReader will close the InputStream associated with it.

  String line;
  while ((line = reader.readLine()) != null) {
    System.out.println(line);
  }
  reader.close();

The last step is to wait for the process termination and get the exit status: we can use the waitFor method. Since this method can be interrupted, we have to catch the InterruptedException.

  try {
    ...
    final int exit = p.waitFor();
    if (exit != 0) {
      System.err.printf("Command exited with status %d\n", exit);
    }
  }  catch (final InterruptedException ex) {
    System.err.println("Launch was interrupted...");
  }

You can get the complete source from the file: Launch.java

Ada Process Creation

For the Ada example, we will create an application that invokes the nslookup utility to resolve a set of host names. The list of host names is provided to nslookup by writing on its standard input and the result is collected by reading the output.

We will use the Pipe_Stream to launch the process, write on its input and read its output at the same time. The process is launched by calling the Open procedure and specifying the pipe redirection modes: READ is for reading the process output, WRITE is for writing to its input and READ_WRITE is for both.

with Util.Processes;
with Util.Streams.Pipes;
...
   Pipe    : aliased Util.Streams.Pipes.Pipe_Stream;

   Pipe.Open ("nslookup", Util.Processes.READ_WRITE);

We can read or write on the pipe directly but using a Print_Stream to write the text and the Buffered_Stream to read the result simplifies the implementation. Both of them are connected to the pipe: the Print_Stream will use the pipe output stream and the Buffered_Stream will use the pipe input stream.

with Util.Streams.Buffered;
with Util.Streams.Texts;
...
   Buffer  : Util.Streams.Buffered.Buffered_Stream;
   Print   : Util.Streams.Texts.Print_Stream;
begin
   --  Buffer will read the process output; Print will write on its input.
   Buffer.Initialize (null, Pipe'Unchecked_Access, 1024);
   Print.Initialize (Pipe'Unchecked_Access);

Before reading the process output, we send the input data to be resolved by the process. By closing the print stream, we also close the pipe output stream, thus closing the process standard input.

   Print.Write ("www.google.com" & ASCII.LF);
   Print.Write ("set type=NS" & ASCII.LF);
   Print.Write ("www.google.com" & ASCII.LF);
   Print.Write ("set type=MX" & ASCII.LF);
   Print.Write ("www.google.com" & ASCII.LF);
   Print.Close;

We can now read the program output by using the Read procedure and get the result in the Content string. The Close procedure is invoked on the pipe to close the pipe (input and output) and wait for the application termination.

   Content : Unbounded_String;

   --  Read the 'nslookup' output.
   Buffer.Read (Content);
   Pipe.Close;

Once the process has terminated, we can get the exit status by using the Get_Exit_Status function.

   Ada.Text_IO.Put_Line ("Exit status: "
       & Integer'Image (Pipe.Get_Exit_Status));


References

launch.adb

util-streams-pipes.ads

util-streams-buffered.ads

util-streams-texts.ads

Ada Server Faces 0.3.0 is available

By Stephane Carrez 2012-02-14 21:09:39

Ada Server Faces is a web framework which uses the Java Server Faces design patterns (See JSR 252 and JSR 314).

JSF and ASF use a component-based model for the design and implementation of a web application. The presentation layer is implemented using XML or XHTML files and the component layer is implemented in Ada 05 for ASF and in Java for JSF.

A new version of ASF is available which provides:

  • New components used in HTML forms (textarea, select, label, hidden),
  • New components for the AJAX framework,
  • Support for dialog boxes with jQuery UI,
  • Pre-defined beans in ASF contexts: param, header,
  • A complete set of example and documentation for each tag.

It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.2). You can download this new version at http://code.google.com/p/ada-asf/downloads/list.

A live demo is available at: http://demo.vacs.fr.

Ada perfect hash generation with gperfhash

By Stephane Carrez 2012-01-16 21:31:23

A perfect hash function is a function that returns a distinct hash number for each keyword of a well defined set. gperf is a famous and well known perfect hash generator used for the C and C++ languages; Ada is not supported.

The gperfhash tool is a sample from the Ada Utility Library which generates an Ada package that implements such a perfect hash operation. It is not as complete as gperf but makes it easy to get a hash operation. The gperfhash tool uses the GNAT package GNAT.Perfect_Hash_Generators.

Prerequisite

Since the gperfhash tool is provided by the Ada Util samples, you must build these samples with the following command:

$ gnatmake -Psamples

Define a keyword file

First, create a file which contains one keyword on each line. For example, let's write a keywords.txt file which contains the following three keywords:

int
select
print

Generate the package

Run the gperfhash tool and give it the package name.

$ gperfhash -p Hashing keywords.txt

The package defines a Hash and an Is_Keyword function. The Hash function returns a hash number for each string passed as argument. The hash number will be different for each string that matches one of our keywords. You can give a string that is not in the keyword list; in that case the hash function will return a number that collides with the hash number of one of our keywords.

The Is_Keyword function lets you check whether a string is a keyword of the list. This is very useful when you just want to know whether a string is a reserved keyword in some application.

The package specification is the following:

--  Generated by gperfhash
package Hashing is

   function Hash (S : String) return Natural;

   --  Returns true if the string <b>S</b> is a keyword.
   function Is_Keyword (S : in String) return Boolean;

   type Name_Access is access constant String;
   type Keyword_Array is array (Natural range <>) of Name_Access;
   Keywords : constant Keyword_Array;
private
...
end Hashing;

How to use the hash

Using the perfect hash generator is simple:

with Ada.Text_IO;
with Hashing;

procedure Check (S : in String) is
begin
   if Hashing.Is_Keyword (S) then
      --  'S' is one of our keywords.
      Ada.Text_IO.Put_Line (S & " is a keyword, hash" & Natural'Image (Hashing.Hash (S)));
   else
      --  No, it's not a keyword.
      Ada.Text_IO.Put_Line (S & " is not a keyword");
   end if;
end Check;

Ada Utility Library 1.4.0 is available

By Stephane Carrez 2012-01-15 19:32:22

Ada Utility Library is a collection of utility packages for Ada 2005. A new version is available which provides:

  • Support for localized date format,
  • Support for process creation and pipe streams (on Unix and Windows),
  • Support for CSV in the serialization framework,
  • Integration of Ahven 2.1 for the unit tests (activate with --enable-ahven),
  • A tool to generate perfect hash function

It has been compiled and ported on Linux, Windows and NetBSD (gcc 4.4, GNAT 2011, gcc 4.6.2). You can download this new version at http://code.google.com/p/ada-util/downloads/list.

Aunit vs Ahven

By Stephane Carrez 2011-11-27 17:25:44

AUnit and Ahven are two testing frameworks for Ada. Both of them are inspired by the well known JUnit Java framework. Having some issues with the AUnit testing framework, I wanted to explore the use of Ahven. This article gives some comparison elements between the two unit test frameworks. I do not pretend to list all the differences since both frameworks are excellent.

Writing a unit test is equally simple in both frameworks. They however have some differences that may not be visible at the first glance.

AUnit

AUnit is a unit test framework developed by Ed Falis and maintained by AdaCore. It is distributed under the GNU GPL License.

Some good points:

  • AUnit has good support for reporting where a test failed. Indeed, the Assert procedures will report the source file and line number where the assertion failed.
  • AUnit is also able to dump the exception stack trace in symbolic form. This is useful to quickly find the source of a problem.

Some bad points:

  • AUnit has several memory leaks, which is quite annoying when you want to track memory leaks with valgrind.
  • AUnit does not integrate easily with JUnit-based XML tools. In particular, the XML file it creates can be invalid in some cases (special characters in names). More annoying is the fact that the XML format is not compatible with the JUnit XML format.

Ahven

Ahven is another unit test framework, developed by Tero Koskinen. It is distributed under the permissive ISC License.

Some good points:

  • The Ahven license is a better model for proprietary unit tests.
  • Ahven generates XML result files which are compatible with JUnit XML result files. Integration with automatic build tools such as Jenkins is easier.
  • Ahven XML result files can integrate the test output (as in JUnit). This is useful to analyze a problem.
  • Ahven has a test case timeout which is useful to detect and stop blocking tests.

Some bad points:

  • The lack of precise information in messages (source line, exception trace) can make it harder to find out why a test failed.

Don't choose and be prepared to use both with Ada Util

The unit tests I've written were done for AUnit and I had around 329 tests to migrate. To help the migration to Ahven, I wrote a Util.XUnit package which exposes a common interface on top of AUnit or Ahven. It turns out that this is easy and quite small. The package has one specific implementation (spec+body) for each framework. All the unit tests have to use it instead of the AUnit or Ahven packages.

The AUnit implementation (util-xunit.ads) defines several types which are also defined in the Ahven implementation.

package Util.XUnit is
...
   subtype Status is AUnit.Status;

   Success : constant Status := AUnit.Success;
   Failure : constant Status := AUnit.Failure;

   subtype Message_String is AUnit.Message_String;
   subtype Test_Suite is AUnit.Test_Suites.Test_Suite;
   subtype Access_Test_Suite is AUnit.Test_Suites.Access_Test_Suite;

   type Test_Case is abstract new AUnit.Simple_Test_Cases.Test_Case with null record;
   type Test is abstract new AUnit.Test_Fixtures.Test_Fixture with null record;
...
end Util.XUnit;

The XUnit implementation for Ahven is a little bit more complex. Because all my tests were using the AUnit interface, I decided to keep almost the same API, and thus I had to simulate what is missing or different.

Ahven implementation: util-xunit.ads

package Util.XUnit is
...
   type Status is (Success, Failure);

   subtype Message_String is String;
   subtype Test_Suite is Ahven.Framework.Test_Suite;
   type Access_Test_Suite is access all Test_Suite;

   type Test_Case is abstract new Ahven.Framework.Test_Case with null record;
   type Test is new Ahven.Framework.Test_Case with null record;
...
end Util.XUnit;

The choice of the unit test framework is done when the Ada Utility library is configured.
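For example, as mentioned in the Ada Utility Library 1.4.0 announcement above, Ahven is activated with a configure option:

$ ./configure --enable-ahven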

How to expand disk capacity on a ReadyNAS Duo from 1 TB to 2 TB

By Stephane Carrez 2011-09-06 19:51:19

This article describes the process to increase the disk capacity of a ReadyNAS Duo configuration from 1 TB to 2 TB. In my case, my X-RAID configuration was broken due to a faulty disk. I took the opportunity to repair the redundancy and also to increase the capacity. The process is simple but very long. It took me 4 days, several reboots and many disk synchronisations.

To replace the 1TB disks, I bought two Seagate ST2000DL003-9VT166 hard disks which offer 2TB (they are referenced in the hardware compatibility list). I then followed this process:

  1. Upgrade to the latest RAIDiator firmware (4.1.7)
  2. Replace a first disk by the new larger disk (in my case the faulty disk)
  3. Wait until the disks are fully synchronized (Status should be Redundant)
  4. Shutdown properly and restart the ReadyNAS
  5. Make sure the disks are fully synchronized. If not, wait for synchronization to finish.
  6. Replace the second disk by the larger disk
  7. Wait until the disks are fully synchronized (Status should be Redundant)
  8. Shutdown properly and restart the ReadyNAS
  9. After the reboot, the ReadyNAS triggers a disk expand
  10. Another reboot is necessary after which ReadyNAS triggers the file system expansion

ReadyNAS Disk Expansion

The disk expansion happens at the very end and is fairly quick. Before the disk expansion, and when the new disks are installed, you will see that the disk partition table has not changed. The fdisk /dev/hdc command reports:

Device Boot   Start      End      Blocks   Id  System
/dev/hdc1         1      255     2048000   83  Linux
/dev/hdc2       255      287      256000   82  Linux swap
/dev/hdc3       287   121575   974242116    5  Extended
/dev/hdc5       287   121575   974242115+  8e  Linux LVM

Since ReadyNAS uses LVM to manage the disks, you can use pvdisplay to look at the available space. At this stage, everything is used.

nas-D2-24-F2:/var/log/frontview# pvdisplay
  --- Physical volume ---
  PV Name           /dev/hdc5
  VG Name           c
  PV Size           929.09 GB / not usable 0   
  Allocatable       yes (but full)
  PE Size (KByte)   32768
  Total PE          29731
  Free PE           0
  Allocated PE      29731
  PV UUID           huL1xb-0v0O-vJ6K-LqaK-P4kf-q4Wm-SFeYCX

After the reboot, the ReadyNAS will start the disk expand process. It will do this only if the two disks are redundant. After the expand, the partition table looks as follows:

Device Boot    Start       End      Blocks   Id  System
/dev/hdc1        1       255     2048000   83  Linux
/dev/hdc2      255       287      256000   82  Linux swap
/dev/hdc3      287    243201  1951200343    5  Extended
/dev/hdc5      287    121575   974242115+  8e  Linux LVM
/dev/hdc6   121575    243200   976950032   8e  Linux LVM

Once the partition table is fixed, you are asked to reboot:

The first stage of the in-place volume expansion is done.
Please reboot the device to complete the volume expansion.

After the reboot, the LVM volumes are increased. You can check with pvdisplay which now reports the new disk partition and with lvdisplay which takes into account the two physical volumes.

nas-D2-24-F2:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/hdc5
  VG Name               c
  PV Size               929.09 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              29731
  Free PE               0
  Allocated PE          29731
  PV UUID               huL1xb-0v0O-vJ6K-LqaK-P4kf-q4Wm-SFeYCX
   
  --- Physical volume ---
  PV Name               /dev/hdc6
  VG Name               c
  PV Size               931.69 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              29814
  Free PE               0
  Allocated PE          29814
  PV UUID               TOqmR2-fYOq-jf0q-n1ka-N9K1-B2CB-oyU23Y

nas-D2-24-F2:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/c/c
  VG Name                c
  LV UUID                2CzUXf-uzSD-DGcS-KePF-6elz-XveS-xePwHf
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                1.82 TB
  Current LE             59545
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0

The last step is now to resize the file system. The ReadyNAS reports the following after the LVM volume is expanded:

Your system will now begin online expansion. 
Please do not reboot until you receive notification that the expansion is complete.

And while the expansion is in progress, you will see that the ReadyNAS uses resize2fs to grow the file system. If you look at the running processes, you will see the following:

root 1371  ?  S    21:41   0:00 /bin/bash /frontview/bin/expand_online
root 1537  ?  Ss   21:41   0:00 /frontview/bin/blink_expand
root 1538  ?  S    21:41   0:42 resize2fs -pf /dev/c/c

When the online expansion completes, you receive the notification:

Data volume has been successfully expanded to 1853 GB.

nas-D2-24-F2:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdc1             1.9G  590M  1.3G  30% /
tmpfs                  16k     0   16k   0% /USB
/dev/c/c              1.8T  888G  966G  48% /c

Problems and hints

  • If at some point the ReadyNAS enters re-synchronization after a reboot even though the disks are already synchronized, check that the disks are in good health. Look at the /etc/rc3.d directory and make sure the rc3 script is called only once through the Sxxx symbolic links (see Frontview shows 100% disk usage).
  • If you suspect something is wrong, use ssh to connect to the ReadyNAS and look at /var/log/messages or /var/log/kern.log to check whether there is some hardware issue.
  • Check the file /etc/frontview/raid.conf and verify that the two lines are similar and indicate the reference of your new disk (this file is rebuilt after each reboot).
  • Look at the /proc/xraid/configuration file. It indicates a lot of information about the current X-RAID status and synchronisation process.
  • As a last resort, read and understand the /etc/hotplug/sata.agent script which contains the details of the resynchronisation and expansion process.

World IPv6 Day

By Stephane Carrez 2011-06-08 06:52:17

Today, June 8th 2011, is the World IPv6 day. Major organisations such as Google, Facebook and Yahoo! will offer native IPv6 connectivity.

To check your IPv6 connectivity, you can run a test from your browser: Test your IPv6 connectivity.

If you install the ShowIP Firefox plugin, you will know the IP address of web sites while you browse and therefore quickly know whether you navigate using IPv4 or IPv6.

Below are some basic performance results comparing IPv4 and IPv6. Since most routers are tuned for IPv4, the IPv6 flow path is not yet as fast as IPv4. The (small) performance degradation has nothing to do with the IPv6 protocol itself.

Google IPv4 vs IPv6 ping

$ ping -n www.google.com
PING www.l.google.com (209.85.146.103) 56(84) bytes of data.
64 bytes from 209.85.146.103: icmp_seq=1 ttl=55 time=9.63 ms
$ ping6 -n www.google.com
PING www.google.com(2a00:1450:400c:c00::67) 56 data bytes
64 bytes from 2a00:1450:400c:c00::67: icmp_seq=1 ttl=56 time=11.6 ms

Yahoo IPv4 vs IPv6 ping

$ ping -n www.yahoo.com
PING fpfd.wa1.b.yahoo.com (87.248.122.122) 56(84) bytes of data.
64 bytes from 87.248.122.122: icmp_seq=1 ttl=58 time=25.7 ms
$ ping6 -n www.yahoo.com
PING www.yahoo.com(2a00:1288:f00e:1fe::3000) 56 data bytes
64 bytes from 2a00:1288:f00e:1fe::3000: icmp_seq=1 ttl=60 time=31.3 ms

Facebook IPv4 vs IPv6 ping

$ ping -n www.facebook.com
PING www.facebook.com (66.220.156.25) 56(84) bytes of data.
64 bytes from 66.220.156.25: icmp_seq=1 ttl=247 time=80.6 ms
$ ping6 -n www.facebook.com
PING www.facebook.com(2620:0:1c18:0:face:b00c:0:1) 56 data bytes
64 bytes from 2620:0:1c18:0:face:b00c:0:1: icmp_seq=1 ttl=38 time=98.6 ms

Thread safe object pool to manage scarce resource in application servers

By Stephane Carrez 2011-05-29 19:47:39

Problem Description

In application servers some resources are expensive to create and must be shared. This is the case for a database connection, a frame buffer used for image processing, a connection to a remote server, and so on. The problem is to make these scarce resources available in such a way that:

  • a resource is used by only one thread at a time,
  • we can control the maximum number of resources used at a time,
  • we have some flexibility to define such maximum when configuring the application server,
  • and of course the final solution is thread safe.

The common pattern used in such a situation is to use a thread-safe pool of objects. Objects are picked from the pool when needed and restored to the pool when they are no longer used.

Java thread safe object pool

Let's see how to implement our object pool in Java. We will use a generic class declaration into which we define a fixed array of objects. The pool array is allocated by the constructor and we will assume it will never change (hence the final keyword).

public class Pool<T> {
  private final T[] objects;

  @SuppressWarnings("unchecked")
  public Pool(int size) {
     // Java does not allow "new T[size]": allocate an Object[] and cast.
     objects = (T[]) new Object[size];
  }
  ...
}

First, we need a getInstance method that picks an object from the pool. The method must be thread safe and it is protected by the synchronized keyword. If there is no object, it has to wait until an object is available. For this, the method invokes wait to sleep until another thread releases an object; since wait can be interrupted, getInstance propagates the InterruptedException. To keep track of the number of available objects, we will use an available counter that is decremented each time an object is used.

  private int available = 0;
  private int waiting = 0;

  public synchronized T getInstance() throws InterruptedException {
     while (available == 0) {
        waiting++;
        try {
           wait();
        } finally {
           waiting--;
        }
     }
     available--;
     return objects[available];
  }

To know when to wake up a thread, we keep track of the number of waiters in the waiting counter. A loop is also necessary to make sure we have an available object after being woken up. Indeed, there is no guarantee that after being notified, we have an available object to return. The call to wait will release the lock on the pool and put the thread in wait mode.

Releasing the object is provided by release. The object is put back in the pool array and the available counter is incremented. If some threads are waiting, one of them is woken up by calling notify.

  public synchronized void release(T obj) {
     objects[available] = obj;
     available++;
     if (waiting > 0) {
        notify();
     }
  }

When the application is started, the pool is initialized and some pre-defined objects are inserted.

   class Item { ... };
...
   Pool<Item> pool = new Pool<Item>(10);
   for (int i = 0; i < 10; i++) {
       pool.release(new Item());
   }

Ada thread safe pool

The Ada object pool will be defined in a generic package and we will use a protected type. The protected type will guarantee the thread safe behavior of the implementation by making sure that only one thread executes the procedures.

generic
   type Element_Type is private;
package Util.Concurrent.Pools is
   type Element_Array_Access is private;
   Null_Element_Array : constant Element_Array_Access;
   ...
private
   type Element_Array is array (Positive range <>) of Element_Type;
   type Element_Array_Access is access all Element_Array;
   Null_Element_Array : constant Element_Array_Access := null;
end Util.Concurrent.Pools;

The Ada protected type is simple, with three operations: Get_Instance and Release work as in the Java implementation, and Set_Size takes care of allocating the pool array (a job done by the Java pool constructor).

protected type Pool is
  entry Get_Instance (Item : out Element_Type);
  procedure Release (Item : in Element_Type);
  procedure Set_Size (Capacity : in Positive);
private
  Available     : Natural := 0;
  Elements      : Element_Array_Access := Null_Element_Array;
end Pool;

First, the Get_Instance operation is defined as an entry so that we can attach a condition to entering it. Indeed, we need at least one object in the pool. Since we keep track of the number of available objects, we will use it as the entry condition. Thanks to this entry condition, the Ada implementation is a lot easier.

protected body Pool is
  entry Get_Instance (Item : out Element_Type) when Available > 0 is
  begin
     Item := Elements (Available);
     Available := Available - 1;
  end Get_Instance;
...
end Pool;

The Release operation is also easier as there is no need to wake up any thread: the Ada runtime will do that for us.

protected body Pool is
   procedure Release (Item : in Element_Type) is
   begin
      Available := Available + 1;
      Elements (Available) := Item;
   end Release;
end Pool;

The pool is instantiated:

type Connection is ...;
package Connection_Pool is new Util.Concurrent.Pools (Connection);

And a pool object can be declared and initialized with some default object:

P : Connection_Pool.Pool;
C : Connection;
...
   P.Set_Size (Capacity => 10);
   for I in 1 .. 10 loop
         ...
         P.Release (C);
   end loop;
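Once filled, a task picks a connection from the pool and gives it back when done; a minimal sketch:

   P.Get_Instance (C);
   --  ... use the connection ...
   P.Release (C);

The Get_Instance entry blocks the calling task while the pool is empty, which is exactly the behavior the Java getInstance implements by hand.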

References

util-concurrent-pools.ads

util-concurrent-pools.adb

Ada Server Faces Application Example part 3: the action bean

By Stephane Carrez 2011-05-02 21:01:32

In previous articles, I presented the Ada Server Faces presentation layer of the cylinder volume example and then the Ada beans that link the presentation and the ASF components together. This article explains how to implement an action bean and have a procedure executed when a button is pressed.

Command buttons and method expression

We have seen in the presentation layer how to create a form and have a submit button. This submit button can be associated with an action that will be executed when the button is pressed. The EL expression is the mechanism by which we create a binding between the XHTML presentation page and the component implemented in Java or Ada. A method expression is a simple EL expression that represents a bean and a method to invoke on that bean. This method expression represents our action.

A typical use is on the h:commandButton component where we can specify an action to invoke when the button is pressed. This is written as:

<h:commandButton id='run' value='Compute'
       action="#{compute.run}"/>

The method expression #{compute.run} indicates that the run method of the bean identified by compute is to be executed.

Method Bean Declaration

Java implements method expressions by using reflection. It is able to look at the methods implemented by an object and then invoke one of these methods with some parameters. Since we cannot do this in Ada, some developer help is necessary.

For this, an Ada bean that implements an action must implement the Method_Bean interface. If we take the Compute_Bean type defined in the previous Ada beans article, we just have to extend that interface and implement the Get_Method_Bindings function. This function indicates the methods which are available for an EL expression and how they can be called.

with Util.Beans.Methods;
...
   type Compute_Bean is new Util.Beans.Basic.Bean
          and Util.Beans.Methods.Method_Bean with record
      Height : My_Float := -1.0;
      Radius : My_Float := -1.0;
      Volume : My_Float := -1.0;
   end record;
   --  This bean provides some methods that can be used in a Method_Expression
   overriding
   function Get_Method_Bindings (From : in Compute_Bean)
      return Util.Beans.Methods.Method_Binding_Array_Access;

Our Ada type can now define a method that can be invoked through a method expression. The action procedure always receives the bean object as its first in out parameter and must return the action outcome in an Unbounded_String, also passed as in out.

 procedure Run (From    : in out Compute_Bean;
                Outcome : in out Unbounded_String);

Implement the action

The implementation of our action is quite simple. The Radius and Height parameters submitted in the form have been set on the bean before the action is called. We can use them to compute the cylinder volume.

procedure Run (From    : in out Compute_Bean;
               Outcome : in out Unbounded_String) is
      V : My_Float;
   begin
      V := (From.Radius * From.Radius);
      V := V * From.Height;
      From.Volume := V * 3.141;
      Outcome := To_Unbounded_String ("compute");
   end Run;

Define the action binding

To be able to call the Run procedure from an EL method expression, we have to create a binding object. This binding object will hold the method name as well as a small procedure stub that will somehow tie the method expression to the procedure. This step is easily done by instantiating the ASF.Events.Actions.Action_Method.Bind package.

with ASF.Events.Actions;
...
   package Run_Binding is
     new ASF.Events.Actions.Action_Method.Bind
        (Bean   => Compute_Bean,
         Method => Run,
         Name   => "run");

Register and expose the action bindings

The last step is to implement the Get_Method_Bindings function. Basically it has to return an array of method bindings which indicate the methods provided by the Ada bean.

--  A single-element array aggregate must use named notation in Ada.
Binding_Array : aliased constant Util.Beans.Methods.Method_Binding_Array
  := (1 => Run_Binding.Proxy'Unchecked_Access);

overriding
function Get_Method_Bindings (From : in Compute_Bean)
   return Util.Beans.Methods.Method_Binding_Array_Access is
begin
   return Binding_Array'Unchecked_Access;
end Get_Method_Bindings;

What happens now?

When the user presses the Compute button, the browser will submit the form and the ASF framework will do the following:

  • It will check the validity of input parameters,
  • It will save the input parameters on the compute bean,
  • It will execute the method expression #{compute.run}:
  • It calls the Get_Method_Bindings function to get the list of valid methods,
  • Having found the right binding, it calls the binding procedure,
  • The binding procedure invokes the Run procedure on the object.

Next time...

We have seen the presentation layer, how to implement the Ada bean and this article explained how to implement an action that is called when a button is pressed. The next article will explain how to initialize and build the web application.

Ada Server Faces Application Example part 4: the server

By Stephane Carrez 2011-04-18 20:35:58

In previous articles, we have seen that an Ada Server Faces application has a presentation layer composed of XHTML and CSS files. Similar to Java Server Faces, Ada Server Faces is a component-based model and we saw how to write the Ada beans used by the application. Later, we also learnt how an action bean can have a procedure executed when a button is pressed. Now, how does all this fit together?

Well, to finish our cylinder volume example, we will see how to put everything together and get our running web application.

Application Initialization

An Ada Server Faces Application is represented by the Application type which holds all the information to process and dispatch requests. First, let's declare a variable that represents our application.

Note: for the purpose of this article, we will assume that every variable is declared at some package level scope. If those variables are declared in another scope, the Access attribute should be replaced by Unchecked_Access.

with ASF.Applications.Main;
...
   App : aliased ASF.Applications.Main.Application;

To initialize the application, we will also need some configuration properties and a factory object. The configuration properties are used to configure the various components used by ASF. The factory allows customizing some behaviors of Ada Server Faces. For now, we will use the default factory.

with ASF.Applications;
...
   C       : ASF.Applications.Config;
   Factory : ASF.Applications.Main.Application_Factory;

The initialization requires defining some configuration properties. The VIEW_EXT property indicates the URI extension that is recognized by ASF and associated with an XHTML file (compute.html corresponds to the XHTML file compute.xhtml). The VIEW_DIR property defines the root directory where the XHTML files are stored.

C.Set (ASF.Applications.VIEW_EXT, ".html");
C.Set (ASF.Applications.VIEW_DIR, "samples/web");
C.Set ("web.dir", "samples/web");
App.Initialize (C, Factory);

Servlets

Ada Server Faces uses the Ada Servlet framework to receive and dispatch web requests. It provides a Faces_Servlet servlet which can be plugged in the servlet container. This servlet is the entry point for ASF to process incoming requests. We will also need a File_Servlet to process the static files. Note that these servlets are implemented using tagged records and you can easily override the entry points (Do_Get or Do_Post) to implement specific behaviors.

with ASF.Servlets.Faces;
with ASF.Servlets.Files;
...
   Faces : aliased ASF.Servlets.Faces.Faces_Servlet;
   Files : aliased ASF.Servlets.Files.File_Servlet;

The servlet instances are registered in the application.

App.Add_Servlet (Name => "faces", Server => Faces'Access);
App.Add_Servlet (Name => "files", Server => Files'Access);

Once registered, we have to define a mapping that tells which URI path is mapped to the servlet.

App.Add_Mapping (Name => "faces", Pattern => "*.html");
App.Add_Mapping (Name => "files", Pattern => "*.css");

For the purpose of debugging, ASF provides a servlet filter that can be plugged in the request processing flow. The Dump_Filter will produce a dump of the request with the headers and parameters.

with ASF.Filters.Dump;
...
   Dump    : aliased ASF.Filters.Dump.Dump_Filter;

The filter instance is registered as follows:

App.Add_Filter (Name => "dump", Filter => Dump'Access);

And a mapping is defined to tell which URL will trigger the filter.

App.Add_Filter_Mapping (Name => "dump", Pattern => "*.html");

Application and Web Container

The application object that we created is similar to a Java Web Application packaged in a WAR file: it represents the application and it must be deployed in a Web Container. With Ada Server Faces this is almost the same: the application needs a web container. By default, ASF provides a web container based on the excellent Ada Web Server implementation (other web containers could be provided in the future based on other web servers).

with ASF.Server.Web;
...
   WS : ASF.Server.Web.AWS_Container;

To register the application, we indicate the URI context path to which the application is associated. Several applications can be registered, each of them having a unique URI context path.

CONTEXT_PATH : constant String := "/volume";
...
WS.Register_Application (CONTEXT_PATH, App'Access);

Global Objects

An application can provide some global objects which will be available during the request processing through EL expressions. First, we will expose the application context path, which allows writing links in the XHTML page that match the URI used to register the application in the web container.

App.Set_Global ("contextPath", CONTEXT_PATH);

Below is an example of use of this contextPath variable:

<link media="screen" type="text/css" rel="stylesheet"
      href="#{contextPath}/themes/main.css"/>

Now, we will register the bean that we created for our application! This was explained in the previous Ada beans article.

with Volume;
...
   Bean    : aliased Volume.Compute_Bean;
...
   App.Set_Global ("compute", Util.Beans.Objects.To_Object (Bean'Access));

Note: For the purpose of this example, the Compute_Bean is registered as a global object. This means that it will be shared by every request. A future article will explain how to get a session or a request bean as in Java Server Faces.

Starting the server

Once the application is registered, we can start our server. Note that since Ada Web Server starts several threads that listen to requests, the Start procedure does not block and returns as soon as the server is started. The delay is necessary to let the server wait for requests for some time.

WS.Start;
delay 1000.0;

What happens to a request?

Let's say the server receives an HTTP GET request on /volume/compute.html. Here is what happens:

Volume ASF Flow

  • Ada Web Server receives the HTTP request
  • It identifies the application that matches /volume (our context path) and gives the control to it
  • The application identifies the servlet that processes the remaining URI, which is compute.html
  • It gives the control to the Dump_Filter filter and then to the Faces_Servlet servlet,
  • The faces servlet identifies the XHTML facelet file and reads the compute.xhtml file
  • ASF builds the component tree that describes the page and invokes the render response phase
  • While rendering, the EL expressions such as #{compute.radius} are evaluated and the value is obtained on our Bean global instance.
  • The HTML content is produced as part of the rendering process and returned by AWS.

References

asf_volume_server.adb

volume.ads

volume.adb

compute.xhtml

Ada Server Faces Application Example part 2: the Ada beans

By Stephane Carrez 2011-04-10 19:46:24

The first article explained how to design the presentation page of an Ada Server Faces application. This article presents the Ada beans that are behind the presentation page.

Ada Bean and presentation layer

We have seen that the presentation page contains components that make references to Ada beans with an EL expression.

<h:inputText id='height' size='10' value='#{compute.height}'>
        <f:converter converterId="float" />
</h:inputText>

The #{compute.height} is an EL expression that refers to the height property of the Ada bean identified as compute.

Writing the Cylinder Ada Bean

The Ada bean is an instance of an Ada tagged record that must implement a getter and a setter operation. These operations are invoked through an EL expression. Basically, the getter is called when the view is rendered and the setter is called when the form is submitted and validated. The Bean interface defines the two operations that must be implemented by the Ada type:

with Util.Beans.Basic;
with Util.Beans.Objects;
...
   type Compute_Bean is new Util.Beans.Basic.Bean with record
      Height : My_Float := -1.0;
      Radius : My_Float := -1.0;
   end record;

   --  Get the value identified by the name.
   overriding
   function Get_Value (From : Compute_Bean;
                       Name : String) return Util.Beans.Objects.Object;

   --  Set the value identified by the name.
   overriding
   procedure Set_Value (From  : in out Compute_Bean;
                        Name  : in String;
                        Value : in Util.Beans.Objects.Object);

The getter and setter identify the property to get or set through a name. The value is represented by an Object type that can hold several data types (booleans, integers, floats, strings, dates, ...). The getter looks for the name and returns the corresponding value in an Object record. Several To_Object functions help in creating the result value.

   function Get_Value (From : Compute_Bean;
                       Name : String) return Util.Beans.Objects.Object is
   begin
      if Name = "radius" and From.Radius >= 0.0 then
         return Util.Beans.Objects.To_Object (Float (From.Radius));

      elsif Name = "height" and From.Height >= 0.0 then
         return Util.Beans.Objects.To_Object (Float (From.Height));

      else
         return Util.Beans.Objects.Null_Object;
      end if;
   end Get_Value;

The setter is similar.

   procedure Set_Value (From  : in out Compute_Bean;
                        Name  : in String;
                        Value : in Util.Beans.Objects.Object) is
   begin
      if Name = "radius" then
         From.Radius := My_Float (Util.Beans.Objects.To_Float (Value));
      elsif Name = "height" then
         From.Height := My_Float (Util.Beans.Objects.To_Float (Value));
      end if;
   end Set_Value;

Register the Cylinder Ada Bean

The next step is to register the cylinder bean and associate it with the compute name. There are several ways to do that, but for the purpose of this example, there will be a global instance of the bean. That instance must be aliased so that we can use the Access attribute.

 Bean  : aliased Compute_Bean;

The Ada bean is registered on the application object by using the Set_Global procedure. This creates a global binding between a name and an Object record. In our case, the object will hold a reference to the Ada bean.

App : aliased ASF.Applications.Main.Application;
...
   App.Set_Global ("compute", Util.Beans.Objects.To_Object (Bean'Unchecked_Access));

Next Time...

We have seen how the presentation layer and the Ada beans are associated. The next article will present the action binding that links the form submission action to an Ada bean method.

");