Java 2 Ada

Ubuntu 14.04 LTS Ada build node installation

By stephane.carrez

This short article summarizes the steps required to add an Ubuntu 14.04 build machine to Jenkins.

The steps are very similar to what I've described in Installation of FreeBSD for a jenkins build node. The virtual machine setup is the same (20 GB LVM partition, x86_64 CPU, 1 GB of memory) and Ubuntu is installed from the ubuntu-14.04.1-server-i386.iso image.

Packages to build Ada software

The following commands install the GNAT Ada compiler with the libraries and packages to build various Ada libraries and projects including AWA.

# GNAT Compiler Installation
sudo apt-get install gnat-4.6 libaws2.10.2-dev libxmlada4.1-dev gprbuild gdb

# Packages to build Ada Utility Library
sudo apt-get install libcurl4-openssl-dev libssl-dev

# Packages to build Ada Database Objects
sudo apt-get install sqlite libsqlite3-dev
sudo apt-get install libmysqlclient-dev
sudo apt-get install mysql-server mysql-client

# Packages to build libaws2-2-10
sudo apt-get install libasis2010-dev libtemplates-parser11.6-dev
sudo apt-get install texinfo texlive-latex-base \
 texlive-generic-recommended texlive-fonts-recommended 

The libaws2-2-10 package was not functional for me (see bug 1348902) so I had to rebuild the Debian package from the sources and install it.
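
If you run into the same problem, the rebuild roughly follows the usual Debian source-package flow. A sketch, assuming deb-src entries are enabled in /etc/apt/sources.list and reusing the package name from above (the unpacked source directory name may differ):

sudo apt-get build-dep libaws2.10.2-dev
apt-get source libaws2.10.2-dev
cd libaws-*
dpkg-buildpackage -us -uc
sudo dpkg -i ../libaws*.deb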

Packages to create Debian packages

When the Ada build node is intended to create Debian packages, the following steps are necessary:

sudo apt-get install dpkg-dev gnupg reprepro pbuilder debhelper quilt chrpath
sudo apt-get install autoconf automake autotools-dev

Packages and setup for Jenkins

Before adding the build node in Jenkins, the JRE must be installed and a jenkins user must exist:

sudo apt-get install openjdk-7-jre subversion
sudo useradd -m -s /bin/bash jenkins

Jenkins uses ssh to connect to the build node, so it is good practice to set up a private/public key pair that allows the Jenkins master node to connect to the slave. On the master, copy the jenkins user's key:

ssh-copy-id target-host
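
If the jenkins user on the master has no key yet, you can generate one and copy it in one go. A minimal sketch, where build-node is a placeholder for the build node's host name:

sudo -u jenkins ssh-keygen -t rsa
sudo -u jenkins ssh-copy-id jenkins@build-node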

The Ada build node is then added through the Jenkins UI in Manage Jenkins/Manage Nodes.

Jenkins jobs

The Jenkins master is now building 7 projects automatically for Ubuntu 14.04: Trusty Ada Jobs


New debian repository with Ada packages

By stephane.carrez

I've created and set up a Debian repository that gives access to Debian packages for several Ada projects that I manage. The goal is to provide easy, ready-to-use packages to simplify the installation of various Ada libraries. The Debian repository includes the binary and development packages for Ada Utility Library, Ada EL, Ada Security, and Ada Server Faces.

Access to the repository

The repository packages are signed with PGP. To get the verification key and set up the apt-get tool, run the following command:

wget -O - http://apt.vacs.fr/apt.vacs.fr.gpg.key | sudo apt-key add -

Ubuntu 13.04 Raring

A first repository provides Debian packages targeted at Ubuntu 13.04 raring. They are built with the gnat-4.6 package and depend on libaws-2.10.2-4 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb http://apt.vacs.fr/ubuntu-raring raring main

Ubuntu 12.04 LTS Precise

A second repository contains the Debian packages for Ubuntu 12.04 precise. They are built with the gnat-4.6 package and depend on libaws-2.10.2-1 and libxmlada4.1-dev. Add the following line to your /etc/apt/sources.list configuration:

deb http://apt.vacs.fr/ubuntu-precise precise main
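
If you prefer not to edit /etc/apt/sources.list by hand, the line can be appended from the command line. A small sketch for the precise repository (adapt the line for raring):

echo 'deb http://apt.vacs.fr/ubuntu-precise precise main' | sudo tee -a /etc/apt/sources.list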

Installation

Once you've added the configuration line, you can install the packages:

sudo apt-get update
sudo apt-get install libada-asf1.0

For the curious, you may browse the repository here.

Disabling overlay scrollbar fixes the Thunderbird scrollbar position issue on Ubuntu 12.04

By stephane.carrez

Since the Ubuntu 12.04 upgrade, the Thunderbird scrollbar position was no longer visible. The scrollbar works but you have no visual feedback to know where you are in your long lists. Annoying!!! It turns out that this was a feature of the overlay scrollbar.

To restore the Thunderbird scrollbar, remove the following packages which are causing these troubles:

sudo apt-get remove overlay-scrollbar liboverlay-scrollbar-0.2-0 liboverlay-scrollbar3-0.2-0

Since disabling it, I've realized many others have the same issue. The How do I disable overlay scrollbars? Q&A gives other hints.


How to repair the USB connection problem on Android Samsung phones

By stephane.carrez

Several Samsung Galaxy phones seem to have USB connection problems. Sometimes the USB connection stops working and even rebooting the phone does not solve the problem. This article gives the symptoms and explains how to fix that.

Symptoms

It took me a long time to figure out and fix the problem (looking at many forums and trying many solutions that never worked). The problem was not a driver problem on the PC, nor some dust on the USB connector, but really a software/configuration problem on the Android phone itself. The symptoms were the following:

  • The phone is correctly configured in Settings -> Applications -> Development to USB mode (no development)
  • Plugging and unplugging the USB cable does not produce any event on the phone (it is as though you plugged the charger: no USB icon in the status bar)
  • Rebooting the phone has no effect. USB is still not recognized (the USB icon does not appear in the status bar).
  • Rebooting the phone with the USB cable connected to the PC is better: the USB icon appears, but it never disappears after unplugging and the connection still does not work.
  • Under Windows, Samsung Kies tries to connect but the connection process never ends. Windows sees the USB device.
  • Under Ubuntu, the dmesg command reports an error when the USB cable is plugged:
[62752.296029] usb 2-6: new high speed USB device using ehci_hcd and address 38
[62752.429897] usb 2-6: configuration #1 chosen from 1 choice
[62752.431442] hub 2-6:1.0: bad descriptor, ignoring hub
[62752.431450] hub: probe of 2-6:1.0 failed with error -5
[62752.431543] cdc_acm 2-6:1.0: ttyACM0: USB ACM device
[62752.432560] hub 2-6:1.2: bad descriptor, ignoring hub
[62752.432567] hub: probe of 2-6:1.2 failed with error -5
  • However the device is recognized. Indeed, you can see it with lsusb.
  • You can even use the lsusb -D command to look at the details of the device. BUT, this command reports an error (can't get hub descriptor: Broken pipe) within its output:
$ lsusb -D /dev/bus/usb/002/014 
Device: ID 04e8:6601 Samsung Electronics Co., Ltd Z100 Mobile Phone
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            9 Hub
  bDeviceSubClass         0 Unused
  bDeviceProtocol         0 Full speed (or root) hub
  bMaxPacketSize0        64
  idVendor           0x04e8 Samsung Electronics Co., Ltd
  idProduct          0x6601 Z100 Mobile Phone
 ...
can't get hub descriptor: Broken pipe
Device Qualifier (for other device speed):
  bLength                10
  bDescriptorType         6
  bcdUSB               2.00
 ...

Resolution

1. Unplug the USB cable

2. On the cell phone, dial the following number: *#7284#

Once the last # is entered, the PhoneUtil application is launched. Choose USB -> Modem and then USB -> PDA mode.

The correct mode is PDA. Even if the mode is already PDA, switch to Modem and then back to PDA.

3. Plug the USB cable.

Android PhoneUtil

Results

Once the cable is plugged, the USB device is recognized and the following messages are reported by dmesg:

[62941.921435] usb 2-6: new high speed USB device using ehci_hcd and address 39
[62942.054057] usb 2-6: configuration #2 chosen from 1 choice
[62942.086841] Initializing USB Mass Storage driver...
[62942.087128] scsi8 : SCSI emulation for USB Mass Storage devices
[62942.087310] usbcore: registered new interface driver usb-storage
[62942.087314] USB Mass Storage support registered.
[62942.087340] usb-storage: device found at 39
[62942.087344] usb-storage: waiting for device to settle before scanning
[62947.084396] usb-storage: device scan complete
[62947.085230] scsi 8:0:0:0: Direct-Access     SAMSUNG  GT-I5800 Card    0000 PQ: 0 ANSI: 2
[62947.088053] sd 8:0:0:0: Attached scsi generic sg4 type 0
[62947.096526] sd 8:0:0:0: [sdd] Attached SCSI removable disk

The lsusb -D command should now work without any problem (the can't get hub descriptor: Broken pipe error is gone).

Mounting the sdcard

Before mounting the sdcard, activate the USB mount connection on the phone by clicking on the USB icon in the status bar. Once this is done, dmesg will report more messages such as:

[66309.394438] sd 8:0:0:0: [sdd] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
[66309.394934] sd 8:0:0:0: [sdd] Assuming drive cache: write through
[66309.396297] sd 8:0:0:0: [sdd] Assuming drive cache: write through
[66309.396301]  sdd: sdd1

On Ubuntu, the sdcard is mounted with the following command (create the /mnt/storage mount point first if it does not exist):

$ sudo mount -t vfat /dev/sdd1 /mnt/storage

Android Device Development

For Android development, it is necessary to configure the udev (dynamic device management) service, and this should be done before connecting the device. Create the file /etc/udev/rules.d/51-android.rules with the following rules:

SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", MODE="0666"

Make sure the file is readable:

# chmod a+r /etc/udev/rules.d/51-android.rules

Then, restart udev with

# service udev restart
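
Once udev is restarted and the device re-connected, you can check that it is visible. A quick check, assuming the Android SDK platform tools are installed (adb is not part of the steps above):

adb kill-server
adb devices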

Update 2017

Seven years after this initial post, people still have many problems with their USB connection. The solution presented here may not work anymore for some of you. You may check out the following article that gives more Android secret codes:


Fixing the blank screen on Ubuntu 10.04 with an ATI Radeon HD5450 after a distribution upgrade

By stephane.carrez

After upgrading my Ubuntu desktop with sudo apt-get upgrade, the X11 server was unable to start: the AMD Catalyst driver was made unusable by a missing symbol.

If this happens to you, check the file /var/log/kdm.log and if you see some error such as:

/usr/bin/X: symbol lookup error: /usr/lib/xorg/modules/drivers/fglrx_drv.so: undefined symbol: GlxInitVisuals2D
xinit /etc/gdm/failsafeXinit /etc/X11/xorg.conf.failsafe -- /usr/bin/X -br -once -config /etc/X11/xorg.conf.failsafe -logfile /var/log/Xorg.failsafe.log

Then you have to re-install the proprietary AMD Catalyst driver (AMD just released a new driver yesterday).

After re-installation and a reboot, the dual screen configuration was running again. To configure the dual screen, it may be necessary to launch the AMD Catalyst Control Center with:

$ sudo amdcccle

Check out my xorg.conf file in case of problem.


Migration of KVM virtual machine image to a raw disk partition

By stephane.carrez

This article explains how to move a KVM virtual disk image file from a plain file to a raw hard disk partition. It then explains how to grow the virtual disk to use the full partition size.

Why use a disk partition for the virtual machine image

Using a plain file for a virtual machine disk image is the easiest configuration when you set up a virtual machine environment. It lets you get started quickly and you can easily copy the virtual machine image for a backup.

However, using a raw disk partition for the virtual machine generally provides better performance: the overhead of the host file system is avoided since the guest has direct access to the partition.

Copy the virtual machine image on the partition

To copy the virtual machine image to our partition, the easiest way is to use the dd command. This step assumes that the virtual machine is stopped. In the example, the partition is /dev/sdb10; this partition must be bigger than the image file (if this is not the case, the image will be truncated).

$ sudo dd if=windows-xp.img of=/dev/sdb10 bs=1048576
5120+1 records in
5120+1 records out
5368709121 bytes (5.4 GB) copied, 331.51 s, 16.2 MB/s

Resize the virtual disk to the full partition size

The virtual disk partition should be changed to use the full disk space provided by our /dev/sdb10 partition. For this, we can use the fdisk command:

$ sudo fdisk /dev/sdb10

Command (m for help): p

Disk /dev/sdb10: 22.0 GB, 22019042304 bytes
...
Device Boot      Start   End    Blocks    Id  System
/dev/sdb10p1 *       1   651   5229126    7  HPFS/NTFS

You can easily change the partition to use the full disk by deleting the partition and creating it again so that you get something such as:

Device Boot      Start    End     Blocks    Id  System
/dev/sdb10p1         1   2676  21494938+    7  HPFS/NTFS

Now, we have to resize the file system on the virtual disk partition /dev/sdb10p1. For this, we will use kpartx to get access to the disk partitions provided by our /dev/sdb10 partition:

$ sudo kpartx -v -a /dev/sdb10

add map sdb10p1 (251:1): 0 42989877 linear /dev/sdb10 63

After the partitions are mapped, we can look at the filesystem before resizing it with the ntfsresize command. We use this command first to find the right size for resizing the file system.

$ sudo ntfsresize --info /dev/mapper/sdb10p1
ntfsresize v2.0.0 (libntfs 10:0:0)
Device name        : /dev/mapper/sdb10p1
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 5354623488 bytes (5355 MB)
Current device size: 22010817024 bytes (22011 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use       : 4786 MB (89.4%)
Collecting resizing constraints ...
You might resize at 4785565696 bytes or 4786 MB (freeing 569 MB).
Please make a test run using both the -n and -s options before real resizing!

And we can do the resize by using the Current device size as the new file system size.

$ sudo ntfsresize -s 22010817024 /dev/mapper/sdb10p1
ntfsresize v2.0.0 (libntfs 10:0:0)
Device name        : /dev/mapper/sdb10p1
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 5354623488 bytes (5355 MB)
Current device size: 22010817024 bytes (22011 MB)
New volume size    : 22010810880 bytes (22011 MB)
Checking filesystem consistency ...
100.00 percent completed
Accounting clusters ...
Space in use       : 4786 MB (89.4%)
Collecting resizing constraints ...
WARNING: Every sanity check passed and only the dangerous operations left.
Make sure that important data has been backed up! Power outage or computer
crash may result major data loss!
Are you sure you want to proceed (y/[n])? y
Schedule chkdsk for NTFS consistency check at Windows boot time ...
Resetting $LogFile ... (this might take a while)
Updating $BadClust file ...
Updating $Bitmap file ...
Updating Boot record ...
Syncing device ...
Successfully resized NTFS on device '/dev/mapper/sdb10p1'.
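
Before starting the virtual machine directly on /dev/sdb10, the partition mappings created by kpartx should be removed. A small cleanup step, not part of the original output above:

$ sudo kpartx -d -v /dev/sdb10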

At this stage, our virtual machine disk image was moved from a plain file to a raw disk partition that it uses entirely.

Change the virtual machine definition

The virtual machine definition must now be changed to use our partition. You can do this by copying the XML definition to another file, thus creating a new virtual machine. This is the best approach because you can still use the old configuration. If you make such a copy, you have to change the uuid as well as the network MAC address.
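
With libvirt, a possible way to make that copy is sketched below; windows-xp and windows-xp-raw are hypothetical domain and file names:

$ virsh dumpxml windows-xp > windows-xp-raw.xml
# edit windows-xp-raw.xml: change <name>, <uuid>, the <mac address='...'/>
# and the <disk> element as shown below, then register the new domain
$ virsh define windows-xp-raw.xml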

The disk type parameter must be changed to block and the dev parameter must now point to the device partition.

<domain type='kvm'>
  ...
  <disk type='block' device='disk'>
    <source dev='/dev/sdb10'/>
    <target dev='hda' bus='ide'/>
  </disk>
  ...
</domain>

After this, start the virtual machine!

The next step is to set up virtio to boost performance by using paravirtualization.


How to fix GNAT symbolic traceback crash on Ubuntu

By stephane.carrez

When you use the GNAT symbolic traceback feature with gcc 4.4 on Ubuntu 10.04, a segmentation fault occurs. This article explains why and proposes a workaround until the problem is fixed in the distribution.

Symbolic Traceback

The GNU Ada Compiler provides a support package to dump the exception traceback with symbols.

with Ada.Exceptions;
  use Ada.Exceptions;
with GNAT.Traceback.Symbolic;
  use GNAT.Traceback.Symbolic;
with Ada.Text_IO; use Ada.Text_IO;
...
exception
  when E : others =>
    Put_Line ("Exception: " & Exception_Name (E));
    Put_Line (Symbolic_Traceback (E));

GNAT Symbolic Traceback crash

On Ubuntu 10.04, and probably on other Debian-based distributions, the symbolic traceback crashes in convert_addresses:

Program received signal SIGSEGV, Segmentation fault.
0xb7ab20a6 in convert_addresses () from /usr/lib/libgnat-4.4.so.1
(gdb) where
#0  0xb7ab20a6 in convert_addresses () from /usr/lib/libgnat-4.4.so.1
#1  0xb7ab1f2c in gnat__traceback__symbolic__symbolic_traceback () from /usr/lib/libgnat-4.4.so.1
#2  0xb7ab2054 in gnat__traceback__symbolic__symbolic_traceback__2 () from /usr/lib/libgnat-4.4.so.1

The problem is caused by a patch that was applied on the GCC 4.4 sources and which introduces a bug in the convert_addresses function: the function is missing a filename argument, which causes the other arguments to be incorrect.

void convert_addresses (const char* filename,
         void* addrs[], int n_addr,
         char* buf, int*  len)

Since convert_addresses is provided by the libgnat-4.4.so dynamic library, we can easily replace this function by linking our program with the correct implementation. Get the convert_addresses.c, compile it and add it when you link your program:

$ gcc -c convert_addresses.c
$ gnatmake -Pproject -largs convert_addresses.o

IPSec Meshed Network Configuration on the Cloud

By stephane.carrez

Having to manage several servers on the Internet, I needed a way to create a secure internal network. Our servers are somewhere in the cloud, and the solution that was adopted was to set up the GNU/Linux IPsec stack and an IP-IP tunnel between each pair of servers.

The following article describes how to set up the IPSec network and IP-IP tunnels. These steps were executed on 9 servers running Ubuntu 8.04 and one server running Ubuntu 10.04.

IPSec Configuration

We must install the following packages. The ipsec-tools package provides the utilities to setup and configure the IPSec stack and the racoon package provides the IKE server to manage the security associations.

$ sudo apt-get install ipsec-tools racoon tcpdump

Configure /etc/ipsec-tools.conf

The /etc/ipsec-tools.conf configuration file must define the policy entries (SPD) that describe which traffic has to be encrypted. We must define one SPD for each direction (two SPDs for each tunnel).

On the 90.1.1.1 server, to set up the IPSec tunnel to 201.10.10.10, the configuration looks like:

spdadd 90.1.1.1 201.10.10.10 any -P out ipsec
    esp/transport//require
    ah/transport//require;

spdadd 201.10.10.10  90.1.1.1 any -P in ipsec
    esp/transport//require
    ah/transport//require;
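
After editing the file, the policies can be loaded and verified with the setkey tool from the ipsec-tools package; a quick check (restarting the ipsec-tools service works too):

$ sudo setkey -f /etc/ipsec-tools.conf
$ sudo setkey -DP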

Configure Racoon

The Racoon configuration is defined in /etc/racoon/racoon.conf. Racoon can use several authentication mechanisms to verify that an IPSec association can be created with a given peer. To make the configuration simple and identical on every server, I have used RSA certificates. RSA certificates are very easy to manage and they provide really good authentication.

remote anonymous {
   exchange_mode main,base;
   lifetime time 12 hour ;

   certificate_type plain_rsa "/etc/racoon/ipsec.key";
   peers_certfile plain_rsa "/etc/racoon/ipsec.pub";
   proposal {
      encryption_algorithm 3des;
      hash_algorithm sha256;
      authentication_method rsasig;
      dh_group modp1024;
  }
  generate_policy off;
}

sainfo anonymous {
  pfs_group modp1024;
  encryption_algorithm 3des;
  authentication_algorithm hmac_sha256;
  compression_algorithm deflate;
}

RSA Key Generation

The RSA public and private keys have to be generated using the plainrsa-gen tool.

plainrsa-gen -b 4096 -f /etc/racoon/ipsec.key

The public key part must be extracted from the generated key file; it is the line identified by : PUB. You must extract that line, remove the leading # character, and put the line in the ipsec.pub file.

# : PUB 0sXXXXXXX
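
A one-line sketch of that extraction, assuming the key was generated in /etc/racoon/ipsec.key as above:

grep ': PUB' /etc/racoon/ipsec.key | sed 's/^# //' > /etc/racoon/ipsec.pub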

Test

To verify the configuration, connect to one server and run a ping command to the second server. Connect to the second server and run a tcpdump to observe the packets coming from the other server:

$ sudo  tcpdump -n host 
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
17:34:47.377153 IP 90.1.1.1 > 201.10.10.10: AH(spi=0x0c57e022,seq=0xab9): ESP(spi=0x093415ec,seq=0xab9), length 100
17:34:47.377316 IP 201.10.10.10 >90.1.1.1: AH(spi=0x02ff6158,seq=0x9e3): ESP(spi=0x01375aa7,seq=0x9e3), length 100
17:34:48.379033 IP 90.1.1.1 > 201.10.10.10: AH(spi=0x0c57e022,seq=0xaba): ESP(spi=0x093415ec,seq=0xaba), length 100
17:34:48.379186 IP 201.10.10.10 > 90.1.1.1: AH(spi=0x02ff6158,seq=0x9e4): ESP(spi=0x01375aa7,seq=0x9e4), length 100

IP-IP Tunnels

Now that the servers can connect with each other using IPSec, we create a local network with private addresses that our internal services are going to use. Each server will have its public IP address and an internal address.

In other words, the IP-IP tunnel simulates a local network.

Setup the endpoint (90.1.1.1)

Create the tunnel interface. The Linux kernel must have the tun module installed. The following command creates a tunnel on the host 90.1.1.1 to the remote host 201.10.10.10.

ip tunnel add tun0 mode ipip \
    remote  201.10.10.10 local 90.1.1.1

Bind the tunnel interface to an IP address and configure the target IP (10.0.0.1 is our local address, 10.0.0.2 is the remote endpoint):

ifconfig tun0 10.0.0.1 netmask 255.255.255.0 \
     pointopoint 10.0.0.2 

Setup the client (201.10.10.10)

Create the tunnel interface. The Linux kernel must have the tun module installed. The following command creates a tunnel on the host 201.10.10.10 to the remote host 90.1.1.1.

ip tunnel add tun0 mode ipip \
    remote 90.1.1.1 local 201.10.10.10

Bind the tunnel interface to an IP address and configure the target IP (10.0.0.2 is our local address, 10.0.0.1 is the remote endpoint):

ifconfig tun0 10.0.0.2 netmask 255.255.255.0 \
    pointopoint 10.0.0.1

Test

Once the tunnel is created, you should get the tun0 interface and be able to ping the remote peers in the 10.0 network.

$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.707 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.541 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.630 ms

Firewall Configuration

With the IPsec stack and tunnels in place, it is still necessary to have a good firewall configuration to allow the IPsec traffic, block non-IPsec traffic (in case of misconfiguration) and protect the server.

The IPSec traffic needs the IKE protocol (UDP port 500) to establish the security associations. The ah protocol will be used to authenticate the peers and the esp protocol to encrypt the payload. The IPsec traffic is controlled by the following rules (for the 201.10.10.10 server):

ip=90.1.1.1
iptables -A INPUT -p ah -i eth0 -s $ip -j ACCEPT
iptables -A INPUT -p esp -i eth0 -s $ip -j ACCEPT
iptables -A INPUT -p udp --sport 500 --dport 500 \
           -s $ip -j ACCEPT

iptables -A OUTPUT -p ah -o eth0 -d $ip -j ACCEPT
iptables -A OUTPUT -p esp -o eth0 -d $ip -j ACCEPT
iptables -A OUTPUT -p udp --sport 500 --dport 500 \
          -d $ip -j ACCEPT

The IP-IP tunnel brings another problem to the firewall configuration: once decapsulated, the packets have to match the firewall rules. The iptables policy match is used to accept the packets that are associated with an IPSec policy (protocol 4 is the IP-in-IP encapsulation):

iptables -A INPUT -m policy --pol ipsec --dir in \
           -p 4 -j ACCEPT
iptables -A OUTPUT -m policy --pol ipsec --dir out \
           -p 4 -j ACCEPT

Troubles

Setting up the IPsec stack is not easy and rarely works on the first try. The Linux kernel does not give any clue to spot the issue.

  1. Make sure there is no firewall blocking the AH/ESP/IKE packets (disable any firewall if necessary)
  2. Make sure the SPD associations correspond to the peers (check /etc/ipsec-tools.conf on both servers)
  3. Make sure the Racoon daemon is running and that it does not report any error (check /var/log/daemon.log)

Install Epson Stylus PX700W printer with CUPS on Ubuntu 8.10

By stephane.carrez

To print on a PX700W with CUPS, you may have installed the pipslite driver from www.avasys.jp. I installed the .deb package which was supposed to be for Ubuntu. However, after installation, printing was not working and the following error message was logged in /var/log/cups/error_log:

PID 9766 (/usr/lib/cups/filter/pipslite-wrapper) crashed on signal 11!

Step 1: Get pipslite package

Get the source pipslite package. The binary (.deb) does not work for Ubuntu 8.10 and crashes.

Download for Epson Stylus Photo PX700W/TX700W,Artisan 700 for CUPS

Step 2: Get Ubuntu development libraries

Some Ubuntu development libraries are necessary:

sudo apt-get install libcups2-dev gdk-imlib11-dev \
  libcupsimage2-dev libltdl7-dev libltdl7

Step 3: Build the sources

Configure as follows:

tar xvzf pipslite_1.3.0-2.tar.gz
cd pipslite-1.3.0
./configure --prefix=/usr

Then, build using make and install:

make
sudo make install

Step 4: Ubuntu Printer Configuration

Get the Epson Stylus PX700W PPD file and copy it to /usr/share/cups/model/eksppx700w.ppd.
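
For instance, assuming the PPD file was downloaded to the current directory:

sudo cp eksppx700w.ppd /usr/share/cups/model/eksppx700w.ppd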

Run the Ubuntu printer configuration:

sudo system-config-printer

Make sure the printer is connected. Then, add a printer by clicking the "New" button. The printer should be identified. Choose to use a manual PPD file and give the eksppx700w.ppd file.

My printer is connected on the network using the Wifi connection (IP address 192.168.1.10).

Epson Stylus PX700W Printer Configuration

Try to print a test page. If it does not work, look at the /var/log/cups/error_log file in case there are some errors.


Server configuration management: track changes with subversion and be notified

By stephane.carrez

The overall idea is to put the server configuration files stored in the /etc directory under a version control system: subversion. The VCS is configured to send an email to the system administrators; the email contains the differences from the previous version. A cron script is executed every day to automatically commit the changes, thus triggering the email.

The best practice is of course that each system administrator commits their changes after they have validated the new running configuration. If they do so, they can specify a comment, which is helpful to understand what was done.

Install subversion

First, you should install subversion with its tools.

 sudo apt-get install -y subversion subversion-tools

Mail notification

For the mail notification, you may use postfix, exim or sendmail. But to avoid setting up a complete mail system, you may just use a simple mail client. For this, you can use the combination of esmtp and procmail.

 sudo apt-get install -y procmail esmtp

Create the subversion repository

The subversion repository will contain all the versions and history of your /etc. It must be protected carefully because it contains sensitive information.

 sudo mkdir /home/svn
 sudo svnadmin create /home/svn/repos
 sudo chmod 700 /home/svn
 sudo chmod 700 /home/svn/repos

Now, set up the subversion repository to send an email for each commit. For this, copy or rename the post-commit.tmpl file and edit it to specify to whom you want the email to be sent:

 sudo cp /home/svn/repos/hooks/post-commit.tmpl  \
           /home/svn/repos/hooks/post-commit

and change the last line to something like (with your email address)

 /usr/share/subversion/hook-scripts/commit-email.pl \
  --from yoda+mercure@alliance.com \
  "$REPOS" "$REV" yoda@alliance.com

Initial import

To initialize the repository, we can use the svn import command:

 sudo svn import -m 'Initial import of /etc' \
               /etc file:///home/svn/repos/etc

Subversion repository setup in /etc

Now the hard part is to turn /etc into a subversion working copy without breaking the server. For this, we check out the subversion /etc repository somewhere and copy only the subversion administrative files into /etc.

 sudo mkdir /home/svn/last
 sudo sh -c "cd /home/svn/last && svn co file:///home/svn/repos/etc"
 sudo sh -c "cd /home/svn/last/etc && tar cf - `find . -name .svn` | (cd /etc && tar xvf -)"

At this step, everything is ready. You can go into the /etc directory and use all the subversion commands. For example:

 sudo svn log /etc/hosts

to see the changes in the hosts file.

Auto-commit and detection of changes

The goal now is to detect, every day, the changes that were made and send a mail with the changes to the supervisor. For this, you create a cron script that you put in /etc/cron.daily. The script will be executed every day at 6:25am. It commits the changes that were made and sends an email listing the new files.

 #!/bin/sh
 SVN_ETC=/etc
 HOST=`hostname`
 # Commit those changes
 cd $SVN_ETC && svn commit -m "Saving changes in /etc on $HOST"
 # Email address to which changes are sent
 EMAIL_TO="TO_EMAIL"
 STATUS=`cd $SVN_ETC && svn status`
 if test "T$STATUS" != "T"; then
   (echo "Subject: New files in /etc on $HOST";
    echo "To: $EMAIL_TO";
    echo "The following files are new and should be checked in:";
    echo "$STATUS") | sendmail -f'FROM_EMAIL' $EMAIL_TO
 fi

In this script you will replace TO_EMAIL and FROM_EMAIL by real email addresses.

Complete setup script

To help set up and configure all this easily, I'm now using a script that configures everything. You can download it: mk-etc-repository. The usage of the script is really simple; you just need to specify the email address for the notification:

 sudo sh mk-etc-repository sysadmin@mycompany.com

Upgrading Symfony projects to release 1.2

By stephane.carrez

I had two projects to upgrade on my desktop running Ubuntu 8. The upgrade process is described in Upgrading Projects from 1.1 to 1.2, but even if you follow it, it is really not easy.

Symfony Installation

First, I had to remove a previous symfony installation:

 sudo pear uninstall symfony/symfony

For some reason, I still had the Propel plugin installed, so I removed what remained with the command:

 $ sudo rm -rf /usr/share/php/symfony         

Then, I've installed the new Symfony version with pear:

 $ sudo pear install symfony/symfony-1.2.3    

(The Symfony installation is described in Installation 1.2).

Upgrade to 1.2

After Symfony installation, I've started to migrate my applications to the new version. You have to run these commands in each project:

 $ php symfony project:upgrade1.2
 $ php symfony propel:build-model

The propel:build-forms command failed because I had to remove old plugins and update the sfGuardPlugin:

 $ symfony plugin:install sfGuardPlugin
 $ php symfony propel:build-forms
 $ php symfony propel:build-filters

Then clear the Symfony cache (the sudo is necessary because some files are owned by the www-data user):

 $ sudo php symfony cc

Problems and fixes

Now come the issues!!! You have to check your application and fix each problem. Unlike Ada or Java, there is no compilation check, so every change of API is only detected when you execute the code.

After upgrading, I got this error:

 Fatal error: Call to undefined method DebugPDOStatement::setInt()
  in /home/.../lib/model/AssertViewerPager.php on line 35

This is due to the upgrade of Propel from 1.2 to 1.3. Propel now uses PHP Data Objects, which is a very good thing. The following code must be changed:

 $statement = $con->prepareStatement("SELECT ID, VALUE 
                                       FROM Dictionary WHERE KEY = ?");
 $statement->setString(1, $value);
 $rs = $statement->executeQuery();
 while($rs->next()) {
   print "ID: " . $rs->getString("ID") . "  VALUE: " 
           . $rs->getString("VALUE") . "\n";
 }

into

 $statement = $con->prepare("SELECT ID, VALUE 
                                        FROM Dictionary WHERE KEY = ?");
 $statement->bindValue(1, $value);
 $statement->execute();
 while ($row = $statement->fetch()) {
   print "ID: " . $row['ID'] . "  VALUE: " . $row['VALUE'] . "\n";
 }

After fixing the database issues, I found another cryptic error:

 Catchable fatal error: Argument 1 passed to
   sfPatternRouting::configureRoute()
   must be an instance of sfRoute, string given,
   called in /usr/share/php/symfony/routing/sfPatternRouting.class.php on line 245
   and defined in /usr/share/php/symfony/routing/sfPatternRouting.class.php on line 256

It was caused by the sfGuardPlugin which was not updated correctly. I had to remove it and install it from scratch.

Another error caused by the migration:

 Fatal error: Call to undefined method
   BasePeer::getmapbuilder()
   in /home/.../lib/model/om/BaseLogEmitterPeer.php on line 66

To solve this, I had to remove the 'om' directory and rebuild the model with:

 symfony propel:build-model

There could be other issues that I've not reported... In any case, making sure that everything works after the upgrade is painful.


Audit errors reported by linux kernel - why you must care

By stephane.carrez

Today I had to migrate the mysql storage to another partition because the /var partition was not large enough and the database was growing. After moving the files and updating the mysql configuration files to point to the new partition, mysql refused to start: it pretended that it had no permission to access the directory. Yet the directory was owned by mysql and it had all the rights to write its files. What could be happening?

After looking at the kernel logs, I saw this kind of message:

[173919.699270] audit(1229883052.863:39): type=1503 operation="inode_create" requested_mask="w::" denied_mask="w::" name="/data/var/mysql" pid=21625 profile="/usr/sbin/mysqld" namespace="default"

This kernel log is produced by the AppArmor kernel extension, which restricts the access of programs to resources. Indeed, it tells us that /usr/sbin/mysqld is not allowed to access the file /data/var/mysql. To fix the problem, you have to change the AppArmor configuration by editing the file /etc/apparmor.d/usr.sbin.mysqld.

 # vim:syntax=apparmor
 # Last Modified: Tue Jun 19 17:37:30 2007
 #include <tunables/global>

 /usr/sbin/mysqld {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/user-tmp>
  #include <abstractions/mysql>

  capability dac_override,
  capability setgid,
  capability setuid,

  /etc/hosts.allow r,
  /etc/hosts.deny r,

  /etc/group              m,
  /etc/passwd             m,

  /etc/mysql/*.pem r,
  /etc/mysql/conf.d/ r,
  /etc/mysql/conf.d/* r,
  /etc/mysql/my.cnf r,
  /usr/sbin/mysqld mr,
  /usr/share/mysql/** r,
  /var/lib/mysql/ r,      # Must be updated
  /var/lib/mysql/** rwk,  # Must be updated
  /var/log/mysql/ r,
  /var/log/mysql/* rw,
  /var/run/mysqld/mysqld.pid w,
  /var/run/mysqld/mysqld.sock w,
}

The two lines must be fixed to point to the new directory, in the example:

  /data/var/mysql/ r,
  /data/var/mysql/** rwk,

After changing the file, you must restart AppArmor:

$ sudo /etc/init.d/apparmor restart
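
You can then check that the updated profile is loaded; a quick verification, assuming the apparmor_status tool is available:

$ sudo apparmor_status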

After the fix, the mysql server was able to start again and the audit error was not reported any more.


Restoring a complete system after a hard disk failure: bacula to the rescue!!!

By stephane.carrez

Step 1: Boot on your Ubuntu 8.04 CD

Since the disk that crashed contained the system, my computer was not even able to boot. The first step for me was to boot on the Ubuntu CD-ROM without installing Ubuntu again. After booting, I was able to check my other disks and look at the kernel logs to realize that the disk was really completely dead, without any hope of recovering anything. By looking at my second hard disk, I was able to evaluate what was lost and needed to be recovered. If you have no other disk, you have to set up a new disk to proceed. Booting on the CD also helped me discover some room on my second disk where I could install a new system.

Step 2: Install the system

If the system is gone, you may have to re-install it from scratch. This is what I had to do. Having found an old debian partition on my second hard disk, I decided to install Ubuntu 8.04 Desktop on it. After 15 minutes, my computer was working again, running Ubuntu 8.04 as before. Still, my data were lost.

Step 3: Restore with bacula

Bacula is a great network backup solution that I put in place 2 years ago. Every night my bacula server creates an incremental, differential or full backup of my computer (zebulon). It was the first time, though, that I had to recover a complete backup. For the recovery, you have to use the Bacula Console and its restore command.

ciceron $ bconsole

Every action made in bacula creates a job that is recorded in the database. The first thing is to identify those jobs that did the full, differential and incremental backups.

  • list jobs
 | JobId | Name      | StartTime           | Type | Level | JobFiles  | JobBytes       | JobStatus |
 |   877 | Zebulon   | 2007-12-02 02:22:27 | B    | F     | 1,245,258 | 31,026,036,274 | T         |
 | 1,067 | Zebulon   | 2008-02-03 00:52:18 | B    | F     |         0 |              0 | f         |
 | 1,319 | Zebulon   | 2008-04-26 22:28:29 | B    | D     |   207,801 |  6,048,511,830 | T         |
 | 1,328 | Zebulon   | 2008-04-29 22:17:04 | B    | I     |         0 |              0 | E         |
 | 1,331 | Zebulon   | 2008-04-30 22:17:04 | B    | I     |     1,025 |    761,323,545 | T         |
 | 1,511 | Zebulon   | 2008-06-29 22:47:57 | B    | I     |    77,997 |  9,050,108,256 | T         |
 | 1,514 | Zebulon   | 2008-06-30 22:16:40 | B    | I     |       968 |    613,957,318 | T         |
 | 1,517 | Zebulon   | 2008-07-01 22:16:38 | B    | I     |    16,710 |    866,232,575 | T         |
 | 1,520 | Zebulon   | 2008-07-02 22:17:00 | B    | I     |    11,530 |    887,021,057 | T         |

The listing above is just an extract of the list command output. Job 877 is a full backup (level F) and I had no full backup more recent than this one, so it must be restored first. Since bacula has pruned the files, it has lost all the information about the backup's content (my backup configuration could have been improved). Anyway, it is still possible to restore this full backup completely. Jobs 1067 and 1328 cannot be used because they ended in error (I had many of those, because the computer is off when the daily backup is started, or for some other reason). This is not a problem: bacula just ignores those jobs for the restore. To restore the full backup, use the restore command:

  * restore
 
  First you select one or more JobIds that contain files
  to be restored. You will be presented several methods
  of specifying the JobIds. Then you will be allowed to
  select which files from those JobIds are to be restored.

After this, the bacula restore command prompts for a restore method. You can restore files selectively, find files, or restore a complete job or a complete client. In my case, I had to restore the full backup (job 877), so I selected the Enter list of comma separated JobIds to select method with my full backup job id:

 To select the JobIds, you have the following choices:
   1: List last 20 Jobs run
   2: List Jobs where a given File is saved
   3: Enter list of comma separated JobIds to select
   4: Enter SQL list command
   5: Select the most recent backup for a client
   6: Select backup for a client before a specified time
   7: Enter a list of files to restore
   8: Enter a list of files to restore before a specified time
   9: Find the JobIds of the most recent backup for a client
  10: Find the JobIds for a backup for a client before a specified time
  11: Enter a list of directories to restore for found JobIds
  12: Cancel
 Select item:  (1-12): 3
   Enter JobId(s), comma separated, to restore: 877
   You have selected the following JobId: 877
   
   Building directory tree for JobId 877 ...
   There were no files inserted into the tree, so file selection
   is not possible. Most likely your retention policy pruned the files.
   
   Do you want to restore all the files? (yes|no): yes

After this step, bacula searches which volumes (backup files, DVD, tapes) contain the backup:

   Bootstrap records written to /var/lib/bacula/janus-dir.restore.12.bsr
   
   The job will require the following
     Volume(s)            Storage(s)                SD Device(s)
   ===================================================
   
     Full-0013            File                      FileStorage
     Full-0014            File                      FileStorage
     Full-0015            File                      FileStorage
     Full-0016            File                      FileStorage
     Full-0017            File                      FileStorage
     Full-0035            File                      FileStorage
     Full-0036            File                      FileStorage
     Full-0037            File                      FileStorage
   
   1,245,258 files selected to be restored.

Now, I had to choose the client for the restore. For some reason, I had to choose my crashed computer (zebulon):

   Defined Clients:
       1: janus-fd
       2: zebulon-fd
    Select the Client (1-2): 2

Bacula describes the restore job and you have a chance to change some parameters. In general, the restore process is performed by the bacula daemon on the computer that you want to restore (i.e., the client). This is natural: your computer X crashed and you want to recover on it. In my case, I wanted to recover on the bacula server (called janus).

    Run Restore job
    JobName:         RestoreFiles
    Bootstrap:       /var/lib/bacula/janus-dir.restore.13.bsr
    Where:           /tmp/bacula-restores
    Replace:         always
    FileSet:         Janus Files
    Backup Client:   zebulon-fd
    Restore Client:  zebulon-fd
    Storage:         File
    When:            2008-07-05 14:16:28
    Catalog:         MyCatalog
    Priority:        10
    OK to run? (yes/mod/no): mod
    Parameters to modify:
     1: Level
     2: Storage
     3: Job
     4: FileSet
     5: Restore Client
     6: When
     7: Priority
     8: Bootstrap
     9: Where
    10: File Relocation
    11: Replace
    12: JobId
    Select parameter to modify (1-12): 5
    The defined Client resources are:
     1: janus-fd
     2: zebulon-fd
   Select Client (File daemon) resource (1-2): 1
    Run Restore job
    JobName:         RestoreFiles 
    Bootstrap:       /var/lib/bacula/janus-dir.restore.13.bsr
    Where:           /tmp/bacula-restores
    Replace:         always
    FileSet:         Janus Files
    Backup Client:   zebulon-fd
    Restore Client:  janus-fd
    Storage:         File
    When:            2008-07-05 14:16:28
    Catalog:         MyCatalog
    Priority:        10
    OK to run? (yes/mod/no): yes

The restore process runs in the background and a message and an email are sent after the restore job has finished. In my case, the files were restored on my bacula server in the /tmp/bacula-restores directory. When the restore process finished, that directory contained all my files... as they were in December 2007. The differential backup was restored in the same way because its files had been pruned too. The other jobs were restored as follows, using the same restore command:

    * restore
   
    First you select one or more JobIds that contain files
    to be restored. You will be presented several methods
    of specifying the JobIds. Then you will be allowed to
    select which files from those JobIds are to be restored.
   
    To select the JobIds, you have the following choices:
     1: List last 20 Jobs run
     2: List Jobs where a given File is saved
     3: Enter list of comma separated JobIds to select
     4: Enter SQL list command
     5: Select the most recent backup for a client
     6: Select backup for a client before a specified time
     7: Enter a list of files to restore
     8: Enter a list of files to restore before a specified time
     9: Find the JobIds of the most recent backup for a client
    10: Find the JobIds for a backup for a client before a specified time
    11: Enter a list of directories to restore for found JobIds
    12: Cancel
    Select item:  (1-12): 3
    Enter JobId(s), comma separated, to restore: 1331,1511,1514,1517,1520
    You have selected the following JobIds: 1331,1511,1514,1517,1520
   
    Building directory tree for JobId 1331 ...
    Building directory tree for JobId 1511 ...  +++++++++++++++++++++++++++++++++
    Building directory tree for JobId 1517 ...  +++++++++++++++++++++++++ 
    Building directory tree for JobId 1520 ...  +++++++++++++++++++++++++++++
    5 Jobs, 75,552 files inserted into the tree.
   
    You are now entering file selection mode where you add (mark) and
    remove (unmark) files to be restored. No files are initially added, unless
    you used the "all" keyword on the command line.
    Enter "done" to leave this mode.
   
    cwd is: /
    $ mark *
    79,536 files marked.
    $ done
    Bootstrap records written to /var/lib/bacula/janus-dir.restore.14.bsr
   
    The job will require the following
   Volume(s)            Storage(s)                SD Device(s)
    ======================================================
   
   Incr-0002            File                      FileStorage
   Incr-0005            File                      FileStorage
   Incr-0001            File                      FileStorage
   Incr-0006            File                      FileStorage
   
   79,536 files selected to be restored.

After the restore jobs finished, all my files were restored back to July 2nd 2008.

Lessons learned and conclusion

  1. Backups are vital in the computer world. You don't want to lose your photos, emails and documents. When you lose one of them, you just cry. When you lose everything, you... die.
  2. My bacula configuration is not perfect. In particular, it should do a full backup every 3 or 6 months. In the past I had only done some file recovery but I had never tested a full recovery. This was an error (without bad consequences, hopefully). Every change in the bacula configuration must be followed by a full recovery test.
  3. The system partitions (/ and /usr) were not backed up. Even if we can restore them with an installation, this may not be a good idea. You lose the configuration files and the knowledge of all the packages you have installed. Losing this is not a big deal, but recovering it is a matter of time.
  4. It is necessary to test on a regular basis that we can recover from the backup. The problem is absolutely not the software itself. The problem is the backup configuration and backup needs that change over time.

I am very thankful to the Bacula development team for their software. It is really a professional backup solution. I knew that for sure, but now I can say I have tested it in a real situation. The hard disk failure only cost me time: time to install, time to recover the backup and time to write this story...

Ubuntu Server and Ubuntu Desktop

By stephane.carrez

I started to use Linux in 1994 with a Debian distribution. It worked quite well on a 133MHz Pentium with only 128MB of memory. It was stable and gave good performance (given the memory and CPU speed of that time).

Then, I used a Red Hat Linux distribution, starting at version 5 and then 6. It was in 1998 on a 300MHz Pentium II with 256MB of memory. KDE and GNOME were not there at that time (if my memory is right) and the X11 environment was not as powerful as today, but it worked reasonably well.

In 2002, I decided to switch to a Mandrake distribution because it was close to Red Hat and it offered better support for the French language and more applications. The switch was easy: the packages are managed in the same way; some administration tools are specific to Mandrake but with a simpler and easier interface. Mandrake did a good job at simplifying the interface for end-users. I started with a Mandrake 8.1, upgraded to a 9.1 and then 10.2. However, each time I did an upgrade, I made a complete installation (because I added a disk or switched some hardware). One of my systems was not upgraded and stayed on Mandrake 8 because the system was remote (it did not have a display) and a remote upgrade was difficult (at least, I didn't know how to do it). Each time I upgraded, the KDE environment behaved differently. Not big differences, but still annoying for an end-user (different behaviors of the mouse, the keyboard and how you type accented letters, keyboard shortcuts that changed, menus that changed, the session restore that worked differently or did not work, ...).

In 2005, I was convinced by my colleagues at Solsoft to use a Debian distribution. The administration and installation of packages was supposed to be easier, and the system upgrade was possible remotely. Well, I had to learn the Debian packaging stuff (I was used to RPM) and I had been using Debian at work since 2002, so let's try it. The installation was easy and worked well. Again, the KDE environment, the keyboard layouts and the system behavior were different compared to Mandrake 10.2, but after some adaptations it worked (still, it takes you some time). Then came the problems. I needed specific Debian packages and they were in the 'testing' category. I installed them and of course broke some package dependencies. That's a nightmare and always a hassle... but with some time and Debian administration knowledge I managed to solve the problems.

Meanwhile, the Ubuntu distribution came in. The big difference with Debian is that the distribution contains packages which are updated more regularly, work well together and appear to be much more tested and stable over time. The first computer I switched was my Linux router, on which I installed the Ubuntu Server distribution (the server was running a heavily stripped-down Mandrake 10 distribution). The Ubuntu Server distribution was installed easily and it contained nearly all the packages I needed. I also installed a few specific packages for monitoring and the backup system, and everything went well. Upgrading the packages is similar to Debian (since it uses the same package tools) but it appears to me to be a little bit more stable.

Then, I decided to switch my desktop station to the Ubuntu Desktop distribution. It's probably the easiest installation I have ever seen. It worked pretty much out of the box without having to specify any funky parameters. But it was too simple: not enough utility packages, no development packages, no KDE, and the printer setup was a nightmare (USB printer). Anyway, the system works now, but it required quite a lot of Debian/Linux administration knowledge to set it up. Basically, the problem lies in the Gnome, KDE, X11 and user applications that come with the Ubuntu Desktop.

Despite the Ubuntu Desktop installation issues, I'm very happy to have switched to Ubuntu. I guess this is the Linux distribution that will give me the fewest problems in the future and will allow me to stay reasonably up to date with new packages and security fixes. The Ubuntu community still has some progress to make to package the distribution in a way that non-Linux users can use it. That's their challenge.