Java 2 Ada

Using the Gnome and KDE Secret Service API in Ada

By stephane.carrez

The Gnome and KDE desktop environments have designed a shared service API to allow applications to protect, retrieve and manage their secret data such as passwords and private keys. The Secret Service API defines the mechanisms and operations that applications use to access the service.

libsecret is the C library that gives access to the Secret Service API. The Ada Libsecret is an Ada binding for that C library. The binding does not expose all of the functionality implemented by the C library, but it covers the most useful operations: storing, retrieving and deleting application secret data.

Understanding the Secret Service API

At first glance, the Secret Service API is not easy to use. Each secret is stored together with lookup attributes and a label. Lookup attributes are formed of key/value pairs. The label is the user-friendly name that the desktop key manager uses to display information to the end user.

[Figure: ada-libsecret-dbus.png]

The Secret Service API is implemented by a keyring manager such as gnome-keyring-daemon or kwalletd. This daemon is started when a user opens a desktop session. It manages the application secrets and protects access to them. The secret database can be locked, in which case access to secrets is denied. Unlocking is possible but requires authentication by the user (in most cases a dialog window pops up and asks the user to unlock the keyring).

When a client application wishes to retrieve one of its secrets, it builds the lookup attributes that correspond to the secret to retrieve. The lookup attributes are not encrypted and they are not part of the secret. The client application uses the D-Bus IPC mechanism to ask the keyring manager for the secret. The keyring manager takes care of unlocking the database, asking the user to confirm the access. It then looks in its database for the secret associated with the lookup attributes.

Note that the label cannot be used as a key to retrieve the secret since the same label can be associated with different lookup attributes.

Using the Ada Secret Service API

Setting up the project

After building and installing the Ada Libsecret library, add the following line to your GNAT project file:

with "secret";

This definition will give you access to the Secret package and will handle the build and link support to use the libsecret C library.
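
For reference, a minimal GNAT project file using the library could look like the following sketch (the project and main file names are illustrative):

   with "secret";

   project Tool is
      for Main use ("tool.adb");
   end Tool;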

Setting the lookup attributes

Attributes are defined by the Secret.Attributes package which provides the Map type that represents the lookup attributes. First, you will add the following with clause:

with Secret.Attributes;

to make available the operations and types provided by the package. Then, you will declare the attributes instance by using:

   List : Secret.Attributes.Map;

At this stage, the lookup attributes are empty; you can check this with the Is_Null function, which returns True in that case. You must now add at least one key/value pair to the attributes by using the Insert procedure:

   List.Insert ("secret-tool", "key-password");
   List.Insert ("user", "joe");

Applications are free to choose the attributes they use. The attribute set has to be unique so that the application can identify and retrieve the secret. For example, the svn command uses two attributes to store the password used to authenticate against remote svn repositories: domain and user. The domain represents the server URL and the user represents the user name to use for the connection. By using these two attributes, it is possible to store several passwords for different svn accounts, as the sketch below illustrates.
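
For instance, an svn-like tool could build its lookup attributes this way (the attribute values are illustrative):

   List.Insert ("domain", "https://svn.example.com/repos");
   List.Insert ("user", "joe");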

Storing a secret

To store a secret, we will use the operations and types from the Secret.Services and Secret.Values packages. The following definitions:

with Secret.Services;
with Secret.Values;

will bring such definitions to the program. The secret service is represented by the Service_Type type and we will declare an instance of it as follows:

   Service : Secret.Services.Service_Type;

This service instance is a proxy to the Secret Service API; it communicates with the gnome-keyring-daemon by using the D-Bus protocol.

The secret value itself is represented by the Secret_Type and we can create such a secret by using the Create function as follows:

   Value : Secret.Values.Secret_Type := Secret.Values.Create ("my-secret");

Storing the secret is done by the Store operation, which associates the secret value with the lookup attributes and a label. As explained before, the lookup attributes represent the unique key that identifies the secret. The label gives a user-friendly name to the association; it is used by the desktop password and key manager to present information to the user.

   Service.Store (List, "Secret tool password", Value);

Retrieving a secret

Retrieving a secret follows the same steps but uses the Lookup function, which returns the secret value matching the lookup attributes. Care must be taken to provide the same lookup attributes that were used during the store phase.

   Value : Secret.Values.Secret_Type := Service.Lookup (List);

The secret value should be checked by using the Is_Null function to verify that the value was found. The secret value itself is accessed by using the Get_Value function.

   if not Value.Is_Null then
      Ada.Text_IO.Put_Line (Value.Get_Value);
   end if;
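
Putting the pieces together, here is a minimal sketch of a complete program, using only the operations shown above (the procedure name is illustrative):

   with Ada.Text_IO;
   with Secret.Attributes;
   with Secret.Services;
   with Secret.Values;

   procedure Secret_Example is
      List    : Secret.Attributes.Map;
      Service : Secret.Services.Service_Type;
   begin
      --  Build the lookup attributes that identify the secret.
      List.Insert ("secret-tool", "key-password");
      List.Insert ("user", "joe");

      --  Store the secret, then read it back from the keyring manager.
      Service.Store (List, "Secret tool password",
                     Secret.Values.Create ("my-secret"));
      declare
         Value : constant Secret.Values.Secret_Type := Service.Lookup (List);
      begin
         if not Value.Is_Null then
            Ada.Text_IO.Put_Line (Value.Get_Value);
         end if;
      end;
   end Secret_Example;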

Conclusion

By using the Ada Secret Service API, Ada applications can now securely store private information and protect resources for their users. The API is fairly simple and can be used to store OAuth access tokens, database passwords, and more...

Read the Ada Libsecret Documentation to learn more about the API.


IPSec Meshed Network Configuration on the Cloud

By stephane.carrez

Having to manage several servers on the Internet, I needed a way to create a secure internal network. Our servers are scattered in the cloud, and the solution adopted was to set up the GNU/Linux IPsec stack and an IP-IP tunnel between each server.

The following article describes how to set up the IPsec network and IP-IP tunnels. These steps were executed on 9 servers running Ubuntu 8.04 and one server running Ubuntu 10.04.

IPSec Configuration

We must install the following packages: the ipsec-tools package provides the utilities to set up and configure the IPsec stack, and the racoon package provides the IKE server that manages the security associations.

$ sudo apt-get install ipsec-tools racoon tcpdump

Configure /etc/ipsec-tools.conf

The /etc/ipsec-tools.conf configuration file must define the security policy entries (SPD) that describe which traffic has to be encrypted. We must define one policy for each direction (two policies for each tunnel).

On the 90.1.1.1 server, to set up the IPsec tunnel to 201.10.10.10, the configuration looks like this:

spdadd 90.1.1.1 201.10.10.10 any -P out ipsec
    esp/transport//require
    ah/transport//require;

spdadd 201.10.10.10 90.1.1.1 any -P in ipsec
    esp/transport//require
    ah/transport//require;
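
On the 201.10.10.10 peer, the same two policies are declared with the directions reversed; a sketch mirroring the configuration above:

spdadd 201.10.10.10 90.1.1.1 any -P out ipsec
    esp/transport//require
    ah/transport//require;

spdadd 90.1.1.1 201.10.10.10 any -P in ipsec
    esp/transport//require
    ah/transport//require;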

Configure Racoon

The Racoon configuration is defined in /etc/racoon/racoon.conf. Racoon can use several authentication mechanisms to verify that an IPsec association can be created with a given peer. To keep the configuration simple and identical on every server, I have used RSA certificates. RSA certificates are easy to manage and they provide good authentication.

remote anonymous {
   exchange_mode main,base;
   lifetime time 12 hour ;

   certificate_type plain_rsa "/etc/racoon/ipsec.key";
   peers_certfile plain_rsa "/etc/racoon/ipsec.pub";
   proposal {
      encryption_algorithm 3des;
      hash_algorithm sha256;
      authentication_method rsasig;
      dh_group modp1024;
   }
   generate_policy off;
}

sainfo anonymous {
  pfs_group modp1024;
  encryption_algorithm 3des;
  authentication_algorithm hmac_sha256;
  compression_algorithm deflate;
}

RSA Key Generation

The RSA public and private keys have to be generated using the plainrsa-gen tool.

plainrsa-gen -b 4096 -f /etc/racoon/ipsec.key

The public key part must be extracted from the generated key file; it is identified by ": PUB". Extract that line, remove the leading "# " characters and put the result in the ipsec.pub file.

# : PUB 0sXXXXXXX
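
Assuming the key was generated in /etc/racoon/ipsec.key as above, a small sketch that extracts the public part:

$ sudo sh -c "grep ': PUB ' /etc/racoon/ipsec.key | sed -e 's/^# //' > /etc/racoon/ipsec.pub"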

Test

To verify the configuration, connect to one server and run a ping command towards the second server. Then connect to the second server and run tcpdump to observe the packets coming from the other server:

$ sudo tcpdump -n host 90.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
17:34:47.377153 IP 90.1.1.1 > 201.10.10.10: AH(spi=0x0c57e022,seq=0xab9): ESP(spi=0x093415ec,seq=0xab9), length 100
17:34:47.377316 IP 201.10.10.10 > 90.1.1.1: AH(spi=0x02ff6158,seq=0x9e3): ESP(spi=0x01375aa7,seq=0x9e3), length 100
17:34:48.379033 IP 90.1.1.1 > 201.10.10.10: AH(spi=0x0c57e022,seq=0xaba): ESP(spi=0x093415ec,seq=0xaba), length 100
17:34:48.379186 IP 201.10.10.10 > 90.1.1.1: AH(spi=0x02ff6158,seq=0x9e4): ESP(spi=0x01375aa7,seq=0x9e4), length 100

IP-IP Tunnels

Now that the servers can connect with each other using IPSec, we create a local network with private addresses that our internal services are going to use. Each server will have its public IP address and an internal address.

In other words, the IP-IP tunnel simulates a local network.

Setup the endpoint (90.1.1.1)

Create the tunnel interface. The Linux kernel must have the ipip module loaded. The following command creates a tunnel on the host 90.1.1.1 to the remote host 201.10.10.10:

ip tunnel add tun0 mode ipip \
    remote  201.10.10.10 local 90.1.1.1

Bind the tunnel interface to an IP address and configure the target IP (10.0.0.1 is our local address, 10.0.0.2 is the remote endpoint):

ifconfig tun0 10.0.0.1 netmask 255.255.255.0 \
     pointopoint 10.0.0.2 

Setup the client (201.10.10.10)

Create the tunnel interface. The Linux kernel must have the ipip module loaded. The following command creates a tunnel on the host 201.10.10.10 to the remote host 90.1.1.1:

ip tunnel add tun0 mode ipip \
    remote 90.1.1.1 local 201.10.10.10

Bind the tunnel interface to an IP address and configure the target IP (10.0.0.2 is our local address, 10.0.0.1 is the remote endpoint):

ifconfig tun0 10.0.0.2 netmask 255.255.255.0 \
    pointopoint 10.0.0.1

Test

Once the tunnel is created, you should see the tun0 interface and be able to ping the remote peers on the 10.0.0.0/24 network.

$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.707 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.541 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=0.630 ms

Firewall Configuration

With the IPsec stack and tunnels in place, it is still necessary to have a good firewall configuration to allow the IPsec traffic, block non-IPsec traffic (in case of misconfiguration) and protect the server.

The IPsec traffic needs the IKE protocol (UDP port 500) to establish the security associations. The ah protocol is used to authenticate the peers and the esp protocol to encrypt the payload. The IPsec traffic is controlled by the following rules (on the 201.10.10.10 server):

ip=90.1.1.1
iptables -A INPUT -p ah -i eth0 -s $ip -j ACCEPT
iptables -A INPUT -p esp -i eth0 -s $ip -j ACCEPT
iptables -A INPUT -p udp --sport 500 --dport 500 \
           -s $ip -j ACCEPT

iptables -A OUTPUT -p ah -o eth0 -d $ip -j ACCEPT
iptables -A OUTPUT -p esp -o eth0 -d $ip -j ACCEPT
iptables -A OUTPUT -p udp --sport 500 --dport 500 \
          -d $ip -j ACCEPT

The IP-IP tunnel brings another problem to the firewall configuration. Once decapsulated, the tunneled packets have to match the firewall rules. The iptables policy match is used to accept the packets that are associated with an IPsec policy (protocol 4 is IP-IP encapsulation):

iptables -A INPUT -m policy --pol ipsec --dir in \
           -p 4 -j ACCEPT
iptables -A OUTPUT -m policy --pol ipsec --dir out \
           -p 4 -j ACCEPT

Troubleshooting

Setting up the IPsec stack is not easy and rarely works on the first try. The Linux kernel gives few clues to spot the issue.

  1. Make sure there is no firewall blocking the AH/ESP/IKE packets (disable any firewall if necessary).
  2. Make sure the SPD entries correspond to the peers; check /etc/ipsec-tools.conf on both servers (the setkey commands below can help).
  3. Make sure the racoon daemon is running and that it does not report any error (check /var/log/daemon.log).
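
The setkey tool from the ipsec-tools package is handy for inspecting the kernel state; a quick sketch:

$ sudo setkey -D     # dump the security associations (SAD)
$ sudo setkey -DP    # dump the security policies (SPD)

If the policies are present but no security association shows up, the problem is usually on the racoon side.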

Server configuration management: track changes with subversion and be notified

By stephane.carrez

The overall idea is to put the server configuration files stored in the /etc directory under a version control system: Subversion. The VCS is configured to send an email to the system administrators; the email contains the differences from the previous version. A cron script is executed every day to automatically commit the changes, thus triggering the email.

The best practice is of course that each system administrator commits their changes after validating the new running configuration. By doing so, they can provide a comment, which is helpful to understand what was done; see the example below.
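
For example, once the setup described below is in place, a commit with a meaningful comment could look like this (the message and path are illustrative):

 sudo svn commit -m 'Increase Apache MaxClients for the new server' /etc/apache2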

Install subversion

First, you should install subversion with its tools.

 sudo apt-get install -y subversion subversion-tools

Mail notification

For the mail notification, you may use postfix, exim or sendmail. But to avoid setting up a complete mail system, you may just use a simple mail client. For this, you can use the combination of esmtp and procmail.

 sudo apt-get install -y procmail esmtp

Create the subversion repository

The subversion repository will contain all the versions and the history of your /etc. It must be protected carefully because it contains sensitive information.

 sudo mkdir /home/svn
 sudo svnadmin create /home/svn/repos
 sudo chmod 700 /home/svn
 sudo chmod 700 /home/svn/repos

Now, set up the subversion repository to send an email for each commit. For this, copy or rename the post-commit.tmpl file and edit it to specify to whom you want the email to be sent:

 sudo cp /home/svn/repos/hooks/post-commit.tmpl  \
           /home/svn/repos/hooks/post-commit

and change the last line to something like this (with your email addresses):

 /usr/share/subversion/hook-scripts/commit-email.pl \
  --from yoda+mercure@alliance.com \
  "$REPOS" "$REV" yoda@alliance.com

Initial import

To initialize the repository, we can use the svn import command:

 sudo svn import -m 'Initial import of /etc' \
               /etc file:///home/svn/repos/etc

Subversion repository setup in /etc

Now the hard part is to turn /etc into a Subversion working copy without breaking the server. For this, we check out the /etc repository somewhere else and copy only the Subversion administrative files into /etc.

 sudo mkdir /home/svn/last
 sudo sh -c "cd /home/svn/last && svn co file:///home/svn/repos/etc"
 sudo sh -c "cd /home/svn/last/etc && tar cf - `find . -name .svn` | (cd /etc && tar xvf -)"

At this step, everything is ready. You can go into the /etc directory and use all the subversion commands. Example:

 sudo svn log /etc/hosts

to see the changes in the hosts file.
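
You can also display the exact changes between two revisions, for example (the revision numbers are illustrative):

 sudo svn diff -r 3:4 /etc/hosts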

Auto-commit and detection of changes

The goal now is to detect, every day, the changes that were made and send a mail with the changes to the supervisor. For this, create a cron script and put it in /etc/cron.daily. The script will be executed every day at 6:25am (the default time for cron.daily jobs on Ubuntu). It commits the changes that were made and sends an email listing the new files.

 #!/bin/sh
 SVN_ETC=/etc
 HOST=`hostname`
 # Commit those changes
 cd $SVN_ETC && svn commit -m "Saving changes in /etc on $HOST"
 # Email address to which changes are sent
 EMAIL_TO="TO_EMAIL"
 STATUS=`cd $SVN_ETC && svn status`
 if test "T$STATUS" != "T"; then
   (echo "Subject: New files in /etc on $HOST";
    echo "To: $EMAIL_TO";
    echo "The following files are new and should be checked in:";
    echo "$STATUS") | sendmail -f'FROM_EMAIL' $EMAIL_TO
 fi

In this script you will replace TO_EMAIL and FROM_EMAIL by real email addresses.

Complete setup script

To help set up and configure all this easily, I'm now using a script that configures everything. You can download it: mk-etc-repository. The usage of the script is really simple; you just need to specify the email address for the notification:

 sudo sh mk-etc-repository sysadmin@mycompany.com

Audit errors reported by linux kernel - why you must care

By stephane.carrez

Today I had to migrate the mysql storage to another partition because the /var partition was not large enough for the growing database. After moving the files and updating the mysql configuration to point to the new partition, mysql refused to start: it pretended it had no permission to access the directory. Yet the directory was owned by mysql and it had all the rights to write files. What could be happening?

After looking at the kernel logs, I saw this kind of message:

[173919.699270] audit(1229883052.863:39): type=1503 operation="inode_create" requested_mask="w::" denied_mask="w::" name="/data/var/mysql" pid=21625 profile="/usr/sbin/mysqld" namespace="default"

This kernel log is produced by the AppArmor kernel extension, which restricts the access of programs to resources. Indeed, it tells us that /usr/sbin/mysqld is not allowed to access /data/var/mysql. To fix the problem, you have to change the AppArmor configuration by editing the file /etc/apparmor.d/usr.sbin.mysqld.

 # vim:syntax=apparmor
 # Last Modified: Tue Jun 19 17:37:30 2007
 #include <tunables/global>

 /usr/sbin/mysqld {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/user-tmp>
  #include <abstractions/mysql>

  capability dac_override,
  capability setgid,
  capability setuid,

  /etc/hosts.allow r,
  /etc/hosts.deny r,

  /etc/group              m,
  /etc/passwd             m,

  /etc/mysql/*.pem r,
  /etc/mysql/conf.d/ r,
  /etc/mysql/conf.d/* r,
  /etc/mysql/my.cnf r,
  /usr/sbin/mysqld mr,
  /usr/share/mysql/** r,
  /var/lib/mysql/ r,      # Must be updated
  /var/lib/mysql/** rwk,  # Must be updated
  /var/log/mysql/ r,
  /var/log/mysql/* rw,
  /var/run/mysqld/mysqld.pid w,
  /var/run/mysqld/mysqld.sock w,
}

The two lines must be fixed to point to the new directory; for our example:

  /data/var/mysql/ r,
  /data/var/mysql/** rwk,

After changing the profile, you must restart AppArmor:

$ sudo /etc/init.d/apparmor restart

After the fix, the mysql server was able to start again and the audit error was not reported any more.


Viadeo passwords are not secure: can you trust web 2.0 applications?

By stephane.carrez

My first answer is NO: never trust them unless you have proof and confidence that they do it right.

The other day I tried the Viadeo lost password feature. Shortly after, I received an email and I was very surprised, and shocked, to see my password displayed in clear text in it. You could say that it's nice: after all this is your password and they give it back to you upon your request. But this is a big security breach. If they are able to send me my password in clear text, it means they are not storing it in a secure way. If their database is hacked, the user passwords are exposed: a hacker can connect on your behalf. Worse, after looking at a user profile, the hacker can easily guess other information and try to log in to other applications (your home, your work, ...). Why? Because most people use the same password in every application.

DO NOT trust Viadeo. Use a password that is not sensitive.

My second answer is YES, you can trust some applications, but look at them closely and get information from them first.

We, at Planzone, treat security as critical to our business. Passwords are hashed and cannot be decrypted. Moreover, we use SSL connections for the authentication (login) as well as during the complete session. The password and any information you type are encrypted over the network: nobody can intercept or steal them.

How is it done? It's well known and easy. When the password is received, it is hashed with SHA-1 (Secure Hash Algorithm) together with a private key. The result is saved in the database. You cannot feasibly recover the password from the hash value. To authenticate someone, the submitted password is hashed in the same way and the verification is made on the hash values: if the hashes are identical, the passwords used to create them are identical. This is why, at Planzone, if you use the Lost my password link, we cannot send you the password. Instead, we send you a secure key that allows you (for the next 24 hours) to change your password. A sketch of the scheme follows.
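
The following Java sketch illustrates the idea (the class name and private key value are illustrative; the calls are from the standard java.security package; a modern system would use a per-user salt and a slower hash than SHA-1):

 import java.nio.charset.StandardCharsets;
 import java.security.MessageDigest;
 import java.security.NoSuchAlgorithmException;

 public class PasswordHash {

    // Hash the password together with a server-side private key.
    // Only the digest is stored in the database, never the password.
    static byte[] hash(String password, byte[] key) throws NoSuchAlgorithmException {
       MessageDigest digest = MessageDigest.getInstance("SHA-1");
       digest.update(key);
       digest.update(password.getBytes(StandardCharsets.UTF_8));
       return digest.digest();
    }

    // Verify a login attempt by hashing the submitted password the
    // same way and comparing the two digests.
    static boolean verify(String password, byte[] key, byte[] stored)
          throws NoSuchAlgorithmException {
       return MessageDigest.isEqual(hash(password, key), stored);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
       byte[] key = "server-private-key".getBytes(StandardCharsets.UTF_8);
       byte[] stored = hash("my-password", key);   // done at registration
       System.out.println(verify("my-password", key, stored)); // true
       System.out.println(verify("guess", key, stored));       // false
    }
 }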

There is nothing terrific about this. This is just good security practices!

The next step for all these applications is to rely on OpenID for the user identity. Done right, this could solve some security issues.

Deploying a J2EE application behind an Apache server in a production environment

By stephane.carrez

In a production environment, you should not expose your JBoss server directly as the Web front-end. Instead, you should use an Apache server and configure it to redirect specific Web application requests to your J2EE server. There are many advantages in doing this:

  • The Apache server can serve static files (CSS, images, javascript files) faster than JBoss/Tomcat.
  • When you need it, you can activate SSL on Apache without having to change your application.
  • The Apache SSL implementation is faster compared to the Tomcat implementation (and a lot easier to configure!).
  • You can have a better control of HTTP headers. No need to develop any servlet filter for that.
  • You can get compression out of the box. No need to develop another servlet filter (and no need to configure the Tomcat connector either!).

I assume here that the Apache server is already installed with the following modules, and that these modules are enabled:

jk headers expires ssl deflate rewrite

If they are not enabled, you can enable them using the command:

sudo a2enmod jk
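
If several of the modules are missing, you can enable them all in one pass; a small sketch:

for mod in jk headers expires ssl deflate rewrite; do
    sudo a2enmod $mod
done

Restart or reload Apache afterwards so the new modules are taken into account.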

Step 1: Explode your Web or J2EE application

For Apache to serve the static files, it is necessary to have those files available in a directory that the Apache server can access. For this, explode your J2EE application (EAR file) and all the Web applications which have static files to be served by Apache. You can do this in a directory somewhere with commands such as the following:

mkdir myapplication
cd myapplication && jar xf ../myapplication.ear
mv mywebapp.war mywebapp-new.war && mkdir mywebapp.war
cd mywebapp.war && jar xf ../mywebapp-new.war

If your EAR file contains a WAR file, you have to explode it as well (the static files to be served are there!). It is also a good practice to explode it into a directory having the same name as the WAR. Once everything is exploded, you can also configure JBoss to use the exploded directory directly (this will speed up JBoss startup significantly).

Step 2: Create a site configuration file

As a good practice, you should write a configuration file that corresponds to the site you are going to manage. This allows a site configuration to be enabled or disabled easily, which is useful during maintenance. For this, create a file in /etc/apache2/sites-available and put in an initial content (replace myserver.mydomain.com with your server name and server-installation-dir with the path of your installation directory):

 <VirtualHost _default_:80>
       ServerAdmin webmaster@localhost
       ServerAlias myserver.mydomain.com
       ServerName myserver.mydomain.com
       DocumentRoot /server-installation-dir
       <Directory />
               Options FollowSymLinks
               AllowOverride None
       </Directory>
       ErrorLog /var/log/apache2/myserver-error.log
       LogLevel warn
       CustomLog /var/log/apache2/myserver-access.log combined
 </VirtualHost>

The server-installation-dir should point to the WAR exploded directory.

It is also a good practice to use a specific log file for each server (virtual host) that you configure in Apache. Restrict the number of options to the minimum so that you do not activate an option that could compromise security, and to keep the configuration understandable and manageable.

You may find additional information about virtual hosts on Apache Virtual Host documentation.

Step 3: Configure Apache mod_jk

The Apache server can redirect requests to JBoss/Tomcat by using the mod_jk module (jk). Edit the file /etc/apache2/mods-available/jk.load and define the following properties:

 
JkWorkersFile /etc/apache2/worker.properties
JkShmFile     /var/log/apache2/mod_jk.shm
JkLogFile     /var/log/apache2/mod_jk.log
JkLogLevel    info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

Write a /etc/apache2/worker.properties file with the following example (be sure to replace some of the paths):

workers.tomcat_home=directory where Tomcat is installed
workers.java_home=directory of the JDK installation home
ps=/
worker.list=workerName
worker.workerName.port=8009
worker.workerName.host=localhost
worker.workerName.type=ajp13
worker.workerName.lbfactor=1

The workerName is the name you are going to use within your site configuration file to tell Apache to which JBoss/Tomcat the requests are going to be forwarded.

You may find additional information on The Apache Tomcat Connector - Webserver HowTo

Step 4: Configure mod_jk in your site configuration file

Now that mod_jk is configured, you have to set up your site configuration file to redirect some of your URLs to the JBoss/Tomcat server through the AJP connector. This is done with the JkMount and JkUnMount directives. For this, add the following lines:

<VirtualHost _default_:80>
 ....
 JkMount /mywebapp/* workerName
 JkOptions +ForwardURICompat
</VirtualHost>

where mywebapp is the Web application context of your Web application when it is running. All requests to /mywebapp will be redirected to JBoss/Tomcat. If Apache has to serve static files located in the same context, you have to use:

   JkUnMount /mywebapp/*.css workerName
   JkUnMount /mywebapp/*.js workerName
   JkUnMount /mywebapp/*.html workerName
   JkUnMount /mywebapp/*.png workerName

The JavaScript, CSS, image and HTML files will not be served by JBoss/Tomcat because the JK connector is not activated for these URLs.

You could also mount only the dynamic files that your Web application serves (like *.jsp, *.do, *.jsf or *.seam). This may not be the best solution if your Web application has specific servlets that are mapped without any extension (like an XMLRPC servlet, a Seam resource servlet and others). This is why it is best to mount everything under your Web application context and then manually specify what is static and served by Apache directly. This will save you from big surprises!

Step 5: Configure caching and compression

Compression can be activated easily by adding the following line at the top of your site configuration file (see the deflate module):

       AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/x-javascript

Browser caching is activated and controlled by the expires module in Apache. You may add the following options within the <Directory> section controlling the static files.

       # enable expirations
       ExpiresActive On
       # Activate the browser caching (CSS, images and scripts should not
       # change)
       ExpiresByType text/css A1296000
       ExpiresByType image/png A1296000
       ExpiresByType image/gif A1296000
       ExpiresByType image/jpg A1296000
       ExpiresByType text/javascript A1296000

You may find additional information on Apache Module mod_expires and Apache Module mod_deflate.

Step 6: Harden your Apache configuration

JBoss can add headers to the HTTP response. The X-Powered-By header exposes what implementation is behind your site. This header is created by a servlet filter that is activated by default in the JBoss web configuration files (server/default/deploy/jbossweb-tomcat55.sar/conf/web.xml). You can disable this filter by commenting out the following lines:

  <!-- <filter-mapping>
     <filter-name>CommonHeadersFilter</filter-name>
     <url-pattern>/*</url-pattern>
  </filter-mapping> -->

If you cannot change this, don't worry! The Apache server can remove those headers for you. Just add the following directives in your site configuration file:

   # For security reasons, do not expose who serves the page
   <LocationMatch "^/mywebapp/.*">
       Header unset "X-Powered-By"
   </LocationMatch>

Removing this header is also good for performance as it reduces the size of responses. You may have a look at the Apache documentation Apache Module mod_headers.

By default, the Apache server sends a complete signature in the Server response header. You should verify the /etc/apache2/apache2.conf file and make sure you have the following options:

ServerSignature Off
ServerTokens Prod

To verify that your server generates the right response headers, you may use the wget -S command or Firebug to look at those headers.
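
For example, a quick check against your virtual host (the URL is illustrative):

$ wget -S -O /dev/null http://myserver.mydomain.com/

The -S option prints the server response headers: the Server header should now show only Apache and the X-Powered-By header should be gone.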