Java 2 Ada

Ada, Java and Python database access

By Stephane Carrez

How do Ada, Java and Python compare with each other when they are used to connect to a database? This was my main motivation for writing the SQL Benchmark and this article.

The database itself also has a serious impact on such a benchmark, so I measured the following three well-known databases:

  • SQLite
  • MariaDB
  • PostgreSQL

The purpose of the benchmark is to provide a simple comparison between these different databases and programming languages. For this, a very simple database table is created with only two integer columns, one of them being the primary key with auto increment. For example, the SQLite table is created with the following SQL:

CREATE TABLE test_simple (
  id    INTEGER PRIMARY KEY AUTOINCREMENT,
  value INTEGER
)

The database table is filled with a simple INSERT statement, which is also benchmarked. The goal is not to demonstrate the fastest insert method, nor the fastest query for a given database or language.


The SQL benchmarks are simple and implemented in the same way for each language, so that we get a rough comparison between languages for a given database. The SELECT query retrieves all the database table rows but includes a LIMIT clause to restrict the number of rows returned. The query is executed with different values for the limit so that a simple graph can be drawn. For each database, the SQL query looks like:

SELECT * FROM test_simple LIMIT 10

The SQL statements are executed 10,000 times for SELECT queries, 1,000 times for INSERT statements and 100 times for DROP/CREATE statements.
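For illustration, the shape of such a benchmark can be sketched with Python's built-in sqlite3 module. This is a simplified sketch, not the actual benchmark code: the schema follows the article, but the row count, the timing code and all names are assumptions of this example:

```python
import sqlite3
import time

def benchmark_select(limit=10, repeat=1000):
    # In-memory database with the benchmark schema from the article.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE test_simple ("
               "id INTEGER PRIMARY KEY AUTOINCREMENT, value INTEGER)")
    db.executemany("INSERT INTO test_simple (value) VALUES (?)",
                   [(i,) for i in range(1000)])
    db.commit()

    # Run the same SELECT ... LIMIT query many times and measure it.
    sql = "SELECT * FROM test_simple LIMIT " + str(limit)
    start = time.perf_counter()
    for _ in range(repeat):
        rows = db.execute(sql).fetchall()
        if len(rows) != limit:
            raise Exception("Invalid result count: " + str(len(rows)))
    return time.perf_counter() - start

print("%d queries in %.3f seconds" % (1000, benchmark_select()))
```

Varying the `limit` argument gives the data points needed for the per-limit graphs discussed below.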

Each SQL benchmark program generates an XML file that contains the results as well as resource statistics taken from the /proc/self/stat file. An Ada tool is provided to gather the results, prepare the data for plotting and produce an Excel file with the results.

Python code

def execute(self):
  self.sql = "SELECT * FROM test_simple LIMIT " + str(self.expect_count)
  repeat = self.repeat()
  db = self.connection()
  stmt = db.cursor()

  for i in range(0, repeat):
    row_count = 0
    stmt.execute(self.sql)
    for row in stmt:
      row_count = row_count + 1

    if row_count != self.expect_count:
      raise Exception('Invalid result count: ' + str(row_count))

Java code
public void execute() throws SQLException {
  PreparedStatement stmt
    = mConnection.prepareStatement("SELECT * FROM test_simple LIMIT " + mExpectCount);

  for (int i = 0; i < mRepeat; i++) {
    if (stmt.execute()) {
      ResultSet rs = stmt.getResultSet();
      int count = 0;
      while ( {
      if (count != mExpectCount) {
        throw new SQLException("Invalid result count: " + count);
    } else {
      throw new SQLException("No result");

Ada code

procedure Select_Table_N (Context : in out Context_Type) is
   DB    : constant ADO.Sessions.Master_Session := Context.Get_Session;
   Count : Natural;
   Stmt  : ADO.Statements.Query_Statement
        := DB.Create_Statement ("SELECT * FROM test_simple LIMIT " & Positive'Image (LIMIT));
   for I in 1 .. Context.Repeat loop
      Count := 0;
      while Stmt.Has_Elements loop
         Count := Count + 1;
      end loop;
      if Count /= LIMIT then
         raise Benchmark_Error with "Invalid result count:" & Natural'Image (Count);
      end if;
   end loop;
end Select_Table_N;

The benchmarks were executed on an Intel i7-3770S CPU @ 3.10GHz with 8 cores running Ubuntu 16.04 64-bit. The following database versions were used:

  • MariaDB 10.0.36
  • PostgreSQL 9.5.14

Resource usage comparison

The first point to note is that both Python and Ada require only one thread to run the SQL benchmark, while the Java VM and database drivers need 20 threads.

The second point is not surprising: Java needs 1000% more memory than Ada, and Python uses 59% more memory than Ada. What is measured is the VM RSS size, which means it is really the memory that is physically mapped at a given time.

The SQLite database requires fewer resources than the others. The results below don't take into account the resources used by the MariaDB and PostgreSQL servers. At that time, the MariaDB server was using 125 MB and the PostgreSQL server 31 MB.


Speed comparison

Looking at the CPU time used to run the benchmark, Ada appears as a clear winner. The Java PostgreSQL driver appears to be very slow at connecting to and disconnecting from the database, and this is the main reason why it is slower than the others.


It is interesting to note, however, that both Java and Python provide very good performance with the SQLite database when the number of rows returned by the query is less than 100. With more than 500 rows, Ada becomes faster than the others.


With a PostgreSQL database, Ada is always faster even with small result sets.



Conclusion and references

SQLite, as an embedded database, is used on more than 1 billion devices since it is included in all smartphones (Android, iOS). It provides very good performance for small databases.

With their client-server model, MariaDB and PostgreSQL suffer a little when compared to SQLite.

For bigger databases, Ada provides the best performance, and furthermore it appears to be more predictable than the other languages (i.e., linear curves).

The Excel result file is available in: sql-benchmark-results.xls

Sources of the benchmarks are available in the following GitHub repository:


Ada Database Objects 0.4.0 is available

By Stephane Carrez

The Ada Database Objects library is an Object Relational Mapping for the Ada05 programming language. It maps database objects into Ada records and gives easy access to databases. Most of the concepts developed for ADO come from the Java Hibernate ORM. ADO supports MySQL and SQLite databases.

The new version brings:

  • Support for reloading query definitions,
  • An optimized session factory implementation,
  • Customization of the MySQL database connection by using MySQL SET statements.

This version can be downloaded at


Fixing the mysql_thread_end cleanup bug in libmysql

By Stephane Carrez

If you see the following message when an application stops, the fix is for you.

Error in my_thread_global_end(): 5 threads didn't exit

The message comes from the MySQL C connector when the library cleans up before the program exits.

The MySQL C client connector has had a bug for many years in multi-threaded applications: it does not correctly clean up threads that use the library. Interestingly, the MySQL documentation says you must call mysql_thread_end yourself before a thread stops. This is a shame, as it forces developers to find dirty and nearly impossible workarounds, one of which is nicely explained in Bug 846602 - MySQL C API missing THR_KEY_mysys.

Root cause

The origin of the bug is an incorrect use, or a misunderstanding, of POSIX threads. The library uses the thread-specific data mechanism, through pthread_getspecific and pthread_setspecific, to store a dynamically allocated object associated with each thread. The POSIX 1003.1c standard defines a cleanup mechanism that can be used to automatically reclaim the storage that was allocated for thread-specific data.

The pthread_key_create operation is called to create a new thread-specific data key. When the key is created, you can register a cleanup handler that will be called with the thread-specific data before the thread exits. A cleanup handler can be as simple as the following function:

static void thread_cleanup(void* data)
  free(data);
}

With such a cleanup handler, the pthread data key must be created as follows:

static pthread_key_t  THR_KEY_mysys;
  pthread_key_create(&THR_KEY_mysys, thread_cleanup);

The MySQL C connector does not specify any cleanup handler when creating the pthread data key. The patch attached to this post defines a cleanup handler and installs it as described above.

Fixing MySQL C connector

To fix the MySQL connector, you need to get the sources, apply the patch and rebuild. Download the Connector/C sources from the MySQL site. The patch is against 6.0.2 but you should be able to apply it to other versions easily (it patches the file mysys/my_thr_init.c, which has not changed since 2009). Extract the sources and apply the patch:

tar xzf mysql-connector-c-6.0.2.tar.gz
patch < fix-mysql-connector.diff

To build the connector, you will need CMake. The build process is explained in the sources (read BUILD.unix).

cmake -G "Unix Makefiles"
make install

Beware that before building you may have to change some configuration settings according to your system (check include/mysql_version.h).




Ada Database Objects 0.3.0 is available

By Stephane Carrez

The Ada Database Objects library is an Object Relational Mapping for the Ada05 programming language. It maps database objects into Ada records and gives easy access to databases. Most of the concepts developed for ADO come from the Java Hibernate ORM. ADO supports MySQL and SQLite databases.

The new version brings:

  • Support for updating database records only when a field is really modified,
  • Customization of the SQLite database connection by using SQLite PRAGMAs,
  • Escaping of MySQL and SQLite reserved keywords,
  • Support for the blob type.

This version can be downloaded at


Using phpMyAdmin with a remote mysql on the ReadyNAS Duo

By Stephane Carrez

This article explains how to set up and use phpMyAdmin with a MySQL server running on the ReadyNAS Duo. This configuration does not require installing phpMyAdmin on the ReadyNAS. Instead, it provides a secure configuration while allowing you to administer the database remotely. The idea is to use an SSH tunnel to establish a secure connection to the MySQL server. This article deals with the ReadyNAS Duo but it can be applied to any MySQL server (I've used this mechanism for the administration of Planzone).

Creating the SSH Tunnel

The SSH tunnel is created with the ssh -L option. The first port (3307) specifies the local port and localhost:3306 specifies the MySQL port on the remote endpoint. The tunnel is active only while we are connected with ssh, which guarantees that nobody can access the MySQL server while we are not connected. The MySQL traffic is also encrypted by SSH. On your host, type the following command (replace pollux with your ReadyNAS host name):

$ ssh -L 3307:localhost:3306 -l root pollux
root@pollux's password: 
Linux nas-D2-24-F2 #1 Tue Jun 9 13:59:28 PDT 2009 padre unknown

Keep this connection open for as long as you need the tunnel. As soon as you exit, the tunnel is closed.

Setting up MySQL server

By default the MySQL server does not listen on any TCP/IP port; instead it uses Unix sockets (Unix sockets are faster, and listening on TCP/IP ports can create security holes if not done correctly). For the SSH tunnel to work, the MySQL server must listen on a TCP/IP port on the localhost address.

Look at the configuration file /etc/mysql/my.cnf and make sure it contains the following lines (uncomment bind-address and comment the skip-networking):

socket        = /var/run/mysqld/mysqld.sock

port          = 3306
bind-address  =

You should then restart MySQL if you changed the configuration file:

nas-D2-24-F2:/ # /etc/init.d/mysql stop
nas-D2-24-F2:/ # /etc/init.d/mysql start

Setting up mysql grant

A grant definition is necessary so that we can connect to the MySQL server on the TCP/IP port. From the MySQL server's point of view, the connection comes from the localhost address (the tunnel endpoint on the ReadyNAS). Connect to MySQL and type the following commands (on Ubuntu, you can use pwgen to generate a password):

mysql> grant all privileges on *.* to 'root'@'' identified by 'Lugo6cho';
Query OK, 0 rows affected (0.04 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.02 sec)

Testing the connection

On another terminal window, try to connect to the MySQL server through the tunnel.

zebulon $ mysql -u root --host --port 3307

If the client fails with Can't connect to MySQL server on '' (111), verify that the tunnel is up and that the port is correct. If the client fails with Access denied for user 'root'@'', verify the MySQL access rights and update the grant.

Update phpMyAdmin configuration

Now, we just need to add the tunnel local endpoint to the phpMyAdmin configuration. For this, add the following lines to the phpMyAdmin configuration file (/etc/phpmyadmin/ on Ubuntu).

$cfg['Servers'][$i]['host'] = '';
$cfg['Servers'][$i]['port'] = '3307';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['AllowNoPassword'] = FALSE;


Now, connect to your phpMyAdmin and log in with this configuration. Once you are logged in, you can administer the remote MySQL server.


Installing Mysql server on a ReadyNAS duo

By Stephane Carrez

Installing the APT extension

The ReadyNAS duo runs a Debian Sarge distribution. The apt-get commands are not available, so we must install the APT add-on:

  1. Download the APT extension. I've used the following link:
  2. Go to the ReadyNAS FrontView with your browser and open System -> Update -> Local Update
  3. Upload the APT binary file. The ReadyNAS verifies the content and, if it is correct, displays a description of the add-on.
  4. Acknowledge the installation of the add-on.

After installation, the ReadyNAS must be restarted. Shortly after, you will receive an email:

Subject: Addon Package Progress (nas-XX-XX-XX)
Successfully installed APT.

Debian Sarge Package update

It may be good to check the Debian packages. Connect to the ReadyNAS using ssh and run the following command:

apt-get update

Installation of Mysql server

The mysql-server-5.0 package is supposed to be installed according to the dpkg -l command. However, the files are not there: they have probably been removed. This is also the case for some utilities which are used by the installation scripts.


The /usr/bin/chfn utility is missing and we need it for the MySQL installation. We must re-install the passwd package. Download it and install it as follows:

# dpkg -i passwd_4.0.3-31sarge9_sparc.deb

The /usr/bin/logger is also missing. We must re-install the bsdutils_2.12p-4sarge2_sparc.deb package. Download it and install it as follows:

# dpkg -i bsdutils_2.12p-4sarge2_sparc.deb

Edit the /etc/mysql/my.cnf file and change the line:

user            = admin


user            = mysql

Additional packages

As reported by RoB (see comments), the following packages are also necessary for mysql-server:

# apt-get install libreadline5 libdbi-perl

Get Mysql packages

The mysql packages are part of (a backport from Etch to Sarge). You should download the following:

Mysql installation

Install the Mysql server debian packages:

# dpkg -i libmysqlclient15off_5.0.32-7etch5~bpo31+1_sparc.deb \
  mysql-common_5.0.32-7etch5~bpo31+1_all.deb \
  mysql-server-5.0_5.0.32-7etch5~bpo31+1_sparc.deb \
  mysql-client-5.0_5.0.32-7etch5~bpo31+1_sparc.deb

Installation Problem and Solution

If during the installation process you see a message '/etc/init.d/mysql' not found, force the extraction of the mysql-server files without executing the installation script:

# dpkg --extract mysql-server-5.0_5.0.32-7etch5~bpo31+1_sparc.deb /

After that, redo the dpkg -i command.

If you see errors like the following in the logs:

[ERROR] /usr/sbin/mysqld: Can't find file: './mysql/host.frm' (errno: 13)

fix the ownership of the existing directories:

# chown mysql /var/lib/mysql
# chown mysql /var/lib/mysql/mysql

and restart mysql.

Fix /etc/mysql/debian.cnf

After installation, the debian.cnf file used by mysqlcheck refers to a user that does not exist. You can either create the debian-sys-maint user in MySQL, or edit the file /etc/mysql/debian.cnf and change the user and password entries to use 'root'.

Fix startup scripts

Remove the following two startup scripts; they are not necessary here:

rm /etc/rc2.d/S18mysql-ndb
rm /etc/rc2.d/S17mysql-ndb-mgm

Testing the database

Verify that the Mysql database is running:

nas-D2-24-F2:/var/log# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.0.32-Debian_7etch5~bpo31+1 Debian etch distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

Tuning mysql configuration for the ReadyNAS duo

By Stephane Carrez

After installing the MySQL server on a ReadyNAS Duo, it is necessary to tune the configuration to make the server run well on this small hardware. This article describes a possible configuration for tuning the MySQL server.

Mysql Temporary directory

MySQL uses files in the temporary directory to store temporary tables. Depending on your database and your queries, temporary tables can be quite large. To avoid filling the /tmp partition, the best thing is to use a directory in the /c partition:

tmpdir          = /c/backup/tmp

Make sure the directory exists before starting MySQL:

# mkdir -p /c/backup/tmp

Mysql storage engine

After playing with a reasonably big database and the MyISAM storage engine, it turned out that the MySQL server sometimes crashed and complained about corrupted MyISAM tables. I switched to the InnoDB storage engine, which is better for transactions anyway. Since the ReadyNAS does not have a lot of memory, I used the following configuration:

default_storage_engine = InnoDB
thread_cache_size = 0
innodb_buffer_pool_size = 6M
innodb_thread_concurrency = 1

Other mysql settings

To reduce the resources used by the MySQL server to the minimum, I changed the maximum number of connections to a small number.

key_buffer_size = 16k
sort_buffer_size = 100k
max_connections = 10

I've been using these settings for almost 6 months now; my bacula database contains a table with 5 million rows. Of course you can't expect great performance, but the MySQL server is stable.


Simple mysql database backup for ReadyNAS duo

By Stephane Carrez

With a MySQL database running on the ReadyNAS Duo, it becomes necessary to put a backup of the database in place. This article describes a simple method to automatically back up the MySQL database.

All the actions described here require that you are connected to your ReadyNAS Duo using SSH (see Connecting to a ReadyNAS duo using SSH).

ssh -l root pollux
root@pollux's password:
Last login: Sat Jan  9 12:59:54 2010 from zebulon
Last login: Sat Jan  9 15:34:19 2010 from zebulon on pts/0
Linux nas-D2-24-F2 #1 Fri Mar 20 04:41:57 PDT 2009 padre unknown

Backup Directory Preparation

First, we have to create a protected directory which will contain the backups:

nas-D2-24-F2:# mkdir /c/backup-mysql
nas-D2-24-F2:# chmod 700 /c/backup-mysql

Mysql Backup User

To make the backup, a special user should be used to restrict the rights to the minimum. Basically, the user only needs the SELECT and LOCK TABLES privileges. The database access should be protected with a password.

nas-D2-24-F2:# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.0.32-Debian_7etch5~bpo31+1-log Debian etch distribution
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> grant select, lock tables on *.*
    TO 'dump'@'localhost' identified by 'XXXX';
Query OK, 0 rows affected (0.04 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.02 sec)
mysql> quit

Backup Script

To make the backup, you have to write a simple script which uses mysqldump and compresses the backup file. This script is placed in the /etc/cron.daily directory, so it will be run automatically by the cron daemon each day at 6:25am (look at the /etc/crontab file).

Create the file /etc/cron.daily/backup-mysql and put in the content below.

#!/bin/sh
BKP_DIR=/c/backup-mysql
D=`date --iso-8601`
DB_LIST="bacula mysql"
for i in $DB_LIST; do
  mysqldump --user=dump \
    --password=XXXX \
    --opt $i | gzip -c > $BKP_DIR/$i-$D.sql.gz &&
  chmod 400 $BKP_DIR/$i-$D.sql.gz
done
You have to update the DB_LIST variable with the names of the databases you want to back up.

You have to protect the script because it contains the password of the backup user, and the script must be executable:

nas-D2-24-F2:# cd /etc/cron.daily
nas-D2-24-F2:# chmod 700 backup-mysql

Test the script

It's necessary to execute the script at least once to make sure it backs up what you need.

nas-D2-24-F2:# ./backup-mysql

Then, check that the backup file was created correctly.

Test the database backup or restore

You may also test that the backup SQL file is correct by creating a scratch database and importing the file. You can do this with the following commands:

nas-D2-24-F2:# mysql -u root -p
create database test_backup;

Then decompress the backup file and import it with the mysql command into the new test database (or the real database if you want to restore it).

nas-D2-24-F2:# gzip -c \
-d /c/backup-mysql/xxx.sql.gz | \
mysql --force -u root -p test_backup


This process is simple and very easy to put in place. However, it has some limitations: it is best to make sure no applications are writing to the database while the backup is running, otherwise you may back up inconsistent data.


Fault tolerant EJB interceptor: a solution to optimistic locking errors and other transient faults

By Stephane Carrez

Fault tolerance is often necessary in application servers. The J2EE standard defines an interceptor mechanism that can be used to implement the first steps of fault tolerance. The pattern presented in this article is the solution that I implemented for the Planzone service, where it has been used successfully for the last two years.

Identify the Fault to recover

The first step is to separate the faults that can be recovered from the others. Our application uses MySQL and Hibernate, and we identified the following three transient faults (or recoverable faults).

StaleObjectStateException (Optimistic Locking)

Optimistic locking is a pattern used to optimize database transactions. Instead of locking the database tables and rows when values are updated, we allow other transactions to access these values. Concurrent writes are therefore possible and they must be detected. For this, optimistic locking uses a version counter, a timestamp or state comparison to detect concurrent writes.

When a concurrent write is detected, Hibernate raises a StaleObjectStateException. When such an exception occurs, the state of the objects associated with the current Hibernate session is unknown (see Transactions and Concurrency).

As far as Planzone is concerned, we get 3 such exceptions per 10,000 calls.
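The version-counter detection that Hibernate performs can be sketched outside of Hibernate with plain SQL: the UPDATE carries the version that was read, and an affected-row count of zero means a concurrent write happened first. A minimal illustration using Python's sqlite3 (the table and all names are invented for this example):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account"
           " (id INTEGER PRIMARY KEY, balance INTEGER, version INTEGER)")
db.execute("INSERT INTO account VALUES (1, 100, 0)")

def read_account(db, account_id):
    # Read the value together with its version counter.
    return db.execute("SELECT balance, version FROM account WHERE id = ?",
                      (account_id,)).fetchone()

def write_account(db, account_id, new_balance, version):
    # The UPDATE succeeds only if nobody bumped the version in between.
    cur = db.execute("UPDATE account SET balance = ?, version = version + 1"
                     " WHERE id = ? AND version = ?",
                     (new_balance, account_id, version))
    if cur.rowcount == 0:
        # Concurrent write detected: this is the point where Hibernate
        # raises StaleObjectStateException.
        raise RuntimeError("stale object: concurrent modification")

balance, version = read_account(db, 1)
write_account(db, 1, balance + 50, version)   # succeeds, version is now 1
```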

LockAcquisitionException (Database deadlocks)

On the database side, the server can detect deadlock situations and report an error. When a deadlock is detected between two clients, the server generates an error for one client and the second one can proceed. When such an error is reported, the client can retry the operation (see InnoDB Lock Modes).

As far as Planzone is concerned, we get 1 or 2 such exceptions per 10,000 calls.

JDBCConnectionException (Connection failure)

Sometimes the connection to the database is lost, either because the database server crashed or because it was restarted for maintenance. Server crashes are rare but they do occur: for Planzone, we had 3 crashes during the last 2 years (one crash every 240 days). During the same period we also had to stop and restart the server twice for an upgrade.

Restarting the call after a database connection failure is a little more complex: it is necessary to sleep some time before retrying.

EJB Interceptor

To create our fault-tolerance mechanism we use an EJB interceptor which is invoked for each EJB method call. For this, the interceptor defines a method marked with the @AroundInvoke annotation. Its role is to catch the transient faults and retry the call. The example below retries the call at most 10 times.

The EJB interceptor method receives an InvocationContext parameter which gives access to the target object, the parameters and the method to invoke. The proceed method transfers control to the next interceptor and then to the EJB method. The real implementation is a little more complex due to logging, but the overall idea is here.

class RetryInterceptor {
  @AroundInvoke
  public Object retry(InvocationContext context) throws Exception {
    for (int retry = 0; ; retry++) {
      try {
        return context.proceed();
      } catch (LockAcquisitionException ex) {
        if (retry > 10) {
          throw ex;
        }
      } catch (StaleObjectStateException ex) {
        if (retry > 10) {
          throw ex;
        }
      } catch (final JDBCConnectionException ex) {
        if (retry > 10) {
          throw ex;
        }
        Thread.sleep(500L + retry * 1000L);
      }
    }
  }
}

EJB Interface

For the purpose of this article, the EJB interface is declared as follows. Our choice was to define an ILocal and an IRemote interface to allow the creation of local and remote services.

public interface Service {
    interface ILocal extends Service {
    }
    interface IRemote extends Service {
    }
}

EJB Declaration

The interceptor is associated with the EJB implementation class by using the @Interceptors annotation. The same interceptor class can be associated with several EJBs.

@Stateless(name = "Service")
@Interceptors(RetryInterceptor.class)
public class ServiceBean
  implements Service.ILocal, Service.IRemote {
}


To test the solution, I recommend writing a unit test. The unit test I wrote does the following:

• A first thread executes the EJB method call.
• The transaction commit operation is overridden by the unit test.
• When the commit is called, a second thread is activated to simulate a concurrent call before committing.
• The second thread performs the EJB method call in such a way that it will trigger the StaleObjectStateException when the first thread resumes.
• When the second thread finishes, the first thread performs the real commit and the StaleObjectStateException is raised by Hibernate because the object was modified.
• The interceptor catches the exception and retries the call, which then succeeds.

The full design of such a test is outside the scope of this article. It is also specific to each application.
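The retry loop itself can also be exercised without a container: wrap a call that raises a transient error a fixed number of times and check that the wrapper eventually succeeds. A small language-neutral sketch of that idea in Python (all names are invented; the real test drives Hibernate and two threads as described above):

```python
class TransientError(Exception):
    """Stands in for StaleObjectStateException and friends."""

def retry(call, max_retry=10):
    # Mirror of the interceptor loop: retry transient faults and
    # rethrow when the fault persists beyond max_retry attempts.
    for attempt in range(max_retry + 1):
        try:
            return call()
        except TransientError:
            if attempt >= max_retry:
                raise

def flaky(failures):
    # A callable that fails 'failures' times, then succeeds, playing
    # the role of the concurrently modified EJB call.
    state = {"left": failures}
    def call():
        if state["left"] > 0:
            state["left"] -= 1
            raise TransientError()
        return "committed"
    return call

print(retry(flaky(3)))   # three transient faults, then success
```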


Audit errors reported by linux kernel - why you must care

By Stephane Carrez

Today I had to migrate the MySQL storage to another partition because the /var partition was not large enough and the database was growing. After moving the files and updating the MySQL configuration to point to the new partition, MySQL refused to start: it claimed it had no permission to access the directory. Yet the directory was owned by mysql, which had all the rights to write its files. What was going on?

After looking at the kernel logs, I saw this kind of message:

[173919.699270] audit(1229883052.863:39): type=1503 operation="inode_create" requested_mask="w::" denied_mask="w::" name="/data/var/mysql" pid=21625 profile="/usr/sbin/mysqld" namespace="default"

This kernel log is produced by the AppArmor kernel extension, which restricts programs' access to resources. Indeed, it tells us that /usr/sbin/mysqld is not allowed to access /data/var/mysql. To fix the problem, you have to change the AppArmor configuration by editing the file /etc/apparmor.d/usr.sbin.mysqld.

 # vim:syntax=apparmor
 # Last Modified: Tue Jun 19 17:37:30 2007
 #include <tunables/global>
 /usr/sbin/mysqld {
  #include <abstractions/base>
  #include <abstractions/nameservice>
  #include <abstractions/user-tmp>
  #include <abstractions/mysql>
  capability dac_override,
  capability setgid,
  capability setuid,
  /etc/hosts.allow r,
  /etc/hosts.deny r,
  /etc/group              m,
  /etc/passwd             m,
  /etc/mysql/*.pem r,
  /etc/mysql/conf.d/ r,
  /etc/mysql/conf.d/* r,
  /etc/mysql/my.cnf r,
  /usr/sbin/mysqld mr,
  /usr/share/mysql/** r,
  /var/lib/mysql/ r,          # Must be updated
  /var/lib/mysql/** rwk,      # Must be updated
  /var/log/mysql/ r,
  /var/log/mysql/* rw,
  /var/run/mysqld/ w,
  /var/run/mysqld/mysqld.sock w,
 }

The two lines must be fixed to point to the new directory; in this example:

 /data/var/mysql/ r,
 /data/var/mysql/** rw,

After changing the file, you must restart AppArmor:

$ sudo /etc/init.d/apparmor restart

After the fix, the MySQL server was able to start again and the audit error was no longer reported.
