
Extending an ext4 LVM partition

By Stephane Carrez

From time to time a disk partition fills up and it becomes necessary to grow it. Since I often forget how to do this, I wrote this short note to keep track of the steps.

Extending the LVM partition

The first step is to extend the size of the LVM logical volume with the lvextend(8) command. You only need to specify the amount and the LVM volume. In my case, the vg02-ext volume was using 60G and the +40G argument grows it to 100G. Note that you can run this command while the file system is mounted, as long as you are growing the volume.

$ sudo lvextend --size +40G /dev/mapper/vg02-ext
  Extending logical volume ext to 100.00 GiB
  Logical volume ext successfully resized

Preparing the ext4 filesystem for resize

Before resizing the ext4 file system, make sure it is not mounted:

$ sudo umount /ext

The file system must be clean, so run the e2fsck(8) command to check it:

$ sudo e2fsck -f /dev/mapper/vg02-ext
e2fsck 1.42 (29-Nov-2011)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg02-ext: 1974269/3932160 files (0.1% non-contiguous), 14942044/15728640 blocks
5.392u 1.476s 0:48.25 14.2% 0+0k 3208184+48io 2pf+0w

Resizing the ext4 filesystem

The last step is to resize the ext4 file system with the resize2fs(8) command, which can enlarge or shrink an unmounted ext2, ext3 or ext4 file system. When no size is given, it grows the file system to fill the underlying block device.

$ sudo resize2fs /dev/mapper/vg02-ext
resize2fs 1.42 (29-Nov-2011)
Resizing the filesystem on /dev/mapper/vg02-ext to 26214400 (4k) blocks.
The filesystem on /dev/mapper/vg02-ext is now 26214400 blocks long.

After the resize, we can re-mount the ext4 partition:

$ sudo mount -a
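On recent LVM releases the whole sequence can be collapsed into one step: lvextend accepts a --resizefs (-r) option that runs fsadm(8) to grow the file system after extending the volume, so an ext4 file system can even stay mounted while growing. A sketch against the same volume:

```shell
# Grow the logical volume and its ext4 file system in one step.
# --resizefs calls fsadm(8), which runs resize2fs for ext4; the
# file system may remain mounted during an online grow.
sudo lvextend --resizefs --size +40G /dev/mapper/vg02-ext
```

This avoids the umount/e2fsck/resize2fs dance, at the cost of less control over each step.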

How to expand disk capacity on a ReadyNAS Duo from 1 TB to 2 TB

By Stephane Carrez

This article describes the process to increase the disk capacity of a ReadyNAS Duo configuration from 1 TB to 2 TB. In my case, my X-RAID configuration was broken due to a faulty disk, so I took the opportunity to repair the redundancy and to increase the capacity. The process is simple but very long: it took me 4 days, several reboots and many disk synchronisations.

To replace the 1TB disks, I bought two Seagate ST2000DL003-9VT166 hard disks which offer 2TB (they are referenced in the hardware compatibility list). I then followed this process:

  1. Upgrade to the latest RAIDiator firmware (4.1.7)
  2. Replace a first disk by the new larger disk (in my case the faulty disk)
  3. Wait until the disks are fully synchronized (Status should be Redundant)
  4. Shutdown properly and restart the ReadyNAS
  5. Make sure the disks are fully synchronized. If not, wait for synchronization to finish.
  6. Replace the second disk by the larger disk
  7. Wait until the disks are fully synchronized (Status should be Redundant)
  8. Shutdown properly and restart the ReadyNAS
  9. After the reboot, the ReadyNAS triggers a disk expand
  10. Another reboot is necessary after which ReadyNAS triggers the file system expansion

ReadyNAS Disk Expansion

The disk expansion happens at the very end and is fairly quick. Before the disk expansion, and when the new disks are installed, you will see that the disk partition table has not changed. The fdisk /dev/hdc command reports:

Device Boot   Start      End      Blocks   Id  System
/dev/hdc1         1      255     2048000   83  Linux
/dev/hdc2       255      287      256000   82  Linux swap
/dev/hdc3       287   121575   974242116    5  Extended
/dev/hdc5       287   121575   974242115+  8e  Linux LVM

Since ReadyNAS uses LVM to manage the disks, you can use pvdisplay to look at the available space. At this stage, everything is used.

nas-D2-24-F2:/var/log/frontview# pvdisplay
  --- Physical volume ---
  PV Name           /dev/hdc5
  VG Name           c
  PV Size           929.09 GB / not usable 0   
  Allocatable       yes (but full)
  PE Size (KByte)   32768
  Total PE          29731
  Free PE           0
  Allocated PE      29731
  PV UUID           huL1xb-0v0O-vJ6K-LqaK-P4kf-q4Wm-SFeYCX

After the reboot, the ReadyNAS will start the disk expand process. It will do this only if the two disks are redundant. After the expand, the partition table looks as follows:

Device Boot    Start       End      Blocks   Id  System
/dev/hdc1        1       255     2048000   83  Linux
/dev/hdc2      255       287      256000   82  Linux swap
/dev/hdc3      287    243201  1951200343    5  Extended
/dev/hdc5      287    121575   974242115+  8e  Linux LVM
/dev/hdc6   121575    243200   976950032   8e  Linux LVM

Once the partition table is fixed, you are asked to reboot:

The first stage of the in-place volume expansion is done.
Please reboot the device to complete the volume expansion.

After the reboot, the LVM volumes are increased. You can check with pvdisplay, which now reports the new disk partition, and with lvdisplay, which now takes both physical volumes into account.

nas-D2-24-F2:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/hdc5
  VG Name               c
  PV Size               929.09 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              29731
  Free PE               0
  Allocated PE          29731
  PV UUID               huL1xb-0v0O-vJ6K-LqaK-P4kf-q4Wm-SFeYCX
   
  --- Physical volume ---
  PV Name               /dev/hdc6
  VG Name               c
  PV Size               931.69 GB / not usable 0   
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              29814
  Free PE               0
  Allocated PE          29814
  PV UUID               TOqmR2-fYOq-jf0q-n1ka-N9K1-B2CB-oyU23Y

nas-D2-24-F2:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/c/c
  VG Name                c
  LV UUID                2CzUXf-uzSD-DGcS-KePF-6elz-XveS-xePwHf
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                1.82 TB
  Current LE             59545
  Segments               2
  Allocation             inherit
  Read ahead sectors     0
  Block device           254:0

The last step is now to resize the file system. The ReadyNAS reports the following after the LVM volume is expanded:

Your system will now begin online expansion. 
Please do not reboot until you receive notification that the expansion is complete.

While the expansion is in progress, you can see that the ReadyNAS uses resize2fs to grow the file system; looking at the running processes shows:

root 1371  ?  S    21:41   0:00 /bin/bash /frontview/bin/expand_online
root 1537  ?  Ss   21:41   0:00 /frontview/bin/blink_expand
root 1538  ?  S    21:41   0:42 resize2fs -pf /dev/c/c

When the expansion completes, the ReadyNAS sends a notification:

Data volume has been successfully expanded to 1853 GB.

The new capacity can be verified with df:

nas-D2-24-F2:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdc1             1.9G  590M  1.3G  30% /
tmpfs                  16k     0   16k   0% /USB
/dev/c/c              1.8T  888G  966G  48% /c

Problems and hints

  • If at some point the ReadyNAS starts a re-synchronization after a reboot even though the disks are already synchronized, check that the disks are in good health. Look at the /etc/rc3.d directory and make sure the rc3 script is called only once through the Sxxx symbolic links (see Frontview shows 100% disk usage).
  • If you suspect something is wrong, use ssh to connect to the ReadyNAS and look at /var/log/messages or /var/log/kern.log to check for hardware issues.
  • Check the file /etc/frontview/raid.conf and verify that the two lines are similar and reference your new disk (this file is rebuilt after each reboot).
  • Look at the /proc/xraid/configuration file. It provides a lot of information about the current X-RAID status and the synchronisation process.
  • As a last resort, read the /etc/hotplug/sata.agent script, which contains the details of the resynchronisation and expansion process.
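The checks from these hints can be run directly over ssh on the NAS; a minimal sketch using the paths mentioned above:

```shell
# Quick X-RAID health check on the ReadyNAS (run as root over ssh).
cat /etc/frontview/raid.conf      # both lines should reference the new disk
cat /proc/xraid/configuration     # current X-RAID status and sync progress
tail -n 50 /var/log/kern.log      # look for recent hardware errors
```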

Installing an SSD device on Ubuntu

By Stephane Carrez

This article explains the steps for the installation of an SSD device on an existing Ubuntu desktop PC.

Disk Performances

First of all, let's have a look at the disk read performance with the hdparm utility. The desktop PC has three disks, /dev/sda being the new SSD device (an OCZ Vertex 2 SATA II 3.5" SSD).

$ sudo -i hdparm -t /dev/sda /dev/sdb /dev/sdc

The three disks have the following performance:

sda: OCZ-VERTEX2 3.5        229.47 MB/sec
sdb: WDC WD3000GLFS-01F8U0  122.29 MB/sec
sdc: ST3200822A             59.23 MB/sec

The SSD device appears to be about twice as fast as the 10000 rpm disk.

Plan for the move

The first step is to plan for the move and define what files should be located on the SSD device.

Identify files used frequently

To benefit from the high read performance, files used frequently could be moved to the SSD device. To identify them, you can use the find command with the -amin option, which matches files accessed within the given number of minutes (it will not work if the file system is mounted with noatime). To find the files that were accessed during the last 24 hours, you may use the following command:

$ sudo find /home -amin -1440

In most cases, files accessed frequently are the system files (in /bin, /etc, /lib, ..., /usr/bin, /usr/lib, /usr/share, ...) and users' files located in /home.

Identify Files that change frequently

Some people argue that files modified frequently should not be located on an SSD device (write endurance and write performance).

On a Linux system, the system files that change on a regular basis are generally grouped together in the /var directory. Some configuration files are modified by system daemons while they are running. The list of system directories that change can be limited to:

/etc    (cups/printers.conf.0, mtab,  lvm/cache, resolv.conf, ...)
/var    (log/*, cache/*, tmp/*, lib/*...)
/boot   (grub/grubenv modified after booting)

Temporary Files

On Linux, temporary files are stored in one of the following directories (several KDE applications save per-user temporary files in the .kde/tmp-$host directory). These temporary files could be moved to a RAM file system.

/tmp
/var/tmp
/home/$user/.kde/tmp-$host
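As an illustration, /tmp can be mounted as a RAM file system with an fstab entry like the following (the size=512m limit is an arbitrary example, not a value from the article):

```
tmpfs   /tmp   tmpfs   defaults,noatime,size=512m   0 0
```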

Move plan

The final plan was to create one partition for the root file system and LVM partitions for the /usr, /var and /home directories, plus a swap volume.

Partition the drive

The drive must be partitioned with fdisk. I created one bootable partition and a second partition with the remaining space.

$ sudo fdisk -l /dev/sda

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00070355

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1295    10402056   83  Linux
/dev/sda2            1296       14593   106816185   83  Linux

To ease future management of the partitions, it is useful to use LVM and create a volume group on the second partition.

$ sudo vgcreate vg01 /dev/sda2
  Volume group "vg01" successfully created

The logical volumes are then created with lvcreate. More space can be allocated to them later with the lvextend utility.

$ sudo lvcreate -L 10G -n sys vg01
  Logical volume "sys" created
$ sudo lvcreate -L 10G -n var vg01
  Logical volume "var" created
$ sudo lvcreate -L 4G -n swap vg01
  Logical volume "swap" created
$ sudo lvcreate -L 60G -n home vg01
  Logical volume "home" created

The LVM partitions are available through the device mapper and they can be accessed by their name:

$ ls -l /dev/vg01/
total 0
lrwxrwxrwx 1 root root 19 2011-02-20 14:03 home -> ../mapper/vg01-home
lrwxrwxrwx 1 root root 19 2011-02-20 14:03 swap -> ../mapper/vg01-swap
lrwxrwxrwx 1 root root 18 2011-02-20 14:03 sys -> ../mapper/vg01-sys
lrwxrwxrwx 1 root root 18 2011-02-20 14:03 var -> ../mapper/vg01-var

Format the partition

Format the file system as ext4, which integrates improvements useful on SSD storage (extents, delayed allocation). Other file systems will work well too.

$ sudo mkfs -t ext4 /dev/vg01/sys
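The other logical volumes are formatted the same way; the swap volume gets mkswap instead of a file system. A sketch (the volume names come from the lvcreate step above):

```shell
# Format the remaining logical volumes created earlier.
sudo mkfs -t ext4 /dev/vg01/var
sudo mkfs -t ext4 /dev/vg01/home
# The swap volume is initialized with mkswap, not a file system.
sudo mkswap /dev/vg01/swap
```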

Move the files

To move files from one file system to another, it is safer to use the tar command instead of a simple cp: tar preserves ownership, permissions, hard links and special files, while not all cp implementations handle special files correctly.

$ sudo mount /dev/vg01/sys /target
$ sudo -i
# cd /usr
# tar --one-file-system -cf - . | (cd /target; tar xf -)

If the file system to move is already located on an LVM partition, it is easier and safer to use the pvmove utility to move the physical extents from one physical volume to another.
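For example, assuming the file system sits on a physical volume of the same volume group on the old disk (the /dev/sdb1 name below is hypothetical, not from the article), the extents can be migrated online:

```shell
# Move all allocated extents from the old disk to the SSD partition.
# Both physical volumes must belong to the same volume group.
sudo pvmove /dev/sdb1 /dev/sda2
# Once the old physical volume is empty, drop it from the group.
sudo vgreduce vg01 /dev/sdb1
```

pvmove works while the logical volumes are mounted, which is why it is the safer route when both disks are already under LVM.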

Change the mount point

Edit the /etc/fstab file and change the old mount points to the new ones. The noatime mount option tells the kernel to avoid updating a file's access time when it is read, which reduces writes to the SSD.

/dev/vg01/sys  /usr  ext4 noatime  0 2
/dev/vg01/home /home ext4 noatime  0 2
/dev/vg01/var  /var  ext4 noatime  0 2

Tune the IO Scheduler

For the SSD drive, it is best to avoid the request reordering done by the default IO scheduler: activate the noop IO scheduler for the SSD, while the other disks keep the default scheduler. Add the following lines to the /etc/rc.local file:

test -f /sys/block/sda/queue/scheduler &&
  echo noop > /sys/block/sda/queue/scheduler
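On systems with udev, an alternative to rc.local is a rule that applies the noop scheduler to every non-rotational disk; a sketch (the rule file name is an arbitrary choice):

```
# /etc/udev/rules.d/60-ssd-scheduler.rules (hypothetical file name)
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
```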

References

LVM

ext4

http://www.ocztechnologyforum.com/forum/showthread.php?54379-Linux-Tips-tweaks-and-alignment

http://www.storagesearch.com/ssdmyths-endurance.html
