Logical Volume Management (LVM)

Overview

This document contains brief reminder tips on using LVM2. To find out more about LVM, refer to the excellent guide at http://tldp.org/HOWTO/LVM-HOWTO/index.html.

Check the LVM2 FAQ before using LVM2, as some features do not work with the 2.6 kernel (as of LVM version 2.00.08). In particular, check the status of the following:

  • Snapshots on LVM1 don't work with kernel 2.6, and attempting to use them can severely damage your system.
  • Don't adjust the size of mounted volumes.

Also note that you cannot mount ext3 filesystem snapshots with 2.4 kernels and LVM 1.0.8; ext2 file systems are OK.

Initial Setup

Some distributions include support for LVM out of the box. Also, you should seriously consider running LVM2 on top of RAID 1. See the 'Partitioning RAID / LVM on RAID' chapter of the RAID HOWTO.

If you're going to put your root partition on LVM, you'll also need a good rescue CD that supports LVM2. For example, the 'rescue' mode of the Mandrake distribution CDs does not support RAID or LVM. If memory serves, Knoppix 3.4 comes with support for LVM1, not LVM2. It's fairly easy to remaster a Knoppix CD and install LVM2 instead.

If you're building a new system from scratch, I'd experiment with installing LVM2 on RAID and completely restoring the system from a tar archive using your rescue CD. It's far easier to sort out failures on a small test installation before doing the real thing.

Distro         LVM    Software RAID   Root on LVM
Mandrake 9.2   LVM1   yes             yes
Mandrake 10    LVM2   yes             yes

Note: I get errors upgrading 2.6 kernels on Mandrake 10.0 and have to complete the upgrade manually, e.g.:

installing /var/cache/urpmi/rpms/kernel-2.6.3.19mdk-1-1mdk.i586.rpm
Preparing...                ##################################################
   1:kernel-2.6.3.19mdk     ##################################################
cp: cannot stat `(0xffffe000)': No such file or directory
cp: cannot stat `(0xffffe000)': No such file or directory

This seems to result in the symbolic links for System.map and config not being updated to point at the new kernel. Manually updating them and re-running lilo fixes it.
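
A sketch of the manual fix, assuming the symlinks live in /boot and the kernel version from the log above (the exact file names on your system may differ, so check /boot first):

```shell
# Hypothetical repair sketch - adjust names to match your /boot contents.
cd /boot
ls System.map-* config-*                  # find the new kernel's files
ln -sf System.map-2.6.3-19mdk System.map  # re-point the symlinks
ln -sf config-2.6.3-19mdk config
lilo                                      # re-run the boot loader
```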

Extending an existing volume

This example increases an existing volume by 10G.

Note: You'll need to do something like 'telinit 1' (which switches the server to single-user mode, shutting down all services and disallowing other logins) from the root console to be able to unmount active volumes like '/var'. If you're accessing the server remotely, you don't want to run 'telinit 1'; in that case, you'll need to manually stop all the services that are using /var. A longer-term solution is to maintain /etc/rc2.d/ to run only the minimal services required, including sshd. See also CreatingRescuePartition for details of how to maintain two separately bootable versions of the operating system.

  1. umount /var
  2. lvextend -L+10G /dev/vg0/var
  3. e2fsck -f /dev/vg0/var
  4. resize2fs /dev/vg0/var
  5. mount /var
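
Before running the steps above, it's worth confirming that the volume group actually has 10G of free extents; afterwards, df should show the larger filesystem. A quick sanity check, assuming the same vg0/var names used above:

```shell
# Check free space in the volume group before extending:
vgdisplay vg0 | grep -i free   # 'Free  PE / Size' must cover the +10G
# ...run steps 1-5 above, then verify the result:
df -h /var                     # should now show roughly 10G more space
```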

Reducing an existing volume

This example reduces the volume to a specified size of 1G. Back up your data beforehand!

  1. umount /var
  2. e2fsck -f /dev/vg0/var
  3. resize2fs /dev/vg0/var 1G
  4. lvreduce /dev/vg0/var -L1G
  5. mount /var
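
Before shrinking, make sure the data actually fits within the target size; resize2fs should refuse to shrink below the used space, but it's safer to check first. A sketch, assuming the same vg0/var names:

```shell
# Used space on /var must be comfortably below the new 1G size:
df -h /var
# The filesystem's own view of its size and free blocks:
dumpe2fs -h /dev/vg0/var | grep -i 'block count'
```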

Snapshots

With LVM1 I've had a couple of occasions where rebooting with active snapshots screwed things up rather badly. I can't remember what I was doing on the previous occasion, but the most recent was writing a grub boot sector to the second disk in a RAID 1 array, which probably caused some re-mirroring. On rebooting, vgchange failed to activate the volumes, reporting an error including 'lvmsnapshotfillCOWpage'. A Google search suggests that LVM1 snapshots are tied to physical devices, so this error might also occur where a RAID 1 drive is swapped.

This was rectified by removing the snapshots. If you've got LVM version 1.0.3 or greater, the snapshots can be removed with:

  • vgscan -r [VgNameToRemoveSnapshotsFrom]

I hate to think what happens if you've got an older version...

LVM2 Snapshots

With the 2.6 kernel, you may need to load the snapshot module before you can create the snapshot:

  • modprobe dm-snapshot
  • lvcreate -L1G -s -n snap /dev/vg0/database
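
Once created, the snapshot behaves like a point-in-time copy of the original volume. A hypothetical usage sketch (the mount point and backup path are assumptions, the device names follow the lvcreate example above):

```shell
mkdir -p /mnt/snap
mount -o ro /dev/vg0/snap /mnt/snap              # mount the snapshot read-only
tar czf /backup/database.tar.gz -C /mnt/snap .   # back up the frozen copy
umount /mnt/snap
lvremove /dev/vg0/snap                           # remove the snapshot when done
```

Removing the snapshot when you've finished with it avoids the reboot problems described above and stops the copy-on-write area from filling up.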

Version Information

  • cat /proc/lvm/global (LVM1)
  • lvm version (LVM2)

Removing a Physical Volume

Disable the volume from being used with:

  • pvchange -x n /dev/hda3

Move all allocated extents off the volume with:

  • pvmove /dev/hda3

Remove the physical volume from the volume group. You can test it first:

  • vgreduce -A y -t vg0 /dev/hda3

Then really do it:

  • vgreduce -A y vg0 /dev/hda3
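
Before wiping the label, you can confirm that pvmove and vgreduce really left the volume empty (a sketch using the same /dev/hda3 example):

```shell
pvdisplay /dev/hda3   # 'Allocated PE' should now be 0
pvs                   # /dev/hda3 should no longer belong to vg0
```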

Then finally wipe the label so that LVM will no longer recognise it:

  • pvremove /dev/hda3

Debian Notes

  • lvm.conf: md_component_detection should be set to 1, not 0

  • lvm.conf filters: ensure the CD-ROM drive is correctly excluded and that you are not unintentionally excluding a hard disk.

  • pvscan -P to check what LVM2 actually sees.

      • Frank Dean 13-Dec-2004

Starting from a Rescue Disk

If you're starting a system that contains logical volumes from a rescue disk, e.g. Knoppix, you may need to bring up LVM manually with the following commands:

# vgscan
# vgchange -ay

Note: If you're using RAID, start the RAID array first. See LinuxSoftwareRaid.

-- Frank Dean - 30 Mar 2010

Troubleshooting

pvmove fails with 'mirror: Required device-mapper target(s) not detected in your kernel'

Try

  • modprobe dm-mirror

Further Reading

Debian Root on LVM on RAID


Related Topics: LinuxSoftwareRaid, CreatingRescuePartition


-- Frank Dean - 18 Oct 2004