Prevent GParted probe starting LVM Volume Groups (#259)

A user reported that it was not possible to deactivate an active LVM
Physical Volume.  They were using the GParted application from GParted
Live 1.6.0.  This behaviour has been reproduced on GParted Live 1.6.0-3
(Debian SID as of 2024-04-08) and on Fedora Rawhide as of 2024-08-15,
both recent development branches of their respective distributions.  It
could not be replicated on the latest stable releases: Debian 12,
Fedora 40 and Ubuntu 24.04 LTS.

GParted did deactivate the LVM Volume Group containing the Physical
Volume but device refresh probing triggered a udev rule which
re-activated the Volume Group.  Summary:

1. GParted read the partition table by calling ped_disk_new().
2. Libparted opened the whole disk and every partition read-write to
   flush the caches for coherency.  On closing the file handles ...
3. The kernel generated partition remove and add uevents.
4. Udev triggered the rule to start the LVM VG.

Details obtained with the help of udevadm monitor, strace and
journalctl:
    GParted  | set_devices_thread()
    GParted  |   set_device_from_disk()
    GParted  |     get_device()
    GParted  |     get_disk()
    libparted|       ped_disk_new()
    libparted|         openat(AT_FDCWD, "/dev/sdb", O_RDWR) = 7
    libparted|         ioctl(7, BLKFLSBUF)         = 0
    libparted|         openat(AT_FDCWD, "/dev/sdb1", O_RDWR) = 8
    libparted|         ioctl(8, BLKFLSBUF)         = 0
    libparted|         fsync(8)                    = 0
    libparted|         close(8)                    = 0
    KERNEL   | change   /devices/pci0000:00/.../block/sdb/sdb1 (block)
    UDEV     | change   /devices/pci0000:00/.../block/sdb/sdb1 (block)
    libparted|         fsync(7)                    = 0
    libparted|         close(7)                    = 0
    KERNEL   | remove   /devices/pci0000:00/.../block/sdb/sdb1 (block)
    KERNEL   | change   /devices/pci0000:00/.../block/sdb (block)
    KERNEL   | add      /devices/pci0000:00/.../block/sdb/sdb1 (block)
    UDEV     | remove   /devices/pci0000:00/.../block/sdb/sdb1 (block)
    UDEV     | change   /devices/pci0000:00/.../block/sdb (block)
    UDEV     | add      /devices/pci0000:00/.../block/sdb/sdb1 (block)
    SYSLOG   | lvm[62502]: PV /dev/sdb1 online, VG testvg is complete.
    KERNEL   | add      /devices/virtual/bdi/253:0 (block)
    KERNEL   | add      /devices/virtual/block/dm-0 (block)
    KERNEL   | change   /devices/virtual/block/dm-0 (block)
    UDEV     | add      /devices/virtual/bdi/253:0 (block)
    UDEV     | add      /devices/virtual/block/dm-0 (block)
    UDEV     | change   /devices/virtual/block/dm-0 (block)
    SYSLOG   | systemd[1]: Started lvm-activate-testvg.service - /usr/sbin/lvm vgchange -aay --autoactivation event testvg.
    SYSLOG   | lvm[62504]:   1 logical volume(s) in volume group "testvg" now active
    SYSLOG   | systemd[1]: lvm-activate-testvg.service: Deactivated successfully.
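
For reference, a trace like the above can be captured with commands
along these lines (a sketch only; the gpartedbin process name is an
assumption about how the gparted wrapper script starts the executable):
    # Terminal 1: watch kernel and udev block device uevents
    udevadm monitor --kernel --udev --subsystem-match=block
    # Terminal 2: trace libparted's device accesses inside GParted
    strace -f -e trace=openat,ioctl,fsync,close -p "$(pidof gpartedbin)"
    # Terminal 3: follow the journal for the lvm and systemd messages
    journalctl -f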

    # grep 'lvm vgchange -a' /usr/lib/udev/rules.d/*lvm*
    /usr/lib/udev/rules.d/69-dm-lvm.rules:... RUN+="... /usr/sbin/lvm vgchange -aay --autoactivation event $env{LVM_VG_NAME_COMPLETE}"

An evaluation was made using systemd's Locking Block Device Access [1]
by taking a BSD file lock on /dev/sdb while GParted was probing the
drive.  Python was used from another terminal:
    # python
    >>> import fcntl
    >>> f = open('/dev/sdb', 'wb')
    >>> fcntl.flock(f, fcntl.LOCK_EX|fcntl.LOCK_NB)
Ran GParted.  Released the lock by closing the open file once the
GParted display had updated:
    >>> f.close()
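
An equivalent lock can be taken with flock(1) from util-linux (shown
purely as an illustration of the same technique):
    # Hold an exclusive, non-blocking BSD lock on the whole disk for 60
    # seconds, e.g. while GParted refreshes its device view
    flock --exclusive --nonblock /dev/sdb sleep 60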

The lock temporarily stopped the Volume Group from being activated, so
GParted displayed it as inactive, but as soon as the lock was released
the udev rule fired and the Volume Group was activated.  This is an
even worse situation, as GParted displayed the Volume Group as inactive
when it was actually active.  Therefore GParted can't use this method.

This type of issue has been encountered before with bcache devices [2]
and Linux Software RAID arrays [3] being automatically started by
device probing.  Fix it using the same method: temporarily add a blank
override rule which does nothing.
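
The principle, in isolation (a sketch only; the real change is in the
diff below and the rule file name comes from the grep output above):
    # A file of the same name in /run/udev/rules.d overrides the packaged
    # rule in /usr/lib/udev/rules.d, and an empty file disables it.
    rule=/run/udev/rules.d/69-dm-lvm.rules
    touch "$rule"    # blank override: LVM autoactivation rule does nothing
    # ... refresh / operate on devices in GParted while the override exists ...
    rm -f "$rule"    # restore normal autoactivation behaviour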

[1] systemd - Locking Block Device Access
    https://systemd.io/BLOCK_DEVICE_LOCKING/
[2] 8640f91a4f
    Prevent GParted probe starting stopped bcache (#183)
[3] a255abf343
    Prevent GParted starting stopped Linux Software RAID arrays (#709640)

Closes #259 - Trying to deactivate LVM PV fails
Author: Mike Fleetwood  2024-08-19 21:58:26 +01:00
Parent: 3d1b2921a6
Commit: eb04be7b03
1 file changed, 4 insertions(+), 2 deletions(-)

@@ -180,7 +180,7 @@ fi
 #
 # Create temporary blank overrides for all udev rules which automatically
-# start Linux Software RAID array members and Bcache devices.
+# start Linux Software RAID array members, LVM Volume Groups and Bcache devices.
 #
 # Udev stores volatile / temporary runtime rules in directory /run/udev/rules.d.
 # Volatile / temporary rules are used to override system default rules from
@@ -199,6 +199,7 @@ if test -d /run/udev; then
 	do
 		test -d $udev_default_rules_dir || continue
 		egrep -l '^[^#].*mdadm (-I|--incremental)' $udev_default_rules_dir/*.rules 2> /dev/null
+		egrep -l 'lvm vgchange -a' $udev_default_rules_dir/*lvm*.rules 2> /dev/null
 		ls $udev_default_rules_dir/*bcache*.rules 2> /dev/null
 	done | sed 's,.*/lib/udev,/run/udev,g' | sort -u`
 fi
@@ -232,7 +233,8 @@ status=$?
 #
 # Clear any temporary override udev rules used to stop udev automatically
-# starting Linux Software RAID array members and Bcache devices.
+# starting Linux Software RAID array members, LVM Volume Groups and Bcache
+# devices.
 #
 for rule in $UDEV_TEMP_RULES; do
 	rm -f "$rule"