Prevent GParted probe starting stopped bcache (#183)

From the setup in the previous commit, unregister (stop) all of the
bcache backing and cache devices.
    # bcache unregister /dev/sdb2
    # bcache unregister /dev/sdb1
    # bcache unregister /dev/sdc1
    # bcache show
    Name        Type        State            Bname       AttachToDev
    /dev/sdb2   1 (data)    inactive         Non-Exist   Alone
    /dev/sdb1   1 (data)    inactive         Non-Exist   Alone
    /dev/sdc1   3 (cache)   inactive         N/A         N/A
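
For reference, the same stop can be performed without the bcache command
through the kernel's sysfs interface; a sketch for one of the backing
devices above (writing 1 to its stop attribute unregisters it):
    # echo 1 > /sys/block/sdb/sdb2/bcache/stop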

Run GParted.  Just the device scanning causes the stopped bcache devices
to be restarted.
    # bcache show
    Name        Type        State            Bname       AttachToDev
    /dev/sdb2   1 (data)    clean(running)   bcache1     /dev/sdc1
    /dev/sdb1   1 (data)    clean(running)   bcache0     /dev/sdc1
    /dev/sdc1   3 (cache)   active           N/A         N/A

This is nothing new with this patchset, but rather a result of existing
udev behaviour.  The chain of events goes like this:

1. GParted calls ped_device_get() on each whole device;
2. Libparted opens each partition for writing to flush the cache;
3. When each is closed the kernel emits a block change event;
4. Udev fires block rules to detect the possibly changed content;
5. Udev fires bcache register (AKA start) rule.

More details with the help of udevadm monitor, strace and syslog:
    GParted  | set_devices_thread()
    GParted  |   ped_device_get("/dev/sdb")
    Libparted|     ...
    Libparted|     openat(AT_FDCWD, "/dev/sdb1", O_WRONLY) = 9
    Libparted|     ioctl(9, BLKFLSBUF)        = 0
    Libparted|     close(9)
    KERNEL   | change   /devices/.../block/sdb/sdb1 (block)
    KERNEL   | add      /devices/virtual/bdi/250:0 (bdi)
    KERNEL   | add      /devices/virtual/block/bcache0 (block)
    KERNEL   | change   /devices/virtual/block/bcache0 (block)
    UDEV     | change   /devices/.../block/sdb/sdb1 (block)
    UDEV     | add      /devices/virtual/bdi/250:0 (bdi)
    UDEV     | add      /devices/virtual/block/bcache0 (block)
    UDEV     | change   /devices/virtual/block/bcache0 (block)
    SYSLOG   | kernel: bcache: register_bdev() registered backing device sdb1

    # grep bcache-register /lib/udev/rules.d/69-bcache.rules
    RUN+="bcache-register $tempnode"
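
Because that rule runs on any change uevent for the partition, the restart
can be reproduced without GParted at all: opening a stopped backing device
for writing and closing it again makes udev's inotify watch emit the same
change event and replay steps 3-5 above.  A rough sketch, assuming the
stopped /dev/sdb1 from the setup above and the usual udev block-device
watch rules (the dd copies nothing and does not modify the device; it only
opens /dev/sdb1 for writing and closes it):
    # dd if=/dev/null of=/dev/sdb1
    # bcache show
bcache show should then report /dev/sdb1 as registered and running again.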

Fix this by temporarily adding a blank udev override rule to suppress
automatic starting of bcache devices, just as was previously done for
Linux Software RAID arrays [1].

[1] a255abf343
    Prevent GParted starting stopped Linux Software RAID arrays (#709640)
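
The same masking can be done by hand for a one-off run; a sketch assuming
the packaged rule is /lib/udev/rules.d/69-bcache.rules as in the grep
above.  A blank file of the same name in /run/udev/rules.d takes
precedence over the packaged copy, so bcache-register is never run while
the override is in place:
    # mkdir -p /run/udev/rules.d
    # touch /run/udev/rules.d/69-bcache.rules
    # gparted
    # rm -f /run/udev/rules.d/69-bcache.rules
Removing the blank file afterwards restores normal bcache auto-starting.
This is exactly what the wrapper script change below automates.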

Closes #183 - Basic support for bcache
Mike Fleetwood 2022-01-08 12:02:03 +00:00 committed by Curtis Gedak
parent 013c992428
commit 8640f91a4f
1 changed file with 7 additions and 6 deletions

@@ -176,7 +176,7 @@ fi
 #
 # Create temporary blank overrides for all udev rules which automatically
-# start Linux Software RAID array members.
+# start Linux Software RAID array members and Bcache devices.
 #
 # Udev stores volatile / temporary runtime rules in directory /run/udev/rules.d.
 # Older versions use /dev/.udev/rules.d instead, and even older versions don't
@@ -189,16 +189,17 @@ fi
 # found in 64-md-raid.rules and/or 65-md-incremental.rules, depending on the
 # distribution and age. The rules may be commented out or not exist at all.
 #
-UDEV_TEMP_MDADM_RULES='' # List of temporary override rules files.
+UDEV_TEMP_RULES='' # List of temporary override rules files.
 for udev_temp_d in /run/udev /dev/.udev; do
     if test -d "$udev_temp_d"; then
         test ! -d "$udev_temp_d/rules.d" && mkdir "$udev_temp_d/rules.d"
         udev_mdadm_rules=`egrep -l '^[^#].*mdadm (-I|--incremental)' /lib/udev/rules.d/*.rules 2> /dev/null`
-        UDEV_TEMP_MDADM_RULES=`echo "$udev_mdadm_rules" | sed 's,^/lib/udev,'"$udev_temp_d"','`
+        udev_bcache_rules=`ls /lib/udev/rules.d/*bcache*.rules 2> /dev/null`
+        UDEV_TEMP_RULES=`echo $udev_mdadm_rules $udev_bcache_rules | sed 's,/lib/udev,'"$udev_temp_d"',g'`
         break
     fi
 done
-for rule in $UDEV_TEMP_MDADM_RULES; do
+for rule in $UDEV_TEMP_RULES; do
     touch "$rule"
 done
@@ -228,9 +229,9 @@ status=$?
 #
 # Clear any temporary override udev rules used to stop udev automatically
-# starting Linux Software RAID array members.
+# starting Linux Software RAID array members and Bcache devices.
 #
-for rule in $UDEV_TEMP_MDADM_RULES; do
+for rule in $UDEV_TEMP_RULES; do
     rm -f "$rule"
 done