Prevent GParted starting stopped Linux Software RAID arrays (#709640)

Applying operations or just scanning the partitions in GParted was
causing all stopped Linux Software RAID arrays to be automatically
started.  This is not new with this patch set, but results from the
following behaviour, which has existed for a long time.  The chain of
events goes like this:

 1) GParted calls commit_to_os() to update the kernel with the new
    partition table;
 2) Libparted calls ioctl() BLKPG_DEL_PARTITION on every partition to
    delete every partition from the kernel.  Succeeds on non-busy
    partitions only;
 3) Kernel emits udev partition remove event on every removed partition;
 4) Libparted calls ioctl() BLKPG_ADD_PARTITION on every non-busy
    partition to re-add the partition to the kernel;
 5) Kernel emits udev partition add event on every added partition;
 6) Udev rule:
      SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
              RUN+="/sbin/mdadm -I $tempnode"
    from either /lib/udev/rules.d/64-md-raid.rules or
    .../65-md-incremental.rules incrementally starts the member in a
    Linux Software RAID array.
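
The rule in step 6 can be exercised in isolation to see which lines the
later egrep in gparted.in will match.  This is a minimal sketch: the rule
text is taken from the commit message above, but the /tmp path is only an
illustrative stand-in for a real rules file under /lib/udev/rules.d.

```shell
# Write a sample rules file containing both a commented-out and an
# active incremental-assembly rule (path is illustrative only).
cat > /tmp/64-md-raid.rules <<'EOF'
# comment: mdadm -I mentioned here must not match
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm -I $tempnode"
EOF

# Match non-comment lines that invoke "mdadm -I" or "mdadm --incremental",
# the same pattern gparted.in uses to find rules needing an override.
grep -E '^[^#].*mdadm (-I|--incremental)' /tmp/64-md-raid.rules
```

Only the active `RUN+=` continuation line is printed; the commented-out
rule is excluded by the leading `^[^#]` in the pattern.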

Fix by temporarily adding blank override rules files, which do nothing,
so that when the udev add and remove events for Linux Software RAID
array member partitions fire, nothing is done; but only when required.
Note that really old versions of udev don't have rules to incrementally
start array members and some distributions comment out such rules.
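The override mechanism itself can be sketched safely without touching a
live system.  Here /tmp/fake-run-udev is a made-up scratch directory
standing in for the real /run/udev; the point is that an empty file with
the same name in the runtime rules directory shadows the default rule in
/lib/udev/rules.d, and removing it restores the default behaviour.

```shell
# Scratch directory standing in for /run/udev (assumption: real path not used).
udev_temp_d=/tmp/fake-run-udev
mkdir -p "$udev_temp_d/rules.d"

# Blank override: udev now applies an empty rules file instead of the
# default 64-md-raid.rules, so no incremental assembly is triggered.
touch "$udev_temp_d/rules.d/64-md-raid.rules"
ls "$udev_temp_d/rules.d"

# Remove the override to restore the default rule.
rm -f "$udev_temp_d/rules.d/64-md-raid.rules"
```

On a real system udev picks up the runtime override immediately; no
reload is needed for rules files added to the runtime directory.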

Bug #709640 - Linux Swap Suspend and Software RAID partitions not
              recognised
Mike Fleetwood 2013-10-11 15:22:45 +01:00 committed by Curtis Gedak
parent d2e1130ad2
commit a255abf343
1 changed file with 36 additions and 0 deletions

gparted.in Normal file → Executable file

@@ -121,6 +121,34 @@ if test "x$HAVE_SYSTEMCTL" = "xyes"; then
systemctl --runtime mask --quiet -- $MOUNTLIST
fi
#
# Create temporary blank overrides for all udev rules which automatically
# start Linux Software RAID array members.
#
# Udev stores volatile / temporary runtime rules in directory /run/udev/rules.d.
# Older versions use /dev/.udev/rules.d instead, and even older versions don't
# have such a directory at all. Volatile / temporary rules are use to override
# default rules from /lib/udev/rules.d. (Permanent local administrative rules
# in directory /etc/udev/rules.d override all others). See udev(7) manual page
# from various versions of udev for details.
#
# Default udev rules containing mdadm to incrementally start array members are
# found in 64-md-raid.rules and/or 65-md-incremental.rules, depending on the
# distribution and age. The rules may be commented out or not exist at all.
#
UDEV_TEMP_MDADM_RULES='' # List of temporary override rules files.
for udev_temp_d in /run/udev /dev/.udev; do
if test -d "$udev_temp_d"; then
test ! -d "$udev_temp_d/rules.d" && mkdir "$udev_temp_d/rules.d"
udev_mdadm_rules=`egrep -l '^[^#].*mdadm (-I|--incremental)' /lib/udev/rules.d/*.rules 2> /dev/null`
UDEV_TEMP_MDADM_RULES=`echo "$udev_mdadm_rules" | sed 's,^/lib/udev,'"$udev_temp_d"','`
break
fi
done
for rule in $UDEV_TEMP_MDADM_RULES; do
touch "$rule"
done
#
# Use both udisks and hal-lock for invocation if both binaries exist and both
# daemons are running.
@@ -150,6 +178,14 @@ else
$BASE_CMD
fi
#
# Clear any temporary override udev rules used to stop udev automatically
# starting Linux Software RAID array members.
#
for rule in $UDEV_TEMP_MDADM_RULES; do
rm -f "$rule"
done
#
# Use systemctl to restore the status of any mount points changed above
#