Automatically load the cache of SWRaid information for the first time if
any of the querying methods are called before the first explicit
load_cache() call. This means the class can't accidentally be used in a
way which incorrectly finds no SWRaid members when they do exist.
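As an illustration, here is a minimal sketch of the lazy-load guard in
C++; load_cache() is named in this commit, but the other class, member
and method names are assumptions, not the actual GParted code:

#include <string>
#include <vector>

struct SWRaid_Member
{
    std::string member_path;  // e.g. "/dev/sda1"
    std::string array_path;   // e.g. "/dev/md1"
    bool        active;
};

class SWRaid_Info
{
public:
    // Explicit cache load, called once per device refresh.
    static void load_cache()
    {
        swraid_info_cache = query_arrays();
        cache_initialised = true;
    }

    // Example querying method: lazily load the cache on first use so the
    // class can never report no SWRaid members merely because load_cache()
    // has not been called yet.
    static bool is_member( const std::string & member_path )
    {
        initialise_if_required();
        for ( const SWRaid_Member & member : swraid_info_cache )
            if ( member.member_path == member_path )
                return true;
        return false;
    }

private:
    static void initialise_if_required()
    {
        if ( ! cache_initialised )
            load_cache();
    }

    // Stub standing in for running mdadm and parsing /proc/mdstat.
    static std::vector<SWRaid_Member> query_arrays()
    {
        return std::vector<SWRaid_Member>();
    }

    static bool                       cache_initialised;
    static std::vector<SWRaid_Member> swraid_info_cache;
};

bool                       SWRaid_Info::cache_initialised = false;
std::vector<SWRaid_Member> SWRaid_Info::swraid_info_cache;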
Bug 756829 - SWRaid member detection enhancements
Busy file systems are accessed via a mount point, LVM Physical Volumes
are activated via the Volume Group name, and busy SWRaid members are
accessed via the array device (/dev entry). Therefore choose to show
the array device in the mount point field for busy SWRaid members.
The kernel device name for an SWRaid array (without leading "/dev/") is
the same as used in /proc/mdstat and /proc/partitions. Therefore the
array device (with leading "/dev/") displayed in GParted will match
between the mount point for busy SWRaid members and the array itself as
used in the device combo box.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[2] sdb1[3]
      524224 blocks super 1.0 [2/2] [UU]
...
# cat /proc/partitions
major minor  #blocks  name
   8        0   33554432 sda
   8        1     524288 sda1
...
   8       16   33554432 sdb
   8       17     524288 sdb1
...
   9        1     524224 md1
...
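For illustration (the helper name is an assumption, not the actual
GParted code), producing the array device shown in the mount point
field is just a matter of prefixing the kernel device name from
/proc/mdstat with "/dev/":

#include <iostream>
#include <string>

// The kernel device name from /proc/mdstat (e.g. "md1") prefixed with
// "/dev/" gives the array device (e.g. "/dev/md1"), so the value shown
// in the mount point field matches the device combo box entry.
std::string array_device_from_kernel_name( const std::string & kernel_name )
{
    return "/dev/" + kernel_name;
}

int main()
{
    // "md1" comes from the /proc/mdstat example above.
    std::cout << array_device_from_kernel_name( "md1" ) << std::endl;  // /dev/md1
    return 0;
}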
Bug 756829 - SWRaid member detection enhancements
In cases where blkid wrongly reports a file system instead of an SWRaid
member (it is sometimes confused by metadata 0.90/1.0 mirrored arrays,
and old versions don't recognise SWRaid members at all), the UUID and
label it reports are obviously wrong too. Therefore always use the UUID
and label returned by the mdadm query command and never anything
reported by blkid or any file system specific command.
Example of blkid reporting the wrong type, UUID and label for /dev/sda1
and the correct values for /dev/sdb1:
# blkid | egrep 'sd[ab]1'
/dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501" TYPE="ext4" LABEL="chimney-boot"
/dev/sdb1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="0a095e45-9360-1b17-0ad1-1fe369e22b98" LABEL="chimney:1" TYPE="linux_raid_member"
# mdadm -E -s -v
ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
   devices=/dev/sda1,/dev/sdb1
...
ARRAY /dev/md127 level=raid1 num-devices=2 UUID=8dc7483c:d74ee0a8:b6a8dc3c:a57e43f8
   devices=/dev/sdb6,/dev/sda6
...
NOTES:
* In mdadm terminology the label is called the array name, hence the
  name= parameter for array md/1 in the above output.
* Metadata 0.90 arrays don't support naming, hence the missing name=
  parameter for array md127 in the above output.
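A minimal sketch of pulling the UUID and label (array name) out of one
ARRAY line of the above output; get_value() is an assumed helper, not
the actual GParted parser:

#include <iostream>
#include <string>

// Return the value of "key=" within line, or "" when the key is absent
// (e.g. the missing name= parameter on metadata 0.90 arrays).
std::string get_value( const std::string & line, const std::string & key )
{
    std::string::size_type start = line.find( key + "=" );
    if ( start == std::string::npos )
        return "";
    start += key.size() + 1;
    std::string::size_type end = line.find( ' ', start );
    return line.substr( start, end == std::string::npos ? std::string::npos
                                                        : end - start );
}

int main()
{
    // ARRAY line for md/1 from the mdadm -E -s -v example above.
    std::string line = "ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2"
                       " UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1";
    std::cout << "UUID : " << get_value( line, "UUID" ) << std::endl;
    std::cout << "label: " << get_value( line, "name" ) << std::endl;
    return 0;
}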
Bug 756829 - SWRaid member detection enhancements
Add an active attribute to the cache of SWRaid members. Move the
parsing of /proc/mdstat to discover busy SWRaid members into the cache
loading code. The new parsing code is a little different because it
finds all members of active arrays rather than determining whether a
specific member is active.
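A minimal sketch of that parsing, under assumed names (not the actual
GParted code): every member listed on an active array's /proc/mdstat
line is recorded as active:

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// From "md1 : active raid1 sda1[2] sdb1[3]" return {"sda1", "sdb1"}
// when the array is active, or an empty vector otherwise.
std::vector<std::string> active_members_from_mdstat_line( const std::string & line )
{
    std::vector<std::string> members;
    std::istringstream iss( line );
    std::string md_name, colon, state;
    iss >> md_name >> colon >> state;
    if ( colon != ":" || state != "active" )
        return members;
    std::string field;
    while ( iss >> field )
    {
        // Member fields look like "sda1[2]"; fields without a '['
        // (such as the personality "raid1") are skipped.
        std::string::size_type bracket = field.find( '[' );
        if ( bracket != std::string::npos )
            members.push_back( field.substr( 0, bracket ) );
    }
    return members;
}

int main()
{
    // Array line from the /proc/mdstat example above.
    std::vector<std::string> members =
        active_members_from_mdstat_line( "md1 : active raid1 sda1[2] sdb1[3]" );
    for ( const std::string & m : members )
        std::cout << m << std::endl;  // sda1, sdb1
    return 0;
}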
Bug 756829 - SWRaid member detection enhancements
Detection of Linux SWRaid members currently fails in a number of cases:
1) Arrays which use metadata type 0.90 or 1.0 store the super block at
   the end of the partition, so file system signatures in at least
   linear and mirrored arrays occur at the same offsets in the
   underlying partitions. As libparted only recognises file systems,
   that is what it detects, rather than an SWRaid member. (A worked
   offset check follows this list.)
# mdadm -E -s -v
ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
   devices=/dev/sda1,/dev/sdb1
...
# wipefs /dev/sda1
offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     LABEL: chimney-boot
                     UUID:  10ab5f7d-7d8a-4171-8b6a-5e973b402501
0x1fffe000           linux_raid_member   [raid]
                     LABEL: chimney:1
                     UUID:  15224a42-c25b-bcd9-15db-60004e5fe53a
# parted /dev/sda print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  538MB  537MB  primary  ext4         boot, raid
...
2) Again with metadata type 0.90 or 1.0 arrays, blkid may report the
   contained file system instead of an SWRaid member. I have only a
   single example of this configuration: a mirrored array containing
   the /boot file system. Blkid reports one member as ext4 and the
   other as an SWRaid member!
# blkid | egrep 'sd[ab]1'
/dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501" TYPE="ext4" LABEL="chimney-boot"
/dev/sdb1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="0a095e45-9360-1b17-0ad1-1fe369e22b98" LABEL="chimney:1" TYPE="linux_raid_member"
Bypassing the blkid cache gets the correct result.
# blkid -c /dev/null /dev/sda1
/dev/sda1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="d0460f90-d11a-e80a-ee1c-3d104dae7e5d" LABEL="chimney:1" TYPE="linux_raid_member"
However this can't be used because if a user has a floppy configured
in the BIOS but no floppy attached, GParted will wait for minutes as
the kernel tries to access non-existent hardware on behalf of the
blkid query. See commit:
18f863151c
Fix long scan problem when BIOS floppy setting incorrect
3) Old versions of blkid don't recognise SWRaid members at all, so they
   always report the file system when found. This occurs with blkid
   v1.0 on RedHat / CentOS 5.
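To make case 1) concrete, a worked offset check using the sizes from
the examples above (metadata 1.0 stores the superblock at least 8 KiB,
and less than 12 KiB, from the end of the device; for this 512 MiB
member that is exactly 8 KiB from the end, leaving the ext4 signature
at its normal offset of 0x438):

#include <cassert>
#include <cstdint>

int main()
{
    // /dev/sda1 is 524288 one-KiB blocks (see /proc/partitions above).
    const uint64_t partition_bytes   = 524288ULL * 1024;     // 0x20000000
    // Metadata 1.0 stores the superblock near the end of the device;
    // here that is exactly 8 KiB before the end.
    const uint64_t superblock_offset = partition_bytes - 8 * 1024;
    assert( superblock_offset == 0x1fffe000 );  // matches the wipefs output
    return 0;
}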
The only way I can see to fix all these cases is to use the mdadm
command to query the configured arrays, and then use this information
as the first choice when detecting partition content, making the order:
SWRaid members, libparted, blkid and internal.
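A sketch of the resulting detection order, using illustrative names and
stubs rather than the actual GParted code:

#include <string>

enum FSType
{
    FS_UNKNOWN,
    FS_LINUX_SWRAID,
    FS_EXT4
    // ...
};

// Stubs standing in for the real detection back ends.
bool   is_swraid_member( const std::string & ) { return false; }
FSType parted_detect( const std::string & )    { return FS_UNKNOWN; }
FSType blkid_detect( const std::string & )     { return FS_UNKNOWN; }
FSType internal_detect( const std::string & )  { return FS_UNKNOWN; }

FSType detect_content( const std::string & path )
{
    // 1) The mdadm sourced cache of SWRaid members takes priority,
    //    because both libparted and blkid can misidentify a member as
    //    the file system contained within the array.
    if ( is_swraid_member( path ) )
        return FS_LINUX_SWRAID;

    // 2) libparted, 3) blkid, 4) internal signature detection.
    FSType type = parted_detect( path );
    if ( type == FS_UNKNOWN )
        type = blkid_detect( path );
    if ( type == FS_UNKNOWN )
        type = internal_detect( path );
    return type;
}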
The GParted shell wrapper already creates temporary blank udev rules to
prevent Linux Software RAID arrays from being automatically started
when GParted refreshes its device information[1]. However an
administrator could manually stop or start arrays or change their
configuration between refreshes, so GParted must reload this
information on every refresh.
On my desktop with 4 internal hard drives and 3 testing Linux Software
RAID arrays, running mdadm adds between 0.20 and 0.30 seconds to the
device refresh time.
[1] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
Bug 756829 - SWRaid member detection enhancements