Since the only use of SWRaid_Info::get_uuid() assigns the returned
value, this doesn't actually save any copy construction.  Do it for
consistency with the other get_*() methods in SWRaid_Info.
Closes !94 - Make more getter methods use return-by-constant-reference
Have to use a second constant reference variable array_path_2 in
GParted_Core::set_mountpoints() because by design C++ does not implement
rebinding of references [1].
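As a minimal standalone illustration of why a second constant
reference variable is needed (hypothetical names, not the actual
set_mountpoints() code):

    #include <iostream>
    #include <string>

    static const std::string paths[] = { "/dev/md1", "/dev/md2" };

    // Hypothetical getter returning by constant reference, avoiding
    // a copy of the stored string.
    static const std::string& get_path( unsigned int i )
    {
        return paths[i];
    }

    int main()
    {
        const std::string& path_1 = get_path( 0 );
        // A C++ reference can't be re-bound after initialisation;
        // "path_1 = get_path( 1 );" would assign through the
        // reference (and doesn't compile for a reference-to-const),
        // not re-seat it.  Hence the second reference variable.
        const std::string& path_2 = get_path( 1 );
        std::cout << path_1 << " " << path_2 << std::endl;
        return 0;
    }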
[1] Why doesn't C++ allow rebinding a reference?
https://stackoverflow.com/questions/27037744/why-doesnt-c-allow-rebinding-a-reference
Closes !94 - Make more getter methods use return-by-constant-reference
The previous commit made mdadm-recognised IMSM and DDF type ATARAID
members get displayed as "linux-raid" (Linux Software RAID array
member).  This was because of query method 1 in detect_filesystems().
Fix this now by exposing and using the fstype of the member from the
SWRaid_Info cache.
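Sketched roughly (illustrative names and signatures, not the actual
SWRaid_Info API):

    #include <map>
    #include <string>

    enum FSType { FS_UNKNOWN, FS_LINUX_SWRAID, FS_ATARAID };

    // Cache mapping member device path to its fstype.
    static std::map<std::string, FSType> member_fstype_cache;

    // Expose the member's fstype from the cache so that
    // detect_filesystems() can report IMSM and DDF type ATARAID
    // members separately from native Linux Software RAID members.
    static FSType get_fstype( const std::string& member_path )
    {
        std::map<std::string, FSType>::const_iterator it =
                member_fstype_cache.find( member_path );
        return ( it != member_fstype_cache.end() ) ? it->second
                                                   : FS_UNKNOWN;
    }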
Closes #75 - Errors with GPT on RAID 0 ATARAID array
Since mdadm release 3.0 (2009-06-02) [1] it has also supported external
metadata formats IMSM (Intel Matrix Storage Manager) and DDF, previously
only managed by dmraid.
A number of distributions have switched to use mdadm and kernel MD
(Multiple Devices) driver for managing these Firmware / BIOS / ATARAID
arrays. These include: Fedora >= 14 [2], RHEL / CentOS >= 6 [3],
SLES >= 12 [4], Ubuntu >= 16.04 LTS.
Therefore additionally parse members of these ATARAID arrays included
in mdadm output and, when activated using the kernel MD driver, in
file /proc/mdstat.  Add fstype to the SWRaid_Info cache records to
distinguish the two types of member.  So far the rest of the GParted
code continues to treat all members as FS_LINUX_SWRAID.  This will be
resolved in following commits.
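A sketch of the extended cache record (field names are illustrative
only):

    #include <string>

    enum FSType { FS_LINUX_SWRAID,  // native MD metadata member
                  FS_ATARAID };     // IMSM or DDF external metadata member

    // One cache record per SWRaid member.
    struct SWRaid_Member
    {
        std::string path;    // e.g. "/dev/sda1"
        FSType      fstype;  // tells the two types of member apart
        std::string uuid;
        std::string label;   // mdadm array "name"
        bool        active;
    };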
Note that this in no way affects how GParted shows and partitions the
array device itself, even for arrays managed by dmraid and using the
GParted DMRaid module.  It only affects how GParted shows the member
drives themselves.
[1] mdadm ANNOUNCE-3.0 file
https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/ANNOUNCE-3.0?h=mdadm-3.0
[2] Fedora 14, Storage Administration Guide, 12.5. Linux RAID Subsystem
https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/raid-subsys.html
"... Fedora 14 uses mdraid with external metadata to access ISW /
IMSM (Intel firmware RAID) sets. mdraid sets are configured and
controlled through the mdadm utility."
[3] RHEL 6, Storage Administration Guide, 17.3. Linux RAID Subsystem
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/raid-subsys
"mdraid also supports other metadata formats, known as external
metadata. Red Hat Enterprise Linux 6 uses mdraid with external
metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid
sets are configured and controlled through the mdadm utility."
[4] SUSE Linux Enterprise Server 12 Release Notes, 7.2.3 Driver for IMSM
and DDF
https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-316007
"For IMSM and DDF RAIDs the mdadm driver is used unconditionally."
Closes #75 - Errors with GPT on RAID 0 ATARAID array
Using "../include/" prefixes in the GParted header #includes made the
code look a little messy, made the dependencies more complicated than
needed, and is easily avoided in the build system instead.  Each
GParted header was tracked via multiple different names (different
numbers of "../include/" prefixes).  For example, just looking at how
DialogFeatures.o depends on Utils.h:
$ cd src
$ make DialogFeatures.o
$ egrep ' [^ ]*Utils.h' .deps/DialogFeatures.Po
../include/DialogFeatures.h ../include/../include/Utils.h \
../include/../include/../include/../include/../include/../include/Utils.h \
../include/../include/../include/Utils.h \
After removing "../include/" from the GParted header #includes, it is
only necessary to add "-I../include" to the compile command via
AM_CPPFLAGS in src/Makefile.am.  Now the dependencies on GParted
header files are tracked under a single name (with a single
"../include/" prefix), and DialogFeatures.o only depends on a single
name for Utils.h:
$ make DialogFeatures.o
$ egrep ' [^ ]*Utils.h' .deps/DialogFeatures.Po
../include/DialogFeatures.h ../include/Utils.h ../include/i18n.h \
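The build system side of the change amounts to a one-line addition in
src/Makefile.am (exact surrounding context may differ):

    AM_CPPFLAGS = -I../include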
The SWRaid_Info cache is loaded from "mdadm" command output and the
/proc/mdstat file.  It contains the member name, which is used to
access the cache, so switch to using BlockSpecial objects so that
comparison is performed using the major, minor device numbers.
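A simplified sketch of device comparison by major, minor numbers (not
the actual BlockSpecial implementation):

    #include <sys/stat.h>
    #include <string>

    // Two names refer to the same block device when their major,
    // minor device numbers match, even when the name strings differ.
    static bool same_block_device( const std::string& a, const std::string& b )
    {
        struct stat sa, sb;
        if ( stat( a.c_str(), &sa ) != 0 || stat( b.c_str(), &sb ) != 0 )
            return a == b;  // Fall back to plain name comparison.
        return S_ISBLK( sa.st_mode ) && S_ISBLK( sb.st_mode ) &&
               sa.st_rdev == sb.st_rdev;
    }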
Bug 767842 - File system usage missing when tools report alternate block
device names
Automatically load the cache of SWRaid information the first time any
of the querying methods is called, if this happens before the first
explicit load_cache() call.  This means we can't accidentally use the
class and incorrectly find no SWRaid members when they do exist.
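Sketch of the guard each querying method uses (simplified):

    static bool cache_loaded = false;

    static void load_cache()
    {
        // ... parse "mdadm -E -s -v" output and /proc/mdstat ...
        cache_loaded = true;
    }

    // Every querying method calls this first, so the cache is always
    // populated before it is read.
    static void initialise_if_required()
    {
        if ( ! cache_loaded )
            load_cache();
    }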
Bug 756829 - SWRaid member detection enhancements
Busy file systems are accessed via a mount point, LVM Physical Volumes
are activated via the Volume Group name and busy SWRaid members are
accessed via the array device, /dev entry. Therefore choose to show the
array device in the mount point field for busy SWRaid members.
The kernel device name for an SWRaid array (without leading "/dev/") is
the same as used in /proc/mdstat and /proc/partitions. Therefore the
array device (with leading "/dev/") displayed in GParted will match
between the mount point for busy SWRaid members and the array itself as
used in the device combo box.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda1[2] sdb1[3]
      524224 blocks super 1.0 [2/2] [UU]
...
# cat /proc/partitions
major minor  #blocks  name

   8        0   33554432 sda
   8        1     524288 sda1
...
   8       16   33554432 sdb
   8       17     524288 sdb1
...
   9        1     524224 md1
...
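So composing the pseudo mount point for a busy member is just
(sketch):

    #include <string>

    // Form the array device from the kernel name used in
    // /proc/mdstat and /proc/partitions, e.g. "md1" -> "/dev/md1".
    static std::string array_device( const std::string& kernel_name )
    {
        return "/dev/" + kernel_name;
    }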
Bug 756829 - SWRaid member detection enhancements
In cases where blkid wrongly reports a file system instead of an
SWRaid member (sometimes because it is confused by a metadata
0.90/1.0 mirrored array, or because an old version doesn't recognise
SWRaid members), the UUID and label are obviously wrong too.
Therefore we have to use the UUID and label returned by the mdadm
query command and never anything reported by blkid or any file system
specific command.
Example of blkid reporting the wrong type, UUID and label for /dev/sda1
and the correct values for /dev/sdb1:
# blkid | egrep 'sd[ab]1'
/dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501" TYPE="ext4" LABEL="chimney-boot"
/dev/sdb1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="0a095e45-9360-1b17-0ad1-1fe369e22b98" LABEL="chimney:1" TYPE="linux_raid_member"
# mdadm -E -s -v
ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
devices=/dev/sda1,/dev/sdb1
...
ARRAY /dev/md127 level=raid1 num-devices=2 UUID=8dc7483c:d74ee0a8:b6a8dc3c:a57e43f8
devices=/dev/sdb6,/dev/sda6
...
NOTES:
* In mdadm terminology the label is called the array name, hence name=
parameter for array md/1 in the above output.
* Metadata 0.90 arrays don't support naming, hence the missing name=
parameter for array md127 in the above output.
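A simplified sketch of pulling the UUID and label out of one ARRAY
line like those above (array names containing spaces are not
handled):

    #include <sstream>
    #include <string>

    // Extract UUID= and name= values from one "mdadm -E -s -v" ARRAY
    // line.  Metadata 0.90 arrays have no name= parameter, so label
    // is left unchanged (empty) for them.
    static void parse_array_line( const std::string& line,
                                  std::string& uuid, std::string& label )
    {
        std::istringstream iss( line );
        std::string field;
        while ( iss >> field )
        {
            if ( field.compare( 0, 5, "UUID=" ) == 0 )
                uuid = field.substr( 5 );
            else if ( field.compare( 0, 5, "name=" ) == 0 )
                label = field.substr( 5 );
        }
    }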
Bug 756829 - SWRaid member detection enhancements
Add an active attribute to the cache of SWRaid members.  Move the
parsing of /proc/mdstat to discover busy SWRaid members into the
cache loading code.  The new parsing code is a little different
because it finds all members of active arrays rather than determining
whether a specific member is active.
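In outline the new parsing looks like this (simplified sketch; spare
"(S)" and failed "(F)" markers are ignored):

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Collect members of all active arrays from /proc/mdstat lines
    // like:  md1 : active raid1 sda1[2] sdb1[3]
    static std::vector<std::string> active_members()
    {
        std::vector<std::string> members;
        std::ifstream mdstat( "/proc/mdstat" );
        std::string line;
        while ( std::getline( mdstat, line ) )
        {
            if ( line.find( " : active " ) == std::string::npos )
                continue;
            std::istringstream iss( line );
            std::string word;
            while ( iss >> word )
            {
                std::string::size_type bracket = word.find( '[' );
                if ( bracket != std::string::npos )
                    members.push_back( "/dev/" + word.substr( 0, bracket ) );
            }
        }
        return members;
    }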
Bug 756829 - SWRaid member detection enhancements
Detection of Linux SWRaid members currently fails in a number of cases:
1) Arrays which use metadata type 0.90 or 1.0 store the super block at
the end of the partition, so file system signatures in at least
linear and mirrored arrays occur at the same offsets in the
underlying partitions.  As libparted only recognises file systems,
this is what is detected rather than an SWRaid member.
# mdadm -E -s -v
ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
devices=/dev/sda1,/dev/sdb1
...
# wipefs /dev/sda1
offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     LABEL: chimney-boot
                     UUID:  10ab5f7d-7d8a-4171-8b6a-5e973b402501

0x1fffe000           linux_raid_member   [raid]
                     LABEL: chimney:1
                     UUID:  15224a42-c25b-bcd9-15db-60004e5fe53a
# parted /dev/sda print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sda: 34.4GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  538MB  537MB  primary  ext4         boot, raid
...
2) Again with metadata type 0.90 or 1.0 arrays, blkid may report the
contained file system instead of an SWRaid member.  I have a single
example of this configuration, with a mirrored array containing the
/boot file system.  Blkid reports one member as ext4 and the other as
an SWRaid member!
# blkid | egrep 'sd[ab]1'
/dev/sda1: UUID="10ab5f7d-7d8a-4171-8b6a-5e973b402501" TYPE="ext4" LABEL="chimney-boot"
/dev/sdb1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="0a095e45-9360-1b17-0ad1-1fe369e22b98" LABEL="chimney:1" TYPE="linux_raid_member"
Bypassing the blkid cache gets the correct result.
# blkid -c /dev/null /dev/sda1
/dev/sda1: UUID="15224a42-c25b-bcd9-15db-60004e5fe53a" UUID_SUB="d0460f90-d11a-e80a-ee1c-3d104dae7e5d" LABEL="chimney:1" TYPE="linux_raid_member"
However this can't be used because if a user has a floppy configured
in the BIOS but no floppy attached, GParted will wait for minutes as
the kernel tries to access non-existent hardware on behalf of the
blkid query. See commit:
18f863151c
Fix long scan problem when BIOS floppy setting incorrect
3) Old versions of blkid don't recognise SWRaid members at all, so
they always report the file system when found.  Occurs with blkid
v1.0 on RedHat / CentOS 5.
The only way I can see to fix all these cases is to use the mdadm
command to query the configured arrays, and then use this information
as the first choice when detecting partition content, making the
order: SWRaid members, libparted, blkid and internal.
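In outline the new ordering looks like this (stub probes stand in for
the real detection methods):

    #include <string>

    enum FSType { FS_UNKNOWN, FS_LINUX_SWRAID };

    // Stubs standing in for the real detection methods.
    static bool swraid_cache_is_member( const std::string& ) { return false; }
    static FSType libparted_probe( const std::string& ) { return FS_UNKNOWN; }
    static FSType blkid_probe( const std::string& )     { return FS_UNKNOWN; }
    static FSType internal_probe( const std::string& )  { return FS_UNKNOWN; }

    // First choice is the SWRaid member cache, then the existing
    // methods in their current order.
    static FSType detect_content( const std::string& path )
    {
        if ( swraid_cache_is_member( path ) )
            return FS_LINUX_SWRAID;
        FSType fstype = libparted_probe( path );
        if ( fstype == FS_UNKNOWN )
            fstype = blkid_probe( path );
        if ( fstype == FS_UNKNOWN )
            fstype = internal_probe( path );
        return fstype;
    }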
The GParted shell wrapper already creates temporary blank udev rules
to prevent Linux Software RAID arrays being automatically started
when GParted refreshes its device information [1].  However an
administrator could manually stop or start arrays or change their
configuration between refreshes, so GParted must load this
information on every refresh.
On my desktop with 4 internal hard drives and 3 testing Linux Software
RAID arrays, running mdadm adds between 0.20 and 0.30 seconds to the
device refresh time.
[1] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
Bug 756829 - SWRaid member detection enhancements