gparted/src/SWRaid_Info.cc
/* Copyright (C) 2015 Mike Fleetwood
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
#include "SWRaid_Info.h"
#include "BlockSpecial.h"
#include "Utils.h"
#include <glibmm/ustring.h>
#include <glibmm/miscutils.h>
#include <fstream>
namespace GParted
{
// Data model:
// cache_initialised - Has the cache been loaded?
// mdadm_found - Is the "mdadm" command available?
// swraid_info_cache - Vector of member information in Linux Software RAID arrays.
// Only active arrays have /dev entries.
// Notes:
// * BS(member) is short hand for constructor BlockSpecial(member).
// *  The array is only displayed to the user (as the mount point) and is
//    never compared, so no BlockSpecial object is constructed for it.
// E.g.
// //member , fstype , array , uuid , label , active
// [{BS("/dev/sda1"), FS_LINUX_SWRAID, "/dev/md1" , "15224a42-c25b-bcd9-15db-60004e5fe53a", "chimney:1", true },
// {BS("/dev/sdb1"), FS_LINUX_SWRAID, "/dev/md1" , "15224a42-c25b-bcd9-15db-60004e5fe53a", "chimney:1", true },
// {BS("/dev/sda2"), FS_LINUX_SWRAID, "" , "8dc7483c-d74e-e0a8-b6a8-dc3ca57e43f8", "" , false},
// {BS("/dev/sdb2"), FS_LINUX_SWRAID, "" , "8dc7483c-d74e-e0a8-b6a8-dc3ca57e43f8", "" , false},
// {BS("/dev/sdc") , FS_ATARAID , "/dev/md126", "43060c4c-b0c0-c371-60bf-d43082e97d3c", "" , true },
// {BS("/dev/sdd") , FS_ATARAID , "/dev/md126", "43060c4c-b0c0-c371-60bf-d43082e97d3c", "" , true }
// ]
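//
// A sketch of the record type, assuming the definition in SWRaid_Info.h
// matches the columns above (the exact field names are assumptions):
//     struct SWRaid_Member
//     {
//         BlockSpecial  member;  // Member device, e.g. BS("/dev/sda1")
//         FSType        fstype;  // FS_LINUX_SWRAID or FS_ATARAID
//         Glib::ustring array;   // Array /dev entry, "" when not running
//         Glib::ustring uuid;    // Array UUID in canonical dashed form
//         Glib::ustring label;   // Array name, "" when none
//         bool          active;  // Is the array started?
//     };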
// Initialise static data elements
bool SWRaid_Info::cache_initialised = false;
bool SWRaid_Info::mdadm_found = false;
std::vector<SWRaid_Member> SWRaid_Info::swraid_info_cache;
void SWRaid_Info::load_cache()
{
set_command_found();
load_swraid_info_cache();
cache_initialised = true;
}
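// Illustrative call sequence for a device refresh (the caller and variable
// names below are assumptions, not code from this file):
//     SWRaid_Info::load_cache();  // Re-query mdadm once per refresh
//     bool raid_member = SWRaid_Info::is_member( "/dev/sda1" );
//     FSType fstype = SWRaid_Info::get_fstype( "/dev/sda1" );  // FS_UNKNOWN if not a member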
bool SWRaid_Info::is_member( const Glib::ustring & member_path )
{
initialise_if_required();
const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
return memb.member.m_name.length() > 0;
}
// Return member/array active status, or false when there is no such member.
bool SWRaid_Info::is_member_active( const Glib::ustring & member_path )
{
initialise_if_required();
const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
return memb.active;
}
// Return "file system" type of the member, or FS_UNKNOWN if there is no such member.
FSType SWRaid_Info::get_fstype(const Glib::ustring& member_path)
{
initialise_if_required();
const SWRaid_Member& memb = get_cache_entry_by_member(member_path);
return memb.fstype;
}
// Return array /dev entry (e.g. "/dev/md1") containing the specified member, or "" if the
// array is not running or there is no such member.
const Glib::ustring& SWRaid_Info::get_array(const Glib::ustring& member_path)
{
initialise_if_required();
const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
return memb.array;
}
// Return array UUID for the specified member, or "" when the UUID could not be
// parsed or there is no such member.
Glib::ustring SWRaid_Info::get_uuid( const Glib::ustring & member_path )
{
initialise_if_required();
const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
return memb.uuid;
}
// Return array label (array name in mdadm terminology) for the specified member, or ""
// when the array has no label or there is no such member.
// (Metadata 0.90 arrays don't have names.  Metadata 1.x arrays are always named,
// getting a default of hostname ":" array number when not otherwise specified.)
Glib::ustring SWRaid_Info::get_label( const Glib::ustring & member_path )
{
initialise_if_required();
const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
return memb.label;
}
// Private methods
void SWRaid_Info::initialise_if_required()
{
if ( ! cache_initialised )
{
set_command_found();
load_swraid_info_cache();
cache_initialised = true;
}
}
void SWRaid_Info::set_command_found()
{
mdadm_found = ! Glib::find_program_in_path( "mdadm" ).empty();
}
void SWRaid_Info::load_swraid_info_cache()
{
Glib::ustring output, error;
swraid_info_cache.clear();
// Load SWRaid members into the cache: member device, array UUID and array
// label (array name in mdadm terminology).
if ( mdadm_found &&
! Utils::execute_command( "mdadm --examine --scan --verbose", output, error, true ) )
{
// Extract information about all array members. Example output:
// ARRAY metadata=imsm UUID=9a5e3477:e1e668ea:12066a1b:d3708608
//     devices=/dev/sdd,/dev/sdc,/dev/md/imsm0
// ARRAY /dev/md/MyRaid container=9a5e3477:e1e668ea:12066a1b:d3708608 member=0 UUID=47518beb:cc6ef9e7:c80cd1c7:5f6ecb28
//
// ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
// devices=/dev/sda1,/dev/sdb1
// ARRAY /dev/md5 level=raid1 num-devices=2 UUID=8dc7483c:d74ee0a8:b6a8dc3c:a57e43f8
// devices=/dev/sda6,/dev/sdb6
std::vector<Glib::ustring> lines;
Utils::split( output, lines, "\n" );
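// Simple state machine tracking the type of the previous mdadm output
// line, so that a devices= continuation line is matched with the ARRAY
// line it belongs to.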
enum MDADM_LINE_TYPE
{
MDADM_LT_OTHER = 0,
MDADM_LT_ARRAY = 1,
MDADM_LT_DEVICES = 2
};
MDADM_LINE_TYPE mdadm_line_type = MDADM_LT_OTHER;
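// Details parsed from the most recent ARRAY line, applied to every
// device listed on the following devices= line.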
FSType fstype = FS_UNKNOWN;
Glib::ustring uuid;
Glib::ustring label;
for ( unsigned int i = 0 ; i < lines.size() ; i ++ )
{
if ( lines[i].substr( 0, 6 ) == "ARRAY " )
{
mdadm_line_type = MDADM_LT_ARRAY;
if (lines[i].find("container=") != Glib::ustring::npos)
{
// Skip ARRAY lines which reference a container.  They
// describe a volume inside a container and don't list the
// underlying member block devices themselves.
mdadm_line_type = MDADM_LT_OTHER;
continue;
}
fstype = FS_UNKNOWN;
Glib::ustring metadata = Utils::regexp_label(lines[i],
"metadata=([[:graph:]]+)");
// Mdadm doesn't print a metadata tag for 0.90 version
// arrays, so accept empty.
if (metadata == "" || metadata == "1.0" ||
    metadata == "1.1" || metadata == "1.2")
{
fstype = FS_LINUX_SWRAID;
}
else if (metadata == "imsm" || metadata == "ddf")
{
fstype = FS_ATARAID;
}
else
{
// Skip unexpected mdadm array line.
mdadm_line_type = MDADM_LT_OTHER;
continue;
}
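// mdadm prints the UUID as colon separated groups (e.g.
// UUID=15224a42:c25bbcd9:15db6000:4e5fe53a in the sample above);
// convert it to the canonical dashed form.  name= is the last field
// on the line, so capture everything to the end, which keeps a
// label containing spaces intact.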
uuid = mdadm_to_canonical_uuid(
Utils::regexp_label( lines[i], "UUID=([[:graph:]]+)" ) );
label = Utils::regexp_label( lines[i], "name=(.*)$" );
}
else if ( mdadm_line_type == MDADM_LT_ARRAY &&
lines[i].find( "devices=" ) != Glib::ustring::npos )
{
mdadm_line_type = MDADM_LT_DEVICES;
Glib::ustring devices_str = Utils::regexp_label( lines[i],
"devices=([[:graph:]]+)" );
std::vector<Glib::ustring> devices;
Utils::split( devices_str, devices, "," );
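// Create a cache record for every member device of this array;
// the array name and active flag are filled in from /proc/mdstat
// below.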
for ( unsigned int j = 0 ; j < devices.size() ; j ++ )
{
SWRaid_Member memb;
memb.member = BlockSpecial( devices[j] );
memb.fstype = fstype;
memb.array = "";
memb.uuid = uuid;
memb.label = label;
memb.active = false;
swraid_info_cache.push_back( memb );
}
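// Defensively clear the values carried from the ARRAY line now
// that they have been applied to this array's members.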
uuid.clear();
label.clear();
}
else
{
mdadm_line_type = MDADM_LT_OTHER;
}
}
}
// For active array members, set array and active flag.
std::string line;
std::ifstream input( "/proc/mdstat" );
if ( input )
{
// Read /proc/mdstat extracting information about all active array
// members. Example /proc/mdstat:
// Personalities : [raid1]
// md127 : inactive sdd[1](S) sdc[0](S)
// 6306 blocks super external:imsm
//
// md126 : active raid1 sdc[1] sdd[0]
// 8383831 blocks super external:/md127/0 [2/2] [UU]
//
// md1 : active raid1 sdb1[3] sda1[2]
// 524224 blocks super 1.0 [2/2] [UU]
//
// md5 : active raid1 sda6[0] sdb6[1]
// 524224 blocks [2/2] [UU]
//
// unused devices: <none>
enum MDSTAT_LINE_TYPE
{
MDSTAT_LT_OTHER = 0,
MDSTAT_LT_ACTIVE = 1,
MDSTAT_LT_BLOCKS = 2
};
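// Track the type of the previous /proc/mdstat line, as in the
// mdadm parse above, so a blocks line is only paired with the
// active array line immediately preceding it.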
MDSTAT_LINE_TYPE mdstat_line_type = MDSTAT_LT_OTHER;
FSType fstype = FS_UNKNOWN;
Glib::ustring array;
std::vector<Glib::ustring> members;
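// Array device name and short member names parsed from an active
// array line, waiting to be classified by its following blocks
// line.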
while ( getline( input, line ) )
{
if ( line.find( " : active " ) != std::string::npos )
{
mdstat_line_type = MDSTAT_LT_ACTIVE;
// Found a line for an active array. Split into space
// separated fields.
std::vector<Glib::ustring> fields;
Utils::tokenize( line, fields, " " );
array = "/dev/" + fields[0];
members.clear();
for ( unsigned int i = 0 ; i < fields.size() ; i ++ )
{
Glib::ustring::size_type index = fields[i].find( "[" );
if ( index != Glib::ustring::npos )
{
// Field contains a "[" so it is the short kernel
// device name of a member.
members.push_back( "/dev/" + fields[i].substr( 0, index ) );
}
}
}
else if ( mdstat_line_type == MDSTAT_LT_ACTIVE &&
line.find( " blocks " ) != std::string::npos )
{
mdstat_line_type = MDSTAT_LT_BLOCKS;
// Found a blocks line for an array.
fstype = FS_UNKNOWN;
Glib::ustring super = Utils::regexp_label(line, "super ([[:graph:]]+)");
				// Kernel doesn't print the super type for version
				// 0.90 arrays, so accept an empty match.
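				// Illustrative /proc/mdstat detail lines (exact
				// figures vary per array):
				//     524224 blocks super 1.0 [2/2] [UU]
				//     524224 blocks [2/2] [UU]          <- 0.90, no "super"
				//     16771072 blocks super external:/md127/0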
if (super == "" || super == "1.0" ||
super == "1.1" || super == "1.2" )
{
fstype = FS_LINUX_SWRAID;
}
else if (super.compare(0, 9, "external:") == 0)
{
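					// "external:" metadata (e.g. "external:imsm",
					// "external:ddf", or a container member such as
					// "external:/md127/0") marks a Firmware / BIOS /
					// ATARAID array run by the kernel MD driver.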
fstype = FS_ATARAID;
}
else
{
// Skip unrecognised super type.
mdstat_line_type = MDSTAT_LT_OTHER;
continue;
}
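				// Record each member device of this active array
				// (e.g. "/dev/sda1") in the cache, updating entries
				// created from mdadm output and inserting entries
				// for members mdadm didn't report.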
for ( unsigned int i = 0 ; i < members.size() ; i ++ )
{
SWRaid_Member & memb = get_cache_entry_by_member( members[i] );
if ( memb.member.m_name.length() > 0 )
{
// Update existing cache entry, setting
// array and active flag.
memb.array = array;
memb.active = true;
}
else
{
						// Member not already in the cache
						// (mdadm command possibly missing),
						// so insert a new cache entry.
SWRaid_Member new_memb;
new_memb.member = BlockSpecial( members[i] );
new_memb.fstype = fstype;
new_memb.array = array;
new_memb.uuid = "";
new_memb.label = "";
new_memb.active = true;
swraid_info_cache.push_back( new_memb );
}
}
array.clear();
members.clear();
}
else
{
mdstat_line_type = MDSTAT_LT_OTHER;
}
}
input.close();
}
}

// Perform a linear search of the cache to find the matching member.
// Returns the found cache entry, or a "not found" substitute.
SWRaid_Member & SWRaid_Info::get_cache_entry_by_member( const Glib::ustring & member_path )
{
BlockSpecial bs = BlockSpecial( member_path );
for ( unsigned int i = 0 ; i < swraid_info_cache.size() ; i ++ )
{
if ( bs == swraid_info_cache[i].member )
return swraid_info_cache[i];
}
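	// Not found.  Return a shared static substitute whose empty member
	// name lets callers detect the miss.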
static SWRaid_Member memb = {BlockSpecial(), FS_UNKNOWN, "", "", "", false};
return memb;
}

// Reformat an mdadm-printed UUID into canonical format.  Returns "" if the
// source is not correctly formatted.
// E.g. "15224a42:c25bbcd9:15db6000:4e5fe53a" -> "15224a42-c25b-bcd9-15db-60004e5fe53a"
Glib::ustring SWRaid_Info::mdadm_to_canonical_uuid( const Glib::ustring & mdadm_uuid )
{
Glib::ustring verified_uuid = Utils::regexp_label( mdadm_uuid,
"^([[:xdigit:]]{8}:[[:xdigit:]]{8}:[[:xdigit:]]{8}:[[:xdigit:]]{8})$" );
if ( verified_uuid.empty() )
return verified_uuid;
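	// Substring map from the verified 35 character source to the five
	// canonical groups:
	//     [0,8) "-" [9,13) "-" [13,17) "-" [18,22) "-" [22,26)+[27,35)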
Glib::ustring canonical_uuid = verified_uuid.substr( 0, 8) + "-" +
verified_uuid.substr( 9, 4) + "-" +
verified_uuid.substr(13, 4) + "-" +
verified_uuid.substr(18, 4) + "-" +
verified_uuid.substr(22, 4) + verified_uuid.substr(27, 8);
return canonical_uuid;
}
} //GParted