gparted/src/SWRaid_Info.cc

/* Copyright (C) 2015 Mike Fleetwood
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, see <http://www.gnu.org/licenses/>.
 */
#include "../include/SWRaid_Info.h"
#include "../include/Utils.h"
#include <glibmm/ustring.h>
#include <fstream>
namespace GParted
{
// Data model:
// mdadm_found - Is the "mdadm" command available?
// swraid_info_cache - Vector of member information in Linux Software RAID arrays.
// E.g.
// //member , uuid , label , active
// [{"/dev/sda1", "15224a42-c25b-bcd9-15db-60004e5fe53a", "chimney:1", true },
// {"/dev/sda2", "15224a42-c25b-bcd9-15db-60004e5fe53a", "chimney:1", true },
// {"/dev/sda6", "8dc7483c-d74e-e0a8-b6a8-dc3ca57e43f8", "" , false},
// {"/dev/sdb6", "8dc7483c-d74e-e0a8-b6a8-dc3ca57e43f8", "" , false}
// ]
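//
// The SWRaid_Member record cached in the vector above is assumed (from its
// declaration in ../include/SWRaid_Info.h) to carry these four fields:
//     struct SWRaid_Member
//     {
//         Glib::ustring member;  // Member device path, e.g. "/dev/sda1"
//         Glib::ustring uuid;    // Array UUID in canonical (dashed) form
//         Glib::ustring label;   // Array name, or "" when the array is unnamed
//         bool          active;  // Whether the member is part of a running array
//     };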
// Initialise static data elements
bool SWRaid_Info::mdadm_found = false;
std::vector<SWRaid_Member> SWRaid_Info::swraid_info_cache;
void SWRaid_Info::load_cache()
{
        set_command_found();
        load_swraid_info_cache();
}
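
// NOTE: get_cache_entry_by_member(), used by the query methods below, is a
// private helper assumed to be defined later in this file.  It appears to
// return a reference to the matching cache entry, or to an entry with an
// empty member path when there is no match, which is why callers confirm a
// real match by comparing .member against the path they passed in.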
bool SWRaid_Info::is_member( const Glib::ustring & member_path )
{
        const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
        if ( memb.member == member_path )
                return true;
        return false;
}
bool SWRaid_Info::is_member_active( const Glib::ustring & member_path )
{
        const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
        if ( memb.member == member_path )
                return memb.active;
        return false;  // No such member
}
// Return the array UUID for the specified member, or "" when the UUID could
// not be parsed or there is no such member.
Glib::ustring SWRaid_Info::get_uuid( const Glib::ustring & member_path )
{
        const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
        return memb.uuid;
}
// Return the array label (array name in mdadm terminology) for the specified
// member, or "" when the array has no label or there is no such member.
// (Metadata 0.90 arrays don't have names.  Metadata 1.x arrays are always
// named, defaulting to <hostname>:<array-number> when not otherwise
// specified).
Glib::ustring SWRaid_Info::get_label( const Glib::ustring & member_path )
{
        const SWRaid_Member & memb = get_cache_entry_by_member( member_path );
        return memb.label;
}
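
// Illustrative caller-side sketch (hypothetical code, not GParted's actual
// detection logic):
//     SWRaid_Info::load_cache();  // Once per device refresh
//     if ( SWRaid_Info::is_member( "/dev/sda1" ) )
//     {
//         Glib::ustring uuid   = SWRaid_Info::get_uuid( "/dev/sda1" );
//         Glib::ustring label  = SWRaid_Info::get_label( "/dev/sda1" );
//         bool          active = SWRaid_Info::is_member_active( "/dev/sda1" );
//     }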
// Private methods

void SWRaid_Info::set_command_found()
{
        mdadm_found = ! Glib::find_program_in_path( "mdadm" ).empty();
}
void SWRaid_Info::load_swraid_info_cache()
{
        Glib::ustring output, error;
        swraid_info_cache.clear();
        if ( ! mdadm_found )
                return;
        // Load SWRaid members into the cache.  Load member device, array UUID and array
        // label (array name in mdadm terminology).
        Glib::ustring cmd = "mdadm --examine --scan --verbose";
        if ( ! Utils::execute_command( cmd, output, error, true ) )
        {
                // Extract information from Linux Software RAID arrays only, excluding
                // IMSM and DDF arrays.  Example output:
                //     ARRAY metadata=imsm UUID=9a5e3477:e1e668ea:12066a1b:d3708608
                //        devices=/dev/sdd,/dev/sdc,/dev/md/imsm0
                //     ARRAY /dev/md/MyRaid container=9a5e3477:e1e668ea:12066a1b:d3708608 member=0 UUID=47518beb:cc6ef9e7:c80cd1c7:5f6ecb28
                //
                //     ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2 UUID=15224a42:c25bbcd9:15db6000:4e5fe53a name=chimney:1
                //        devices=/dev/sda1,/dev/sdb1
                //     ARRAY /dev/md5 level=raid1 num-devices=2 UUID=8dc7483c:d74ee0a8:b6a8dc3c:a57e43f8
                //        devices=/dev/sda6,/dev/sdb6
                std::vector<Glib::ustring> lines;
                Utils::split( output, lines, "\n" );
                enum LINE_TYPE
                {
                        LINE_TYPE_OTHER = 0,
                        LINE_TYPE_ARRAY = 1,
                        LINE_TYPE_DEVICES = 2
                };
                LINE_TYPE line_type = LINE_TYPE_OTHER;
                Glib::ustring uuid;
                Glib::ustring label;
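                // Parse via a simple line-type state machine: an "ARRAY " line
                // for a Linux Software RAID array records the array's uuid and
                // label, then its following "devices=" continuation line adds
                // one cache entry per listed member device.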
                for ( unsigned int i = 0 ; i < lines.size() ; i ++ )
                {
                        if ( lines[i].substr( 0, 6 ) == "ARRAY " )
                        {
                                line_type = LINE_TYPE_ARRAY;
                                Glib::ustring metadata_type = Utils::regexp_label( lines[i],
                                                "metadata=([[:graph:]]+)" );
                                // Mdadm with these flags doesn't seem to print the
                                // metadata tag for 0.90 version arrays.  Accept no tag
                                // (or empty version) as well as "0.90".
                                if ( metadata_type != "" && metadata_type != "0.90" &&
                                     metadata_type != "1.0" && metadata_type != "1.1" &&
                                     metadata_type != "1.2" )
                                {
                                        // Skip mdadm reported non-Linux Software RAID arrays
                                        line_type = LINE_TYPE_OTHER;
                                        continue;
                                }
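                                // mdadm_to_canonical_uuid() (a private helper
                                // assumed to be defined later in this file)
                                // reformats mdadm's colon-separated UUID into
                                // the dashed form reported by blkid, e.g.
                                // "15224a42:c25bbcd9:15db6000:4e5fe53a" ->
                                // "15224a42-c25b-bcd9-15db-60004e5fe53a".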
                                uuid = mdadm_to_canonical_uuid(
                                                Utils::regexp_label( lines[i], "UUID=([[:graph:]]+)" ) );
                                label = Utils::regexp_label( lines[i], "name=(.*)$" );
                        }
                        else if ( line_type == LINE_TYPE_ARRAY &&
                                  lines[i].find( "devices=" ) != Glib::ustring::npos )
                        {
                                line_type = LINE_TYPE_DEVICES;
                                Glib::ustring devices_str = Utils::regexp_label( lines[i],
                                                "devices=([[:graph:]]+)" );
                                std::vector<Glib::ustring> devices;
                                Utils::split( devices_str, devices, "," );
                                for ( unsigned int j = 0 ; j < devices.size() ; j ++ )
                                {
                                        SWRaid_Member memb;
                                        memb.member = devices[j];
                                        memb.uuid = uuid;
                                        memb.label = label;
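                                        // Active status cannot be determined from
                                        // the "mdadm -Esv" output alone, so assume
                                        // inactive here; presumably it is updated
                                        // from the kernel's view (e.g. /proc/mdstat)
                                        // later in this file.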
                                        memb.active = false;
                                        swraid_info_cache.push_back( memb );
                                }
                                uuid.clear();
                                label.clear();
}
else
{
line_type = LINE_TYPE_OTHER;
}
}
}
// Set which SWRaid members are active.
std::string line;
std::ifstream input( "/proc/mdstat" );
if ( input )
{
// Read /proc/mdstat extracting members for active arrays, marking them
// active in the cache. Example fragment of /proc/mdstat:
// md1 : active raid1 sdb1[0] sdb2[1]
// 1047552 blocks super 1.2 [2/2] [UU]
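		// From the example above, the loop below extracts the member names
		// "sdb1" and "sdb2", prefixes them with "/dev/" and marks those
		// cache entries active.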
while ( getline( input, line ) )
{
if ( line.find( " : active " ) != std::string::npos )
{
				// Found a line for an active array.  Split it into
				// space-separated fields.
std::vector<Glib::ustring> fields;
Utils::tokenize( line, fields, " " );
for ( unsigned int i = 0 ; i < fields.size() ; i ++ )
{
Glib::ustring::size_type index = fields[i].find( "[" );
if ( index != Glib::ustring::npos )
{
						// Field contains a "[" so it holds the
						// short kernel device name of a member.
						// Mark that member active in the cache.
Glib::ustring mpath = "/dev/" +
fields[i].substr( 0, index );
SWRaid_Member & memb = get_cache_entry_by_member( mpath );
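						// Guard against a failed lookup: only mark
						// a genuine cache entry active, never the
						// shared "not found" sentinel that
						// get_cache_entry_by_member() returns.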
if ( memb.member == mpath )
memb.active = true;
}
}
}
}
input.close();
}
}
// Perform a linear search of the cache to find the matching member.
// Returns the found cache entry, or a blank "not found" substitute.
SWRaid_Member & SWRaid_Info::get_cache_entry_by_member( const Glib::ustring & member_path )
{
for ( unsigned int i = 0 ; i < swraid_info_cache.size() ; i ++ )
{
if ( member_path == swraid_info_cache[i].member )
return swraid_info_cache[i];
}
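	// Not found.  Return a reference to a static "not found" entry with blank
	// fields.  Callers must compare the .member field before modifying the
	// returned entry, otherwise this shared sentinel would be corrupted.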
static SWRaid_Member memb = {"", "", "", false};
return memb;
}
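
// A minimal sketch of a lookup at a hypothetical call site (the device path and
// surrounding code are illustrative only, not part of this class's API):
//     SWRaid_Member & memb = get_cache_entry_by_member( "/dev/sdb1" );
//     if ( memb.member == "/dev/sdb1" )
//         // "/dev/sdb1" is a known SWRaid member; memb.uuid, memb.label
//         // and memb.active describe the array it belongs to.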
// Reformat an mdadm printed UUID into canonical format.  Returns "" if the source
// is not correctly formatted.
// E.g. "15224a42:c25bbcd9:15db6000:4e5fe53a" -> "15224a42-c25b-bcd9-15db-60004e5fe53a"
Glib::ustring SWRaid_Info::mdadm_to_canonical_uuid( const Glib::ustring & mdadm_uuid )
{
Glib::ustring verified_uuid = Utils::regexp_label( mdadm_uuid,
"^([[:xdigit:]]{8}:[[:xdigit:]]{8}:[[:xdigit:]]{8}:[[:xdigit:]]{8})$" );
if ( verified_uuid.empty() )
return verified_uuid;
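	// The offsets below index into "xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx" (colons
	// at positions 8, 17 and 26), regrouping the 32 hex digits into the canonical
	// 8-4-4-4-12 layout.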
Glib::ustring canonical_uuid = verified_uuid.substr( 0, 8) + "-" +
verified_uuid.substr( 9, 4) + "-" +
verified_uuid.substr(13, 4) + "-" +
verified_uuid.substr(18, 4) + "-" +
verified_uuid.substr(22, 4) + verified_uuid.substr(27, 8);
return canonical_uuid;
}
} //GParted