A user reported that it was not possible to deactivate an active LVM
Physical Volume. They were using the GParted app from GParted Live 1.6.0.
This behaviour has been reproduced on GParted Live 1.6.0-3 (Debian SID
as of 2024-04-08) and Fedora Rawhide as of 2024-08-15, both recent
development branches of their respective distributions. It was not
possible to replicate this on the latest stable releases: Debian 12,
Fedora 40 and Ubuntu 24.04 LTS.
GParted did deactivate the LVM Volume Group containing the Physical
Volume but device refresh probing triggered a udev rule which
re-activated the Volume Group. Summary:
1. GParted read the partition table by calling ped_disk_new().
2. Libparted opened the whole disk and every partition read-write to
flush the caches for coherency. On closing the file handles ...
3. The kernel generated partition remove and add uevents.
4. Udev triggered the rule to start the LVM VG.
Details obtained with the help of udevadm monitor, strace and
journalctl:
GParted | set_devices_thread()
GParted | set_device_from_disk()
GParted | get_device()
GParted | get_disk()
libparted| ped_disk_new()
libparted| openat(AT_FDCWD, "/dev/sdb", O_RDWR) = 7
libparted| ioctl(7, BLKFLSBUF) = 0
libparted| openat(AT_FDCWD, "/dev/sdb1", O_RDWR) = 8
libparted| ioctl(8, BLKFLSBUF) = 0
libparted| fsync(8) = 0
libparted| close(8) = 0
KERNEL | change /devices/pci0000:00/.../block/sdb/sdb1 (block)
UDEV | change /devices/pci0000:00/.../block/sdb/sdb1 (block)
libparted| fsync(7) = 0
libparted| close(7) = 0
KERNEL | remove /devices/pci0000:00/.../block/sdb/sdb1 (block)
KERNEL | change /devices/pci0000:00/.../block/sdb (block)
KERNEL | add /devices/pci0000:00/.../block/sdb/sdb1 (block)
UDEV | remove /devices/pci0000:00/.../block/sdb/sdb1 (block)
UDEV | change /devices/pci0000:00/.../block/sdb (block)
UDEV | add /devices/pci0000:00/.../block/sdb/sdb1 (block)
SYSLOG | lvm[62502]: PV /dev/sdb1 online, VG testvg is complete.
KERNEL | add /devices/virtual/bdi/253:0 (block)
KERNEL | add /devices/virtual/block/dm-0 (block)
KERNEL | change /devices/virtual/block/dm-0 (block)
UDEV | add /devices/virtual/bdi/253:0 (block)
UDEV | add /devices/virtual/block/dm-0 (block)
UDEV | change /devices/virtual/block/dm-0 (block)
SYSLOG | systemd[1]: Started lvm-activate-testvg.service - /usr/sbin/lvm vgchange -aay --autoactivation event testvg.
SYSLOG | lvm[62504]: 1 logical volume(s) in volume group "testvg" now active
SYSLOG | systemd[1]: lvm-activate-testvg.service: Deactivated successfully.
# grep 'lvm vgchange -a' /usr/lib/udev/rules.d/*lvm*
/usr/lib/udev/rules.d/69-dm-lvm.rules:... RUN+="... /usr/sbin/lvm vgchange -aay --autoactivation event $env{LVM_VG_NAME_COMPLETE}"
Evaluation using systemd's Locking Block Device Access [1]. Took a BSD
file lock on /dev/sdb while GParted was probing the drive. Used Python
from another terminal:
# python
>>> import fcntl
>>> f = open('/dev/sdb', 'wb')
>>> fcntl.flock(f, fcntl.LOCK_EX|fcntl.LOCK_NB)
Ran GParted. Released the lock by closing the open file when GParted
display had updated.
>>> f.close()
The lock temporarily stopped the Volume Group being activated so GParted
displayed it as inactive, but as soon as the lock was released the udev
rule fired and the Volume Group was activated. This is an even worse
situation as GParted displayed the Volume Group as inactive but it was
actually active. Therefore GParted can't use this method.
This type of issue has been encountered before with bcache devices [2]
and Linux Software RAID arrays [3] being automatically started by device
probing. Fix this using the same method: temporarily add a blank
override rule which does nothing.
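As an illustration, the blank override method amounts to something like
the following (the rule file name comes from the grep above; the exact
steps performed by the gparted shell wrapper may differ):
    # An empty file of the same name in /run/udev/rules.d takes precedence
    # over the packaged rule, so the autoactivation RUN command never fires.
    touch /run/udev/rules.d/69-dm-lvm.rules
    udevadm control --reload
    # ... GParted probes the devices here ...
    rm -f /run/udev/rules.d/69-dm-lvm.rules
    udevadm control --reload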
[1] systemd - Locking Block Device Access
https://systemd.io/BLOCK_DEVICE_LOCKING/
[2] 8640f91a4f
Prevent GParted probe starting stopped bcache (#183)
[3] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
Closes#259 - Trying to deactivate LVM PV fails
When blanking of udev rules was first tested [1][2] and added [3] all
the distributions at the time (CentOS 6, Debian 6, Fedora 19,
openSUSE 12.2, Ubuntu 12.04 LTS) stored the system default rules in
directory /lib/udev/rules.d. Now most distributions (CentOS Stream 9,
Debian 11, Fedora 38, Ubuntu 22.04 LTS, openSUSE Leap 15.4) store the
system default rules in directory /usr/lib/udev/rules.d. Most of these
distributions have a merged /usr file system [4][5] so /lib is a symlink
to /usr/lib and the system default rules can still be found using the
original directory. But openSUSE 15.4 doesn't have a merged /usr so the
gparted shell wrapper doesn't find the system default rules in directory
/usr/lib/udev/rules.d and doesn't prevent auto starting of Linux
Software RAID arrays and bcache devices during a storage probe.
An extra consideration is that Alpine Linux 3.17 doesn't have a merged
/usr file system, but has both /lib/udev/rules.d and
/usr/lib/udev/rules.d directories with different rules files. Therefore
fix this by checking for system default udev rules in both directories.
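A sketch of checking both locations, using rule file names handled
elsewhere in this series; the wrapper's actual loop may differ:
    # Shadow a system default rule if it exists in either possible
    # location, covering merged /usr, unmerged /usr and Alpine's layout.
    for dir in /lib/udev/rules.d /usr/lib/udev/rules.d; do
        for rule in 65-md-incremental.rules 69-bcache.rules; do
            if [ -e "$dir/$rule" ]; then
                touch "/run/udev/rules.d/$rule"
            fi
        done
    done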
[1] Bug 709640 - Linux Swap Suspend and Software RAID partitions not
recognised, comment 7
https://bugzilla.gnome.org/show_bug.cgi?id=709640#c7
[2] Bug 709640 - Linux Swap Suspend and Software RAID partitions not
recognised, comment 12
https://bugzilla.gnome.org/show_bug.cgi?id=709640#c12
[3] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
[4] The Case for the /usr Merge
http://0pointer.de/blog/projects/the-usr-merge
[5] The Case for the /usr Merge
https://www.freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
Closes!116 - Systemd mount masking and udev rule location updates
After the previous commit "Stop masking the root file system mount
unit", GParted now reports this error to the terminal on Debian 10 and
11:
# gparted
Too few arguments.
GParted 1.5.0-git
configuration --enable-online-resize
libparted 3.2
Debian installations, at least on PC hardware using BIOS booting and
therefore without a /boot/efi file system, have only a single file
system, / (root). That is now excluded from masking, so the gparted
shell wrapper runs systemctl without any mount units to mask. Hence the
error.
# systemctl --runtime mask --quiet --
Too few arguments.
# echo $?
1
Fix this by only masking and unmasking units when the list of mounts
is non-empty.
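A sketch of the guard, where MOUNTLIST is a hypothetical variable
holding the space separated mount unit names:
    # Only call systemctl when there is at least one unit to mask,
    # otherwise it fails with "Too few arguments."
    if [ -n "$MOUNTLIST" ]; then
        systemctl --runtime mask --quiet -- $MOUNTLIST
    fi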
Closes!116 - Systemd mount masking and udev rule location updates
Masking the root file system (-.mount) unit led to a Debian package
upgrade failing as reported here [1]. This was fixed in systemd 245
[2][3] by not allowing perpetual units to be masked. As the root file
system can't be mounted or unmounted while GParted is running, it
doesn't need to be prevented by masking the unit. Therefore stop
masking the root file system mount unit.
[1] Debian bug #948710 - handle masked .mount unit more gracefully
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=948710
[2] systemd issue #14550 - Handle masked .mount units more gracefully
https://github.com/systemd/systemd/issues/14550
[3] core: never allow perpetual units to be masked
88414eed6f
Closes!116 - Systemd mount masking and udev rule location updates
On RHEL / CentOS 8 GParted reports this error to the terminal when it is
closed:
# gparted
GParted 1.5.0-git
configuration --enable-online-resize
libparted 3.2
>> --runtime cannot be used with unmask
# echo $?
0
and leaves mount units masked:
# systemctl list-units '*.mount'
UNIT LOAD ACTIVE SUB DESCRIPTION
------------------------------------------------------------------
* -.mount masked active mounted Root Mount
* boot.mount masked active mounted boot.mount
...
This is because of this change [1] released in systemd 239. Systemd bug
9393 [2] was raised and the change was reverted [3] in systemd 240.
According to repology.org only RHEL / CentOS 8 (and clones) and Fedora
29 shipped with systemd 239 [4].
Fix by detecting a non-zero exit status from systemctl and falling back
to directly removing the runtime mount unit mask files instead.
systemctl daemon-reload then has to be used to make systemd reload its
configuration from disk and discover the masks have been removed.
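A sketch of the fallback, assuming the runtime masks are the usual
/run/systemd/system/<unit> symlinks to /dev/null (MOUNTLIST is a
hypothetical variable):
    # systemd 239 rejects "--runtime unmask", so if systemctl fails remove
    # the runtime mask symlinks directly and tell systemd to reload.
    if ! systemctl --runtime unmask --quiet -- $MOUNTLIST; then
        for unit in $MOUNTLIST; do
            rm -f "/run/systemd/system/$unit"
        done
        systemctl daemon-reload
    fi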
[1] systemctl: when removing enablement or mask symlinks, cover both
/run and /etc
4910b35078
[2] systemctl no longer allows unmask in combination with --runtime
#9393
https://github.com/systemd/systemd/issues/9393
[3] Revert "systemctl: when removing enablement or mask symlinks, cover
both /run and /etc"
1830ac51a4
[4] Versions for systemd
https://repology.org/project/systemd/versions
Closes!116 - Systemd mount masking and udev rule location updates
Udev stopped supporting volatile udev rules in /dev/.udev/rules.d in
udev 176, released 2012-01-11 [1]. The oldest supported distributions
use much more recent combined systemd and udev releases.
Distro EOL udevadm -V
Debian 9 2022-Jun 232
RHEL / CentOS 7 2024-Jun 219
Ubuntu 18.04 LTS 2023-Apr 237
Now udev only reads volatile rules from /run/udev/rules.d [2]. Simplify
the code a little.
[1] udev 176 NEWS
https://git.kernel.org/pub/scm/linux/hotplug/udev.git/tree/NEWS?h=176
"A writable /run directory (ususally tmpfs) is required now for a
fully functional udev, there is no longer a fallback to /dev/.udev."
[2] man 7 udev
"RULES FILES
The udev rules are read from the files located in the system rules
directory /usr/lib/udev/rules.d, the volatile runtime directory
/run/udev/rules.d and the local administration directory
/etc/udev/rules.d."
From the setup in the previous commit, unregister (stop) all of the
bcache backing and cache devices.
# bcache unregister /dev/sdb2
# bcache unregister /dev/sdb1
# bcache unregister /dev/sdc1
# bcache show
Name Type State Bname AttachToDev
/dev/sdb2 1 (data) inactive Non-Exist Alone
/dev/sdb1 1 (data) inactive Non-Exist Alone
/dev/sdc1 3 (cache) inactive N/A N/A
Run GParted. Just the device scanning causes the stopped bcache devices
to be restarted.
# bcache show
Name Type State Bname AttachToDev
/dev/sdb2 1 (data) clean(running) bcache1 /dev/sdc1
/dev/sdb1 1 (data) clean(running) bcache0 /dev/sdc1
/dev/sdc1 3 (cache) active N/A N/A
This is nothing new with this patchset, but is a result of existing udev
behaviour. The chain of events goes like this:
1. GParted calls ped_device_get() on each whole device;
2. Libparted opens each partition read-write to flush the cache;
3. When each is closed the kernel emits a block change event;
4. Udev fires block rules to detect the possibly changed content;
5. Udev fires bcache register (AKA start) rule.
More details with the help of udevadm monitor, strace and syslog:
GParted | set_devices_thread()
GParted | ped_device_get("/dev/sdb")
Libparted| ...
Libparted| openat(AT_FDCWD, "/dev/sdb1", O_WRONLY) = 9
Libparted| ioctl(9, BLKFLSBUF) = 0
Libparted| close(9)
KERNEL | change /devices/.../block/sdb/sdb1 (block)
KERNEL | add /devices/virtual/bdi/250:0 (bdi)
KERNEL | add /devices/virtual/block/bcache0 (block)
KERNEL | change /devices/virtual/block/bcache0 (block)
UDEV | change /devices/.../block/sdb/sdb1 (block)
UDEV | add /devices/virtual/bdi/250:0 (bdi)
UDEV | add /devices/virtual/block/bcache0 (block)
UDEV | change /devices/virtual/block/bcache0 (block)
SYSLOG | kernel: bcache: register_bdev() registered backing device sdb1
# grep bcache-register /lib/udev/rules.d/69-bcache.rules
RUN+="bcache-register $tempnode"
Fix this by temporarily adding a blank udev override rule to suppress
automatic starting of bcache devices, just as was previously done for
Linux Software RAID arrays [1].
[1] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
Closes#183 - Basic support for bcache
Debian (and derived) distros with the udisks2 [1] repository and the
additional 'udisks2-inhibit' executable had the location changed from:
/usr/lib/udisks2/
to:
/usr/libexec/udisks2/
with udisks2 version 2.8.4-2 and the following commit:
f6744a33 - Move the daemons to /usr/libexec now that's allowed in the policy
f6744a3364
Distros such as Fedora and openSUSE are unaffected as the udisks [2]
repository does not contain 'udisks2-inhibit'.
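A sketch of probing both locations (the variable name is illustrative):
    # Use udisks2-inhibit from whichever known location exists, preferring
    # the newer /usr/libexec path over the older /usr/lib path.
    UDISKS2_INHIBIT=""
    for candidate in /usr/libexec/udisks2/udisks2-inhibit \
                     /usr/lib/udisks2/udisks2-inhibit; do
        if [ -x "$candidate" ]; then
            UDISKS2_INHIBIT="$candidate"
            break
        fi
    done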
[1] udisks2 Debian (and derived) repository
https://salsa.debian.org/utopia-team/udisks2
[2] udisks repository
https://github.com/storaged-project/udisks
Closes!84 - Handle change in path for udisks2-inhibit executable
Executables which are not intended for execution by users, but by other
programs, should be installed into /usr/libexec [1][2]. gpartedbin
falls into this category. Update its installation accordingly.
Standard Autotools details: gpartedbin will be installed into
EPREFIX/libexec by default. To install gpartedbin into a different
directory set libexecdir when configuring the build system. Like this
from git:
./autogen.sh --libexecdir=DIR
or like this from tar release:
./configure --libexecdir=DIR
[1] Filesystem Hierarchy Standard, version 3.0,
4.7. /usr/libexec : Binaries run by other programs (optional)
https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch04s07.html
"/usr/libexec includes internal binaries that are not intended to be
executed directly by users or shell scripts.
"
[2] GNU Coding Standards, June 12, 2020,
7.2.5 Variables for Installation Directories
https://www.gnu.org/prep/standards/html_node/Directory-Variables.html
"libexecdir
The directory for installing executable programs to be run by other
programs rather than by users. This directory should normally be
/usr/local/libexec, but write it as $(exec_prefix)/libexec. (If you
are using Autoconf, write it as '@libexecdir@'.)
"
Closes#85 - Please install gpartedbin under /usr/libexec instead of
/usr/sbin
gparted shell wrapper always exits with a 0 status even if gpartedbin
fails. For example make gpartedbin fail with a non-zero exit status
like this:
$ (unset DISPLAY; unset XAUTHORITY; /usr/sbin/gpartedbin)
(gpartedbin:3936): Gtk-WARNING **: 16:36:06.263: cannot open display:
$ echo $?
1
However the gparted shell wrapper instead exits with successful status
0:
$ (unset DISPLAY; unset XAUTHORITY; gparted)
(gpartedbin:4282): Gtk-WARNING **: 16:39:23.514: cannot open display:
$ echo $?
0
Fix this.
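A sketch of propagating the status (cleanup steps elided; the wrapper's
real sequence is more involved):
    # Remember the exit status of the real binary and exit the wrapper
    # with that same status after any cleanup has run.
    /usr/sbin/gpartedbin "$@"
    status=$?
    # ... unmask mount units and other cleanup ...
    exit $status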
On Ubuntu the gparted shell wrapper still attempts to mask lots of
non-block device based file systems. Remove the --quiet option from the
systemctl --runtime mask command to see:
$ gparted
Created symlink /run/systemd/system/snap-gnome\x2d3\x2d34\x2d1804-66.mount -> /dev/null.
Created symlink /run/systemd/system/snap-core-10583.mount -> /dev/null.
Created symlink /run/systemd/system/boot-efi.mount -> /dev/null.
Created symlink /run/systemd/system/snap-gtk\x2dcommon\x2dthemes-1514.mount -> /dev/null.
Created symlink /run/systemd/system/snap-core-10577.mount -> /dev/null.
Created symlink /run/systemd/system/snap-core18-1944.mount -> /dev/null.
Created symlink /run/systemd/system/run-user-1000-doc.mount -> /dev/null.
Created symlink /run/systemd/system/snap-gtk\x2dcommon\x2dthemes-1506.mount -> /dev/null.
Created symlink /run/systemd/system/snap-gnome\x2d3\x2d28\x2d1804-128.mount -> /dev/null.
Created symlink /run/systemd/system/snap-snap\x2dstore-518.mount -> /dev/null.
Created symlink /run/systemd/system/snap-gnome\x2d3\x2d28\x2d1804-145.mount -> /dev/null.
Created symlink /run/systemd/system/snap-core18-1932.mount -> /dev/null.
Created symlink /run/systemd/system/snap-snap\x2dstore-467.mount -> /dev/null.
Created symlink /run/systemd/system/snap-gnome\x2d3\x2d34\x2d1804-60.mount -> /dev/null.
Created symlink /run/systemd/system/-.mount -> /dev/null.
GParted 1.0.0
configuration --enable-libparted-dmraid --enable-online-resize
libparted 3.3
The gparted shell wrapper is currently looking for non-masked Systemd
mount units where the 'What' property starts "/dev/". However Ubuntu
also uses snap packages, which are file images mounted via loop devices:
$ grep '^/dev/' /proc/mounts | sort
/dev/fuse /run/user/1000/doc fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
/dev/loop0 /snap/core/10583 squashfs ro,nodev,relatime 0 0
/dev/loop10 /snap/snap-store/518 squashfs ro,nodev,relatime 0 0
/dev/loop11 /snap/snap-store/467 squashfs ro,nodev,relatime 0 0
/dev/loop12 /snap/gtk-common-themes/1506 squashfs ro,nodev,relatime 0 0
/dev/loop1 /snap/core/10577 squashfs ro,nodev,relatime 0 0
/dev/loop3 /snap/core18/1944 squashfs ro,nodev,relatime 0 0
/dev/loop4 /snap/core18/1932 squashfs ro,nodev,relatime 0 0
/dev/loop5 /snap/gnome-3-34-1804/66 squashfs ro,nodev,relatime 0 0
/dev/loop6 /snap/gnome-3-28-1804/128 squashfs ro,nodev,relatime 0 0
/dev/loop7 /snap/gnome-3-34-1804/60 squashfs ro,nodev,relatime 0 0
/dev/loop8 /snap/gnome-3-28-1804/145 squashfs ro,nodev,relatime 0 0
/dev/loop9 /snap/gtk-common-themes/1514 squashfs ro,nodev,relatime 0 0
/dev/sda1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
/dev/sda5 / ext4 rw,relatime,errors=remount-ro 0 0
Fix by excluding:
1. Device name "/dev/fuse" because it's a character device, not a block
   device, and the mount point is associated with snap,
2. Device names starting "/dev/loop" and where the mount point starts
"/snap/" [1]. This is to allow for use of GParted with explicitly
named loop devices.
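A sketch of the exclusion expressed as a helper; the should_mask
function, its arguments and the surrounding parsing are hypothetical:
    # Return success only for mount units backed by a real block device,
    # skipping /dev/fuse and snap package squashfs images on loop devices.
    should_mask() {
        what=$1
        where=$2
        case "$what" in
            /dev/fuse)  return 1 ;;              # character device used by snaps
            /dev/loop*) case "$where" in
                            /snap/*) return 1 ;; # snap image mount
                        esac ;;
        esac
        case "$what" in
            /dev/*) return 0 ;;
        esac
        return 1
    }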
[1] The system /snap directory
https://snapcraft.io/docs/system-snap-directory
Closes#129 - Unit \xe2\x97\x8f.service does not exist, proceeding
anyway
The gparted shell wrapper masks Systemd mount units to prevent it
automounting file systems while GParted is running [1], excluding
virtual file systems which GParted isn't interested in [2]. The problem
is that there are a lot of virtual file systems and they have changed
between Fedora 19 and 33 so now the exclusion list is out of date.
Run GParted on Fedora 33 and query the mount units while it is running:
$ systemctl list-units -t mount --full --all
UNIT LOAD ACTIVE SUB DESCRIPTION
-.mount loaded active mounted Root Mount
* boot.mount masked active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
* home.mount masked active mounted /home
* proc-fs-nfsd.mount masked inactive dead proc-fs-nfsd.mount
proc-sys-fs-binfmt_misc.mount loaded inactive dead Arbitrary Executable File Formats File System
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
* run-user-1000.mount masked active mounted /run/user/1000
* run-user-42.mount masked active mounted /run/user/42
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-config.mount loaded active mounted Kernel Configuration File System
sys-kernel-debug.mount loaded active mounted Kernel Debug File System
* sys-kernel-tracing.mount masked active mounted /sys/kernel/tracing
* sysroot.mount masked inactive dead sysroot.mount
* tmp.mount masked active mounted /tmp
* var-lib-machines.mount masked inactive dead var-lib-machines.mount
* var-lib-nfs-rpc_pipefs.mount masked active mounted /var/lib/nfs/rpc_pipefs
* var.mount masked inactive dead var.mount
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
19 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
So it masked these virtual file systems which didn't need to be masked:
* proc-fs-nfsd.mount masked inactive dead proc-fs-nfsd.mount
* run-user-1000.mount masked active mounted /run/user/1000
* run-user-42.mount masked active mounted /run/user/42
* sys-kernel-tracing.mount masked active mounted /sys/kernel/tracing
* var-lib-machines.mount masked inactive dead var-lib-machines.mount
* var-lib-nfs-rpc_pipefs.mount masked active mounted /var/lib/nfs/rpc_pipefs
Lines from /proc/mounts for some of these virtual file systems:
$ egrep '/run/user|/sys/kernel/tracing|/var/lib/nfs/rpc_pipefs' /proc/mounts
tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=202656k,nr_inodes=50664,mode=700,uid=42,gid=42,inode64 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=202656k,nr_inodes=50664,mode=700,uid=1000,gid=1000,inode64 0 0
none /sys/kernel/tracing tracefs rw,seclabel,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
And for contrast the lines from /proc/mounts for disk backed file systems:
$ egrep '^/dev/' /proc/mounts
/dev/sda1 /boot ext4 rw,seclabel,relatime 0 0
/dev/sda2 / btrfs rw,seclabel,relatime,space_cache,subvolid=258,subvol=/root 0 0
/dev/sda2 /home btrfs rw,seclabel,relatime,space_cache,subvolid=256,subvol=/home 0 0
Going back to first principles, GParted cares that Systemd doesn't
automount file systems on block devices. So instead only mask mount
units which are on block devices, i.e. where the 'What' property starts
"/dev/".
Systemd maintains hundreds of properties for each unit.
$ systemctl show boot.mount | wc -l
221
The properties of interest for all mount units can be queried like this:
$ systemctl show --all --property=What,Id,LoadState '*.mount'
...
What=sunrpc
Id=var-lib-nfs-rpc_pipefs.mount
LoadState=masked
What=/dev/sda1
Id=boot.mount
LoadState=masked
...
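A sketch of turning that output into the list of unit names to mask (the
awk parsing is illustrative):
    # Print the Id of every mount unit whose What property is a block
    # device node under /dev/.
    systemctl show --all --property=What,Id,LoadState '*.mount' |
        awk -F= '$1 == "What" { dev = $2 }
                 $1 == "Id" && dev ~ "^/dev/" { print $2 }'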
[1] 4c109df9b5
Use systemctl runtime mask to prevent automounting (#701676)
[2] 43de8e326a
Do not mask virtual file systems when using systemctl (#708378)
Closes#129 - Unit \xe2\x97\x8f.service does not exist, proceeding
anyway
With Systemd 246 on Fedora 33, running GParted reports this error and no
longer masks the system mount units:
$ gparted
Unit \xe2\x97\x8f.service does not exist, proceeding anyway.
Unit \xe2\x97\x8f.service does not exist, proceeding anyway.
GParted 1.1.0
configuration --enable-libparted-dmraid --enable-online-resize
libparted 3.3
$ systemctl list-units -t mount --full --all --no-legend
-.mount loaded active mounted Root Mount
boot.mount loaded active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
home.mount loaded active mounted /home
proc-fs-nfsd.mount loaded inactive dead NFSD configuration filesystem
proc-sys-fs-binfmt_misc.mount loaded inactive dead Arbitrary Executable File Formats File System
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
run-user-1000.mount loaded active mounted /run/user/1000
run-user-42.mount loaded active mounted /run/user/42
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-config.mount loaded active mounted Kernel Configuration File System
sys-kernel-debug.mount loaded active mounted Kernel Debug File System
sys-kernel-tracing.mount loaded active mounted Kernel Trace File System
* sysroot.mount not-found inactive dead sysroot.mount
tmp.mount loaded active mounted Temporary Directory (/tmp)
var-lib-machines.mount loaded inactive dead Virtual Machine and Container Storage (Compatibility)
var-lib-nfs-rpc_pipefs.mount loaded active mounted RPC Pipe File System
* var.mount not-found inactive dead var.mount
^
[Unicode Black Circle character (U+25CF) replaced with star to avoid
making this commit message Unicode.]
Currently the gparted shell wrapper lists the Systemd mount units and
takes the first space separated column as the unit name. If the LOAD
status of the unit is not "loaded" then Systemd prefixes the name with
an optional Black Circle. Prior to Systemd 246 these extra 2 characters
at the start of the line, including the optional Black Circle, were
suppressed by the --no-legend option, but with Systemd 246 this no
longer happens. As the mount unit names no longer start in the first
character of the line no units are masked. Instead the Unicode Black
Circle character, UTF-8 byte sequence E2 97 8F, is found at the start of
highlighted lines which results in this error:
Unit \xe2\x97\x8f.service does not exist, proceeding anyway.
Fix by adding the --plain option to suppress the optional Black Circle
in the systemctl output. Confirmed this option is available in the
oldest supported distributions with Systemd.
RedHat / CentOS 7 Systemd 219 systemctl has --plain option.
Ubuntu 16.04 LTS Systemd 229 systemctl has --plain option.
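A sketch of the adjusted listing (the awk post-processing is
illustrative):
    # --plain suppresses the leading bullet so the unit name is reliably
    # the first whitespace separated field again.
    systemctl list-units --full --all --plain --no-legend -t mount |
        awk '{ print $1 }'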
Closes#129 - Unit \xe2\x97\x8f.service does not exist, proceeding
anyway
Debian user reported a bug [1] that when they had PS_FORMAT environment
variable set it prevented GParted running:
# export PS_FORMAT='ruser,uid,pid,ppid,pri,ni,%cpu,%mem,vsz,rss,stat,tty,start,time,command'
# gparted
The process gpartedbin is already running.
Only one gpartedbin process is permitted.
# echo $?
1
The ps column 'command' includes the command and all its arguments,
rather than just the command name which ps displays by default. Thus the
shell wrapper finds the grep command it is using when searching for the
gpartedbin executable.
# ps -e | grep gpartedbin
root 0 26114 14777 19 0 0.0 0.0 112712 940 S+ pts/0 10:42:02 00:00:00 grep --color=auto gpartedbin
Fix by searching for running processes using pidof. pgrep does regular
expression matching whereas pidof checks that the program name is the
same [2]. Therefore pidof is preferred over pgrep [3].
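A sketch of the check; the messages and exit status mirror those quoted
above:
    # pidof matches the exact program name, so it is not confused by the
    # grep process or by PS_FORMAT changing the ps output format.
    if pidof gpartedbin >/dev/null 2>&1; then
        echo "The process gpartedbin is already running."
        echo "Only one gpartedbin process is permitted."
        exit 1
    fi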
[1] Debian bug #864932 - gparted fails if PS_FORMAT options are
specified
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=864932
[2] Difference between pidof and pgrep?
https://stackoverflow.com/questions/52151698/difference-between-pidof-and-pgrep
[3] [PATCH] gparted.in: Use reliable way of detecting gpartedbin process
existence
https://git.alpinelinux.org/aports/tree/community/gparted/gparted.in-Use-reliable-way-of-detecting-gpartedbin-.patch
Closes!54 - Fix gparted not launching when PS_FORMAT environment
variable set
Back in 2009 devicekit-disks package was renamed to udisks [1]. All
supported distributions use udisks (or more recently udisks2). None
have the old devkit-disks command. Therefore remove it from the GParted
shell wrapper.
[1] https://www.freedesktop.org/wiki/Software/DeviceKit-disks/
"Note
On December 1st 2009, DeviceKit-disks was renamed to udisks. This
release is expected to appear in distributions released in the first
half of 2010."
GParted fails to display when run under Wayland [1][2][3]. This is
because by intentional design Wayland doesn't allow applications with
root privileges access to the display [4].
As an interim workaround make the gparted shell wrapper use xhost to
grant root access to the X11 server if root doesn't already have access,
but only when configured. Granting root access must be explicitly
enabled when building GParted like this:
./configure --enable-xhost-root
It defaults to disabled. When the gpartedbin binary ends, the shell
wrapper revokes root access, but only if it granted such access.
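A sketch of the grant and revoke; the test for an existing entry and the
binary path are illustrative:
    # Grant root access to the X11 display only if it is not already
    # granted, and revoke it afterwards only if we granted it.
    granted=no
    if ! xhost | grep -qi '^SI:localuser:root$'; then
        xhost +SI:localuser:root
        granted=yes
    fi
    /usr/sbin/gpartedbin "$@"
    if [ "$granted" = yes ]; then
        xhost -SI:localuser:root
    fi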
[1] GNOME Bug 776437 - GParted fails to run as root under Wayland
https://bugzilla.gnome.org/show_bug.cgi?id=776437
[2] Ubuntu Bug 1652282 - GParted does not work in GNOME on Wayland
https://bugs.launchpad.net/ubuntu/+source/gparted/+bug/1652282
[3] Fedora Bug 1397103 - gparted not working under Wayland
https://bugzilla.redhat.com/show_bug.cgi?id=1397103
[4] Common Fedora 25 bugs
Running graphical apps with root privileges (e.g. gparted) does not
work on Wayland
https://fedoraproject.org/wiki/Common_F25_bugs#wayland-root-apps
Bug 776437 - GParted fails to run as root under Wayland
Now that the gparted script is intended to be run by ordinary users, as
well as root, install it into directory $prefix/bin rather than
$prefix/sbin.
Bug 776437 - GParted fails to run as root under Wayland
Move calling of the privilege escalation program which allows a normal
user to run GParted as root from the desktop file into the gparted
wrapper script. This is in preparation for further changes needed to
grant root access to the X11 display under Wayland.
Don't introduce yet another script so that there aren't two different
names to run GParted by for normal users and root. Using the same
gparted name but placing two different scripts at /usr/bin/gparted and
/usr/sbin/gparted is not possible because on Arch Linux /usr/sbin is a
symbolic link to /usr/bin.
Frequently asked questions, Does Arch follow the FHS?
https://wiki.archlinux.org/index.php/Frequently_asked_questions#Does_Arch_follow_the_FHS.3F
"Arch Linux follows the file system hierarchy for operating systems
using the systemd service manager. See file-hierarchy(7) for an
explanation of each directory along with their designations. In
particular, /bin, /sbin, and /usr/sbin are symbolic links to
/usr/bin, and /lib (and /lib64 if applicable) are symbolic links to
/usr/lib".
Bug 776437 - GParted fails to run as root under Wayland
In order to prevent potential corruption of newly created file systems,
when available use udisks2-inhibit with gpartedbin execution to prevent
automounting.
Original report:
Xubuntu install fail due partition auto mount defeats Gparted
https://bugs.launchpad.net/ubuntu/+source/thunar/+bug/1078445
Some GNU/Linux distributions use the udisks2 "udisksd" daemon and have
udisks2-inhibit at a known location. The known location is not in the
default PATH environment variable.
One known distribution that matches these criteria is xubuntu 14.04.
Interestingly, neither kubuntu 14.04 nor ubuntu 14.04 appears to have
the udisks2 "udisksd" daemon running, so they do not suffer from this
specific automounting problem.
Bug 745349 - gparted wrapper script needs updated for udisks2
Applying operations or just scanning the partitions in GParted was
causing all stopped Linux Software RAID arrays to be automatically
started. This is not new with this patch set, but is a result of the
following behaviour which has existed for a long time. The chain of
events goes like this:
1) Gparted calls commit_to_os() to update the kernel with the new
partition table;
2) Libparted calls ioctl() BLKPG_DEL_PARTITION on every partition to
delete every partition from the kernel. Succeeds on non-busy
partitions only;
3) Kernel emits udev partition remove event on every removed partition;
4) Libparted calls ioctl() BLKPG_ADD_PARTITION on every non-busy
partition to re-add the partition to the kernel;
5) Kernel emits udev partition add event on every added partition;
6) Udev rule:
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
RUN+="/sbin/mdadm -I $tempnode"
from either /lib/udev/rules.d/64-md-raid.rules or
.../65-md-incremental.rules incrementally starts the member in a
Linux Software RAID array.
Fix by temporarily adding blank override rules files which do nothing,
so that when the udev add and remove events for Linux Software RAID
array member partitions fire nothing is done; but only when required.
Note that really old versions of udev don't have rules to incrementally
start array members and some distributions comment out such rules.
Bug #709640 - Linux Swap Suspend and Software RAID partitions not
recognised
This enhancement removes the virtual file systems from the list of file
systems (shown below) to be masked.
The following output was captured using Fedora 19:
$ systemctl list-units --full --all -t mount
UNIT LOAD ACTIVE SUB DESCRIPTION
-.mount loaded active mounted /
boot.mount loaded active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File System
proc-sys-fs-binfmt_misc.mount loaded inactive dead Arbitrary Executable File Formats File System
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
sys-fs-fuse-connections.mount loaded active mounted FUSE Control File System
sys-kernel-config.mount loaded active mounted Configuration File System
sys-kernel-debug.mount loaded active mounted Debug File System
tmp.mount loaded active mounted Temporary Directory
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
10 loaded units listed.
To show all installed unit files use 'systemctl list-unit-files'.
Bug #708378 - Advertised new feature: Use systemctl runtime mask to
prevent automounting (#701676) doesn't work
A mistake was made in the following commit:
Use systemctl runtime mask to prevent automounting (#701676)
4c109df9b5
The intention was to use 'systemctl list-units' rather than
'systemctl list-unit-files' so that auto-generated mount files would
also be masked and hence prevented from auto-mounting.
Now 'systemctl list-units' is used.
Bug #708378 - Advertised new feature: Use systemctl runtime mask to
prevent automounting (#701676) doesn't work
Only one partition editing tool should be in use at any one point
in time. If more than one is in use concurrently, then data loss
might occur through operations on common partitions or partition
tables. As such, prevent multiple copies of GParted from running
at the same time.
With the beta release of Fedora 19, invoking gparted appears to
automatically mount partitions. The systemd daemon appears to be
performing the automounting. Hence use systemctl runtime mask to
prevent this automounting from occurring.
Bug #701676 - gparted doesn't inhibit systemd mounting, leading to
potential data loss
Note the license text of this file differs slightly from the C++
source code license text to indicate this file is a part of GParted.
See: https://www.gnu.org/licenses/gpl-howto.html
For programs that are more than one file, it is better to replace
“this program” with the name of the program, and begin the
statement with a line saying “This file is part of NAME”.
- Removed gparted-disable-automount.fdi handling.
- Renamed gparted binary to gpartedbin to permit a calling script to be named gparted.
- Added new calling script gparted.in to permit using hal-lock to acquire device locks to prevent automounting while executing gpartedbin.
- Renamed gparted.desktop.in to gparted.desktop.in.in to permit parsing installdir.
svn path=/trunk/; revision=826