Commit Graph

4076 Commits

Author SHA1 Message Date
Mike Fleetwood 8b35892ea5 Pass device and partition names to blkid (#131)
A user reported that GParted would hang at "scanning all devices...",
when a fully working disk was named on the command line, but another
device on the machine was hung.

This can be replicated like this:
(on Ubuntu 20.04 LTS for its NBD support)

1. Export and import NBD:
    # truncate -s 1G /tmp/disk-1G.img
    # nbd-server -C /dev/null 9000 /tmp/disk-1G.img
    # nbd-client localhost 9000 /dev/nbd0

2. Hang the NBD server and therefore /dev/nbd0:
    # killall -STOP nbd-server

3. Run GParted:
    $ gparted /dev/sda

Tracing GParted shows that execution of blkid never returns.

    # strace -f -tt -q -bexecve -eexecve ./gpartedbin 2>&1 1> /dev/null | fgrep -v ENOENT
    ...
    [pid 37823] 13:56:24.814139 execve("/usr/sbin/mkudffs", ["mkudffs", "--help"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>
    [pid 37814] 13:56:24.829246 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=37823, si_uid=0, si_status=1, si_utime=0, si_stime=0} ---
    [pid 37825] 13:56:25.376796 execve("/usr/sbin/blkid", ["blkid", "-v"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>
    [pid 37824] 13:56:25.380824 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=37825, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
    [pid 37826] 13:56:25.402512 execve("/usr/sbin/blkid", ["blkid"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>

Tracing of blkid shows that it hangs on either the open of or first
read from /dev/nbd0.

    # strace blkid
    ...
    lstat("/dev", {st_mode=S_IFDIR|0755, st_size=4560, ...}) = 0
    lstat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    stat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    lstat("/dev", {st_mode=S_IFDIR|0755, st_size=4560, ...}) = 0
    lstat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    access("/dev/nbd0", F_OK)               = 0
    stat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    openat(AT_FDCWD, "/sys/dev/block/43:0", O_RDONLY|O_CLOEXEC) = 4
    openat(4, "dm/uuid", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
    close(4)                                = 0
    openat(AT_FDCWD, "/dev/nbd0", O_RDONLY|O_CLOEXEC

Clean up:

1. Resume NBD server:
    # killall -CONT nbd-server

2. Delete NBD setup:
    # nbd-client -d /dev/nbd0
    # killall nbd-server
    # rm /tmp/disk-1G.img

Fix this by making GParted specify the whole disk device and partition
names that it is interested in to blkid, rather than letting blkid scan
and report all block devices.  Do this both when GParted determines the
devices for itself and when they are named on the command line.

Also update the example blkid command output being parsed and the cache
value in line with this change to how blkid is executed.
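
As an illustrative sketch (the device names here are examples, not taken
from the commit), the change amounts to running blkid with explicit
arguments instead of letting it scan every block device:

    # blkid
    # blkid /dev/sda /dev/sda1 /dev/sda2

The first command probes every block device, including a hung /dev/nbd0;
the second only probes the named paths.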

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Mike Fleetwood 75bda733bb Refactor run_blkid_load_cache() into if fail return early (#131)
... code pattern.  Simplifies the code a little.

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Mike Fleetwood 884cd5a352 Read partition names from /proc/partitions too (#131)
GParted already always reads /proc/partitions for whole disk device
names, no matter whether it uses whole disk devices named on the command
line, from /proc/partitions or from libparted.  As /proc/partitions
lists all the block devices that the kernel knows about, and therefore
all the possible ones blkid could probe, use it to provide partition
names and the device to partition mapping.  See code comments for more
details about the assumptions the /proc/partitions parsing code makes
and the fact that these are confirmed by examining the Linux kernel
source.
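
As a rough shell sketch (the real parsing is C++ inside GParted), the
names come from the 4th column of /proc/partitions, skipping the header:

    $ awk 'NR > 2 {print "/dev/" $4}' /proc/partitions
    /dev/sda
    /dev/sda1
    /dev/sda2
    /dev/sdb
    /dev/sdb1

The output shown is an example for a machine with two disks.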

This commit just adds debugging to print the existing vector of
validated devices GParted shows in the UI and the vector with all
partitions added, ready for but not yet passed to blkid.
    # ./gpartedbin
    ...
    DEBUG: device_paths=["/dev/sda","/dev/sdb"]
    DEBUG: device_and_partition_paths=["/dev/sda","/dev/sda1","/dev/sda2","/dev/sdb","/dev/sdb1"]

Also demonstrating that this continues to support named devices,
including file system image files [1].
    # truncate -s 256M /tmp/ext4.img
    # mkfs.ext4 /tmp/ext4.img
    # ./gpartedbin /dev/sda /tmp/ext4.img
    ...
    DEBUG: device_paths=["/dev/sda","/tmp/ext4.img"]
    DEBUG: device_and_partition_paths=["/dev/sda","/dev/sda1","/dev/sda2","/tmp/ext4.img"]

[1] e8f0504b13
    Make sure that FS_Info cache is loaded for all named paths (#787181)

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Mike Fleetwood 52930f30ae Refactor load_proc_partitions_info_cache() a bit (#131)
Put whole disk device name matching code into a helper function to make
the /proc/partitions parsing code easier to understand.

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Mike Fleetwood 45f88c3274 Merge FS_Info load cache calls (#131)
Now that FS_Info::load_cache() and ::load_cache_for_paths() are nearly
next to each other, merge them together to simplify the code a little.
This makes the special case which ensured that file system images named
on the command line were queried by blkid and loaded into the FS_Info
cache [1] become the normal cache loading method.  Already passing all
discovered or named devices to load_cache_for_paths() is also a step on
the way to doing it for all devices and partitions of interest.

Just need to ensure that load_cache_for_paths() always loads the cache
as load_cache() did, rather than only when it hadn't already been
loaded.  Otherwise GParted will only ever run blkid and load the cache
once at startup and not on each refresh.

[1] e8f0504b13
    Make sure that FS_Info cache is loaded for all named paths (#787181)

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Mike Fleetwood a8cd7a4e80 Initialise partition content discovery caches a bit later (#131)
PATCHSET OVERVIEW

A user reported that GParted would hang at "scanning all devices...",
when a fully working disk was named on the command line, but another
device on the machine was hung.

This can be replicated like this:
(on Ubuntu 20.04 LTS for its NBD support)

1. Export and import NBD:
    # truncate -s 1G /tmp/disk-1G.img
    # nbd-server -C /dev/null 9000 /tmp/disk-1G.img
    # nbd-client localhost 9000 /dev/nbd0

2. Hang the NBD server and therefore /dev/nbd0:
    # killall -STOP nbd-server

3. Run GParted:
    $ gparted /dev/sda

Tracing GParted shows that execution of blkid never returns.

    # strace -f -tt -q -bexecve -eexecve /usr/sbin/gpartedbin 2>&1 1> /dev/null | fgrep -v ENOENT
    ...
    [pid 37823] 13:56:24.814139 execve("/usr/sbin/mkudffs", ["mkudffs", "--help"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>
    [pid 37814] 13:56:24.829246 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=37823, si_uid=0, si_status=1, si_utime=0, si_stime=0} ---
    [pid 37825] 13:56:25.376796 execve("/usr/sbin/blkid", ["blkid", "-v"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>
    [pid 37824] 13:56:25.380824 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=37825, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
    [pid 37826] 13:56:25.402512 execve("/usr/sbin/blkid", ["blkid"], 0x55e2a3f2d230 /* 20 vars */ <detached ...>

Tracing of blkid shows that it hangs on either the open of or first
read from /dev/nbd0.

    # strace blkid
    ...
    lstat("/dev", {st_mode=S_IFDIR|0755, st_size=4560, ...}) = 0
    lstat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    stat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    lstat("/dev", {st_mode=S_IFDIR|0755, st_size=4560, ...}) = 0
    lstat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    access("/dev/nbd0", F_OK)               = 0
    stat("/dev/nbd0", {st_mode=S_IFBLK|0660, st_rdev=makedev(0x2b, 0), ...}) = 0
    openat(AT_FDCWD, "/sys/dev/block/43:0", O_RDONLY|O_CLOEXEC) = 4
    openat(4, "dm/uuid", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
    close(4)                                = 0
    openat(AT_FDCWD, "/dev/nbd0", O_RDONLY|O_CLOEXEC

Clean up:

1. Resume NBD server:
    # killall -CONT nbd-server

2. Delete NBD setup:
    # nbd-client -d /dev/nbd0
    # killall nbd-server
    # rm /tmp/disk-1G.img

Going to fix this by making GParted specify the device and partition
names that it is interested in to blkid, rather than letting blkid scan
and report all block devices.  Do this both when GParted determines the
devices for itself and when they are named on the command line.

THIS PATCH

Move the loading and initialising of caches used during content
discovery to after device and partition discovery and just before
content discovery.  Just makes the code ready for the next change.

Closes #131 - GParted hangs when non-named device is hung
2021-02-10 16:30:13 +00:00
Dušan Kazik f5fb86dbd3 Update Slovak translation 2021-02-06 06:36:29 +00:00
Curtis Gedak 4665e3a125 Append -git to version for continuing development 2021-01-25 10:32:31 -07:00
Curtis Gedak 23b27dfc35 ========== gparted-1.2.0 ========== 2021-01-25 10:14:25 -07:00
Curtis Gedak 423a57ec4a Update copyright years 2021-01-25 09:55:15 -07:00
Rūdolfs Mazurs 623714920d Update Latvian translation 2021-01-24 18:44:54 +00:00
Fabio Tomat 5ae34c8b84 Update Friulian translation 2021-01-23 07:05:04 +00:00
Philipp Kiemle 80d0684e47 Update German translation 2021-01-22 08:10:54 +00:00
Marek Černocký 0791b970bf Updated Czech translation 2021-01-21 08:22:36 +01:00
Thibault Martin 49ee59717c Update French translation 2021-01-20 07:21:14 +00:00
Мирослав Николић 36319ee861 Update Serbian translation 2021-01-19 17:30:50 +00:00
Daniel Mustieles 8cec62656c Updated Spanish translation 2021-01-19 08:06:52 +01:00
Rafael Fontenelle 4b0247d28c Update Brazilian Portuguese translation 2021-01-18 19:03:16 +00:00
Rafael Fontenelle 67273d4131 Update Brazilian Portuguese translation 2021-01-18 18:59:34 +00:00
Daniel Șerbănescu b17909302c Update Romanian translation 2021-01-18 18:53:42 +00:00
Piotr Drąg 96b1d820cc Update Polish translation 2021-01-17 12:40:18 +01:00
Anders Jonsson a4a5a7e61b Update Swedish translation 2021-01-16 23:32:39 +00:00
Yuri Chornoivan 8a9d15ac35 Update Ukrainian translation 2021-01-16 06:58:31 +00:00
Mike Fleetwood 3c95419ed2 White space tidy-up of Utils::get_filesystem_software()
Put colon directly after case value and list cases in enumeration order.
2021-01-15 19:55:17 +00:00
Mike Fleetwood 3783cb4173 Add unit testing of GParted exFAT interface (!30)
Install exfatprogs into the CentOS 7 GitLab CI image, enabling unit
testing of GParted's use of exFAT programs.  Exfatprogs is not yet
available for Ubuntu 20.04 as used in the Ubuntu GitLab CI image, only
for Ubuntu 20.10 so far.

Closes !30 - Add exFAT support
2021-01-15 19:55:17 +00:00
Mike Fleetwood b0a061cf7a Set the partition type for exFAT correctly (!30)
Libparted only allows selection of the partition type indirectly by
specifying the type of the file system it will contain [1] and so far
doesn't know about the exFAT file system.  Therefore when GParted is
creating a new exFAT partition, it gets the GParted default of 83
(Linux file system) on MBR partition tables.

Example operation details:
    Create Primary Partition #1 (exfat, 512.00 MiB) on /dev/sdb
    * create empty partition
    * clear old file system signatures in /dev/sdb1
    * set partition type on /dev/sdb1
        new partition type: ext2
    * create new exfat file system

fdisk report:
    # fdisk -l /dev/sdb
    Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
    Disk model: VBOX HARDDISK
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xa2aab629

    Device     Boot Start     End Sectors  Size Id Type
    /dev/sdb1        2048 1050623 1048576  512M 83 Linux

However the "exFAT file system specification" says:
    https://docs.microsoft.com/en-us/windows/win32/fileio/exfat-specification
    "10.2 Partition Tables

    To ensure interoperability of exFAT volumes in a broad set of usage
    scenarios, implementations should use partition type 07h for MBR
    partitioned storage and partition GUID
    {EBD0A0A2-B9E5-4433-87C0-68B6B72699C7} for GPT partitioned storage.
    "

Fix this.
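
For reference only (a manual illustration, not GParted's libparted based
code path), the desired MBR partition type can be set with sfdisk:

    # sfdisk --part-type /dev/sdb 1 7

On GPT the equivalent is setting the partition type GUID
EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 quoted from the specification above.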

[1] ped_partition_new(..., const PedFileSystemType* fs_type, ...)
    https://www.gnu.org/software/parted/api/group__PedPartition.html#g2f94ca75880f9e0c3ce57f7a4b72faf5
    ped_partition_set_system(..., const PedFileSystemType* fs_type)
    https://www.gnu.org/software/parted/api/group__PedPartition.html#g2f94ca75880f9e0c3ce57f7a4b72faf5

Closes !30 - Add exFAT support
2021-01-15 19:55:17 +00:00
Mike Fleetwood bd386f445d Add exFAT support (!30)
With exfatprogs (https://github.com/exfatprogs/exfatprogs) installed the
following operations on exFAT file systems are supported:
- Creation
- Checking
- Labelling
As of the current exfatprogs 1.0.4 the following are not supported:
- Reading usage
- Resizing
- Updating the UUID
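
For reference, the exfatprogs commands behind those operations look
roughly like this (device and labels are example values; the exact
options GParted passes are not shown in this commit message):

    # mkfs.exfat -L Test /dev/sdb1
    # fsck.exfat /dev/sdb1
    # tune.exfat -L NewLabel /dev/sdb1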

Closes !30 - Add exFAT support
2021-01-15 19:55:17 +00:00
Mike Fleetwood 56fb026658 Exclude snap /dev/loop file system image mounts (#129)
On Ubuntu the gparted shell wrapper still attempts to mask lots of
non-block device based file systems.  Remove the --quiet option from the
systemctl --runtime mask command to see:
    $ gparted
    Created symlink /run/systemd/system/snap-gnome\x2d3\x2d34\x2d1804-66.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-core-10583.mount -> /dev/null.
    Created symlink /run/systemd/system/boot-efi.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-gtk\x2dcommon\x2dthemes-1514.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-core-10577.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-core18-1944.mount -> /dev/null.
    Created symlink /run/systemd/system/run-user-1000-doc.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-gtk\x2dcommon\x2dthemes-1506.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-gnome\x2d3\x2d28\x2d1804-128.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-snap\x2dstore-518.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-gnome\x2d3\x2d28\x2d1804-145.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-core18-1932.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-snap\x2dstore-467.mount -> /dev/null.
    Created symlink /run/systemd/system/snap-gnome\x2d3\x2d34\x2d1804-60.mount -> /dev/null.
    Created symlink /run/systemd/system/-.mount -> /dev/null.
    GParted 1.0.0
    configuration --enable-libparted-dmraid --enable-online-resize
    libparted 3.3

The gparted shell wrapper is currently looking for non-masked Systemd
mount units where the 'What' property starts "/dev/".  However Ubuntu
also uses snap packages, which are file system images mounted via loop
devices:
    $ grep '^/dev/' /proc/mounts | sort
    /dev/fuse /run/user/1000/doc fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
    /dev/loop0 /snap/core/10583 squashfs ro,nodev,relatime 0 0
    /dev/loop10 /snap/snap-store/518 squashfs ro,nodev,relatime 0 0
    /dev/loop11 /snap/snap-store/467 squashfs ro,nodev,relatime 0 0
    /dev/loop12 /snap/gtk-common-themes/1506 squashfs ro,nodev,relatime 0 0
    /dev/loop1 /snap/core/10577 squashfs ro,nodev,relatime 0 0
    /dev/loop3 /snap/core18/1944 squashfs ro,nodev,relatime 0 0
    /dev/loop4 /snap/core18/1932 squashfs ro,nodev,relatime 0 0
    /dev/loop5 /snap/gnome-3-34-1804/66 squashfs ro,nodev,relatime 0 0
    /dev/loop6 /snap/gnome-3-28-1804/128 squashfs ro,nodev,relatime 0 0
    /dev/loop7 /snap/gnome-3-34-1804/60 squashfs ro,nodev,relatime 0 0
    /dev/loop8 /snap/gnome-3-28-1804/145 squashfs ro,nodev,relatime 0 0
    /dev/loop9 /snap/gtk-common-themes/1514 squashfs ro,nodev,relatime 0 0
    /dev/sda1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
    /dev/sda5 / ext4 rw,relatime,errors=remount-ro 0 0

Fix by excluding:
1. Device name "/dev/fuse", because it's a character device, not a block
   device, and the mount point is associated with snap,
2. Device names starting "/dev/loop" where the mount point starts
   "/snap/" [1].  This still allows GParted to be used with explicitly
   named loop devices.  A sketch of the filtering follows below.

[1] The system /snap directory
    https://snapcraft.io/docs/system-snap-directory
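
A minimal POSIX shell sketch of that filtering (hypothetical helper and
variable names, not the wrapper's exact code):

    # Return 0 (true) when a device / mount point pair should be skipped
    # rather than masked.
    skip_mount()
    {
        WHAT="$1"
        WHERE="$2"
        case "$WHAT" in
            /dev/fuse)  return 0 ;;
            /dev/loop*) case "$WHERE" in
                            /snap/*) return 0 ;;
                        esac ;;
        esac
        return 1
    }

For example, skip_mount /dev/loop0 /snap/core/10583 succeeds (skip),
while skip_mount /dev/loop20 /mnt/image fails (mask as usual).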

Closes #129 - Unit \xe2\x97\x8f.service does not exist, proceeding
              anyway
2021-01-14 16:45:05 +00:00
Mike Fleetwood 1a5614b3dd Only mask Systemd mounts on block devices (#129)
The gparted shell wrapper masks Systemd mount units to prevent Systemd
automounting file systems while GParted is running [1], excluding
virtual file systems which GParted isn't interested in [2].  The problem
is that there are a lot of virtual file systems and they have changed
between Fedora 19 and 33, so now the exclusion list is out of date.

Run GParted on Fedora 33 and query the mount units while it is running:
    $ systemctl list-units -t mount --full --all
      UNIT                          LOAD   ACTIVE   SUB     DESCRIPTION
      -.mount                       loaded active   mounted Root Mount
    * boot.mount                    masked active   mounted /boot
      dev-hugepages.mount           loaded active   mounted Huge Pages File System
      dev-mqueue.mount              loaded active   mounted POSIX Message Queue File System
    * home.mount                    masked active   mounted /home
    * proc-fs-nfsd.mount            masked inactive dead    proc-fs-nfsd.mount
      proc-sys-fs-binfmt_misc.mount loaded inactive dead    Arbitrary Executable File Formats File System
      run-user-1000-gvfs.mount      loaded active   mounted /run/user/1000/gvfs
    * run-user-1000.mount           masked active   mounted /run/user/1000
    * run-user-42.mount             masked active   mounted /run/user/42
      sys-fs-fuse-connections.mount loaded active   mounted FUSE Control File System
      sys-kernel-config.mount       loaded active   mounted Kernel Configuration File System
      sys-kernel-debug.mount        loaded active   mounted Kernel Debug File System
    * sys-kernel-tracing.mount      masked active   mounted /sys/kernel/tracing
    * sysroot.mount                 masked inactive dead    sysroot.mount
    * tmp.mount                     masked active   mounted /tmp
    * var-lib-machines.mount        masked inactive dead    var-lib-machines.mount
    * var-lib-nfs-rpc_pipefs.mount  masked active   mounted /var/lib/nfs/rpc_pipefs
    * var.mount                     masked inactive dead    var.mount

    LOAD   = Reflects whether the unit definition was properly loaded.
    ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
    SUB    = The low-level unit activation state, values depend on unit type.

    19 loaded units listed.
    To show all installed unit files use 'systemctl list-unit-files'.

So it masked these virtual file systems which didn't need to be masked:
    * proc-fs-nfsd.mount            masked inactive dead    proc-fs-nfsd.mount
    * run-user-1000.mount           masked active   mounted /run/user/1000
    * run-user-42.mount             masked active   mounted /run/user/42
    * sys-kernel-tracing.mount      masked active   mounted /sys/kernel/tracing
    * var-lib-machines.mount        masked inactive dead    var-lib-machines.mount
    * var-lib-nfs-rpc_pipefs.mount  masked active   mounted /var/lib/nfs/rpc_pipefs

Lines from /proc/mounts for some of these virtual file systems:
    $  egrep '/run/user|/sys/kernel/tracing|/var/lib/nfs/rpc_pipefs' /proc/mounts
    tmpfs /run/user/42 tmpfs rw,seclabel,nosuid,nodev,relatime,size=202656k,nr_inodes=50664,mode=700,uid=42,gid=42,inode64 0 0
    tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=202656k,nr_inodes=50664,mode=700,uid=1000,gid=1000,inode64 0 0
    none /sys/kernel/tracing tracefs rw,seclabel,relatime 0 0
    sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
    gvfsd-fuse /run/user/1000/gvfs fuse.gvfsd-fuse rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0

And for contrast the lines from /proc/mounts for disk-backed file systems:
    $ egrep '^/dev/' /proc/mounts
    /dev/sda1 /boot ext4 rw,seclabel,relatime 0 0
    /dev/sda2 / btrfs rw,seclabel,relatime,space_cache,subvolid=258,subvol=/root 0 0
    /dev/sda2 /home btrfs rw,seclabel,relatime,space_cache,subvolid=256,subvol=/home 0 0

Going back to first principles, GParted cares that Systemd doesn't
automount file systems on block devices.  So instead only mask mount
units which are on block devices, i.e. those where the 'What' property
starts "/dev/".

Systemd maintains hundreds of properties for each unit.
    $ systemctl show boot.mount | wc -l
    221

The properties of interest for all mount units can be queried like this:
    $ systemctl show --all --property=What,Id,LoadState '*.mount'
    ...

    What=sunrpc
    Id=var-lib-nfs-rpc_pipefs.mount
    LoadState=masked

    What=/dev/sda1
    Id=boot.mount
    LoadState=masked

    ...
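
A hedged sketch of selecting only the block device backed units from
that output and masking them (assuming the What line precedes Id, as in
the output above; not the wrapper's actual code):

    systemctl show --all --property=What,Id '*.mount' |
        awk -F= '$1 == "What" { dev = ($2 ~ "^/dev/") }
                 $1 == "Id" && dev { print $2 }' |
        xargs -r systemctl --runtime mask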

[1] 4c109df9b5
    Use systemctl runtime mask to prevent automounting (#701676)

[2] 43de8e326a
    Do not mask virtual file systems when using systemctl (#708378)

Closes #129 - Unit \xe2\x97\x8f.service does not exist, proceeding
              anyway
2021-01-14 16:45:05 +00:00
Mike Fleetwood 3c9ae05cd8 Don't try to mask non-existent Systemd \xe2\x97\x8f.service (#129)
With Systemd 246 on Fedora 33, running GParted reports this error and no
longer masks the system mount units:

    $ gparted
    Unit \xe2\x97\x8f.service does not exist, proceeding anyway.
    Unit \xe2\x97\x8f.service does not exist, proceeding anyway.
    GParted 1.1.0
    configuration --enable-libparted-dmraid --enable-online-resize
    libparted 3.3

    $ systemctl list-units -t mount --full --all --no-legend
      -.mount                       loaded    active   mounted Root Mount
      boot.mount                    loaded    active   mounted /boot
      dev-hugepages.mount           loaded    active   mounted Huge Pages File System
      dev-mqueue.mount              loaded    active   mounted POSIX Message Queue File System
      home.mount                    loaded    active   mounted /home
      proc-fs-nfsd.mount            loaded    inactive dead    NFSD configuration filesystem
      proc-sys-fs-binfmt_misc.mount loaded    inactive dead    Arbitrary Executable File Formats File System
      run-user-1000-gvfs.mount      loaded    active   mounted /run/user/1000/gvfs
      run-user-1000.mount           loaded    active   mounted /run/user/1000
      run-user-42.mount             loaded    active   mounted /run/user/42
      sys-fs-fuse-connections.mount loaded    active   mounted FUSE Control File System
      sys-kernel-config.mount       loaded    active   mounted Kernel Configuration File System
      sys-kernel-debug.mount        loaded    active   mounted Kernel Debug File System
      sys-kernel-tracing.mount      loaded    active   mounted Kernel Trace File System
    * sysroot.mount                 not-found inactive dead    sysroot.mount
      tmp.mount                     loaded    active   mounted Temporary Directory (/tmp)
      var-lib-machines.mount        loaded    inactive dead    Virtual Machine and Container Storage (Compatibility)
      var-lib-nfs-rpc_pipefs.mount  loaded    active   mounted RPC Pipe File System
    * var.mount                     not-found inactive dead    var.mount

    ^
   [Unicode Black Circle character (U+25CF) replaced with star to avoid
   making this commit message Unicode.]

Currently the gparted shell wrapper lists the Systemd mount units and
takes the first space separated column as the unit name.  If the LOAD
status of the unit is not "loaded" then Systemd prefixes the name with
an optional Black Circle.  Prior to Systemd 246 these extra 2 characters
at the start of the line, including the optional Black Circle, were
suppressed by the --no-legend option, but with Systemd 246 this no
longer happens.  As the mount unit names no longer start in the first
character of the line no units are masked.  Instead the Unicode Black
Circle character, UTF-8 byte sequence E2 97 8F, is found at the start of
highlighted lines which results in this error:
    Unit \xe2\x97\x8f.service does not exist, proceeding anyway.

Fix by adding the --plain option to suppress the optional Black Circle
in the systemctl output.  Confirmed this option is available in the
oldest supported distributions with Systemd.
    RedHat / CentOS 7   Systemd 219   systemctl has --plain option.
    Ubuntu 16.04 LTS    Systemd 229   systemctl has --plain option.
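
For example (a sketch based on the wrapper command shown above), the
listing becomes:

    $ systemctl list-units --plain -t mount --full --all --no-legend

so that the unit name is once again the first space separated column on
every line.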

Closes #129 - Unit \xe2\x97\x8f.service does not exist, proceeding
              anyway
2021-01-14 16:45:05 +00:00
Jordi Mas 39136a26bd Update Catalan translation 2021-01-10 22:26:05 +01:00
Jordi Mas 2c38ea1dd7 Update Catalan translation 2021-01-01 23:02:24 +01:00
Аляксей 61d0592afe Update Belarusian translation 2020-11-28 14:30:22 +00:00
Curtis Gedak cca15b4c9f Set default partition alignment to cylinder for amiga partition table (#116)
Closes #116 - Fails to create partitions on disks with Amiga partition
              tables using default settings
2020-11-24 14:42:12 +00:00
Jordi Mas 2df42d90db Update Catalan translation 2020-11-02 21:37:24 +01:00
Dušan Kazik bad036c395 Update Slovak translation 2020-10-13 10:47:39 +00:00
Mike Fleetwood 15b42f6978 Remove unneeded #include <vector> from TreeView_Detail.h
std::vector<> is no longer used in TreeView_Detail.h since this commit
replaced its uses:
    fae909897e
    Use PartitionVector class throughout the code (#759726)
2020-09-18 16:00:44 +00:00
Mike Fleetwood 70db3469c5 Fix CentOS 7 CI test job failures from empty /etc/machine-id (!62)
Since August 2020, GitLab Continuous Integration test jobs have been
failing on the CentOS 7 image like this from
tests/test_SupportedFileSystems.log:

    process 6319: D-Bus library appears to be incorrectly set up; failed to read machine uuid: UUID file '/etc/machine-id' should contain a hex string of length 32, not length 0, with no other text
    See the manual page for dbus-uuidgen to correct this issue.
      D-Bus not built with -rdynamic so unable to print a backtrace
    Running main() from test_SupportedFileSystems.cc
    DISPLAY=":99"
    /usr/bin/xvfb-run: line 181:  6319 Aborted                 (core dumped) DISPLAY=:$SERVERNUM XAUTHORITY=$AUTHFILE "$@" 2>&1

Not sure why this has just started failing in the CentOS 7 CI image
now, but the error is widely known [1][2][3][4].  Use
systemd-machine-id-setup to generate a machine ID [5][6] rather than
dbus-uuidgen [7], as systemd-machine-id-setup is designed to integrate
into VMs and does the right thing if a valid machine ID already exists.
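
A minimal sketch of the fix as it might appear in the CI job setup (the
exact .gitlab-ci.yml change isn't reproduced here):

    # systemd-machine-id-setup
    # cat /etc/machine-id

After which /etc/machine-id should contain a 32 character hex string.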

[1] Red Hat Bug 598200 - D-Bus library appears to be incorrectly set up;
    failed to read machine uuid: UUID file '/var/lib/dbus/machine-id'
    https://bugzilla.redhat.com/show_bug.cgi?id=598200
[2] Free Desktop Bug 13194 - When machine-id not found, dbus should not
    abort
    https://bugs.freedesktop.org/show_bug.cgi?id=13194
[3] D-Bus library appears to be incorrectly set up
    https://unix.stackexchange.com/questions/117741/d-bus-library-appears-to-be-incorrectly-set-up
[4] Generate uuid for container
    https://xpra.org/trac/wiki/Usage/Docker/CentOS
[5] CentOS / RHEL 7 : How to Change the machine-id
    https://www.thegeekdiary.com/centos-rhel-7-how-to-change-the-machine-id/
[6] man systemd-machine-id-setup
    https://man7.org/linux/man-pages/man1/systemd-machine-id-setup.1.html
[7] man dbus-uuidgen
    https://dbus.freedesktop.org/doc/dbus-uuidgen.1.html

Closes !62 - Fix CentOS 7 CI test job failures because of zero sized
             /etc/machine-id
2020-09-18 16:00:44 +00:00
Fabio Tomat 174c5e898c Update Friulian translation 2020-09-11 15:27:45 +00:00
Aurimas Černius 76ba2672dc Updated Lithuanian translation 2020-09-06 22:51:52 +03:00
Boyuan Yang df150b8371 Update Chinese (China) translation 2020-08-30 21:52:36 +00:00
Piotr Drąg 5f9a139d2c Update Polish translation
Fixes https://gitlab.gnome.org/Teams/Translation/pl/-/issues/8
2020-07-12 12:49:04 +02:00
Baurzhan Muftakhidinov 72fd546e5d Update Kazakh translation 2020-06-26 04:57:00 +00:00
Daniel Șerbănescu 16dda8fdbb Update Romanian translation 2020-05-31 04:39:41 +00:00
Mike Fleetwood 201f5f2f2f Add missing includes into Devices module 2020-05-27 16:02:47 +00:00
Mike Fleetwood e9223207e6 Exclude PipeCapture read NUL byte unit tests in GitLab CI jobs (!60)
These PipeCapture unit tests are also failing, preventing the
ubuntu_test CI job passing:
    PipeCaptureTest.ReadEmbeddedNULCharacter
    PipeCaptureTest.ReadNULByteInMiddleOfMultiByteUTF8Character

These tests are also failing locally in both Ubuntu 20.04 LTS and
Fedora 32 VMs, but not in Ubuntu 18.04 LTS or Fedora 31 VMs.  As this is
not specifically an Ubuntu docker image update related issue,
temporarily exclude these failing tests.
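
One way to exclude named tests, assuming the standard Google Test filter
mechanism and that the test binary is tests/test_PipeCapture (the exact
CI change isn't reproduced here):

    $ ./tests/test_PipeCapture \
          --gtest_filter='-PipeCaptureTest.ReadEmbeddedNULCharacter:PipeCaptureTest.ReadNULByteInMiddleOfMultiByteUTF8Character'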

Closes !60 - Fix GitLab CI job failures following Ubuntu docker image
             updates
2020-05-27 16:02:47 +00:00
Mike Fleetwood c5093b7d54 Add C++ compiler into GitLab CI Ubuntu image (!60)
Next the Ubuntu image CI job is failing without a C++ compiler like
this:

    checking whether to enable maintainer-specific portions of Makefiles... yes
    checking for g++... no
    checking for c++... no
    checking for gpp... no
    checking for aCC... no
    checking for CC... no
    checking for cxx... no
    checking for cc++... no
    checking for cl.exe... no
    checking for FCC... no
    checking for KCC... no
    checking for RCC... no
    checking for xlC_r... no
    checking for xlC... no
    checking whether the C++ compiler works... no
    configure: error: in `/builds/mfleetwo/gparted':
    configure: error: C++ compiler cannot create executables
    See `config.log' for more details
    ...
    ERROR: Job failed: exit code 1

The published "Ubuntu" docker image has been updated to Ubuntu 20.04 LTS
and must no longer include the build tools by default, or not be a
dependency of any of the other installed packages.  Explicitly install
build-essential to get the C++ compiler [1].  Also don't list make as
build-essential includes it.
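
A sketch of the corresponding install step (the full .gitlab-ci.yml
change isn't reproduced here):

    # apt-get update
    # apt-get install -y build-essential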

[1] Installing the GNU C compiler and GNU C++ compiler
    https://help.ubuntu.com/community/InstallingCompilers

Closes !60 - Fix GitLab CI job failures following Ubuntu docker image
             updates
2020-05-27 16:02:47 +00:00
Mike Fleetwood 5fecdbfc96 Prevent tzdata install hang in GitLab CI Ubuntu image (!60)
The Ubuntu based GitLab CI jobs have recently started being terminated
after the default 1 hour timeout.  Installing / updating packages in the
image updates the tzdata package, which prompts for input it will never
receive, hence the hang.  The end of the output from the job looks like
this:

    Setting up tzdata (2020a-0ubuntu0.20.04) ...
    debconf: unable to initialize frontend: Dialog
    debconf: (TERM is not set, so the dialog frontend is not usable.)
    debconf: falling back to frontend: Readline
    Configuring tzdata
    ------------------
    Please select the geographic area in which you live. Subsequent configuration
    questions will narrow this down by presenting a list of cities, representing
    the time zones in which they are located.
      1. Africa      4. Australia  7. Atlantic  10. Pacific  13. Etc
      2. America     5. Arctic     8. Europe    11. SystemV
      3. Antarctica  6. Asia       9. Indian    12. US
    Geographic area:
    ...
    ERROR: Job failed: execution took longer than 1h0m0s seconds

This is a well known issue [1][2][3].  It is probably occurring now
because a new release of tzdata is not included in the base Ubuntu image
we are using.  Fix by telling the underlying dpkg tools that this
installation is non-interactive.
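
A sketch of one common way to do that, assuming the usual Debian/Ubuntu
debconf mechanism:

    # export DEBIAN_FRONTEND=noninteractive
    # apt-get install -y tzdata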

[1] Avoiding user interaction with tzdata when installing certbot in a
    docker container
    https://askubuntu.com/questions/909277/avoiding-user-interaction-with-tzdata-when-installing-certbot-in-a-docker-contai
[2] How to install tzdata on a ubuntu docker image?
    https://serverfault.com/questions/949991/how-to-install-tzdata-on-a-ubuntu-docker-image
[3] apt-get install tzdata noninteractive
    https://stackoverflow.com/questions/44331836/apt-get-install-tzdata-noninteractive

Closes !60 - Fix GitLab CI job failures following Ubuntu docker image
             updates
2020-05-27 16:02:47 +00:00
Yi-Jyun Pan 16170368e7 Update Chinese (Taiwan) translation 2020-05-24 17:33:58 +00:00
Dušan Kazik 5768115b71 Update Slovak translation 2020-05-04 09:37:17 +00:00