Follows the "Return Early" design pattern, making the code easier to
understand without having to remember cases for elses or cascading ifs.
Refactor before the following commit's fix so that capture of output on
failure can be confirmed as still working.
Closes !104 - Add Alpine Linux CI jobs and resolve label and UUID issues
with FAT16/32
The test CI job on Alpine Linux fails like this:
$ GTEST_FILTER+=':My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs'
/bin/sh: eval: line 135: GTEST_FILTER+=:My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs: not found
This is because the busybox ash shell in Alpine Linux doesn't support +=
syntax for variable concatenation. Use plain variable assignment
instead.
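For illustration, a minimal POSIX-compatible form of the assignment,
using the test filter from the failing output above ('+=' is a bash/ksh
extension which busybox ash rejects):
    # Plain assignment interpolating the old value works in any
    # POSIX shell, including busybox ash:
    GTEST_FILTER="$GTEST_FILTER:My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs"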
Closes !104 - Add Alpine Linux CI jobs and resolve label and UUID issues
with FAT16/32
There have been a number of GParted build issues [1][2] recently on
Alpine Linux because it uses musl libc [3], which adheres more strictly
to POSIX, rather than the GNU C Library (glibc), which has numerous
enhancements.
Glibc is used by most Linux distributions, including CentOS and Ubuntu
already used in the GNOME Continuous Integration jobs. So add a GParted
build job on Alpine Linux to catch these issues in future. Uses the
docker image of the latest Alpine Linux release.
[1] 3d4b1c1e7b
Fix NULL == 0 assumption in call to ped_partition_flag_next() (!100)
[2] 45c00927b7
Use POSIX basename() in BCache_Info.cc (!99)
[3] musl libc
https://musl.libc.org/
Closes !104 - Add Alpine Linux CI jobs and resolve label and UUID issues
with FAT16/32
When the CentOS 7 CI jobs were failing on a subset of the job runners
[1] during March to May 2022, the docker image would hang even before
the packages were fully installed, so 'cat /proc/version' and 'cat
/etc/os-release' were never run. Move them to be the first thing done
in the docker image.
[1] Hanging of GitLab CI jobs on a subset of job runners
https://discourse.gnome.org/t/hanging-of-gitlab-ci-jobs-on-a-subset-of-job-runners/9931
Even though this is fixed, the execution of configure as part of make
distcheck outputs this:
checking whether po/Makefile.in.in deletes intltool cache lock file... /usr/bin/grep: po/Makefile.in.in: No such file or directory
/usr/bin/sed: can't read po/Makefile.in.in: No such file or directory
/usr/bin/grep: po/Makefile.in.in: No such file or directory
no
make distcheck [1] performs a VPATH build with a read-only srcdir and
a separate writable build directory with files split between the two.
The relevant layout looks like:
./gparted-1.4.0-git/configure
./gparted-1.4.0-git/po/Makefile.in.in
./gparted-1.4.0-git/_build/sub/
And make distcheck runs configure like this:
cd ./gparted-1.4.0-git/_build/sub
../../configure --srcdir=../..
The file is ../../po/Makefile.in.in in this case, so not found by the
existing check. A simple investigation technique is to run make
distcheck, kill it shortly after configure completes and examine the
build tree, definitely before make distcheck completes successfully and
deletes everything.
Fix by using the $srcdir prefix to access the file. Also handle the
case of po/Makefile.in.in not existing, although this no longer occurs
in the scenario fixed by this commit. And only patch the file if it's
writable, another case that doesn't occur in this scenario.
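A plain shell sketch of what the srcdir-aware check could look like.
The variable name and exact guards here are assumptions; the real
check lives in configure.ac:
    # $srcdir prefix finds the file in VPATH builds
    mkin="$srcdir/po/Makefile.in.in"
    if test ! -f "$mkin"; then
        result=no        # file missing: nothing to check or patch
    elif grep 'intltool-merge-cache\.lock' "$mkin" > /dev/null 2>&1; then
        result=yes       # already deletes the lock file
    elif test -w "$mkin"; then
        result=fixed     # patch the file (see the sed sketch below)
    else
        result=no        # read-only and unpatched
    fi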
Relevant output line from configure run by make distcheck now looks
like:
checking whether po/Makefile.in.in deletes intltool cache lock file... yes
[1] GNU Automake, 14.4 Checking the Distribution
https://www.gnu.org/software/automake/manual/html_node/Checking-the-Distribution.html
Closes !103 - Fix make distcheck failure found in GitLab CI job
ubuntu_test
On Ubuntu 22.04 LTS make distcheck fails like this:
$ make distcheck
...
ERROR: files left in build directory after distclean:
./po/.intltool-merge-cache.lock
make[1]: *** [Makefile:920: distcleancheck] Error 1
make[1]: Leaving directory '/builds/GNOME/gparted/gparted-1.4.0-git/_build/sub'
make: *** [Makefile:849: distcheck] Error 1
This was picked up by the GitLab ubuntu_test CI job after the Ubuntu
22.04 LTS release and the official Ubuntu docker image labelled latest
was updated to match, circa April 2022. This is a known issue with
intltool >= 0.51.0-5.1 [1][2][3], first included in Ubuntu 22.04 LTS.
The pending proposed fix is to also delete the left-behind
.intltool-merge-cache.lock along with the associated cache file itself
in the intltool-provided Makefile.in.in [4].
Applying a fix only to the GitLab ubuntu_test CI job does nothing to
fix it for us maintainers on our own distributions. po/Makefile.in.in is not
part of the GParted git repository, instead it is copied from
/usr/share/intltool/Makefile.in.in by ./autogen.sh -> gnome-autogen.sh
-> intltoolize --force --copy --automake. Add a configure check which
patches po/Makefile.in.in as needed. This will fix it for those
building from git, and be a harmless check for those building from a tar
release. Configure output line looks like:
checking whether po/Makefile.in.in deletes intltool cache lock file... fixed
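A rough plain shell sketch of such a check; the grep and sed patterns
are illustrative assumptions rather than the exact code:
    if grep 'intltool-merge-cache\.lock' po/Makefile.in.in > /dev/null 2>&1
    then
        : # already deletes the lock file -> report "yes"
    else
        # Append the lock file to the existing cache file deletion,
        # then report "fixed"
        sed -i -e 's|rm -f \.intltool-merge-cache|& .intltool-merge-cache.lock|' \
            po/Makefile.in.in
    fi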
[1] Ubuntu bug 1712194 - Error when running make distcheck
https://bugs.launchpad.net/intltool/+bug/1712194
[2] Debian bug #991623 - intltool: make distcheck broken
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=991623
[3] Arch Linux bug FS#67098 - [intltool] latest patch for race condition
breaks some builds
https://bugs.archlinux.org/task/67098
[4] Remove cache lock file in mostlyclean
https://code.launchpad.net/~danbnicholson/intltool/intltool/+merge/406321
Closes !103 - Fix make distcheck failure found in GitLab CI job
ubuntu_test
add_mountpoint_entry() doesn't modify the passed strings, so use
pass-by-constant-reference. This avoids pass-by-value and having to
construct copies of the strings just to pass them to this method.
A user received the following error when attempting to resize a mounted
btrfs file system on their NixOS distribution:
Shrink /dev/nvme0n1p3 from 933.38 GiB to 894.32 GiB (ERROR)
+ calibrate /dev/nvme0n1p3 00:00:00 (SUCCESS)
+ btrfs filesystem resize 1:937759744K '/etc/machine-id' (ERROR)
ERROR: not a directory: /etc/machine-id
ERROR: resize works on mounted filesystems and accepts only
directories as argument. Passing file containing a btrfs image
would resize the underlying filesystem instead of the image.
In the partition table section of the gparted_details /dev/nvme0n1p3 was
reported with these mount points:
/etc/machine-id, /etc/NetworkManager/system-connections,
/etc/ssh/ssh_host_ed25519_key, /etc/ssh/ssh_host_ed25519_key.pub,
/etc/ssh/ssh_host_rsa_key, /etc/ssh/ssh_host_rsa_key.pub, /home,
/nix, /nix/store, /state, /var
The user had a common configuration of NixOS which boots with an empty
tmpfs as root with a few bind mounted files and directories to provide
the needed persistent data [1][2].
Re-create an equivalent situation:
1. Create a btrfs file system and mount it:
# mkfs.btrfs /dev/sdb1
# mkdir /mnt/store
# mount /dev/sdb1 /mnt/store
2. Bind mount a file from this file system elsewhere in the hierarchy.
The only criterion is that this mount point sorts before /mnt/store.
# echo 'Test contents' > /mnt/store/test
# touch /boot/test
# mount --bind /mnt/store/test /boot/test
The kernel reports these mount points:
# grep sdb1 /proc/mounts
/dev/sdb1 /mnt/store btrfs rw,seclabel,relatime,space_cache=v2,subvolid=5,subvol=/ 0 0
/dev/sdb1 /boot/test btrfs rw,seclabel,relatime,space_cache=v2,subvolid=5,subvol=/ 0 0
3. Use GParted to resize this mounted btrfs file system. It fails with
the above error.
GParted read the mount points from /proc/mounts and sorted them. (See
the end of Mount_Info::load_cache() for the sorting). When resizing the
btrfs file system GParted just used the first sorted mount point. This
was the file /etc/machine-id for the user and file /boot/test in the
re-creation, hence the error.
Fix by selecting the first directory mount point to pass to the btrfs
resize command.
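A rough shell sketch of the selection logic; GParted implements this in
its C++ mount point handling, and the device name here comes from the
re-creation above:
    # Sort the mount points as GParted does, then take the first
    # one which is a directory
    for MP in $(grep '^/dev/sdb1 ' /proc/mounts | cut -d' ' -f2 | sort)
    do
        if test -d "$MP"; then
            echo "resize using: $MP"    # /mnt/store, not /boot/test
            break
        fi
    done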
[1] NixOS tmpfs as root
https://elis.nu/blog/2020/05/nixos-tmpfs-as-root/
[2] Erase your darlings
https://grahamc.com/blog/erase-your-darlings
Closes #193 - path used to resize btrfs needs to be a directory
GParted fails to build on Alpine Linux Edge (development tree for the
next release) like this:
GParted_Core.cc: In constructor 'GParted::GParted_Core::GParted_Core()':
GParted_Core.cc:75:64: error: invalid 'static_cast' from type 'std::nullptr_t' to type 'PedPartitionFlag'
75 | for ( PedPartitionFlag flag = ped_partition_flag_next( static_cast<PedPartitionFlag>( NULL ) ) ;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The code is failing to compile now because musl libc 1.2.3 has become
stricter about C++11 [1][2] by defining NULL [3] as nullptr [4] rather
than as 0. The parameter to ped_partition_flag_next() [5] should always
have been the numeral 0 cast to the enumeration and never the NULL
pointer.
Fixes this commit [6] from 2004-12-27 which changed the parameter from 0
to NULL.
[1] define NULL as nullptr when used in C++11 or later
https://git.musl-libc.org/cgit/musl/commit?id=98e688a9da5e7b2925dda17a2d6820dddf1fb28
[2] NULL vs nullptr (Why was it replaced?) [duplicate]
https://stackoverflow.com/questions/20509734/null-vs-nullptr-why-was-it-replaced
[3] C++ reference, NULL
https://en.cppreference.com/w/cpp/types/NULL
[4] C++ reference, nullptr
https://en.cppreference.com/w/cpp/language/nullptr
[5] libparted Documentation, ped_partition_flag_next()
https://www.gnu.org/software/parted/api/group__PedPartition.html#g0ce9ce4247b320011bc8e9d957c8cdbb
[6] Added cylsize to Device and made Operation contain a Device instead
commit 174f0cff77
Closes !100 - Fix NULL == 0 assumption in call to
ped_partition_flag_next()
Musl libc [1][2] doesn't implement the GNU variant of basename() [3][4],
obtained via #include <string.h>. Therefore GParted fails to build on
such distributions:
fdebug-prefix-map=TOPDIR/build/tmp/work/cortexa57-yoe-linux-musl/gparted/1.4.0-r0/recipe-sysroot-native=-fvisibility-inlines-hidden -c -o ../../gparted-1.4.0/src/BCache_Info.cc:52:33:
error: use of undeclared identifier 'basename'; did you mean 'g_basename'?
return "/dev/" + Glib::ustring(basename(buf));
^~~~~~~~
g_basename
Fix by using the POSIX implementation of basename() [5] instead,
obtained via #include <libgen.h>, which musl libc does implement [6].
Note that the POSIX implementation of basename() is allowed to modify
the string passed to it. This is okay because
BCache_Info::get_bcache_device() is using a modifiable local character
buffer.
[1] musl libc
https://musl.libc.org/
[2] Projects using musl
https://wiki.musl-libc.org/projects-using-musl.html
[3] The GNU C Library, 5.10 Finding Tokens in a String
https://www.gnu.org/software/libc/manual/html_node/Finding-Tokens-in-a-String.html
[4] basename(3) - Linux manual page
https://man7.org/linux/man-pages/man3/basename.3.html
[5] POSIX basename()
https://pubs.opengroup.org/onlinepubs/009695399/functions/basename.html
[6] musl source, basename.c
http://git.musl-libc.org/cgit/musl/tree/src/misc/basename.c
Closes !99 - Fix undeclared identifier 'basename' build failure with
musl libc
GParted automatically enables the Partition > Unmount action for busy
partitions. This is not going to be supported for jbds, so disable it.
Closes #89 - GParted doesn't recognise EXT4 fs journal partition
Continuing from the state in the previous commit, create an ext4 file
system using the previously created external journal and mount it.
# mke2fs -t ext4 -J device=/dev/sdb1 -L test-ext4 /dev/sdb2
# mount /dev/sdb2 /mnt/2
Did some experimenting with trying to create a second file system using
the same external journal which is already in use.
# mke2fs -t ext4 -J device=/dev/sdb1 -L 2nd-test-ext4 /dev/sdb3
...
/dev/sdb1 is apparently in use by the system; will not make a journal here!
# echo $?
1
Examined the source code of mke2fs and found that it performs an
exclusive read-only open of the named journal block device to check if
it is in use by the system or not [1]. Use the same method in GParted.
An alternative method, not used, would be to mark the jbd active when
the ext3/4 file system using it is active, but that requires working out
the linkage between them. That can be done using either blkid or
dumpe2fs output, but that involves parsing more fields and caching more
data, so is much more code than just testing the block device busy
status using the same method which mke2fs uses.
Matching UUIDs via blkid output.
# blkid /dev/sdb1 /dev/sdb2
/dev/sdb1: LABEL="test-jbd" UUID="6e52858e-0479-432f-80a1-de42f9a4093e" TYPE="jbd"
/dev/sdb2: LABEL="test-ext4" UUID="cea5c2cd-b21c-4abf-a497-8c073bb12300" EXT_JOURNAL="6e52858e-0479-432f-80a1-de42f9a4093e" TYPE="ext4"
Matching UUIDs via dumpe2fs output.
# dumpe2fs -h /dev/sdb1 | egrep 'Filesystem UUID|Journal users'
dumpe2fs 1.46.3 (27-Jul-2021)
Filesystem UUID: 6e52858e-0479-432f-80a1-de42f9a4093e
Journal users: cea5c2cd-b21c-4abf-a497-8c073bb12300
# dumpe2fs -h /dev/sdb2 | egrep 'Filesystem UUID|Journal UUID'
dumpe2fs 1.46.3 (27-Jul-2021)
Filesystem UUID: cea5c2cd-b21c-4abf-a497-8c073bb12300
Journal UUID: 6e52858e-0479-432f-80a1-de42f9a4093e
If GParted were going to show the journal to file system linkage in the
UI then doing this would be needed. However, so far there has only been
a single reported case of a GParted user using an external journal,
therefore adding the code complexity for this feature is not currently
justified. The simple busy detection method used by mke2fs is all that
is needed.
[1] mke2fs source code
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/
misc/mke2fs.c:main()
check_mount(journal_device, force, _("journal"));
misc/util.c:check_mount()
ext2fs_check_if_mounted(device, &mount_flags);
lib/ext2fs/ismounted.c:ext2fs_check_if_mounted()
ext2fs_check_mount_point(file, mount_flags, NULL, 0);
lib/ext2fs/ismounted.c:ext2fs_check_mount_point()
if (stat(device, &st_buf) == 0 &&
ext2fsP_is_disk_device(st_buf.st_mode)) {
int fd = open(device, O_RDONLY | O_EXCL);
if (fd >= 0) {
/*
* The device is not busy so it's
* definitelly not mounted. No need to
* to perform any more checks.
*/
close(fd);
*mount_flags = 0;
return 0;
} else if (errno == EBUSY) {
busy = 1;
}
}
Closes #89 - GParted doesn't recognise EXT4 fs journal partition
A user reported that they were using an external journal with an ext4
file system, but that GParted didn't recognise it. (They had the jbd
on an Intel Optane drive and the ext4 file system on an SSD).
Create a jbd like this:
# mke2fs -O journal_dev -L test-jbd /dev/sdb1
# blkid /dev/sdb1
/dev/sdb1: LABEL="test-jbd" UUID="6e52858e-0479-432f-80a1-de42f9a4093e" TYPE="jbd"
Add recognition of jbd. Use Blue Shadow colour, the same as ext4,
because jbd is primarily used by ext3/4 [1][2]. jbd is also used by the
ocfs2 [1][3] and lustre [4][5] clustered file systems, but they are very
unlikely to be encountered by GParted users. Also xfs [6] and jfs [7]
can have external journals, so if recognition of them is ever added they
will get the same colour as their respective file systems too.
[1] Journaling block device
https://en.wikipedia.org/wiki/Journaling_block_device
"JBD is filesystem-independent. ext3, ext4 and OCFS2 are known to
use JBD"
[2] https://ext4.wiki.kernel.org/index.php/Frequently_Asked_Questions#What_are_the_key_differences_between_jbd_and_jbd2.3F
[3] OCFS2: The Oracle Clustered File System, Version 2
https://www.kernel.org/doc/ols/2006/ols2006v1-pages-289-302.pdf
"Metadata journaling is done on a per node basis with JBD"
[4] Efficient Object Storage Journaling in a Distributed Parallel File
System
https://www.usenix.org/legacy/event/fast10/tech/full_papers/oral.pdf
[5] Lustre Software Release 2.x Operations Manual
https://doc.lustre.org/lustre_manual.pdf
6.4.2. Choosing Parameters for an External Journal
[6] mkfs.xfs(8) - construct an XFS filesystem
https://man7.org/linux/man-pages/man8/mkfs.xfs.8.html
"OPTIONS
...
logdev=device
This is used to specify that the log section should reside on
the device separate from the data section. The internal=1 and
logdev options are mutually exclusive.
"
[7] jfs_mkfs(8) - create a JFS formatted partition
https://manpages.debian.org/testing/jfsutils/jfs_mkfs.8.en.html
"OPTIONS
...
-j journal_device
Create the external JFS journal on journal_device, ...
"
Closes #89 - GParted doesn't recognise EXT4 fs journal partition
As found by the GitLab Continuous Integration job on CentOS 7 with
itstool 2.0.2, building the GParted Manual breaks on the Russian
translation like this:
$ ./autogen.sh
$ make clean
$ cd help
$ make
...
if ! test -d "ru/"; then mkdir "ru/"; fi
if test -d "C"; then d="../"; else d="/home/mike/programming/c/gparted/help/"; fi; \
mo="ru/ru.mo"; \
if test -f "${mo}"; then mo="../${mo}"; else mo="/home/mike/programming/c/gparted/help/${mo}"; fi; \
(cd "ru/" && itstool -m "${mo}" ${d}/C/index.docbook) && \
touch "ru/ru.stamp"
Error: Could not merge translations:
'NoneType' object has no attribute 'node'
make: *** [ru/ru.stamp] Error 1
On Fedora 35 with itstool 2.0.6 building the GParted Manual merely
reports a warning, leaving one paragraph untranslated, but the build
completes successfully:
$ ./autogen.sh
$ make clean
$ cd help
$ make
...
if ! test -d "ru/"; then mkdir "ru/"; fi
if test -d "C"; then d="../"; else d="/home/fedora/programming/c/gparted/help/"; fi; \
mo="ru/ru.mo"; \
if test -f "${mo}"; then mo="../${mo}"; else mo="/home/fedora/programming/c/gparted/help/${mo}"; fi; \
(cd "ru/" && itstool -m "${mo}" ${d}/C/index.docbook) && \
touch "ru/ru.stamp"
Warning: Could not merge translation for msgid:
Set the <application>grub</application> root device by specifying the device returned by the <command>find</command> command. This should be the partition containing the boot directory. <_:screen-1/>
...
$ echo $?
0
Fix the translation of a DocBook markup tag in the Russian translation
of the GParted Manual by this commit:
17f4c3176d
Update Russian translation
Closes !98 - Fix translation of DocBook markup tag of the GParted Manual
Udev stopped supporting volatile udev rules in /dev/.udev/rules.d in
udev 176, released 2012-01-11 [1]. The oldest supported distributions
use much more recent combined systemd and udev releases.
Distro            EOL       udevadm -V
Debian 9          2022-Jun  232
RHEL / CentOS 7   2024-Jun  219
Ubuntu 18.04 LTS  2023-Apr  237
Now udev only reads volatile rules from /run/udev/rules.d [2]. Simplify
the code a little.
[1] udev 176 NEWS
https://git.kernel.org/pub/scm/linux/hotplug/udev.git/tree/NEWS?h=176
"A writable /run directory (ususally tmpfs) is required now for a
fully functional udev, there is no longer a fallback to /dev/.udev."
[2] man 7 udev
"RULES FILES
The udev rules are read from the files located in the system rules
directory /usr/lib/udev/rules.d, the volatile runtime directory
/run/udev/rules.d and the local administration directory
/etc/udev/rules.d."
From the setup in the previous commit, unregister (stop) all of the
bcache backing and cache devices.
# bcache unregister /dev/sdb2
# bcache unregister /dev/sdb1
# bcache unregister /dev/sdc1
# bcache show
Name Type State Bname AttachToDev
/dev/sdb2 1 (data) inactive Non-Exist Alone
/dev/sdb1 1 (data) inactive Non-Exist Alone
/dev/sdc1 3 (cache) inactive N/A N/A
Run GParted. Just the device scanning causes the stopped bcache devices
to be restarted.
# bcache show
Name Type State Bname AttachToDev
/dev/sdb2 1 (data) clean(running) bcache1 /dev/sdc1
/dev/sdb1 1 (data) clean(running) bcache0 /dev/sdc1
/dev/sdc1 3 (cache) active N/A N/A
This is nothing new with this patchset; it is a result of existing udev
behaviour. The chain of events goes like this:
1. GParted calls ped_device_get() on each whole device;
2. Libparted opens each partition read-write to flush the cache;
3. When each is closed the kernel emits a block change event;
4. Udev fires block rules to detect the possibly changed content;
5. Udev fires bcache register (AKA start) rule.
More details with the help of udevadm monitor, strace and syslog:
GParted | set_devices_thread()
GParted | ped_device_get("/dev/sdb")
Libparted| ...
Libparted| openat(AT_FDCWD, "/dev/sdb1", O_WRONLY) = 9
Libparted| ioctl(9, BLKFLSBUF) = 0
Libparted| close(9)
KERNEL | change /devices/.../block/sdb/sdb1 (block)
KERNEL | add /devices/virtual/bdi/250:0 (bdi)
KERNEL | add /devices/virtual/block/bcache0 (block)
KERNEL | change /devices/virtual/block/bcache0 (block)
UDEV | change /devices/.../block/sdb/sdb1 (block)
UDEV | add /devices/virtual/bdi/250:0 (bdi)
UDEV | add /devices/virtual/block/bcache0 (block)
UDEV | change /devices/virtual/block/bcache0 (block)
SYSLOG | kernel: bcache: register_bdev() registered backing device sdb1
# grep bcache-register /lib/udev/rules.d/69-bcache.rules
RUN+="bcache-register $tempnode"
Fix this by temporarily adding a blank udev override rule to suppress
automatic starting of bcache devices, just as was previously done for
Linux Software RAID arrays [1].
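A sketch of the override mechanism, with the rule filename taken from
the grep above and the lifetime handling simplified:
    # A file of the same name in /run/udev/rules.d overrides the
    # packaged rule in /lib/udev/rules.d, and an empty file disables
    # the rule entirely.
    touch /run/udev/rules.d/69-bcache.rules
    # ... scan the devices ...
    rm -f /run/udev/rules.d/69-bcache.rules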
[1] a255abf343
Prevent GParted starting stopped Linux Software RAID arrays (#709640)
Closes #183 - Basic support for bcache
A bcache device provides accelerated access to a backing device in a
one-to-one relationship. Multiple bcache backing devices can be attached
to and accelerated by the same cache device. Extending the setup from
the previous commit, create an additional backing device and attach it
to the same cache.
and accelerated by the same cache device. Extending the setup from the
previous commit, create an additional backing device and attach it to
the same cache.
# bcache make -B /dev/sdb2
# bcache attach /dev/sdc1 /dev/sdb2
# bcache show
Name Type State Bname AttachToDev
/dev/sdb2 1 (data) clean(running) bcache1 /dev/sdc1
/dev/sdb1 1 (data) clean(running) bcache0 /dev/sdc1
/dev/sdc1 3 (cache) active N/A N/A
List a couple of bcache specific sysfs files which identify registered
(active) bcache devices (components).
# ls -l /sys/block/sd?/sd??/bcache/{dev,set}
lrwxrwxrwx. 1 root root 0 Jan 7 10:08 /sys/block/sdb/sdb1/bcache/dev -> ../../../../../../../../../../virtual/block/bcache0
lrwxrwxrwx. 1 root root 0 Jan 7 11:53 /sys/block/sdb/sdb2/bcache/dev -> ../../../../../../../../../../virtual/block/bcache1
lrwxrwxrwx. 1 root root 0 Jan 7 11:53 /sys/block/sdc/sdc1/bcache/set -> ../../../../../../../../../../../fs/bcache/9945e165-0604-4f29-94bd-b155d01080ad
As was done with previous software block devices [1][2][3], show the
bcache (access) device as the mount point of a backing device
(component). Use the /sys/block/DEV[/PTN]/bcache/dev sysfs symlinks to
provide the bcache device names. Bcache cache devices (components)
don't get mount points because they aren't accessible.
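For illustration, the bcache device name can be derived from the
symlink target shown in the listing above:
# readlink /sys/block/sdb/sdb1/bcache/dev
../../../../../../../../../../virtual/block/bcache0
# echo "/dev/$(basename "$(readlink /sys/block/sdb/sdb1/bcache/dev)")"
/dev/bcache0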
[1] commit 8083f11d84
Display LVM2 VGNAME as the PV's mount point (#160787)
[2] commit f6c2f00df7
Populate member mount point with SWRaid array device (#756829)
[3] commit 538c866d09
Display array device as mount point of mdadm started ATARAID members
(#75)
Closes #183 - Basic support for bcache
GParted automatically enables the Partition > Unmount action for busy
partitions. Unregistering (deactivating) bcache devices is not going to
be supported, so disable it.
Closes #183 - Basic support for bcache
Make (format as) a bcache backing device (-B) and a cache device (-C),
and implicitly attach the backing device to the cache to enable caching,
all in one.
# bcache make -B /dev/sdb1 -C /dev/sdc1
# bcache show
Name Type State Bname AttachToDev
/dev/sdb1 1 (data) clean(running) bcache0 /dev/sdc1
/dev/sdc1 3 (cache) active N/A N/A
Experimenting with 'bcache unregister' and 'bcache register', and
stracing 'bcache show', showed that the bcache kernel module creates
the sysfs directory /sys/block/DEV[/PTN]/bcache and its contents only
when the bcache device is registered with the kernel (bcache component
is active). Use this to identify whether any bcache device (component)
should be displayed as active or not in GParted.
# ls -ld /sys/block/sd?/sd?1/bcache
drwxr-xr-x. 6 root root 0 Jan 7 10:08 /sys/block/sdb/sdb1/bcache
drwxr-xr-x. 2 root root 0 Jan 7 10:08 /sys/block/sdc/sdc1/bcache
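So the activity test reduces to a directory existence check, for
example:
# test -d /sys/block/sdb/sdb1/bcache && echo active || echo inactive
active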
Closes #183 - Basic support for bcache
Add pattern to recognise block cache devices as valid devices for
GParted to work with. Devices are named by the Linux kernel device
driver like /dev/bcache0 [1] with partitions named like /dev/bcache0p1
[2].
Note that bcache devices can be partitioned, but all the documents I
have seen guide users to create file systems directly in a bcache device
and not partition it [3][4] (plus all other Internet search results I
looked at).
at). This might be because bcache is a specialist use case and the
bcache backing device can be a partition itself.
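A sketch of the name patterns involved; the exact expression GParted
uses may differ:
    # Whole device:  bcache[0-9]+          e.g. /dev/bcache0
    # Partition:     bcache[0-9]+p[0-9]+   e.g. /dev/bcache0p1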
[1] linux 5.15 drivers/md/bcache/super.c bcache_device_init()
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/md/bcache/super.c?h=v5.15#n945
[2] Contents of /proc/partitions for a bcache partitioned backing device
$ grep bcache /proc/partitions
251 0 524280 bcache0
251 1 523256 bcache0p1
[3] Linux kernel document: A block layer cache (bcache)
https://www.kernel.org/doc/Documentation/bcache.txt
[4] The Linux kernel user's and administrator's guide > A block layer
cache (bcache)
https://www.kernel.org/doc/html/latest/admin-guide/bcache.html
Closes #183 - Basic support for bcache
Use blkid to detect bcache formatted devices. Requires blkid from
util-linux >= 2.24 for detection of bcache devices [1].
Use util-linux's FS images when testing GParted detection.
# wget http://git.kernel.org/cgit/utils/util-linux/util-linux.git/plain/tests/ts/blkid/images-fs/bcache-B.img.xz
# xzcat bcache-B.img.xz > /dev/sdb1
# wget http://git.kernel.org/cgit/utils/util-linux/util-linux.git/plain/tests/ts/blkid/images-fs/bcache-C.img.xz
# xzcat bcache-C.img.xz > /dev/sdc1
# blkid /dev/sdb1 /dev/sdc1
/dev/sdb1: UUID="8fb7f716-4c19-4517-bfbb-6f4a2becad60" TYPE="bcache" PARTUUID="f8f1485e-01"
/dev/sdc1: UUID="7a343627-ac87-4bf0-b76f-46067cbc9b8c" TYPE="bcache" PARTUUID="f46e8c86-01"
To tidy-up after testing GParted detection, stop the bcache device in
case it was automatically started and wipe the signatures. This is to
prevent udev rules from automatically starting the bcache device on
every subsequent reboot.
# echo 1 > /sys/block/sdb/sdb1/bcache/stop
# wipefs -a /dev/sdb1 /dev/sdc1
Closes #183 - Basic support for bcache
Currently the Face Skin colour range from the GNOME palette represents
a mixture of file systems and software block devices:
JFS - Face Skin Medium
LVM2_PV - Face Skin Dark
NILFS2 - Face Skin Shadow
LINUX_SWRAID - Dark Brown
ATARAID - Dark Brown
We are about to add recognition of bcache [1][2][3], another software
block device. Reorganise the colour assignments so that Face Skin
colour range is exclusively used by types of software block devices and
assign JFS and NILFS2 new colours.
Face Skin Light (#EFE0CD) -
Face Skin Medium (#E0C39E) - BCACHE [New assignment]
Face Skin Dark (#B39169) - LVM2_PV
Face Skin Shadow (#826647) - LINUX_SWRAID [New assignment]
Brown Dark (#5A4733) - ATARAID
NILFS2 has flash-friendly characteristics [4], so use Accent Red
colours along with F2FS.
Accent Red (#DF421E) - F2FS
Accent Red Dark (#990000) - NILFS2 [New assignment]
Move JFS to a group with XFS and UFS. The hue of the GNOME palette
Accent Yellow Dark colour, as used by UFS, was more orange than Accent
Yellow and a bit too close to the orange used by BTRFS. So use the hue
of GNOME Accent Yellow, extend the range, and assign like this:
Accent Yellow (#EED680) - XFS
Accent Yellow Dark (#D6B129) - JFS [Updated hue.
New assignment]
Accent Yellow Shadow (#AA8F2C) - UFS [New colour.
New assignment]
Accent Yellow Dark Shadow (#826F2B) - [New colour]
[1] bcache
https://bcache.evilpiepirate.org/
[2] Linux kernel document: A block layer cache (bcache)
https://www.kernel.org/doc/Documentation/bcache.txt
[3] The Linux kernel user's and administrator's guide > A block layer
cache (bcache)
https://www.kernel.org/doc/html/latest/admin-guide/bcache.html
[4] NILFS, Relative performance
https://en.wikipedia.org/wiki/NILFS#Relative_performance
Closes #183 - Basic support for bcache