Since the previous commit "Also erase all Promise FastTrack RAID
signatures" the previously failing IntelSoftwareRAIDUnaligned test now
passes, along with the new PromiseFastTrackRAID* tests.
$ ./test_EraseFileSystemSignatures
Running main() from test_EraseFileSystemSignatures.cc
DISPLAY=":0.0"
[==========] Running 4 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 4 tests from EraseFileSystemSignaturesTest
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned
[ OK ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned (158 ms)
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned
[ OK ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned (81 ms)
[ RUN ] EraseFileSystemSignaturesTest.PromiseFastTrackRAIDAligned
[ OK ] EraseFileSystemSignaturesTest.PromiseFastTrackRAIDAligned (74 ms)
[ RUN ] EraseFileSystemSignaturesTest.PromiseFastTrackRAIDUnaligned
[ OK ] EraseFileSystemSignaturesTest.PromiseFastTrackRAIDUnaligned (74 ms)
[----------] 4 tests from EraseFileSystemSignaturesTest (387 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test case ran. (387 ms total)
[ PASSED ] 4 tests.
Closes#220 - Format to Cleared not clearing "pdc" ataraid signature
User reported that GParted didn't clear a pdc (Promise FastTrack) RAID
signature [1]. Reproduce this issue by creating a 16 MiB - 512 byte
test image with Promise FastTrack RAID signatures at all recognised
offsets [2].
$ python << 'EOF'
signature = b'Promise Technology, Inc.'
import os
fd = os.open('/tmp/test.img', os.O_CREAT|os.O_WRONLY)
os.ftruncate(fd, 16*1024*1024 - 512)
for offset in [63, 255, 256, 16, 399, 591, 675, 735, 911, 974, 991, 951, 3087]:
    os.lseek(fd, -(offset*512), os.SEEK_END)
    os.write(fd, signature)
os.close(fd)
EOF
Then use GParted Format to > Cleared.
$ sudo ./gpartedbin /tmp/test.img
Afterwards blkid, and therefore GParted, still recognises this as a
Promise FastTrack RAID member.
$ blkid /tmp/test.img
/tmp/test.img: TYPE="promise_fasttrack_raid_member"
This is because the test image still contains multiple signatures.
$ hexdump -C /tmp/test.img | grep Promise
00e7e000 50 72 6f 6d 69 73 65 20 54 65 63 ... |Promise Technolo|
00fce000 50 72 6f 6d 69 73 65 20 54 65 63 ... |Promise Technolo|
00fdfe00 50 72 6f 6d 69 73 65 20 54 65 63 ... |Promise Technolo|
00ff8000 50 72 6f 6d 69 73 65 20 54 65 63 ... |Promise Technolo|
Used a test image which is not an exact multiple of MiBs because drives
generally aren't an exact MiB multiple in size either. Also, as the
clearing of ZFS labels L2 and L3 by writing zeros at the end of the
drive is rounded to 256 KiB, there will be sectors after that which are
not zeroed and where other Promise signatures remain. The above
signatures map back to these sectors before the end:
16*1024*1024 - 512 = 16776704
                                512b sectors      KiB
(0x00e7e000 - 16776704) / 512 =        -3087  -1543.5
(0x00fce000 - 16776704) / 512 =         -399   -199.5
(0x00fdfe00 - 16776704) / 512 =         -256     -128
(0x00ff8000 - 16776704) / 512 =          -63    -31.5
Promise FastTrack RAID signatures are always at multiples of 512-byte
sectors (the blkid code uses a left shift by 9 to convert from sectors
to a byte offset) [2].
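As a cross-check of the table above, the mapping from sector offsets to
the byte positions seen in the hexdump can be reproduced with a few
lines of Python (an illustration of the arithmetic only, not code from
GParted or blkid):
# Recompute where the remaining signatures sit in the 16 MiB - 512 byte
# test image.  blkid converts sectors to a byte offset with "sectors << 9".
image_size = 16*1024*1024 - 512                  # 16776704 bytes
for sectors in (3087, 399, 256, 63):
    byte_offset = image_size - (sectors << 9)    # << 9 is "* 512"
    print("%5d sectors before end -> 0x%08X (-%.1f KiB)"
          % (sectors, byte_offset, sectors / 2.0))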
Fix this by (see the sketch after this list):
1. Replacing the existing zeroing of 3 ranges relative to the end of
   the device with a single range covering from the ZFS labels L2 and
   L3 to the end of the drive. This will also clear the SWRaid 0.90 &
   1.0 super blocks, the Nilfs2 secondary super block, the Intel
   Software RAID signature found not zeroed in the unaligned unit test
   case and the above Promise FastTrack RAID signatures at -199.5 KiB
   and later.
2. Adding zeroing of the final Promise FastTrack RAID signature at
   sector -3087.
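As a rough illustration only (not the actual GParted implementation),
the intended coverage could be sketched like this in Python, assuming
the ZFS L2 and L3 labels occupy the last 512 KiB of the device rounded
down to a 256 KiB boundary:
SECTOR_SIZE = 512
ZFS_LABEL_SIZE = 256 * 1024      # each of the four ZFS labels is 256 KiB
def ranges_to_zero(device_size):
    # One byte range from (roughly) the start of ZFS label L2 through to
    # the very end of the device; this also covers the SWRaid, Nilfs2,
    # Intel Software RAID and nearby Promise signatures mentioned above.
    tail_start = (device_size - 2*ZFS_LABEL_SIZE) // ZFS_LABEL_SIZE
    tail_start *= ZFS_LABEL_SIZE
    # Plus the one remaining Promise FastTrack RAID signature sector at
    # 3087 sectors before the end of the device.
    return [(tail_start, device_size - tail_start),
            (device_size - 3087*SECTOR_SIZE, SECTOR_SIZE)]
print(ranges_to_zero(16*1024*1024 - 512))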
Performed a review of all the other ATARAID super blocks detected by
blkid (files *_raid.c) [3] and they are all located within the last 11
sectors so will be zeroed by case 1. above.
[1] GParted forum thread: How to remove a ataraid partition ?
http://gparted-forum.surf4.info/viewtopic.php?id=18104
[2] blkid from util-linux promise_raid.c:probe_pdcraid()
https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/tree/libblkid/src/superblocks/promise_raid.c?h=v2.38.1#n27
[3] blkid RAID member detection (files *_raid.c)
https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/tree/libblkid/src/superblocks/?h=v2.38.1
Closes#220 - Format to Cleared not clearing "pdc" ataraid signature
Each test in test_EraseFileSystemSignatures is taking just over 10
seconds to run in the Alpine Linux CI image:
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned
[ OK ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned (10045 ms)
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned
...
[ FAILED ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned (10048 ms)
[----------] 2 tests from EraseFileSystemSignaturesTest (20093 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (20093 ms total)
This is because the udevadm command is not found and so settle_device()
waits for 10 seconds in this call chain:
erase_filesystem_signatures()
  settle_device(SETTLE_DEVICE_APPLY_MAX_WAIT_SECONDS)
    sleep(10)
Install the udevadm command into the Alpine Linux CI job docker image
to fix this. Now it's on a par with the time taken in the other distro
CI test jobs:
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned
[ OK ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned (417 ms)
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned
...
[ FAILED ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned (165 ms)
[----------] 2 tests from EraseFileSystemSignaturesTest (582 ms total)
[----------] Global test environment tear-down
[==========] 2 tests from 1 test case ran. (582 ms total)
Closes#220 - Format to Cleared not clearing "pdc" ataraid signature
Move common testing code which doesn't need linking with GParted objects
into the common module. Move the remaining common code used to print
GParted objects using the insertion operator (operator<<) into the
insertion_operators module. Split the common code like this so that the
operator<<(std::ostream&, const OperationDetail&) function is not
included in test_PipeCapture and it is not forced to link with all the
non-UI related GParted objects.
The Automake manual provides guidance that when a header belongs to a
single program it is recommended to be listed in that program's
_SOURCES variable, and that for a directory containing only header
files the noinst_HEADERS variable is the right one to use [1]. However
the guidance doesn't cover the case of common.h and
insertion_operators.h: header files in a directory with other files and
used by multiple programs. So, as we already have gparted_core_OBJECTS
(a normal Makefile variable, not an Automake special variable) listing
the objects to link with, choose to use the noinst_HEADERS Automake
variable to list the needed headers.
[1] GNU Automake manual, 9.2 Header files
https://www.gnu.org/software/automake/manual/html_node/Headers.html
"Usually, only header files that accompany installed libraries
need to be installed. Headers used by programs or convenience
libraries are not installed. The noinst_HEADERS variable can be
used for such headers. However, when the header belongs to a
single convenience library or program, we recommend listing it
in the program's or library's _SOURCES variable (see Defining
program sources) instead of in noinst_HEADERS. This is clearer
for the Makefile.am reader. noinst_HEADERS would be the right
variable to use in a directory containing only headers and no
associated library or program.
All header files must be listed somewhere; in a _SOURCES
variable or in a _HEADERS variable. Missing ones will not
appear in the distribution.
"
Closes#220 - Format to Cleared not clearing "pdc" ataraid signature
Initially this just tests erasing of Intel Software RAID signatures.
That was chosen because it was expected to work, but it turned out not
to be true in all cases.
The code needs to initialise GParted_Core::mainthread, construct
Gtk::Main() and be run under xvfb-run because of this call chain:
GParted_Core::erase_filesystem_signatures()
  GParted_Core::settle_device()
    Utils::execute_command ("udevadm settle ...")
      status.foreground = (Glib::Thread::self() == GParted_Core::mainthread)
      Gtk::Main::run()
This was also needed when testing file system interface classes as
discussed in commits [1][2].
The test fails like this:
$ ./test_EraseFileSystemSignatures
...
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned
[ OK ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDAligned (155 ms)
[ RUN ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned
test_EraseFileSystemSignatures.cc:286: Failure
Failed
image_contains_all_zeros(): First non-zero bytes:
0x00001A00 "Intel Raid ISM C" 49 6E 74 65 6C 20 52 61 69 64 20 49 53 4D 20 43
test_EraseFileSystemSignatures.cc:320: Failure
Value of: image_contains_all_zeros()
Actual: false
Expected: true
[ FAILED ] EraseFileSystemSignaturesTest.IntelSoftwareRAIDUnaligned (92 ms)
Manually write the same test image:
$ python << 'EOF'
signature = b'Intel Raid ISM Cfg Sig. '
import os
fd = os.open('/tmp/test.img', os.O_CREAT|os.O_WRONLY)
os.ftruncate(fd, 16*1024*1024 - 512)
os.lseek(fd, -(2*512), os.SEEK_END)
os.write(fd, signature)
os.close(fd)
EOF
Run gpartedbin /tmp/test.img and Format to > Cleared. GParted continues
to display the image file as containing an ataraid signature.
$ blkid /tmp/test.img
/tmp/test.img: TYPE="isw_raid_member"
$ hexdump -C /tmp/test.img
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00fffa00 49 6e 74 65 6c 20 52 61 69 64 20 49 53 4d 20 43 |Intel Raid ISM C|
00fffa10 66 67 20 53 69 67 2e 20 00 00 00 00 00 00 00 00 |fg Sig. ........|
00fffa20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00fffe00
This signature is not being cleared when the device/partition/image
size is 512 bytes smaller than a whole MiB because the last 3.5 KiB is
left unwritten. This is because the last block of zeros written is an
8 KiB block aligned to a 4 KiB boundary at the end of the device.
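A back-of-the-envelope check of that in Python (assuming, for
illustration, that the end of the final 8 KiB zero write is rounded
down to a 4 KiB boundary):
image_size = 16*1024*1024 - 512            # 16776704 bytes
zero_end   = image_size // 4096 * 4096     # 16773120, last 4 KiB boundary
print(image_size - zero_end)               # 3584 bytes = 3.5 KiB unwritten
print(hex(image_size - 2*512))             # 0xfffa00, surviving signature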
[1] a97c23c57c
Add initial create ext2 only FileSystem interface class test (!49)
[2] 8db9a83b39
Run test program under xvfb-run to satisfy need for an X11 display (!49)
Closes#220 - Format to Cleared not clearing "pdc" ataraid signature
As documented in the previous commit, xfsprogs >= 5.19.0 refuses to
create an XFS file system smaller than 300 MiB.
$ truncate -s $((300*1024*1024-1)) test.img
$ ls -l test.img
-rw-r--r-- 1 auser auser 314572799 Dec 21 11:01 test.img
$ mkfs.xfs -V
mkfs.xfs version 6.0.0
$ mkfs.xfs test.img
Filesystem must be larger than 300MB.
...
$ echo $?
1
Successfully create an XFS file system at minimum size of 300 MiB.
$ truncate -s $((300*1024*1024)) test.img
$ ls -l test.img
-rw-r--r-- 1 auser auser 314572800 Dec 21 11:05 test.img
$ mkfs.xfs test.img
...
$ echo $?
0
$ blkid test.img
test.img: UUID="..." BLOCK_SIZE="512" TYPE="xfs"
Increase the GParted minimum XFS size to 300 MiB. For simplicity, and
because the XFS developers said that smaller XFS file systems [1]:
"are known to have performance and redundancy problems that are not
present on the volume sizes that XFS is best at handling"
apply this regardless of the version of mkfs.xfs used to create the
XFS, i.e. to all versions of xfsprogs.
[1] https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/commit/?id=6e0ed3d19c54603f0f7d628ea04b550151d8a262
mkfs: stop allowing tiny filesystems
Closes#217 - GitLab CI test job failing with new mkfs.xfs error
"Filesystem must be larger than 300MB."
From 27-Nov-2022 the alpine_test GitLab CI job started failing,
reporting errors creating XFS file systems in the
test_SupportedFileSystems unit test like this:
[ RUN ] My/SupportedFileSystemsTest.Create/xfs
test_SupportedFileSystems.cc:501: Failure
Value of: m_fs_object->create(m_partition, m_operation_detail)
Actual: false
Expected: true
Operation details:
mkfs.xfs -f -L '' '/builds/GNOME/gparted/tests/test_SupportedFileSystems.img' 00:00:00 (ERROR)
Filesystem must be larger than 300MB.
...
This is because Docker image "alpine:latest" has updated to Alpine
Linux 3.17, which includes xfsprogs 6.0.0 and therefore this change
(first released in xfsprogs 5.19.0):
https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/commit/?id=6e0ed3d19c54603f0f7d628ea04b550151d8a262
mkfs: stop allowing tiny filesystems
Refuse to format a filesystem that are "too small", because these
configurations are known to have performance and redundancy problems
that are not present on the volume sizes that XFS is best at
handling.
Specifically, this means that we won't allow logs smaller than 64MB,
we won't allow single-AG filesystems, and we won't allow volumes
smaller than 300MB.
Increase the default unit test file system image size from 256 MiB to
256+64 = 320 MiB to avoid this error.
Closes#217 - GitLab CI test job failing with new mkfs.xfs error
"Filesystem must be larger than 300MB."
GParted's check operation is a check and, if possible, a repair. For
most file system types GParted already requests that the file system is
repaired. The fsck.exfat -y flag has been available since the first
release of exfatprogs, 1.0.1 [1], so unconditionally add it.
[1] exfatprogs 1.0.1 fsck/fsck.c:main() case 'y':
https://github.com/exfatprogs/exfatprogs/blob/1.0.1/fsck/fsck.c#L1231
Closes!109 - Enable repair when checking exfat file systems
The code was overly complicated in how it converted to the 32-bit little
endian on-disk representation of the Hidden Sectors field. It did:
1. Formatted the partition start sector as a hexadecimal string.
2. Padded it to 8 digits.
3. Reversed pairs of digits.
4. Converted pairs of hexadecimal digits to bytes of binary data.
5. Wrote the 4 bytes of binary data to the Hidden Sectors field.
There is no need for all this string manipulation to convert to a
32-bit little endian value. Just do this (sketched below):
1. Truncate (signed 64-bit) partition start sector to 32-bit.
2. Convert from host native to little endian.
3. Write as 4 bytes of binary data to the Hidden Sectors field.
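For illustration, the equivalent of those three steps in Python (a
sketch only; the actual change is to GParted's C++ code):
import struct
def hidden_sectors_bytes(partition_start_sector):
    # Truncate the 64-bit start sector to 32 bits and encode it as
    # 4 bytes little endian, ready to be written at offset 0x1C of the
    # NTFS Partition Boot Sector.
    return struct.pack('<I', partition_start_sector & 0xFFFFFFFF)
print(hidden_sectors_bytes(2048).hex())    # '00080000'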
The code also ignores write errors. ofstream.write() only copies the
data into an in-process buffer [1] and the data is not passed to the OS
to write to the open file handle until ofstream.close() [2] is called.
However the status of close() was not checked, so a failure of the OS
to perform the write would go unreported.
In the case of a failure, providing the user with a command line to set
the Hidden Sectors field is excessive. Updating the Hidden Sectors is
no more or less likely to fail than any other storage manipulation
action. For example GParted doesn't provide command line instructions
to update a partition size if a libparted call fails. Therefore remove
it.
Rewrite the code to resolve the above issues and lay it out using
if-operation-fails-return-early pattern.
[1] std::ostream::write()
"... it inserts characters into associated stream_buffer object as
if calling its member function sputc until n characters have been
written or until an insertion fails ..."
https://cplusplus.com/reference/ostream/ostream/write/
[2] std::ofstream::close()
"Any pending output sequence is written to the file."
https://cplusplus.com/reference/fstream/ofstream/close/Closes#164 - GParted crashes copying NTFS partition to starting beyond
2TiB
Create test setup using a 4 TiB loop device:
# truncate -s 4T /tmp/disk.img
# losetup -f --show /tmp/disk.img
/dev/loop0
Create 2 x 1 TiB partitions. First at offset 1 MiB, second at offset
2 TiB:
# sgdisk --new 1:2048:2147485696 --typecode 1:0700 /dev/loop0
# sgdisk --new 2:4294967296:6442450944 --typecode 2:0700 /dev/loop0
# partprobe /dev/loop0
Create NTFS file system in the first partition:
# mkntfs -Q /dev/loop0p1
Then use GParted to copy the first NTFS partition into the second
partition. GParted crashes:
# gpartedbin /dev/loop0
...
(gpartedbin:14660): glibmm-ERROR **: 20:39:01.191:
unhandled exception (type std::exception) in signal handler:
what: basic_string::_M_replace_aux
Trace/breakpoint trap
# echo $?
133
An overview of what is happening: GParted_Core::update_bootsector() is
attempting to set the Hidden Sectors [1] field in the NTFS Partition
Boot Sector (PBS) to the start sector of the newly copied /dev/loop0p2
partition. But the sector number is greater than will fit in a 32-bit
unsigned integer, which the code doesn't handle.
Specifically the code prints the sector number as a hexadecimal number
into string 'hex'. As the target partition starts at exactly 2 TiB,
hex="100000000" (9 hexadecimal digits long). Next:
    hex.insert(0, 8 - hex.length(), '0');
is meant to pad the beginning of the 'hex' string with '0's to make the
string 8 characters long. But the string is already 9 characters long,
so 8 - 9 is -1, which as the unsigned integral type size_t [2] is
2^64-1. So insert() is trying to insert 18446744073709551615 '0's at
the start of the 'hex' string! Hence the crash.
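The wrap-around is easy to reproduce, as size_t arithmetic behaves like
unsigned 64-bit modular arithmetic on these platforms:
# 8 - hex.length() evaluated in an unsigned 64-bit type wraps around:
print((8 - 9) % 2**64)                     # 18446744073709551615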
mkntfs refuses to accept an explicit partition start sector of 2^32 or
larger:
# mkntfs -Q --partition-start 4294967296 /dev/loop0p2
Invalid partition start sector. Maximum is 4294967295 (2^32-1).
# echo $?
1
When mkntfs can't determine the drive geometry and partition offset, as
is the case on loop devices, or when the partition start sector is 2^32
or larger, mkntfs writes zero into Hidden Sectors:
# mkntfs -Q /dev/loop0p1
The partition start sector was not specified for /dev/loop0p1 and it could not be obtained automatically. It has been set to 0.
...
To boot from a device, Windows needs the 'partition start sector', ...
Windows will not be able to boot from this device.
Creating NTFS volume structures.
mkntfs completed successfully. Have a nice day.
# echo $?
0
# hexdump -C /dev/loop0p1 | head -2
00000000  eb 52 90 4e 54 46 53 20  20 20 20 00 02 08 00 00  |.R.NTFS    .....|
00000010  00 00 00 00 00 f8 00 00  00 00 00 00 00 00 00 00  |................|
                                               ^^ ^^ ^^ ^^
Hidden Sectors value at offset 0x1C in the NTFS Partition Boot Sector.
So mkntfs is warning, writing the Hidden Sectors as zero and reporting
success. Fix GParted to behave in an equivalent way when it is updating
the Hidden Sectors for a moved or copied NTFS which starts at sector
2^32 or beyond.
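A sketch of the fixed behaviour in Python (illustrative only, not the
actual GParted C++ code):
import struct
def hidden_sectors_field(partition_start_sector):
    # Sector numbers of 2^32 and beyond cannot be represented in the
    # 32-bit Hidden Sectors field, so write zero and warn, mirroring
    # what mkntfs itself does.
    if partition_start_sector >= 2**32:
        return struct.pack('<I', 0), "Windows will not be able to boot"
    return struct.pack('<I', partition_start_sector), None
print(hidden_sectors_field(4294967296))    # (b'\x00\x00\x00\x00', warning)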
After this fix the operational details for the same copy operation are:
Copy /dev/loop0p1 to /dev/loop0p2
* calibrate /dev/loop0p1 (SUCCESS)
* calibrate /dev/loop0p2 (SUCCESS)
* set partition type on /dev/loop0p2 (SUCCESS)
* copy file system from /dev/loop0p1 to /dev/loop0p2 (SUCCESS)
* update boot sector of ntfs file system on /dev/loop0p2 (WARNING)
Partition start (4294967296) is beyond sector 4294967295 (2^32-1).
Windows will not be able to boot from this file system.
* check file system on /dev/loop0p2 for errors and if p... (SUCCESS)
[1] NTFS, Partition Boot Sector (PBS)
"
Byte    Field    Field name       Purpose
offset  length
0x1C    4 bytes  Hidden Sectors   The number of sectors preceding
                                  the partition.
"
https://en.wikipedia.org/wiki/NTFS#Partition_Boot_Sector_(PBS)
[2] std::string::insert
"fill (5) string& insert(size_t pos, size_t n, char c);
Insert into string
Inserts additional characters into the string right before the
character indicated by pos (or p):
(5) fill
Insert n consecutive copies of character c.
"
https://cplusplus.com/reference/string/string/insert/
Closes#164 - GParted crashes copying NTFS partition to starting beyond
2TiB
Remove mention of intltool as it's now unused.
Add polkit to the list of dependencies to build GParted from source as
gettext always explicitly translates the org.gnome.gparted.policy file.
Add the polkit-devel and gettext-devel packages to those needing to be
installed on various distributions, to get the gettext translation
rules for .policy files and the autopoint build tool installed.
(Distributions such as Debian and Ubuntu split the packages differently.
Gettext translation rules for .policy files are in the base policykit-1
package and the autopoint tool is in the autopoint package which I
assume is always installed as part of the development tool set. Hence
no change to the command to install dependent packages on these
distributions. See the earlier commit messages for more details).
Closes!107 - Migrate from intltool to gettext translation
Remove no longer needed intltool related ignores. Add extra ignores
for direct use of gettext for translation.
ABOUT-NLS and most of the po/* files are copied during autogen.
autogen.sh -> gnome-autogen.sh -> autoreconf -> autopoint extracts these
from gettext's archive of application support files
/usr/share/gettext/archive.*. Looks like this:
$ ./autogen.sh
...
Processing ./configure.ac
Running autoreconf...
autoreconf: Entering directory `.'
autoreconf: running: autopoint --force
Copying file ABOUT-NLS
...
Copying file po/Makefile.in.in
Copying file po/Makevars.template
Copying file po/Rules-quot
Copying file po/boldquot.sed
Copying file po/en@boldquot.header
Copying file po/en@quot.header
Copying file po/insert-header.sin
Copying file po/quot.sed
Copying file po/remove-potcdate.sin
And these files are created by make in the po directory:
po/gparted.pot
po/remove-potcdate.sed
Closes!107 - Migrate from intltool to gettext translation
Now that intltool is no longer used, the workaround for it leaving file
.intltool-merge-cache.lock behind is no longer needed. Therefore revert
merge !103 "Fix make distcheck failure found in GitLab CI job
ubuntu_test". This commit reverts both of these earlier commits in one
go:
053691378c
Resolve messages from configure in VPATH build (!103)
0bd636a34b
Fix up intltool leaving .intltool-merge-cache.lock file behind (!103)
Closes!107 - Migrate from intltool to gettext translation
... as the GParted build no longer uses it. (Intltool is not explicitly
installed into the Ubuntu CI image).
However removing intltool from the GitLab CentOS Continuous Integration
image causes the build job to fail like this:
$ ./autogen.sh
...
**Warning**: I am going to run `configure' with no arguments.
If you wish to pass any to it, please specify them on the
`./autogen.sh' command line.
Processing ./configure.ac
Running autoreconf...
autoreconf: Entering directory `.'
autoreconf: running: autopoint --force
Can't exec "autopoint": No such file or directory at /usr/share/autoconf/Autom4te/FileUtils.pm line 345.
autoreconf: failed to run autopoint: No such file or directory
autoreconf: autopoint is needed because this package uses Gettext
This is because on CentOS 7 autopoint is provided by the gettext-devel
package which was installed as a requirement for intltool. Fix the
build by explicitly installing the package.
(On Alpine Linux the gettext-dev package is automatically installed and
on Ubuntu the autopoint package is automatically installed so those CI
images don't need to explicitly include the relevant package).
Closes!107 - Migrate from intltool to gettext translation
In the Alpine Linux 3.16 CI build job, file org.gnome.gparted.policy is
not translated with this warning:
$ make -j $nproc
...
/usr/bin/xgettext: warning: a fallback ITS rule file '/usr/share/gettext-0.21/its/metainfo.its' is used; it may not be in sync with the upstream
/usr/bin/xgettext: warning: file 'org.gnome.gparted.policy.in.in' extension 'policy' is unknown; will try C
In my Alpine Linux 3.15 VM building GParted fails like this:
$ make
...
make[2]: Entering directory '/home/alpine/programming/c/gparted'
sed -e 's,[@]libexecdir[@],/usr/local/libexec,g' -e 's,[@]bindir[@],/usr/local/bin,g' -e 's,[@]gksuprog[@],pkexec --disable-internal-agent,g' -e 's,[@]enable_xhost_root[@],no,g' < ./org.gnome.gparted.policy.in.in > org.gnome.gparted.policy.in
/usr/bin/msgfmt --xml --template org.gnome.gparted.policy.in -d ./po -o org.gnome.gparted.policy
/usr/bin/msgfmt: cannot locate ITS rules for org.gnome.gparted.policy.in
make[2]: *** [Makefile:1059: org.gnome.gparted.policy] Error 1
make[2]: Leaving directory '/home/alpine/programming/c/gparted'
make[1]: *** [Makefile:617: all-recursive] Error 1
make[1]: Leaving directory '/home/alpine/programming/c/gparted'
make: *** [Makefile:451: all] Error 2
This is because gettext's msgfmt doesn't have rules for what elements to
translate in .policy XML files. Add polkit-dev package to Alpine Linux
CI image to provide these files:
/usr/share/gettext/its/polkit.its
/usr/share/gettext/its/polkit.loc
Now the .policy file is translated successfully:
$ make
...
make[2]: Entering directory '/home/alpine/programming/c/gparted'
/usr/bin/msgfmt --xml --template org.gnome.gparted.policy.in -d ./po -o org.gnome.gparted.policy
make[2]: Leaving directory '/home/alpine/programming/c/gparted'
Closes!107 - Migrate from intltool to gettext translation
Alpine Linux build CI job fails like this:
$ ./autogen.sh
...
Running autoreconf...
autoreconf: export WARNINGS=no-portability
autoreconf: Entering directory '.'
autoreconf: running: autopoint --force
autopoint: *** git program not found
autopoint: *** Stop.
autoreconf: error: autopoint failed with exit status: 1
This is because gettext's autopoint command in Alpine Linux is built
with a git archive of application support files:
$ autopoint --version
/usr/bin/autopoint (GNU gettext-tools) 0.21
Uses a versions archive in git format.
...
$ ls -l /usr/share/gettext/archive*
-rw-r--r-- 1 root root 752320 Jan 14 2021 /usr/share/gettext/archive.git.tar.gz
Whereas for other distributions gettext's autopoint command uses a
plain compressed tar archive, for example in Ubuntu 22.04 LTS:
$ autopoint --version
/usr/bin/autopoint (GNU gettext-tools) 0.21
Uses a versions archive in dirxz format.
...
$ ls -l /usr/share/gettext/archive*
-rw-r--r-- 1 root root 407064 Mar 25 10:31 /usr/share/gettext/archive.dir.tar.xz
Fix by adding git to the packages installed into the Alpine Linux CI
docker image.
Closes!107 - Migrate from intltool to gettext translation
[0] GNOME Goal: Gettext Migration
https://wiki.gnome.org/Initiatives/GnomeGoals/GettextMigration
This goal from 2016 is to migrate away from using intltool to translate
files, especially GNOME application related files, and instead use
gettext directly now that gettext can handle many more file formats
[1][2][3].
The GNOME Goal: Gettext Migration [0] says:
"With gettext 0.19.8, there is really no need anymore to use
intltool or GLib's dated gettext glue (AM_GLIB_GNU_GETTEXT and
glib-gettextize)."
This version or later of gettext is available in the oldest supported
distributions except for SLES 12:
Distribution       EOL       gettext -V
Debian 10          2022-Aug  0.19.8.1
RHEL / CentOS 7    2024-Jun  0.19.8.1
Ubuntu 18.04 LTS   2023-Apr  0.19.8.1
SLES 12 SP5        2024-Oct  0.19.2  [4][5]
As SLES 12 SP5 doesn't contain GParted and SLES 15 contains GParted
0.31.0 [6], losing the ability to compile the future GParted 1.5
release on SLES 12 SP5 is acceptable.
Additionally the use of intltool and the associated GLib provided macro
AM_GLIB_GNU_GETTEXT was the source of the remaining warnings from
autoconf 2.71 seen in Fedora 36 and Ubuntu 22.04 LTS about the use of
obsolete macros:
$ ./autogen.sh
...
autoreconf: running: /usr/bin/autoconf --force
configure.ac:59: warning: The macro `GLIB_GNU_GETTEXT' is obsolete.
configure.ac:59: You should run autoupdate.
aclocal.m4:426: GLIB_GNU_GETTEXT is expanded from...
>> aclocal.m4:526: AM_GLIB_GNU_GETTEXT is expanded from...
>> configure.ac:59: the top level
configure.ac:59: warning: The macro `AC_TRY_LINK' is obsolete.
configure.ac:59: You should run autoupdate.
./lib/autoconf/general.m4:2920: AC_TRY_LINK is expanded from...
lib/m4sugar/m4sh.m4:692: _AS_IF_ELSE is expanded from...
lib/m4sugar/m4sh.m4:699: AS_IF is expanded from...
./lib/autoconf/general.m4:2249: AC_CACHE_VAL is expanded from...
./lib/autoconf/general.m4:2270: AC_CACHE_CHECK is expanded from...
aclocal.m4:111: GLIB_LC_MESSAGES is expanded from...
aclocal.m4:426: GLIB_GNU_GETTEXT is expanded from...
>> aclocal.m4:526: AM_GLIB_GNU_GETTEXT is expanded from...
>> configure.ac:59: the top level
configure.ac:59: warning: The macro `AC_TRY_LINK' is obsolete.
configure.ac:59: You should run autoupdate.
...
Note that use of AM_GLIB_GNU_GETTEXT was deprecated in GLib 2.47.5
released 2016-01-18 [7]. Newer versions of GLib are included in the
oldest supported distributions:
Distro            Package containing   Version
                  glib-gettext.m4
Debian 10         libglib2.0-dev-bin   2.58.3
RHEL / CentOS 7   glib2-devel          2.56.1
Ubuntu 18.04 LTS  libglib2.0-dev-bin   2.56.4
SLES 12 SP5       glib2-devel          2.48.2
Therefore perform the migration described in the GNOME wiki documents
[0][1]. This involves:
1. Replacing the macros used in configure.ac;
2. Copying Makevars.template to po/Makevars and setting
PO_DEPENDS_ON_POT and DIST_DEPENDS_ON_UPDATE_PO to "no";
3. Replacing @INTLTOOL_*@ macros in Makefile.am with rules to use
gettext to directly translate the relevant GNOME application files;
4. Removing the (_) underscore translation marking prefixes from XML
tags in GNOME application files, as gettext understands which tags of
which files need translating.
For reference, these commits in the GNOME-System-Monitor [8], Reversi
[9] and Evince [10] projects make the same transition.
At this point "./autogen.sh && make" succeeds, at least on Ubuntu which
provides the instructions gettext needs to correctly translate .policy
files by default. These:
/usr/share/gettext/its/polkit.its
/usr/share/gettext/its/polkit.loc
are included in the base policykit-1 package.
[1] Migrating from Intltool to Gettext
https://wiki.gnome.org/MigratingFromIntltoolToGettext
[2] Using Modern Gettext
https://blogs.gnome.org/mclasen/2016/07/21/using-modern-gettext/
[3] On the killing of intltool
https://blogs.gnome.org/mcatanzaro/2016/07/27/on-the-killing-of-intltool/
[4] SUSE package search, SLES 12 SP5, gettext
https://scc.suse.com/packages?name=SUSE%20Linux%20Enterprise%20Server&version=12.5&arch=x86_64&query=gettext&module=
[5] SUSE Long Term Service Pack Support
https://links.imagerelay.com/cdn/3404/ql/f3a083e9bcd34c76addd096d7f60ec00/long_term_service_pack_support_flyer.pdf
[6] SUSE package search, SLES 15, gparted
https://scc.suse.com/packages?name=SUSE%20Linux%20Enterprise%20Server&version=15&arch=x86_64&query=gparted&module=
[7] Deprecate GLIB_GNU_GETTEXT macro, use upstream gettext instead
6b577196ee
[8] [GNOME-System-Monitor] Migrate from intltool
9185b9c713
[9] [Reversi] Gettext migration (bgo#793040)
d22f560ac8
[10] [Evince] build: Migrate from Intltool to Gettext
4fd6821324
Closes!107 - Migrate from intltool to gettext translation
Autoconf 2.71 on Fedora 36 and Ubuntu 22.04 LTS has started reporting
a number of warnings about configure.ac containing obsolete macros. One
of them is this:
$ ./autogen.sh
...
Processing ./configure.ac
configure.ac:17: warning: The macro `AC_PROG_LIBTOOL' is obsolete.
configure.ac:17: You should run autoupdate.
m4/libtool.m4:99: AC_PROG_LIBTOOL is expanded from...
configure.ac:17: the top level
...
AC_PROG_LIBTOOL is deprecated and the replacement is LT_INIT [1].
LT_INIT is available in all supported distributions, for example RHEL /
CentOS 7 has libtool 2.4.2 with LT_INIT defined in
/usr/share/aclocal/libtool.m4 serial 57. The last known distribution
without LT_INIT was RHEL / CentOS 5 [2].
Update accordingly.
[1] Libtool Manual, 5.4.1 The LT_INIT macro
https://www.gnu.org/software/libtool/manual/html_node/LT_005fINIT.html
"Macro: LT_INIT(options)
... AC_PROG_LIBTOOL and AM_PROG_LIBTOOL are deprecated names for
older versions of this macro; autoupdate will upgrade your
configure.ac files."
[2] 654cdc7335
Update AM_PROG_LIBTOOL to AC_PROG_LIBTOOL in configure.ac (#734718)
Closes!106 - Update AC_PROG_LIBTOOL to LT_INIT in configure.ac
Avoid having to manually maintain the list of excluded File System
tests in the GitLab Docker CI image. Scan the unit test source,
extracting the tests marked with
SKIP_IF_NOT_ROOT_FOR_REQUIRED_LOOPDEV_FOR_FS(), to automatically
construct the setting for the GTEST_FILTER environment variable.
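A rough sketch of the idea in Python (the macro argument format
FS_<NAME> and the lower-case test parameter names are assumptions, not
taken from the actual test source or CI scripts):
import re
# Collect the file system names marked as needing root and a loop
# device, and turn them into a GTEST_FILTER exclusion list such as
# "-*/btrfs:*/lvm2pv".
source = open('tests/test_SupportedFileSystems.cc').read()
pattern = r'SKIP_IF_NOT_ROOT_FOR_REQUIRED_LOOPDEV_FOR_FS\s*\(\s*FS_(\w+)\s*\)'
fs_names = sorted(set(re.findall(pattern, source)))
print('GTEST_FILTER=-' + ':'.join('*/' + name.lower() for name in fs_names))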
Closes!105 - Update used btrfs file system commands, new minimum is
btrfs-progs 4.5
Now that reading btrfs usage, UUID and label can be performed on a file
system image, remove the need for a loop device for the relevant unit
tests.
Closes!105 - Update used btrfs file system commands, new minimum is
btrfs-progs 4.5
GParted has been using 'btrfs filesystem show' to report file system
usage but that doesn't work on a file system image so doesn't work in a
GitLab CI test job, as discussed earlier in this patchset.
There is 'btrfs inspect-internal min-dev-size' but:
1. That only works on a mounted file system and GParted isn't going to
mount an unmounted file system just to query its used space, so by
extension it won't work on image files.
2. It reports a figure which is almost the same as the chunk usage of
the device within the btrfs file system. However if some files have
been deleted leaving chunks partially used, then 'btrfs filesystem
resize' will successfully shrink a btrfs smaller than the reported
minimum device size.
And there is also 'btrfs filesystem usage' but that also only works on a
mounted file system.
So instead use 'btrfs inspect-internal dump-super' to report some of the
figures previously obtained from 'btrfs filesystem show'. For example
for a single device btrfs in an image file:
$ truncate -s 256M /tmp/test.img
$ mkfs.btrfs /tmp/test.img
$ btrfs inspect-internal dump-super /tmp/test.img | egrep 'total_bytes|bytes_used|sectorsize|devid'
total_bytes 268435456
bytes_used 114688
sectorsize 4096
dev_item.total_bytes 268435456
dev_item.bytes_used 92274688
dev_item.devid 1
Comparing with results from 'btrfs filesystem show' for the same file
system, after adding a loop device to allow 'btrfs filesystem show' to
succeed:
$ su -
# losetup --find --show /tmp/test.img
# btrfs filesystem show --raw /dev/loop0
Label: none uuid: 32a1eb31-4691-41ae-9ede-c45d723655a3
Total devices 1 FS bytes used 114688
devid 1 size 268435456 used 92274688 path /dev/loop0
This does bring a forced change in the calculation which affects multi-
device btrfs file systems. 'btrfs filesystem show' provided chunk
allocation information per device ("used" figure for each "devid"). The
file system wide used bytes ("FS bytes used") was apportioned according
to the fraction of the chunk allocation each device contained. However
'btrfs inspect-internal dump-super' doesn't provide chunk allocation
information for all devices, only for the current device
("dev_item.bytes_used"). Instead the calculation now has to apportion
the file system wide used bytes ("bytes_used") according to the fraction
of the size of the current device ("dev_item.total_bytes") within the
total size ("total_bytes").
This can't make any difference to a single device btrfs file system as
both fractions will be 1. It only affects how the file system wide used
bytes is distributed among multiple devices.
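A Python sketch of the new calculation for one device (not the actual
GParted C++ implementation; it just parses the three figures shown
above from 'btrfs inspect-internal dump-super'):
import subprocess
def btrfs_device_usage(device):
    cmd = ['btrfs', 'inspect-internal', 'dump-super', device]
    output = subprocess.run(cmd, capture_output=True, text=True,
                            check=True).stdout
    values = {}
    for line in output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] in ('total_bytes', 'bytes_used',
                                              'dev_item.total_bytes'):
            values[fields[0]] = int(fields[1])
    # Apportion the file system wide used bytes by this device's share
    # of the total size of all devices.
    return (values['bytes_used'] * values['dev_item.total_bytes']
            // values['total_bytes'])
# With the example figures above:
#   178749440 * 2147483648 // 4294967296 == 89374720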
As an example to see the difference between calculation methods, create
a 2 GiB btrfs taking the defaults so getting duplicated metadata and
single data. Add another 2 GiB partition and populate with some files.
# mkfs.btrfs /dev/sdb1
btrfs-progs v4.15.1
See http://btrfs.wiki.kernel.org for more information.
Label: (null)
UUID: 68195e7e-c13f-4095-945f-675af4b1a451
Node size: 16384
Sector size: 4096
Filesystem size: 2.00GiB
Block group profiles:
Data: single 8.00MiB
Metadata: DUP 102.38MiB
System: DUP 8.00MiB
SSD detected: no
Incompat features: extref, skinny-metadata
Number of devices: 1
Devices:
ID SIZE PATH
1 2.00GiB /dev/sdb1
# mount /dev/sdb1 /mnt/1
# btrfs device add /dev/sdc1 /mnt/1
# cp -a /home/$USER/programming/c/gparted/ /mnt/1/
Usage figures using the old calculation apportioning file system wide
usage according to chunk allocation per device:
# btrfs filesystem show --raw /dev/sdb1
Label: none uuid: 68195e7e-c13f-4095-945f-675af4b1a451
Total devices 2 FS bytes used 178749440
devid 1 size 2147483648 used 239861760 path /dev/sdb1
devid 2 size 2147483648 used 436207616 path /dev/sdc1
sum_devid_used = 239861760 + 436207616
               = 676069376
sdb1 usage     = 178749440 * 239861760 / 676069376
               = 63418277
sdc1 usage     = 178749440 * 436207616 / 676069376
               = 115331163
Usage figures using the new calculation apportioning file system wide
usage according to device sizes:
# btrfs inspect-internal dump-super /dev/sdb1 | egrep 'total_bytes|^bytes_used'
total_bytes 4294967296
bytes_used 178749440
dev_item.total_bytes 2147483648
# btrfs inspect-internal dump-super /dev/sdc1 | egrep 'total_bytes|^bytes_used'
total_bytes 4294967296
bytes_used 178749440
dev_item.total_bytes 2147483648
sdb1 usage = 178749440 * 2147483648 / 4294967296
           = 89374720
sdc1 usage = 178749440 * 2147483648 / 4294967296
           = 89374720
Both calculation methods ignore that btrfs allocates chunks at the
volume manager level. So when fully compacted the last chunk for
metadata and data for each storage profile (RAID level) will be
partially filled and this is not accounted for.
Also for multi-device btrfs file systems the new calculation provides
different results. However, given that shrinking a device in a multi-
device btrfs file system can and does relocate extents to other devices
(redundancy requirements of chunks permitting), its minimum size is
virtually impossible to calculate and may not restrict how small the
btrfs device can be shrunk anyway. If it turns out that this new
calculation causes problems, it's been made a separate commit from the
previous commit for easier reverting.
Closes!105 - Update used btrfs file system commands, new minimum is
btrfs-progs 4.5