Update to the latest version of the AX_CXX_COMPILE_STDCXX_11 macro from
the Autoconf Archive.  Note that the macro now depends on
AX_CXX_COMPILE_STDCXX, so that macro has to be included too.
Usage of the members label_device_info1 and label_device_info2 was
removed by this commit from 2004:
8ae5ebb2e6
several (mostly) i18n related fixes/cleanups
If an EXT2/3/4 file system needs checking, then resize2fs will report an
error, rather than report the minimum file system size.
# mkfs.ext4 /dev/sdb11
# resize2fs -P /dev/sdb11
resize2fs 1.42.9 (28-Dec-2013)
Estimated minimum size of the filesystem: 17012
# debugfs -w -R "ssv state 0" /dev/sdb11
# resize2fs -P /dev/sdb11
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/sdb11' first.
# echo $?
1
This prevents GParted from reading the file system usage, and in turn
GParted won't allow the file system to be shrunk.  Re-add the previous
method of reading the free space from the dumpe2fs output as a fallback.
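
As a rough illustration of the fallback idea only (not GParted's actual
code; the helper name and regular expression here are assumptions), the
free block count could be pulled out of captured 'dumpe2fs -h' output
like this:

    // Sketch only: extract the free block count from captured 'dumpe2fs -h'
    // output, for use when 'resize2fs -P' refuses to report a minimum size.
    // The helper name and regular expression are assumptions for this example.
    #include <iostream>
    #include <regex>
    #include <string>

    long long free_blocks_from_dumpe2fs( const std::string & output )
    {
        std::smatch m;
        // dumpe2fs -h prints a line such as "Free blocks:              21838"
        if ( std::regex_search( output, m,
                                std::regex( "Free blocks:\\s*([0-9]+)" ) ) )
            return std::stoll( m[1].str() );
        return -1;  // Not found; the caller treats the usage as unknown.
    }

    int main()
    {
        std::string sample = "Block count:              26212\n"
                             "Free blocks:              21838\n";
        std::cout << free_blocks_from_dumpe2fs( sample ) << '\n';  // 21838
    }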
With this change, the worst case scenario is that GParted allows the
user to attempt to shrink an unclean EXT4 file system smaller than
resize2fs allows, and they get an error telling them so.  As part of
the failed shrink operation GParted will have checked the file system,
so on refresh GParted will get the correct minimum size next time.
This scenario only seems to apply to unclean EXT4 file systems because
resize2fs has a larger minimum size than the free blocks would suggest
because of extra space requirements when resizing EXT4 file systems [1].
[1] e2fsprogs 1.44.3, resize/resize2fs.c:calculate_minimum_resize_size()
https://git.kernel.org/pub/scm/fs/ext2/e2fsprogs.git/tree/resize/resize2fs.c?h=v1.44.3#n2946
/*
* For ext4 we need to allow for up to a flex_bg worth of
* inode tables of slack space so the resize operation can be
* guaranteed to finish.
*/
/*
* We need to reserve a few extra blocks if extents are
* enabled, in case we need to grow the extent tree. The more
* we shrink the file system, the more space we need.
*
* The absolute worst case is every single data block is in
* the part of the file system that needs to be evacuated,
* with each data block needs to be in its own extent, and
* with each inode needing at least one extent block.
*/
Closes #8 - Shrinking an EXT4 partition does not respect resize2fs
limits
A user reported that GParted failed to shrink an EXT4 file system
because GParted tried to shrink it smaller than resize2fs's reported
minimum size.
Operation details were:
Shrink /dev/sdc1 from 931.51 GiB to 605.00 GiB (ERROR)
calibrate /dev/sdc1 (SUCCESS)
path: /dev/sdc1 (partition)
start: 63
end: 1953520064
size: 1953520002 (931.51 GiB)
check file system on /dev/sdc1 for errors and (if poss...(SUCCESS)
e2fsck -f -y -v -C 0 '/dev/sdc1' (SUCCESS)
...
158165624 blocks are used (64.77% of 244190000)
...
shrink file system (ERROR)
resize2fs -p '/dev/sdc1' 634389176K (ERROR)
resize2fs 1.44.2 (14-May-2018)
resize2fs: New size smaller than minimum (171882113)
The GParted figures:
* Partition size = 1953520064 (512b sectors) = 976760032 KiB
* FS size = 244190000 (4K blocks) = 976760000 KiB
* Used FS size = 158165624 (4K blocks) = 632662496 KiB
* Requested FS size = 634389176 KiB
The resize2fs figure:
* Minimum FS size = 171882113 (4K blocks) = 687528452 KiB
GParted uses the number of free blocks in the file system to determine
the minimum size it can shrink a file system to. However resize2fs uses
its own internally calculated minimum size and won't shrink a file
system below that size, as seen in the above details.  Resize2fs does
have a force flag (-f), which overrides some safety checks that are
normally enforced, to allow it to try to shrink a file system smaller
than its calculated minimum.  GParted currently doesn't use the force
flag and it seems unwise for it to start to do so.
So for unmounted EXT2/3/4 file systems, change GParted to use
'resize2fs -P' to get the minimum file system size, rather than using
the number of free blocks directly from the super block, as reported by
'dumpe2fs -h'.
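
For illustration only (a sketch, not GParted's implementation; the
function name is an assumption), the minimum size in FS blocks could be
parsed from captured 'resize2fs -P' output like this:

    // Sketch only: parse the estimated minimum file system size, in FS
    // blocks, from captured 'resize2fs -P' output.  The function name is an
    // assumption for this example.
    #include <iostream>
    #include <regex>
    #include <string>

    long long min_fs_blocks_from_resize2fs( const std::string & output )
    {
        std::smatch m;
        // resize2fs -P prints: "Estimated minimum size of the filesystem: 17012"
        if ( std::regex_search( output, m,
                 std::regex( "Estimated minimum size of the filesystem:\\s*([0-9]+)" ) ) )
            return std::stoll( m[1].str() );
        return -1;  // resize2fs reported an error instead,
                    // e.g. "Please run 'e2fsck -f ...' first."
    }

    int main()
    {
        std::string sample = "resize2fs 1.42.9 (28-Dec-2013)\n"
                             "Estimated minimum size of the filesystem: 17012\n";
        std::cout << min_fs_blocks_from_resize2fs( sample ) << '\n';  // 17012
    }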
Mounted file systems still use statvfs() to provide file system usage.
As mounted EXT2/3/4 file systems can't be shrunk, the fact that
statvfs() produces different figures, possibly smaller than the minimum,
from those reported by 'resize2fs -P' doesn't matter.
Closes #8 - Shrinking an EXT4 partition does not respect resize2fs
limits
No functional change. Just work in FS block sized units until as late
as possible in ext2::set_used_sectors(), before converting to device
sector size units. This is to make the following change simpler and
easier to understand.
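
As a trivial illustration (variable names are assumptions, not
GParted's), the conversion left to the very end is simply:

    // Illustrative only: keep figures in FS block units, converting to device
    // sector units as the final step.  Variable names are assumptions.
    #include <cstdint>
    #include <iostream>

    int main()
    {
        const int64_t fs_block_size      = 4096;   // bytes per FS block
        const int64_t device_sector_size = 512;    // bytes per device sector
        const int64_t fs_free_blocks     = 21838;  // figure worked on in FS blocks

        int64_t fs_free_sectors = fs_free_blocks * fs_block_size / device_sector_size;
        std::cout << fs_free_sectors << '\n';      // 174704
    }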
Closes #8 - Shrinking an EXT4 partition does not respect resize2fs
limits
The GitLab Continuous Integration test stage jobs can fail like this:
$ make check
...
Making check in help
make[1]: Entering directory `/builds/mfleetwo/gparted/help'
...
xmllint --noout --xinclude --dtdvalid 'http://scrollkeeper.sourceforge.net/dtds/scrollkeeper-omf-1.0/scrollkeeper-omf.dtd' gparted-C.omf
warning: failed to load external entity "http://scrollkeeper.sourceforge.net/dtds/scrollkeeper-omf-1.0/scrollkeeper-omf.dtd"
Could not parse DTD http://scrollkeeper.sourceforge.net/dtds/scrollkeeper-omf-1.0/scrollkeeper-omf.dtd
xmllint --noout --xinclude --dtdvalid 'http://scrollkeeper.sourceforge.net/dtds/scrollkeeper-omf-1.0/scrollkeeper-omf.dtd' gparted-cs.omf
...
make[1]: *** [check-doc-omf] Error 2
make[1]: Leaving directory `/builds/mfleetwo/gparted/help'
make: *** [check-recursive] Error 1
ERROR: Job failed: exit code 1
It fails when the scrollkeeper.sourceforge.net site reports that
SourceForge is undergoing maintenance or is temporarily unavailable. I
have seen this occur on 3 separate occasions in the last 4 weeks since I
started experimenting with GitLab CI, which is rather too often.
The xmllint check comes from the GNOME 2 gnome-doc-utils.make rules used
to build
and validate GNOME 2 documentation.
Fragment of useful Debian bug report 730688:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=730688
--disable-scrollkeeper requieres scrollkeeper installed
"You can reproduce the problem in mdbtoools version 0.7.1-1 with no
network, and rarian-compat not installed.
When the network is available, buildd downloads the DTD for checks.
When there is no network, gnome-doc-utils fails.
"
Fix by:
(1) adding the rarian-compat package to the CI Docker images, which
provides a local copy of the scrollkeeper-omf.dtd file;
(2) adding the xmllint --nonet option to prevent fetching of DTDs
remotely.
With reference to earlier commit:
0eb9f1fcfb
Reduce dependency on scrollkeeper (#743318)
That commit allowed GParted to be installed on GNOME 3 desktops without
requiring the rarian-compat package to be installed.  This commit adds
rarian-compat to the CI images so that 'make check' can succeed without
accessing the Internet.  It is just part of the intricate path needed to
continue building and testing a GNOME 2 application in a world of
GNOME 3 desktops whose backward compatibility is beginning to be
reduced.
Closes #9 - CI test jobs occasionally fail with xmllint not loading
external entity http://scrollkeeper.sourceforge.net/dtds/
scrollkeeper-omf-1.0/scrollkeeper-omf.dtd
Unfortunately parallelising 'make distcheck' causes it to fail like
this:
$ nproc=`grep -c '^processor' /proc/cpuinfo` || nproc=1
$ echo nproc=$nproc
nproc=8
...
$ make -j $nproc distcheck
...
make[1]: Entering directory '/builds/mfleetwo/gparted/gparted-0.31.0-git/_build/sub'
ERROR: files left after uninstall:
./share/icons/hicolor/icon-theme.cache
Makefile:896: recipe for target 'distuninstallcheck' failed
make[1]: Leaving directory '/builds/mfleetwo/gparted/gparted-0.31.0-git/_build/sub'
make[1]: *** [distuninstallcheck] Error 1
make: *** [distcheck] Error 1
Makefile:840: recipe for target 'distcheck' failed
ERROR: Job failed: exit code 1
Therefore go back to serial 'make distcheck'.
Closes !6 - Reduce the time taken by the GitLab CI jobs
Reduce the time taken by the GitLab Continuous Integration jobs by
parallelising make to use all available CPUs in the Docker CI image
when it is building the GParted code.  This includes 'make distcheck'
because that also does a second build of the GParted code in a separate
subdirectory.
Closes !6 - Reduce the time taken by the GitLab CI jobs
The CUSTOM_TEXT enumeration is exclusively used as the type of one of
the parameters to the functions get_generic_text() and get_custom_text()
in the FileSystem class and derived classes. The definition of the
enumeration therefore belongs in FileSystem.h. Move it.
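
For context, a rough outline of the relationship (enumerator names and
signatures here are illustrative, not copied from GParted):

    // Rough outline only; enumerator names and signatures are illustrative.
    // The point is that CUSTOM_TEXT is consumed solely by FileSystem methods,
    // so its definition belongs alongside them in FileSystem.h.
    #include <iostream>
    #include <string>

    enum CUSTOM_TEXT
    {
        CTEXT_ACTIVATE_FILESYSTEM,     // illustrative enumerator
        CTEXT_DEACTIVATE_FILESYSTEM    // illustrative enumerator
    };

    class FileSystem
    {
    public:
        virtual ~FileSystem() {}
        // Generic wording shared by all file system types.
        static std::string get_generic_text( CUSTOM_TEXT ttype )
        {
            return ttype == CTEXT_ACTIVATE_FILESYSTEM ? "Mount" : "Unmount";
        }
        // Derived classes override this to supply file system specific wording.
        virtual std::string get_custom_text( CUSTOM_TEXT ttype ) const
        {
            return get_generic_text( ttype );
        }
    };

    int main()
    {
        FileSystem fs;
        std::cout << fs.get_custom_text( CTEXT_DEACTIVATE_FILESYSTEM ) << '\n';  // Unmount
    }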
This is functionally identical, but just follows the established coding
pattern [1] of specifying the FSType when constructing struct FS, rather
than setting it afterwards.  luks.cc was added after the aforementioned
commit, but was being developed in parallel so was created [2] following
the old coding pattern.
[1] 1a4cefb960
Initialise all struct FS members
[2] 070d734e57
Add busy detection of LUKS mapping (#760080)
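
A simplified before/after comparison of the two patterns (the types and
member names here are illustrative, not GParted's exact code):

    // Illustrative comparison only; types and member names are simplified.
    #include <iostream>

    enum FSType { FS_UNKNOWN, FS_LUKS };

    struct FS
    {
        FSType fstype;
        FS( FSType fstype_ = FS_UNKNOWN ) : fstype( fstype_ ) {}
    };

    int main()
    {
        // Old pattern, as luks.cc originally followed: default construct,
        // then set the type afterwards.
        FS fs_old;
        fs_old.fstype = FS_LUKS;

        // Established pattern [1]: specify the FSType when constructing.
        FS fs_new( FS_LUKS );

        std::cout << ( fs_old.fstype == fs_new.fstype ) << '\n';  // 1 (functionally identical)
    }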
Prepare the GitLab Continuous Integration configuration for also
building and testing GParted on an Ubuntu image.  The definitions of the
image and before_script, which so far specify the CentOS Docker image
and how to install the required RPM packages, need to move from being
top level nodes to being defined per job, namely within the jobs
'centos_build' and 'centos_test'.
To avoid duplicating various nodes within multiple jobs, YAML anchors
(&LABEL) and references (*LABEL) are used.  They are defined in ignored
jobs, i.e. jobs whose names start with a dot (.).
Closes !4 - Add GitLab CI jobs to build and test GParted
Ready for adding additional Continuous Integration jobs using different
distribution Docker images. Rename thus:
build -> centos_build
test -> centos_test
Closes !4 - Add GitLab CI jobs to build and test GParted
Fragment of the tests/test-suite.log from the Docker CI image showing
details of the unit test failure:
Running main() from gtest_main.cc
[==========] Running 26 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 26 tests from BlockSpecialTest
...
[ RUN ] BlockSpecialTest.NamedBlockSpecialObjectBySymlinkMatches
test_BlockSpecial.cc:137: Failure
Failed
get_link_name(): Failed to open directory '/dev/disk/by-id'
test_BlockSpecial.cc:168: Failure
Failed
follow_link_name(): Failed to resolve symbolic link ''
test_BlockSpecial.cc:255: Failure
Expected: (lnk.m_name.c_str()) != (bs.m_name.c_str()), actual: "" vs ""
[ FAILED ] BlockSpecialTest.NamedBlockSpecialObjectBySymlinkMatches (0 ms)
...
[==========] 26 tests from 1 test case ran. (1 ms total)
[ PASSED ] 25 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] BlockSpecialTest.NamedBlockSpecialObjectBySymlinkMatches
1 FAILED TEST
So the code is trying to find a symbolic link to a block device to use
in the test. It is trying to read the directory /dev/disk/by-id to find
a symbolic link, but the directory doesn't exist in the Docker CI image.
The directory used was recently changed [1] to one which exists on all
distributions, but Docker images don't even have the /dev/disk
directory.  Exclude just this specific test.
[1] 7fe4148074
Use /dev/disk/by-id/ to get device symlink in test_BlockSpecial
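
For reference, googletest provides standard ways to exclude a single
test, either by prefixing the test name with DISABLED_ or by excluding
it at run time with --gtest_filter.  This is a general illustration, not
necessarily the exact mechanism used here:

    // General googletest illustration, not necessarily the exact mechanism
    // used: prefixing a test name with DISABLED_ skips it, and
    // --gtest_filter=-BlockSpecialTest.NamedBlockSpecialObjectBySymlinkMatches
    // would exclude it at run time instead.
    #include <gtest/gtest.h>

    TEST( BlockSpecialTest, DISABLED_NamedBlockSpecialObjectBySymlinkMatches )
    {
        // Body omitted; it needs /dev/disk/by-id which Docker CI images lack.
    }

    int main( int argc, char ** argv )
    {
        ::testing::InitGoogleTest( &argc, argv );
        return RUN_ALL_TESTS();
    }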
Closes !4 - Add GitLab CI jobs to build and test GParted
Recursively list all the files below /dev as part of the 'test' job,
because certain block device names are needed by the failing
test_BlockSpecial unit test.
The artifact captures all the files from the directory in which the CI
script runs to build and test GParted. It creates a ZIP file which can
be downloaded after the job finishes, whether the job succeeds or fails.
This is to capture logs from the failure of the test_BlockSpecial unit
test.
Closes !4 - Add GitLab CI jobs to build and test GParted
Add a GitLab Continuous Integration job named 'test' which runs the
GParted unit tests and distcheck.  Note that the job starts from a fresh
official CentOS Docker image so it also has to rebuild GParted.
So far this job fails on unit test test_BlockSpecial. Fragment of the
CI job log:
make check-TESTS
make[2]: Entering directory `/builds/mfleetwo/gparted/tests'
make[3]: Entering directory `/builds/mfleetwo/gparted/tests'
PASS: test_dummy
FAIL: test_BlockSpecial
PASS: test_PasswordRAMStore
PASS: test_PipeCapture
make[4]: Entering directory `/builds/mfleetwo/gparted/tests'
make[4]: Nothing to be done for `all'.
make[4]: Leaving directory `/builds/mfleetwo/gparted/tests'
============================================================================
Testsuite summary for gparted 0.31.0-git
============================================================================
# TOTAL: 4
# PASS: 3
# SKIP: 0
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
============================================================================
See tests/test-suite.log
Please report to https://bugzilla.gnome.org/enter_bug.cgi?product=gparted
============================================================================
Closes !4 - Add GitLab CI jobs to build and test GParted
Initial GitLab Continuous Integration configuration with a single job
named 'build' which just confirms GParted can be built and installed on
the latest official CentOS Docker image.
Closes !4 - Add GitLab CI jobs to build and test GParted
Back in 2009 the devicekit-disks package was renamed to udisks [1].  All
supported distributions use udisks (or more recently udisks2). None
have the old devkit-disks command. Therefore remove it from the GParted
shell wrapper.
[1] https://www.freedesktop.org/wiki/Software/DeviceKit-disks/
"Note
On December 1st 2009, DeviceKit-disks was renamed to udisks. This
release is expected to appear in distributions released in the first
half of 2010."
Shrinking an LVM2 Physical Volume on CentOS 7 with the latest
lvm2 2.02.177 fails like this:
Shrink /dev/sda9 from 1.00 GiB to 768.00 MiB
* calibrate /dev/sda9
* check file system on /dev/sda9 for errors and (if possib...(SUCCESS)
* shrink file system (ERROR)
* lvm pvresize -v --setphysicalvolumesize 786432K '/dev/...(ERROR)
0 physical volume(s) resized / 1 physical volume(s) not resized
Wiping internal VG cache
Wiping cache of LVM-capable devices
/dev/sda9: Requested size 712.00 MiB is less than real size 1.00 GiB. Proceed? [y/n]:[n]
Physical Volume /dev/sda9 not resized.
This upstream change to lvm2 [1] makes pvresize prompt for confirmation
whenever the --setphysicalvolumesize option is used. (The change was
included in lvm2 2.02.171 and later, which is used in recent
distributions. The reporter found the issue on Ubuntu 18.04 LTS and I
reproduced the issue on RHEL/CentOS 7.5.)  The set size option has to be
used when shrinking the PV before shrinking the partition, therefore fix
this issue by adding the lvm common option --yes when using the set size
option.
[1] https://sourceware.org/git/?p=lvm2.git;a=commitdiff;h=cbc69f8c693edf0d1307c9447e2e66d07a04bfe9
pvresize: Prompt when non-default size supplied.
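
A simplified sketch of how the assembled command changes with the fix
(the surrounding variable names are assumptions, not GParted's code):

    // Sketch only: how the assembled pvresize command changes with the fix.
    // The surrounding variable names are assumptions for this example.
    #include <iostream>
    #include <string>

    int main()
    {
        std::string device  = "/dev/sda9";
        long long   new_kib = 786432;   // requested PV size in KiB

        // Before: pvresize prompts for confirmation whenever
        // --setphysicalvolumesize is used (lvm2 2.02.171 and later).
        std::string before = "lvm pvresize -v --setphysicalvolumesize " +
                             std::to_string( new_kib ) + "K '" + device + "'";

        // After: the lvm common option --yes answers that prompt.
        std::string after  = "lvm pvresize --yes -v --setphysicalvolumesize " +
                             std::to_string( new_kib ) + "K '" + device + "'";

        std::cout << before << '\n' << after << '\n';
    }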
Closes #1 - Can't shrink LVM partition due to pvresize prompt
After a failed LUKS unlock attempt the password entry dialog shows the
error "Failed to open LUKS encryption". Improve the user experience by
clearing that error message at the start of the next attempt, to avoid
contradicting the main window's status of "Opening encryption on
$PARTITION" whilst performing the next unlock attempt.
Bug 795617 - Implement opening and closing of LUKS mappings
When the wrong LUKS password is entered and the [Unlock] button clicked,
the wrong password is left in the entry box and focus remains on the
[Unlock] button.  Improve the user experience by selecting
(highlighting) the whole of the wrong password ready for deletion or
retyping and ensuring that the entry box always has focus.
Just for completeness also programmatically make the password entry box
have focus when the dialog box is created and first displayed, even
though it gets this by default.
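
A minimal gtkmm sketch of the behaviour (the class and member names are
assumptions, not GParted's code):

    // Sketch only (gtkmm 3): after a failed unlock attempt select the whole
    // wrong password and return focus to the entry box; also focus the entry
    // when the dialog is first created.  Names are assumptions for this example.
    #include <gtkmm/box.h>
    #include <gtkmm/dialog.h>
    #include <gtkmm/entry.h>

    class PasswordDialogSketch : public Gtk::Dialog
    {
    public:
        PasswordDialogSketch()
        {
            m_password_entry.set_visibility( false );   // Don't echo the password
            get_content_area()->pack_start( m_password_entry );
            // Programmatically give the entry box focus when first displayed,
            // even though it gets this by default.
            m_password_entry.grab_focus();
        }

        // Called when an unlock attempt fails.
        void on_unlock_failed()
        {
            // Highlight the whole wrong password ready for deletion or retyping ...
            m_password_entry.select_region( 0, -1 );
            // ... and ensure the entry box, not the [Unlock] button, has focus.
            m_password_entry.grab_focus();
        }

    private:
        Gtk::Entry m_password_entry;
    };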
Bug 795617 - Implement opening and closing of LUKS mappings
We previously migrated our web site from http://gparted.org to
https://gparted.org under:
bug 786707 - gparted.org does not use HTTPS
and updated URLs in the GParted Manual to match in commit:
a8172ecb04
Convert Manual links to HTTPS where possible and update version
Now update the URLs displayed in the GParted application too.
Bug 796411 - Enhancements request - URL links