Commit Graph

3113 Commits

Author SHA1 Message Date
Pali Rohár 77ef2be089 Fix setting empty label when creating FAT16/32 file systems (#784564)
A FAT file system label in the partition boot sector should be set to
"NO NAME    " when the label is empty [1][2].  mkdosfs/mkfs.fat always
sets this to the label specified on the command line, even when the
label is blank [3].  Fix by not specifying the label when it is blank.

[1] The FAT File System, BIOS Parameter Block and Extended BIOS
    Parameter Block
    https://social.technet.microsoft.com/wiki/contents/articles/6771.the-fat-file-system.aspx#BPB_and_EBPB
[2] FAT16 File System, Volume Label
    http://www.maverick-os.dk/FileSystemFormats/FAT16_FileSystem.html#VolumeLabel
[3] mkfs.fat -n "" generates file system with invalid label
    https://github.com/dosfstools/dosfstools/issues/54

Bug 784564 - GParted calls mkfs.fat incorrectly when user does not specify label
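
In sketch form the fix amounts to omitting -n when the label is blank, so
mkfs.fat itself leaves the boot sector label as "NO NAME    " (hypothetical
helper for illustration only, not GParted's actual fat16.cc code):

    // Hypothetical sketch only; names and structure are illustrative.
    #include <string>
    #include <vector>

    std::vector<std::string> build_mkfs_fat_args( const std::string & device,
                                                  const std::string & label,
                                                  bool fat32 )
    {
        std::vector<std::string> args;
        args.push_back( "mkfs.fat" );
        args.push_back( fat32 ? "-F32" : "-F16" );
        if ( ! label.empty() )
        {
            // Only pass -n when a label was actually requested.
            args.push_back( "-n" );
            args.push_back( label );
        }
        args.push_back( device );
        return args;
    }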
2017-07-07 13:23:27 +01:00
Mike Fleetwood 59c8a7d6df Detect btrfstune change UUID capability again (#784467)
Btrfstune changed its command line help output from being written on
stderr to being written on stdout in the following commit first included
in btrfs-progs 4.7.2 released 05-Sep-2016.  This breaks GParted
detection of the change UUID capability.  Found on the very latest
distributions: up to date Arch Linux and Fedora 26 Beta.  Fix this.

    https://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git/commit/?id=57d1cbd867311d99a2ae5e3cdcffd04
    btrfs-progs: print help test to stdout

Bug 784467 - No longer detecting btrfs change UUID capability
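
A minimal sketch of the idea (hypothetical helper, not GParted's btrfs.cc;
the exact marker string searched for is an assumption): search the help text
captured from both streams, so detection keeps working whichever stream
btrfstune prints its usage on.

    // Hypothetical sketch only.
    #include <string>

    bool btrfstune_new_uuid_capable( const std::string & help_stdout,
                                     const std::string & help_stderr )
    {
        // btrfs-progs >= 4.7.2 prints the usage text on stdout; older
        // versions printed it on stderr, so search both.
        const std::string help = help_stdout + help_stderr;
        return help.find( "-u" ) != std::string::npos;
    }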
2017-07-03 09:21:50 -06:00
Mike Fleetwood f31808989a Silence subdir-objects warning from automake 1.14 (#781978)
Configuring GParted on a machine with automake >= 1.14 produces this
warning:

    $ ./autogen.sh
    ...
    lib/gtest/Makefile.am:58: warning: source file 'src/gtest-all.cc' is in a subdirectory,
    lib/gtest/Makefile.am:58: but option 'subdir-objects' is disabled
    automake: warning: possible forward-incompatibility.
    automake: At least a source file is in a subdirectory, but the 'subdir-objects'
    automake: automake option hasn't been enabled.  For now, the corresponding output
    automake: object file(s) will be placed in the top-level directory.  However,
    automake: this behaviour will change in future Automake versions: they will
    automake: unconditionally cause object files to be placed in the same subdirectory
    automake: of the corresponding sources.
    automake: You are advised to start using 'subdir-objects' option throughout your
    automake: project, to avoid future incompatibilities.
    lib/gtest/Makefile.am:60: warning: source file 'src/gtest_main.cc' is in a subdirectory,
    lib/gtest/Makefile.am:60: but option 'subdir-objects' is disabled
    lib/gtest/Makefile.am: installing './depcomp'
    tests/Makefile.am:22: warning: source file '../src/BlockSpecial.cc' is in a subdirectory,
    tests/Makefile.am:22: but option 'subdir-objects' is disabled
    tests/Makefile.am:23: warning: source file '../src/PipeCapture.cc' is in a subdirectory,
    tests/Makefile.am:23: but option 'subdir-objects' is disabled
    ...

From the automake 1.14 NEWS file:
    http://git.savannah.gnu.org/cgit/automake.git/tree/NEWS?h=v1.14#n125

    "The next major Automake version (2.0) will unconditionally activate
    the 'subdir-objects' option.  In order to smooth out the transition,
    we now give a warning (in the category 'unsupported') whenever a
    source file is present in a subdirectory but the 'subdir-object' is
    not enabled.  For example, the following usage will trigger such a
    warning:

        bin_PROGRAMS = sub/foo
        sub_foo_SOURCES = sub/main.c sub/bar.c
    "

Set the subdir-objects automake option, silencing the warning.

Bug 781978 - Add Google Test C++ test framework
2017-06-22 09:27:01 -06:00
Mike Fleetwood 87ccf060ce Ensure internal detection requires second magic when needed (#783997)
Internal file system detection of LVM2 PVs is broken because it reports
finding an LVM2 PV even when only the first magic is found.  Prepare a
partition like this:
    # lvm pvcreate /dev/sdb1
    # hexdump -C /dev/sdb1 > hexdump-1.txt

Clear the second magic:
    # python
    f = open("/dev/sdb1","w")
    f.seek(0x218)
    f.write("\x00"*4)
    f.close()
    quit()

    # hexdump -C /dev/sdb1 > hexdump-2.txt
    # diff -u hexdump-[12].txt
     00000200  4c 41 42 45 4c 4f 4e 45  01 00 00 00 00 00 00 00  |LABELONE........|
    -00000210  58 69 83 e1 20 00 00 00  4c 56 4d 32 20 30 30 31  |Xi.. ...LVM2 001|
    +00000210  58 69 83 e1 20 00 00 00  00 00 00 00 20 30 30 31  |Xi.. ....... 001|
                                        ^^ ^^ ^^ ^^                       ^^^^
     00000220  52 4b 31 73 50 77 49 66  6a 72 55 4c 6a 4d 30 58  |RK1sPwIfjrULjM0X|

GParted still detects this as an LVM2 PV even though lvm and blkid do
not.

Correct the logic error and only perform memcmp() with the second magic
when the second signature is non-NULL.

Bug 783997 - GParted detecting LVM2 PV regardless whether second magic
             matches or not
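
A sketch of the corrected check (hypothetical types, not the actual GParted
detection code): only compare the second magic when a second signature is
defined, and require both to match when it is.

    // Hypothetical sketch only.
    #include <cstring>

    struct MagicSignature
    {
        const char * sig1;  long offset1;  size_t len1;
        const char * sig2;  long offset2;  size_t len2;  // sig2 may be NULL
    };

    bool signature_matches( const char * buf, const MagicSignature & s )
    {
        bool magic1_found = std::memcmp( buf + s.offset1, s.sig1, s.len1 ) == 0;
        // Only perform the second memcmp() when a second signature exists;
        // when it does exist it must also match.
        bool magic2_found = ( s.sig2 == NULL ) ||
                            std::memcmp( buf + s.offset2, s.sig2, s.len2 ) == 0;
        return magic1_found && magic2_found;
    }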
2017-06-20 10:03:31 -06:00
Emin Tufan Çetin fb0cc58479 Update Turkish translation 2017-06-07 12:18:02 +00:00
Mike Fleetwood f8783d69bc Add PipeCapture line discipline tests
Test that backspace and carriage return are processed correctly and
Ctrl-A and Ctrl-B are skipped by PipeCapture.
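
One plausible single-line model of the behaviour being exercised (hypothetical
sketch, not the real PipeCapture code): backspace steps the cursor back one
place, carriage return rewinds it to column 0 so later bytes overwrite the
line, and other control bytes such as Ctrl-A (0x01) and Ctrl-B (0x02) are
dropped.

    // Hypothetical single-line sketch; newline handling is omitted.
    #include <cassert>
    #include <string>

    std::string apply_line_discipline( const std::string & in )
    {
        std::string line;
        std::string::size_type cursor = 0;        // current write position
        for ( std::string::size_type i = 0; i < in.size(); i++ )
        {
            unsigned char c = in[i];
            if ( c == '\b' )
            {
                if ( cursor > 0 )
                    cursor--;                     // backspace: move left one place
            }
            else if ( c == '\r' )
            {
                cursor = 0;                       // carriage return: back to column 0
            }
            else if ( c < ' ' )
            {
                // Skip other control characters (Ctrl-A, Ctrl-B, ...).
            }
            else if ( cursor < line.size() )
            {
                line[cursor++] = c;               // overwrite earlier output
            }
            else
            {
                line += c;
                cursor++;
            }
        }
        return line;
    }

    int main()
    {
        assert( apply_line_discipline( "12.5%\r25.0%" ) == "25.0%" );
        assert( apply_line_discipline( "abd\bc" ) == "abc" );
        assert( apply_line_discipline( "a\001b\002c" ) == "abc" );
        return 0;
    }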
2017-06-03 09:38:55 -06:00
Mike Fleetwood 25780c611b Further improve speed of PipeCapture for non-watched output (#777973)
For large output a lot of time is used copying capturebuf to callerbuf
to provide a Glib::ustring copy of the buffer for the update callback.
However update callbacks are only used when commands are run to apply
operations by FileSystem::execute_command() and their output is
incrementally displayed in the UI.  Whereas update callbacks are never
used when commands are used to query information via
Utils::execute_command().

Stop performing interim copying of capturebuf to callerbuf when there
are no update callbacks registered as it is unnecessary.

Time to read portions of the recorded fsck.fat output via
fat16::set_used_sectors(), where intermediate copies aren't required:

                     1 MiB     10 MiB   122 MiB
    old code :   0.074 sec   1.41 sec   210 sec [3:30]
    new code :   0.063 sec   0.56 sec     6.57 sec

Bug 777973 - Segmentation fault on bad disk
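
The idea in sketch form (hypothetical member names, not the real PipeCapture
class): only build the Glib::ustring copy for the caller when an update
callback was registered, as Utils::execute_command() never registers one.

    // Hypothetical sketch only.
    #include <string>
    #include <glibmm/ustring.h>

    struct CaptureSketch
    {
        std::string     capturebuf;            // raw bytes captured so far
        Glib::ustring & callerbuf;             // caller visible UTF-8 copy
        bool            update_cb_registered;  // only true when progress is displayed

        CaptureSketch( Glib::ustring & buf )
        : callerbuf( buf ), update_cb_registered( false )  {}

        void after_read_chunk()
        {
            // Skip the O(n) copy on every read when nobody is watching.
            if ( update_cb_registered )
                callerbuf = capturebuf;
        }
    };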
2017-06-02 11:47:35 -06:00
Mike Fleetwood 6f49f3049d Increase PipeCapture maximum read size to 64K (#777973)
For large output PipeCapture spends almost all of its time copying the
capturebuf to callerbuf to provide a consistent view for any registered
update callbacks.  This overhead is dependent on the size of the ever
growing captured output and the number of times OnReadable() is called.
Therefore increase the maximum read size, greatly reducing the number of
OnReadable() calls and with it this overhead.

Time taken to read varying amounts of fsck.fat output with various
read buffer sizes:

                 1 MiB   10 MiB            122 MiB
    512b  :   0.60 sec   65 sec [1:05]   17262 sec [4:47:42]
    4096b :   0.19 sec   13 sec           2157 sec [  35:57]
    64K   :   0.07 sec    1.4 sec          210 sec [   3:30]

Note that this is only increasing the maximum size that can be read from
the output of the external command.  If the command produces its output
slowly, such as with progress reporting commands like mkfs.ext4, then
only the bytes available for the next progress increment are read.
However if the command produces its output quickly, such as when testing
this bug using a modified fsck.fat concatenating the 122 MiB of
pre-recorded output, then full buffer reads are performed.

To ensure that a single call to OnReadable() couldn't block the UI too
long, the time taken for OnReadable() to process a full buffer of
various sizes was recorded as:

    512b  :   0.031 milliseconds
    4096b :   0.188 milliseconds
    64K   :   3.576 milliseconds

Adding this amount of processing time in the UI under normal
circumstances is not a problem.

As the captured output increases, the time taken by OnReadable() becomes
dominated by the time taken to copy the ever increasing capture buffer
to handle its expansion and to copy it to the caller buffer for the
update callback.  At the end of the 122 MiB captured fsck.fat output
OnReadable() takes 350 milliseconds per call.  This is not a problem
because this is an extreme case in which GParted is already hung and
increasing the buffer size is reducing the overall hang time from over
4 hours to a few minutes.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood 8dbbb47ce2 Workaround g_utf8_get_char_validate() bug with embedded NUL bytes (#777973)
If PipeCapture reads a NUL byte in the middle of what is expected to be
a multi-byte UTF-8 character then PipeCapture either returns only the
characters captured up to the previous update or loops forever,
depending on whether the end of the stream is encountered before the
read buffer is full.  This is equivalent to whether the NUL byte occurs
within the last 512 bytes of the output or not.

This is caused by a bug in g_utf8_get_char_validated() reporting that a
partial UTF-8 character has been found when the NUL byte is encountered
in the middle of a multi-byte character even though more bytes are
available in the length specified buffer.  g_utf8_get_char_validated()
always stops at the NUL byte, assuming it is working with a NUL
terminated string.

Work around this by checking for g_utf8_get_char_validated() claiming a
partial UTF-8 character has been found when in fact there are at least
enough bytes in the read buffer to instead determine that it is really
an invalid UTF-8 character.

Reference:
    Bug 780095 - g_utf8_get_char_validated() stopping at nul byte even
                 for length specified buffers
    https://bugzilla.gnome.org/show_bug.cgi?id=780095

Bug 777973 - Segmentation fault on bad disk
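
The check can be sketched like this (hypothetical helper; the exact length
test used in PipeCapture may differ): when g_utf8_get_char_validated()
reports a partial character but the buffer still holds enough bytes for any
UTF-8 sequence, treat the result as an invalid character instead.

    // Hypothetical sketch only.
    #include <glib.h>

    gunichar get_char_checked( const char * p, gssize remaining )
    {
        gunichar uc = g_utf8_get_char_validated( p, remaining );
        if ( uc == (gunichar)-2 && remaining >= 6 )
        {
            // 6 bytes is the longest UTF-8 sequence glib considers, so a
            // "partial character" result here really means an invalid
            // sequence (e.g. an embedded NUL byte).
            uc = (gunichar)-1;
        }
        return uc;
    }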
2017-06-02 11:47:35 -06:00
Mike Fleetwood 6b82616d2e Add test that PipeCapture can read NUL byte in middle of UTF-8 char (#777973)
Test that binary data that happens to be the start of a multi-byte UTF-8
character but then contains a NUL byte is successfully read.  The test
currently detects failure thus:

    $ ./test_PipeCapture
    ...
    [ RUN      ] PipeCaptureTest.ReadNULByteInMiddleOfMultiByteUTF8Character
    test_PipeCapture.cc:346: Failure
          Expected: expectedstr
         Of length: 7
    To be equal to: capturedstr.raw()
         Of length: 0
    With first binary difference:
    < 0x00000000  "._45678"           00 5F 34 35 36 37 38
    --
    > 0x00000000  ""
    [  FAILED  ] PipeCaptureTest.ReadNULByteInMiddleOfMultiByteUTF8Character (0 ms)
    ...

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood 3a6a304c64 Workaround g_utf8_find_next_char() not incrementing past NUL char (#777973)
If PipeCapture reads a NUL character, a valid UTF-8 character, it causes
GParted to allocate all available memory and crash.  The while loop in
PipeCapture::OnReadable() loops forever reading the same NUL character
from readbuf because g_utf8_find_next_char() doesn't advance past it.
Hence an infinite number of NUL characters are added to the current
line, linevec.

Work around this by checking for this failure case of
g_utf8_find_next_char() and incrementing past the NUL character.

This is actually a bug recently fixed in glib 2.49.3 released
2016-07-17.  References:

*   Bug 547200 - g_utf8_find_next_char() issues
    https://bugzilla.gnome.org/show_bug.cgi?id=547200

*   https://git.gnome.org/browse/glib/commit/?id=e0e652e4032a181d4f0b0a12aeddf0678b7a3c04
    Fix a corner-case in g_utf8_find_next_char

    In the case that *p is '\0', we should return p + 1, not p.
    This change allows to simplify g_utf8_find_next_char a bit.

Bug 777973 - Segmentation fault on bad disk
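
A sketch of the workaround (hypothetical helper name): if
g_utf8_find_next_char() fails to advance because *p is a NUL byte, step over
it manually so the loop keeps making progress.

    // Hypothetical sketch only.
    #include <glib.h>

    const char * find_next_char_checked( const char * p, const char * end )
    {
        const char * next = g_utf8_find_next_char( p, end );
        if ( next == p )
        {
            // glib before 2.49.3 returns p rather than p + 1 when *p is '\0'
            // in a length specified buffer, so advance past the NUL ourselves.
            next = p + 1;
        }
        return next;
    }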
2017-06-02 11:47:35 -06:00
Mike Fleetwood 22573b4eed Add test that PipeCapture can read embedded NUL character (#777973)
Test currently detects failure thus:

    $ ./test_PipeCapture
    ...
    [ RUN      ] PipeCaptureTest.ReadEmbeddedNULCharacter
    unknown file: Failure
    C++ exception with description "std::bad_alloc" thrown in the test body.
    [  FAILED  ] PipeCaptureTest.ReadEmbeddedNULCharacter (31917 ms)
    ...

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood 0a3e8487a0 Update PipeCaptureTest.MinimalBinaryCrash777973 expected string (#777973)
PipeCapture::OnReadable() has been almost completely re-written but this
test is still failing thus:

    $ ./test_PipeCapture
    ...
    [ RUN      ] PipeCaptureTest.MinimalBinaryCrash777973
    test_PipeCapture.cc:313: Failure
          Expected: inputstr
         Of length: 27
    To be equal to: capturedstr.raw()
         Of length: 26
    With first binary difference:
    < 0x00000010  "...!......."       A9 C2 A0 21 E2 95 9F E2 88 A9 C2
    --
    > 0x00000010  "...!......"        A9 C2 A0 21 E2 95 9F E2 88 A9
    [  FAILED  ] PipeCaptureTest.MinimalBinaryCrash777973 (0 ms)
    ...

The OnReadable() code specifically skips invalid bytes which aren't part
of valid UTF-8 characters, because they can't be displayed to the user.
The final C2 byte is the start of a multi-byte UTF-8 character, so on
its own is invalid.  Therefore PipeCapture skips it.  Update the
expected string accordingly.  Now the test passes.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood b5b3d199f8 Improve the performance of PipeCapture for large output (#777973)
A user had a very corrupted FAT file system such that fsck.fat produced
122 MiB of output.  GParted has to read this output to get the file
system usage information.  However GParted takes more than 48 hours to
read the 122 MiB of output, using 100% CPU time and remaining
unresponsive for the duration.

Modified fsck.fat to output just the first 1 MiB of output and used perf
to capture performance data from GParted reading that output:
    # perf record -g -F 1999 -- ./gpartedbin
    # perf report --stdio
      67.84% Glib::ustring::replace      [4.23s]
      17.67% g_utf8_pointer_to_offset    [1.10s]
       8.48% g_utf8_offset_to_pointer    [0.53s]
    [  6.01% (everything else)       ]   [0.38s]
    [100.00% TOTAL                   ]   [6.24s]

And to read the first 10 MiB of output the performance figures are:
      92.95% Glib::ustring::replace      [257.44s]
       4.35% g_utf8_pointer_to_offset    [ 12.05s]
       2.13% g_utf8_offset_to_pointer    [  5.90s]
    [  0.58% (everything else)       ]   [  1.61s]
    [100.00% TOTAL                   ]   [277.00s]

See how the total time is increasing non-linearly, 44 times longer for
only 10 times as much data.  This is because of the rapid, non-linear
growth in time spent in Glib::ustring::replace.

After a lot of experimentation I came to the conclusion that
Glib::ustrings are not appropriate for storing and editing large buffers
of data, sizes megabytes and above.  The issues are that iterators are
invalid after the content changes and replacing UTF-8 characters by
index gets exponentially slower as the size of the string increases.
Hence the > 48 hours of 100% CPU time to read and apply the line
discipline to the 122 MiB of fsck.fat output.  See code comment for a
more detailed description of the issues found.

Rewrote OnReadable() to use Glib::ustrings as little as possible.
Instead it uses buffers and vectors of fixed width data types, allowing
fast access using pointers and indexes (converted to pointers by the
compiler with simple arithmetic).  Again see code comment for a more
detailed description of the implementation.

Repeating the performance capture with the new code for the first 1 MiB
of fsck.fat output:
      63.34% memcpy                      [0.35s]
    [ 36.66% (everything else)       ]   [0.21s]
    [100.00% TOTAL                   ]   [0.56s]

And for the first 10 MiB of fsck.fat output:
      96.66% memcpy                      [63.60s]
    [  3.34% (everything else)       ]   [ 2.20s]
    [100.00% TOTAL                   ]   [65.80s]

Simple timings taken to read portions of the fsck.fat output (when not
using perf):

                   1 MiB    10 MiB      122 MiB
    old code :   6.2 sec   277 sec   > 48 hours
                            (4:37)
    new code :   0.6 sec    66 sec    17262 sec
                            (1:06)    (4:47:42)

Performance of the code is still non-linear because of the assignment
of the ever growing capturebuf to callerbuf for every block of input
read.  This is required to generate a consistent Glib::ustring copy of
the input for the update callback.  However this is much faster than
before, and I have a plan for further improvements.

Bug 777973 - Segmentation fault on bad disk
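
The contrast can be sketched as follows (illustrative code, not the new
OnReadable() itself): Glib::ustring indexes by UTF-8 character, so locating
character i is an O(i) scan and repeated per-character access of a growing
string becomes quadratic, whereas appending to and indexing a byte vector is
effectively constant time.

    // Illustrative sketch only.
    #include <cstddef>
    #include <vector>
    #include <glib.h>
    #include <glibmm/ustring.h>

    // Slow: every call walks the string from the start to find character i.
    gunichar char_at( const Glib::ustring & str, Glib::ustring::size_type i )
    {
        return str[i];      // O(i) pointer walk over variable width UTF-8
    }

    // Fast: raw captured bytes kept in a fixed width container.
    void append_chunk( std::vector<char> & capturebuf, const char * buf, size_t len )
    {
        capturebuf.insert( capturebuf.end(), buf, buf + len );   // amortised O(len)
    }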
2017-06-02 11:47:35 -06:00
Mike Fleetwood 03c2be9b90 Test that PipeCapture clears capture buffer before it starts (#777973)
Initialise capture string with sacrificial text before each test.
Existing tests confirm this is cleared first by checking the captured
string matches the input string.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood a233e30efe Move initial clearing of output capture buffers into PipeCapture class (#777973)
Seems more logical to initially clear the output capture buffer in a
single location in the PipeCapture class which reads the command output
into said buffer, rather than each calling site before the PipeCapture
objects are constructed.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood a03735ac6f Prevent core dump reading non UTF-8 data from external command (#777973)
A user had a very corrupted FAT file system and when GParted was reading
the file system usage it would core dump.  The tip of the stack trace
was:

    #0 Glib::ustring_Iterator<__gnu_cxx::__normal_iterator<char*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::operator++()
       at /usr/include/glibmm-2.4/glibmm/ustring.h line 957
    #1 Glib::ustring_Iterator<__gnu_cxx::__normal_iterator<char*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >::operator++(int)
       at /usr/include/glibmm-2.4/glibmm/ustring.h line 965
    #2 GParted::PipeCapture::OnReadable(Glib::IOCondition)
       at PipeCapture.cc line 63
    #3 GParted::PipeCapture::_OnReadable(_GIOChannel*, GIOCondition, void*)
       at PipeCapture.cc line 45

GParted uses 'fsck.fat -n -v /dev/PTN' to get the file system usage.
However because the file system was so corrupted it was reporting every
file name as being damaged, including the file names being reported as
binary data.  PipeCapture::OnReadable() reads up to 512 bytes at a time
and then uses a Glib::ustring iterator to loop over the UTF-8
characters, but a Glib::ustring iterator is explicitly not capable of
reading binary data [1].  With invalid UTF-8 bytes the code continued to
read beyond the end of the string until GParted crashed with a
segmentation violation.

Fix by accessing the read string by index instead of by iterator.

[1] Quote from the Glib::ustring_Iterator<T> Class Template Reference:
    https://developer.gnome.org/glibmm/stable/classGlib_1_1ustring__Iterator.html

    "The Glib::ustring iterated over must contain only valid UTF-8 data.
    If it does not, operator++(), operator--() and operator*() may make
    accesses outside the bounds of the string."

Bug 777973 - Segmentation fault on bad disk
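
The safer access pattern can be sketched like this (hypothetical loop, not
PipeCapture.cc verbatim): index the raw bytes and validate each character
with the length-limited glib call, so invalid bytes are skipped without ever
reading past the end of the buffer.

    // Hypothetical sketch only.
    #include <string>
    #include <glib.h>

    void process_chunk( const std::string & buf )
    {
        std::string::size_type i = 0;
        while ( i < buf.size() )
        {
            gunichar uc = g_utf8_get_char_validated( buf.data() + i, buf.size() - i );
            if ( uc == (gunichar)-1 || uc == (gunichar)-2 )
            {
                i++;                  // skip an invalid or truncated byte
                continue;
            }
            // ... handle the valid character uc ...
            i += g_unichar_to_utf8( uc, NULL );   // advance by its encoded length
        }
    }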
2017-06-02 11:47:35 -06:00
Mike Fleetwood 1e3324e16b Add test for bug with PipeCapture crashing reading binary data (#777973)
A user had a very corrupt FAT file system and fsck.fat was reporting
binary data in file names with errors.  GParted crashed reading this
output.  Boiled down the problematic fsck.fat output to a 27 byte
sample which still triggers a crash and created a test for it.  The
test crashes the test_PipeCapture program too.

    $ ./test_PipeCapture
    ...
    [ RUN      ] PipeCaptureTest.MinimalBinaryCrash777973
    Segmentation fault (core dumped)
    $ echo $?
    139

However the crashed test program still produces a non-zero exit status
and the autotools test runner detects this, so 'make check' still
reports failure.

    $ make check
    ...
    make[2]: Entering directory `/home/centos/programming/c/gparted/tests'
    PASS: test_dummy
    PASS: test_BlockSpecial
    ../test-driver: line 95: 16152 Segmentation fault      "$@" > $log_file 2>&1
    FAIL: test_PipeCapture
    ...
    ============================================================================
    Testsuite summary for gparted 0.28.1-git
    ============================================================================
    # TOTAL: 3
    # PASS:  2
    # SKIP:  0
    # XFAIL: 0
    # FAIL:  1
    # XPASS: 0
    # ERROR: 0
    ============================================================================
    See tests/test-suite.log
    Please report to https://bugzilla.gnome.org/enter_bug.cgi?product=gparted
    ============================================================================
    make[2]: *** [test-suite.log] Error 1
    make[2]: Leaving directory `/home/centos/programming/c/gparted/tests'
    make[1]: *** [check-TESTS] Error 2
    make[1]: Leaving directory `/home/centos/programming/c/gparted/tests'
    make: *** [check-am] Error 2
    $ echo $?
    2

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood 8b47de8872 Add check of PipeCapture update callback (#777973)
Also ensure that PipeCapture calls registered update callbacks and
that the data captured so far matches the leading portion of the input.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood dc1c49ba62 Add binary data string difference reporting to PipeCapture tests (#777973)
Google Test string comparison asserts are only designed for C style
strings containing printable text of one or more lines with a
terminating NUL character.  GParted is crashing when PipeCapture is
reading the binary file names being reported by fsck.fat from a very
corrupted FAT file system.  Therefore need to be able to compare and
report differences of binary data stored in C++ std::string and
Glib::ustrings.  Write a specific assertion to handle this.

Now these sample tests:

    TEST_F( PipeCaptureTest, BinaryStringFailure )
    {
        inputstr = "AAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCC";
        capturedstr = "AAAAAAAAAAAAAAAABBBBBBBBBBbbbb";
        EXPECT_BINARYSTRINGEQ( inputstr, capturedstr.raw() );
    }

    TEST_F( PipeCaptureTest, LeadingBinaryStringFailure )
    {
        inputstr = "The quick brown fox jumps over the lazy dog";
        capturedstr = "The quick brown fox\n";
        EXPECT_BINARYSTRINGEQ( inputstr.substr( 0, capturedstr.raw().length() ),
                               capturedstr.raw() );
    }

report failure like this:

    $ ./test_PipeCapture
    ...

    [ RUN      ] PipeCaptureTest.BinaryStringFailure
    test_PipeCapture.cc:270: Failure
          Expected: inputstr
         Of length: 48
    To be equal to: capturedstr.raw()
         Of length: 30
    With first binary difference:
    < 0x00000010  "BBBBBBBBBBBBBBBB"  42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42
    --
    > 0x00000010  "BBBBBBBBBBbbbb"    42 42 42 42 42 42 42 42 42 42 62 62 62 62
    [  FAILED  ] PipeCaptureTest.BinaryStringFailure (1 ms)
    [ RUN      ] PipeCaptureTest.LeadingBinaryStringFailure
    test_PipeCapture.cc:278: Failure
          Expected: inputstr.substr( 0, capturedstr.raw().length() )
         Of length: 20
    To be equal to: capturedstr.raw()
         Of length: 20
    With first binary difference:
    < 0x00000010  "fox "              66 6F 78 20
    --
    > 0x00000010  "fox."              66 6F 78 0A
    [  FAILED  ] PipeCaptureTest.LeadingBinaryStringFailure (0 ms)
    ...

Bug 777973 - Segmentation fault on bad disk
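
One possible shape for such an assertion (a hypothetical sketch, not the
project's actual EXPECT_BINARYSTRINGEQ, which also hex dumps the first
differing region) is a Google Test predicate-format function wrapped in a
macro:

    // Hypothetical sketch only.
    #include <string>
    #include <gtest/gtest.h>

    testing::AssertionResult binary_string_eq( const char * lhs_expr, const char * rhs_expr,
                                               const std::string & lhs, const std::string & rhs )
    {
        if ( lhs == rhs )
            return testing::AssertionSuccess();
        std::string::size_type i = 0;
        while ( i < lhs.size() && i < rhs.size() && lhs[i] == rhs[i] )
            i++;
        return testing::AssertionFailure()
               << "      Expected: " << lhs_expr << " (length " << lhs.size() << ")\n"
               << "To be equal to: " << rhs_expr << " (length " << rhs.size() << ")\n"
               << "First binary difference at byte offset " << i;
    }

    #define EXPECT_BINARYSTRINGEQ( s1, s2 )  EXPECT_PRED_FORMAT2( binary_string_eq, s1, s2 )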
2017-06-02 11:47:35 -06:00
Mike Fleetwood eb2d571a6c Add 1 MiB ASCII PipeCapture test (#777973)
Add a test sending 1 MiB of ASCII text into PipeCapture to read.  This
requires multiple reads and writes through the pipe.

This is as much a check of the threading in the PipeCaptureTest class as
it is of PipeCapture itself.  This is because a single thread might
block reading from a pipe while waiting for itself to write to the
pipe: a deadlock.  Hence the requirement to use separate threads for
reading and writing.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood d90702d526 Initialise Glib threading system in test_PipeCapture (#777973)
On CentOS 6, with glib/glibmm 2.28, running the tests fails thus:

    $ ./test_PipeCapture
    Running main() from gtest_main.cc
    [==========] Running 4 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 4 tests from PipeCaptureTest
    [ RUN      ] PipeCaptureTest.EmptyPipe

    GLib-ERROR **: The thread system is not yet initialized.
    aborting...
    Aborted (core dumped)

For glib before version 2.32, the threading system must be explicitly
initialised with a call to g_thread_init(), or the glibmm equivalent
Glib::thread_init(), to prevent this error.

    Deprecated thread API, g_thread_init()
    https://developer.gnome.org/glib/stable/glib-Deprecated-Thread-APIs.html#g-thread-init

Do this by providing our own main() which also initialises the Glib
threading system, rather than using and linking in the Google Test
provided main().

Bug 777973 - Segmentation fault on bad disk
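
A simplified sketch of such a main(), only needed with glib/glibmm older
than 2.32:

    // Simplified sketch of a custom test main().
    #include <cstdio>
    #include <glibmm/thread.h>
    #include <gtest/gtest.h>

    int main( int argc, char ** argv )
    {
        std::printf( "Running main() from %s\n", __FILE__ );
        Glib::thread_init();                    // required before Glib threading use on glib < 2.32
        testing::InitGoogleTest( &argc, argv );
        return RUN_ALL_TESTS();
    }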
2017-06-02 11:47:35 -06:00
Mike Fleetwood b21ee06230 Add initial unit tests for PipeCapture (#777973)
So far just tests sending 0 bytes and a few ASCII bytes into a pipe and
that they are read correctly by the PipeCapture class.

Bug 777973 - Segmentation fault on bad disk
2017-06-02 11:47:35 -06:00
Mike Fleetwood 7c51b62958 Remove unused build definition GPARTED_DATADIR
Definition has been unused since this commit first released in GParted
0.5.0:
    c2d19a8ab4
    Replace gnome-open with gtk_show_uri (#600046)
2017-06-02 10:47:35 -06:00
Mike Fleetwood 69d1bbcf8f Remove old .cvsignore files
As the source code is managed in Git and there is a .gitignore file in
the top level directory specifying file names to exclude from version
control, the old per-directory .cvsignore files for CVS are redundant.

Add the only missing and applicable entry from src/.cvsignore of '.libs'
to .gitignore and remove all the .cvsignore files.
2017-06-02 10:47:35 -06:00
Mike Fleetwood 9af41093f8 Implement libtoolize suggestion setting ACLOCAL_AMFLAGS
Running ./autogen.sh reports this suggestion:
    libtoolize: Consider adding `-I m4' to ACLOCAL_AMFLAGS in Makefile.am.

Add the suggested setting as it makes no difference to the build and
silences the suggestion.

The Libtool Manual, 5.5.1 Invoking libtoolize
https://www.gnu.org/software/libtool/manual/html_node/Invoking-libtoolize.html

    "If libtoolize detects an explicit call to AC_CONFIG_MACRO_DIRS (see
    The Autoconf Manual in The Autoconf Manual) in your configure.ac, it
    will put the Libtool macros in the specified directory.

    In the future other Autotools will automatically check the contents
    of AC_CONFIG_MACRO_DIRS, but at the moment it is more portable to
    add the macro directory to ACLOCAL_AMFLAGS in Makefile.am, which is
    where the tools currently look. If libtoolize doesn't see
    AC_CONFIG_MACRO_DIRS, it too will honour the first '-I' argument in
    ACLOCAL_AMFLAGS when choosing a directory to store libtool
    configuration macros in. It is perfectly sensible to use both
    AC_CONFIG_MACRO_DIRS and ACLOCAL_AMFLAGS, as long as they are kept
    in synchronisation.

        ACLOCAL_AMFLAGS = -I m4
    "
2017-06-02 10:47:35 -06:00
Mike Fleetwood 72e81622f3 Improve diagnostics of failed BlockSpecial operator tests (#781978)
Deliberately breaking one of the operator<() tests produces less than
ideal diagnostics because it doesn't print the BlockSpecial objects
being compared.

    $ ./test_BlockSpecial
    ...
    [ RUN      ] BlockSpecialTest.OperatorLessThanTwoEmptyObjects
    test_BlockSpecial.cc:362: Failure
    Value of: bs1 < bs2
      Actual: true
    Expected: false
    [  FAILED  ] BlockSpecialTest.OperatorLessThanTwoEmptyObjects (1 ms)
    ...

This could be solved by using the Google Test Binary Comparison
assertions, however for the tests expecting false from the (bs1 == bs2)
and (bs1 < bs2) comparisons, operators != and >= would have to be
implemented and the tests changed from:
    EXPECT_FALSE( bs1 < bs2 );
to:
    EXPECT_GE( bs1, bs2 );
This makes the meaning of the test less than clear.  The primary purpose
of the test is to check operator<(), but it is expecting the first
BlockSpecial object to be GE (greater than or equal to) the second,
which calls operator>=(), which in turn tests operator<().  For tests
of the operators themselves, using Google Test Binary Comparison
assertions obscures what is being tested too much.

Instead provide a custom failure message which prints the BlockSpecial
objects failing the comparison, leaving the test still directly calling
the operator being tested, like this:
    EXPECT_FALSE( bs1 < bs2 ) << "   Where: bs1 = " << bs1 << "\n"
                              << "     And: bs2 = " << bs2;
And with a suitable macro this is simplified to:
    EXPECT_FALSE( bs1 < bs2 ) << ON_FAILURE_WHERE( bs1, bs2 );
Now the above deliberately broken test produces this output:

    $ ./test_BlockSpecial
    ...
    [ RUN      ] BlockSpecialTest.OperatorLessThanTwoEmptyObjects
    test_BlockSpecial.cc:369: Failure
    Value of: bs1 < bs2
      Actual: true
    Expected: false
       Where: bs1 = BlockSpecial{"",0,0}
         And: bs2 = BlockSpecial{"",0,0}
    [  FAILED  ] BlockSpecialTest.OperatorLessThanTwoEmptyObjects (0 ms)
    ...

Bug 781978 - Add Google Test C++ test framework
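
One way such a macro could be written (hypothetical sketch using a stand-in
struct; the real test_BlockSpecial.cc relies on an operator<<() for the
actual BlockSpecial class):

    // Hypothetical sketch only.
    #include <ostream>
    #include <string>
    #include <gtest/gtest.h>

    struct BlockSpecialSketch
    {
        std::string   m_name;
        unsigned long m_major;
        unsigned long m_minor;
    };

    std::ostream & operator<<( std::ostream & out, const BlockSpecialSketch & bs )
    {
        return out << "BlockSpecial{\"" << bs.m_name << "\","
                   << bs.m_major << "," << bs.m_minor << "}";
    }

    // Streams both objects into the failure message of a failed EXPECT_*().
    #define ON_FAILURE_WHERE( lhs, rhs )                    \
            "   Where: " #lhs " = " << (lhs) << "\n"        \
            "     And: " #rhs " = " << (rhs)

    TEST( BlockSpecialSketchTest, OperatorLessThanTwoEmptyObjects )
    {
        BlockSpecialSketch bs1 = { "", 0, 0 };
        BlockSpecialSketch bs2 = { "", 0, 0 };
        EXPECT_FALSE( bs1.m_name < bs2.m_name ) << ON_FAILURE_WHERE( bs1, bs2 );
    }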
2017-06-02 10:47:35 -06:00
Mike Fleetwood 003d1b54c7 Add BlockSpecial operator<() tests (#781978)
Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:47:35 -06:00
Mike Fleetwood f4008092dd Add test for bug #771670 in BlockSpecial::operator==() (#781978)
Add a specific test for the failure found in bug 771670 and fixed by
commit:
    253c4d6416
    Fix BlockSpecial comparison (#771670)

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:47:35 -06:00
Mike Fleetwood debbd811b8 Add BlockSpecial operator==() tests (#781978)
Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:47:35 -06:00
Mike Fleetwood c37be28148 Add unit tests for BlockSpecial constructors and internal caching (#781978)
So far only includes tests of the construction of BlockSpecial objects
and the interaction with the internal cache used to reduce the number of
stat(2) calls.  Tests have been designed to be runable by non-root users
and assume as little as possible about the environment by discovering
used block device names and symbolic link name.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:47:30 -06:00
Mike Fleetwood dbd5cd2ca8 Fix make distcheck failure from missing libgtest_main.la (#781978)
Running 'make distcheck' fails like this:
    [...]
    make[3]: Entering directory `[...]/gparted/gparted-0.28.1-git/_build/tests'
    [...]
    /bin/sh ../libtool --tag=CXX --mode=link g++ [...] ../../lib/gtest/lib/libgtest_main.la ../../lib/gtest/lib/libgtest.la [...]
    libtool: link: cannot find the library `../../lib/gtest/lib/libgtest_main.la' or unhandled argument `../../lib/gtest/lib/libgtest_main.la'
    make[3]: *** [test_dummy] Error 1
    make[3]: Leaving directory `[...]/gparted/gparted-0.28.1-git/_build/tests'
    [...]
    make[1]: Leaving directory `[...]/gparted/gparted-0.28.1-git/_build'
    make: *** [distcheck] Error 1

What is happening is that distcheck [1] performs a VPATH build which
uses separate read-only source and read-write build trees.  Shipped
source files, such as .h and .cc, are located in the source tree and the
built files, such as .o and .la, are compiled into the build tree.  In
this case the absolute path to libgtest_main.la is specified using
$(top_srcdir) for the source tree, rather than $(top_builddir) for the
build tree, hence the cannot find file type error.  This error doesn't
occur when just performing a 'make check' because that builds in the
source tree so $(top_builddir) is the same as $(top_srcdir).

Fix by correctly specifying the absolute path to the libgtest*.la files
using $(top_builddir).

[1] Automake Manual, 14.4 Checking the Distribution
    https://www.gnu.org/software/automake/manual/automake.html#Checking-the-Distribution

Note that for VPATH builds the Automake Manual [2] also recommends using
a pair of -I options to specify header files are found in both the build
and source trees, something like:
    AM_CPPFLAGS = -I$(top_builddir)/some/subdir -I$(top_srcdir)/some/subdir

This is unnecessary for GParted as all header files are shipped as part
of the source archive and no header files are created as part of the
build process.  Therefore adding -I options specifying $(top_builddir)
include directories would just be specifying empty or non-existent
include directories in the build tree.

[2] Automake Manual, 8.7 Variables used when building a program
    https://www.gnu.org/software/automake/manual/automake.html#Program-Variables

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood dceb293f15 Add unit test suites with initial successful dummy test (#781978)
Now 'make check' will additionally build and run the test suites in the
./tests directory.  Add initial always successful dummy test suite.
This is done using Automake support for testing.

    Automake Manual, 15 Support for test suites
    https://www.gnu.org/software/automake/manual/automake.html#Tests

./tests/Makefile.am takes some influence from the same file in the
minimal-gtest-autotools template project.
    654848ec01/tests/Makefile.am

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood 81b104928b Add building of Google Test libraries (#781978)
This closely follows the minimal-gtest-autotools template project.
    https://github.com/octol/minimal-gtest-autotools

Now 'make check' will additionally build the Google Test C++ test
framework libraries.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood 4d6823a20a Switch to conditional appending of SUBDIRS in top-level Makefile.am (#781978)
rather than having two conditional complete definitions of SUBDIRS.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood 0841b0274b Exclude 2 gtest files from translation (#781978)
Running 'make check' is failing, reporting that 2 of the Google Test
files need to be translated.

    $ make check
    ...
    Making check in po
    make[1]: Entering directory `/home/centos/programming/c/gparted/po'
    rm -f missing notexist
    srcdir=. /usr/bin/intltool-update -m
    The following files contain translations and are currently not in use. Please
    consider adding these to the POTFILES.in file, located in the po/ directory.

    lib/gtest/include/gtest/internal/gtest-filepath.h
    lib/gtest/src/gtest.cc

    If some of these files are left out on purpose then please add them to
    POTFILES.skip instead of POTFILES.in. A file 'missing' containing this list
    of left out files has been written in the current directory.
    Please report to https://bugzilla.gnome.org/enter_bug.cgi?product=gparted
    if [ -r missing -o -r notexist ]; then \
      exit 1; \
    fi
    make[1]: *** [check] Error 1
    make[1]: Leaving directory `/home/centos/programming/c/gparted/po'
    make: *** [check-recursive] Error 1

This is not true; the files do not contain translatable strings.  This
matches a known intltool bug:

    False positives from intltool-update -m
    https://bugs.launchpad.net/intltool/+bug/545862

Fix by adding the problem file names into POTFILES.skip.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood cbab9849b1 Update README with the inclusion of Google Test framework (#781978)
Document the files and license of the Google Test C++ test framework.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood 87f7170a55 Add Google Test 1.8.0 files (#781978)
Copy the Google Test C++ test framework (subset needed to build the unit
testing libraries) into the GParted project.  Copy the following files
and directories from Google Test, release 1.8.0:
    LICENSE
    README.md
    include/
    src/
into lib/gtest/.  All the files are under the BSD license as documented
in lib/gtest/LICENSE.

Bug 781978 - Add Google Test C++ test framework
2017-06-02 10:13:16 -06:00
Mike Fleetwood e0390d9cd8 Update comment with example LUKS_Info cache entry
Reflect the change made to the data model in commit:
    Use BlockSpecial in LUKS_Info module cache (#767842)
    7cd574cac5
2017-05-21 09:44:20 -06:00
Mike Fleetwood 6798c271c7 Quote mount point in Data Rescue dialog (#782681)
As before, always quote mount points used in command lines.

Bug 782681 - btrfs partitions mounted with whitespace cannot be resized
2017-05-21 09:44:20 -06:00
Mike Fleetwood 2025581029 Quote mount point when copying and resizing xfs (#782681)
Attempting to grow an already mounted xfs where the mount point
contained spaces failed like this:

    Grow /dev/sdb4 from 1.00 GiB to 1.50 GiB
    + calibrate /dev/sdb4
    + grow partition from 1.00 GiB to 1.50 GiB
    + grow file system to fill the partition
      + xfs_growfs /tmp/File System Label
        Usage: xfs_growfs [options] mountpoint
        ...

Apply the rule and quote all uses of mount points within command lines.
This is also applied to copying xfs file systems even though it was safe
because it only ever used GParted generated mount points.

Also for the xfs copy operation switch unmounting of partitions to
specify mount points instead of partitions.  This is just to be
consistent with how it is done in all the online file system resizing
code.

Bug 782681 - btrfs partitions mounted with whitespace cannot be resized
2017-05-21 09:44:20 -06:00
Mike Fleetwood 988dacfb1b Quote mount point when resizing nilfs2 (#782681)
The current nilfs2 resizing code is safe because it never passes a user
influenced mount point into a command line.  Regardless, apply the same
simple rule to always quote mount points when used in command lines.

WARNING:
Nilfs-resize is broken and can't actually resize a file system mounted
with spaces in the mount point anyway!

    # mkdir "/tmp/File System Label"
    # mount -v -t nilfs2 /dev/sdb3 "/tmp/File System Label"
    mount.nilfs2: started nilfs_cleanerd
    # nilfs-resize -v -y /dev/sdb3
    Error: cannot open NILFS on /dev/sdb3.
    # echo $?
    1
    # strace nilfs-resize -v -y /dev/sdb3
    ...
    stat("/dev/sdb3", {st_mode=S_IFBLK|0660, st_rdev=makedev(8, 19), ...}) = 0
    open("/dev/sdb3", O_RDONLY)             = 3
    ...
    open("/proc/mounts", O_RDONLY)          = 4
    read(4, "sysfs /sys sysfs rw,seclabel,nos"..., 1024) = 1024
    ...
    close(4)                                = 0
    open("/tmp/File\\040System\\040Label", O_RDONLY) = -1 ENOENT (No such file or directory)
    ...
    # fgrep /dev/sd /proc/mounts
    /dev/sda3 / ext4 rw,seclabel,relatime,data=ordered 0 0
    /dev/sda1 /boot ext4 rw,seclabel,relatime,data=ordered 0 0
    /dev/sdb3 /tmp/File\040System\040Label nilfs2 rw,relatime 0 0

So it looks like nilfs-resize (or a library it uses) can't decode the
octal characters '\040' used to encode spaces in the mount point as
reported in /proc/mounts.

Bug 782681 - btrfs partitions mounted with whitespace cannot be resized
2017-05-21 09:44:20 -06:00
Mike Fleetwood 3a10e30c3a Quote mount point when resizing jfs (#782681)
Attempting to grow an already mounted jfs where the mount point
contained spaces failed like this:

    Grow /dev/sdb2 from 1.00 GiB to 1.50 GiB
    + calibrate /dev/sdb2
    + grow partition from 1.00 GiB to 1.50 GiB
    + grow file system to fill the partition
      + mount -v -t jfs -o remount,resize /dev/sdb2 /tmp/File System Label
        Usage:
        mount [-lhV]
        mount -a [options]
        mount [options] [--source[ <source> | [--target] <directory>
        mount [options] <source> <directory>
        mount <operation> <mountpoint> [<target>]
        ...

Apply the rule to always enclose mount points within double quotes
within command lines.

Bug 782681 - btrfs partitions mounted with whitespace cannot be resized
2017-05-21 09:44:20 -06:00
Mike Fleetwood 618c1a202d Quote mount point when resizing btrfs (#782681)
A user had a btrfs file system mounted by automounter on a mount point
like "/mount/$USER/File System Label" which included white space
characters.  Resizing the file system while online failed like this:

    Grow /dev/sdb1 from 1.00 GiB to 1.50 GiB
    + calibrate /dev/sdb1
    + grow partition from 1.00 GiB to 1.50 GiB
    + grow file system to fill the partition
      + btrfs filesystem resize 1:max /mount/USER/File System Label
          btrfs filesystem resize: too many arguments
          usage: btrfs filesystem resize [devid:][+/-]<newsize>[kKmMgGtTpPeE]|[devid:]max <path>

So mount points not created by GParted should be considered under user
control and need quoting when used as parameters in command lines.
Strictly speaking, mount points created by GParted itself, by
FileSystem::mk_temp_dir(), are safe and don't need quoting.  However it
is simpler and safer just to quote all uses of mount points in command
lines, rather than risk missing some.

Bug 782681 - btrfs partitions mounted with whitespace cannot be resized
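
In sketch form (assuming glibmm's Glib::shell_quote(); the exact command
construction in GParted's btrfs.cc may differ):

    // Illustrative sketch only.
    #include <string>
    #include <glibmm/shell.h>
    #include <glibmm/ustring.h>

    Glib::ustring build_btrfs_grow_cmd( const Glib::ustring & mount_point )
    {
        // Quote the mount point so spaces and shell metacharacters in a user
        // controlled path can't split or alter the command.
        return "btrfs filesystem resize 1:max " + Glib::shell_quote( mount_point );
    }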
2017-05-21 09:44:20 -06:00
Mike Fleetwood 75131d85a5 Fix snap-to-alignment of operations creating partitions (#779339)
Using the default MiB alignment, creating an MSDOS logical partition
between two other existing logical partitions fails with this error
dialog:

    (-) <b>An error occurred while applying the operations</b>
        See the details for more information.
        <b>IMPORTANT</b>
        If you want support, you need to provide the saved details!
        See http://gparted.org/save-details.htm for more information.
                                                               [ OK ]

and these operation details:

    + libparted messages
      - Unable to satisfy all constraints on the partition.

This bug was introduced by this commit included in GParted 0.23.0:
    90e3ed68fc
    Shallow copy Device object into Operation object (#750168)

The commit message claimed that the deep copied Partition objects inside
the Device inside the Operation object are never accessed.  This turned
out not to be true.  Win_GParted::Add_Operation() uses them as part of
snap_to_alignment() which updates requested partition boundaries to
account for alignment requirements and the space needed for EBR
(Extended Boot Record) preceding logical partitions.

In this case the new logical partition was trying to be created over the
top of the EBR for the following logical partition because
snap_to_alignment() wasn't aware of its existence.

Fix by making Add_Operation() and snap_to_alignment() refer to the
current device, as displayed in the UI, rather than the shallow copy
included in the Operation object.  Hopefully it is now true that the
uncopied vector of Partition objects in the Device object in each
Operation object is never accessed.

Bug 779339 - enforce at least 1 MiB "free space following"
2017-04-23 08:57:25 -06:00
gogo e31c96adc8 Update Croatian translation 2017-04-10 15:01:45 +00:00
Rafael Fontenelle 9927be7c83 Update Brazilian Portuguese translation 2017-03-26 01:41:07 +00:00
Mario Blättermann b4349f851e Update German translation 2017-03-15 09:04:06 +00:00
Jiri Grönroos aec9d903e8 Update Finnish translation 2017-02-18 16:21:30 +00:00
Curtis Gedak 883d7cf598 Append -git to version for continuing development 2017-02-17 15:55:27 -07:00