Commit Graph

3701 Commits

Author SHA1 Message Date
Trần Ngọc Quân 18c787c2f0 Updated Vietnamese translation
Signed-off-by: Trần Ngọc Quân <vnwildman@gmail.com>
2020-01-11 14:19:52 +07:00
Sveinn í Felli a6db989a9b Update Icelandic translation 2020-01-10 18:25:40 +00:00
Mike Fleetwood ce18685dfa Save all files on CI job failure for investigation
Since patchset !49 "Add file system interface tests" was merged the
GitLab Continuous Integration jobs have sometimes failed executing
test_SupportedFilesystems.  So on failure save all files from the docker
image for 1 week for investigation.

Documentation:
* Introduction to job artifacts
  https://gitlab.gnome.org/help/user/project/pipelines/job_artifacts.md

* GitLab CI/CD Pipeline Configuration Reference, artifacts
  https://gitlab.gnome.org/help/ci/yaml/README.md#artifacts
2019-12-04 07:38:01 +00:00
Mike Fleetwood 01826277e3 Rename TreeView_Detail::treeview_filesystems_Columns member to fsname (!52)
Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:38:01 +00:00
Mike Fleetwood e5b92a87f2 Rename DialogFeatures::treeview_filesystems_Columns member to fsname (!52)
Rename the structure member used to store strings like "ext2" etc. to
'fsname'.  This is equivalent to what was previously done in this commit:
    a9f08ddc7d
    Rename local variable to fsname in get_filesystem() (#741430)

Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:38:01 +00:00
Mike Fleetwood f53c462c06 Rename create_format_menu_add_item() parameter of type FSType (!52)
Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:38:00 +00:00
Mike Fleetwood 2cbb867508 Rename set_device_partitions() local variable of type FSType (!52)
Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:38:00 +00:00
Mike Fleetwood c85fc66dcf Rename Utils method parameters of type FSType (!52)
Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:38:00 +00:00
Mike Fleetwood 58fb230fb0 Also rename FS.filesystem member to fstype (!52)
Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-04 07:37:19 +00:00
Mike Fleetwood b0f92be638 Rename Partition.filesystem member to fstype (!52)
Previously made this change:
    175d27c55d
    Rename enum FILESYSTEM to FSType

Now complete the renaming exercise of members and variables currently
named 'filesystem'.

Closes !52 - Rename members and variables currently named 'filesystem'
2019-12-03 13:24:44 +00:00
Mike Fleetwood 047a2481bb Stop requesting partition paths of free space and metadata
In GParted_Core::set_device_partitions() the partition path is being
queried from libparted.  However this is done before the switch
statement on the type of the partition, so it is called for all libparted
partition objects including PED_PARTITION_FREESPACE and
PED_PARTITION_METADATA ones.  As libparted numbers these partition
objects as -1, it returns paths like "/dev/sda-1".

Additionally, when using GParted with its default DMRaid handling on a
dmraid started array this results in paths like
"/dev/mapper/isw_ecccdhhiga_MyArray-1" being passed to
is_dmraid_device() and make_path_dmraid_compatible().  Fortunately
make_path_dmraid_compatible() does nothing and returns the same name.
The call chain looks like:

    GParted_Core::set_device_partitions()
      get_partition_path(lp_partition)
        // where:
        // lp_partition->disk->dev->path = "/dev/mapper/isw_ecccdhhiga_MyArray"
        // lp_partition->type == PED_PARTITION_FREESPACE |
        //                       PED_PARTITION_METADATA
        //              ->num == -1
        ped_partition_get_path(lp_partition)
          return "/dev/mapper/isw_ecccdhhiga_MyArray-1"
        dmraid.is_dmraid_supported()
        dmraid.is_dmraid_device("/dev/mapper/isw_ecccdhhiga_MyArray-1")
          return true
        dmraid.make_path_dmraid_compatible("/dev/mapper/isw_ecccdhhiga_MyArray-1")
          return "/dev/mapper/isw_ecccdhhiga_MyArray-1"

Fix by moving the get_partition_path() call inside the switch statement
so that it is only called for PED_PARTITION_NORMAL,
PED_PARTITION_LOGICAL and PED_PARTITION_EXTENDED partition types.

Relevant commits:
*   53c49349f7
    Simplify logic in set_device_partitions method

*   81986c0990
    Ensure partition path name is compatible with dmraid (#622217)
2019-12-02 16:35:22 +00:00
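
A minimal C++ sketch of the fix described above, querying the path only
inside the switch statement.  The loop structure and the
get_partition_path() helper are simplified assumptions for illustration,
not the actual GParted_Core code:

    // Sketch only: query the partition path inside the switch so that
    // PED_PARTITION_FREESPACE and PED_PARTITION_METADATA entries (numbered
    // -1 by libparted) never produce names like "/dev/sda-1".
    #include <parted/parted.h>
    #include <cstdlib>
    #include <string>

    static std::string get_partition_path(PedPartition* lp_partition)
    {
        char* lp_path = ped_partition_get_path(lp_partition);
        std::string path = (lp_path != NULL) ? lp_path : "";
        free(lp_path);
        return path;
    }

    static void set_device_partitions_sketch(PedDisk* lp_disk)
    {
        for (PedPartition* lp_partition = ped_disk_next_partition(lp_disk, NULL);
             lp_partition != NULL;
             lp_partition = ped_disk_next_partition(lp_disk, lp_partition))
        {
            switch (lp_partition->type)
            {
                case PED_PARTITION_NORMAL:
                case PED_PARTITION_LOGICAL:
                case PED_PARTITION_EXTENDED:
                {
                    // Path queried only here, for real partitions.
                    std::string path = get_partition_path(lp_partition);
                    // ... build the GParted partition object using path ...
                    break;
                }
                default:
                    // Free space and metadata entries: no path wanted.
                    break;
            }
        }
    }
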
Mike Fleetwood fa682d372a Make 4 internally used only DMRaid methods private 2019-12-02 16:35:22 +00:00
Mike Fleetwood 21cad97dc7 Recognise ATARAID members started by dmraid (#75)
This is not strictly necessary as members are already recognised using
blkid since this commit earlier in the sequence "Recognise ATARAID
members (#75)".  However it makes sure active members are recognised
even if blkid is not available and matches how file system detection
queries the SWRaid_Info module.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood bb865aaaa4 Display array device as mount point of dmraid started ATARAID members (#75)
This matches how the array device is displayed as the mount point for
mdadm started ATARAID members by "Display array device as mount point of
mdadm started ATARAID members (#75)" earlier in this patchset.

Extend the DMRaid module member cache to save the array device name and
use it as needed to display as the mount point.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood caec22871e Detect busy status of dmraid started ATARAID members (#75)
Again this is to stop GParted allowing overwrite operations being
performed on an ATARAID member while the array is actively using the
member.  This time for dmraid started arrays using the kernel DM (Device
Mapper) driver.

The DMRaid module already uses dmraid to report active array names:

    # dmraid -sa -c
    isw_ecccdhhiga_MyArray

To find active members in this array, (1) use udev to look up the kernel
device name:

    # udevadm info --query=name /dev/mapper/isw_ecccdhhiga_MyArray
    dm-0

(2) list the member names exposed by the kernel DM driver through the
/sys file system:

    # ls /sys/block/dm-0/slaves
    sdc  sdd
    # ls -l /sys/block/dm-0/slaves
    lrwxrwxrwx 1 root root 0 Nov 24 09:52 sdc -> ../../../../pci0000:00/0000:00:0d.0/ata3/host2/target2:0:0/2:0:0:0/block/sdc
    lrwxrwxrwx 1 root root 0 Nov 24 09:52 sdd -> ../../../../pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdd

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
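
A minimal C++ sketch of step (2), assuming the kernel device name (e.g.
"dm-0") has already been resolved with udevadm as shown above; the
dm_slaves() function name is illustrative, not the actual DMRaid module
code:

    // List the member block devices backing a Device Mapper device by
    // reading /sys/block/<dm_name>/slaves, e.g. "dm-0" -> {"sdc", "sdd"}.
    #include <dirent.h>
    #include <string>
    #include <vector>

    static std::vector<std::string> dm_slaves(const std::string& dm_name)
    {
        std::vector<std::string> members;
        std::string dir_path = "/sys/block/" + dm_name + "/slaves";
        DIR* dir = opendir(dir_path.c_str());
        if (dir == NULL)
            return members;
        struct dirent* entry;
        while ((entry = readdir(dir)) != NULL)
        {
            std::string name = entry->d_name;
            if (name != "." && name != "..")
                members.push_back(name);
        }
        closedir(dir);
        return members;
    }
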
Mike Fleetwood 425dfa3709 Enable basic supported actions for ATARAID members (#75)
When an ATARAID member is inactive, allow the basic supported actions of
copy and move to be performed, as with other recognised but only basic
supported types.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood 1f1f44ff7a Prevent unmount of busy ATARAID members (#75)
Since earlier commit "Display array device as mount point of mdadm
started ATARAID members (#75)" GParted allows attempting to unmout a
busy ATARAID member as if it was a file system.  This is not a valid
thing to do, so disallow it.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood f6c86835eb Display array uuid of mdadm recognised ATARAID members (#75)
Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood 538c866d09 Display array device as mount point of mdadm started ATARAID members (#75)
This matches how other non-file systems are handled, by displaying the
access reference in the mount point column.  For LVM Physical Volumes
the Volume Group name is displayed [1] and for an active Linux Software
RAID array the array device is displayed [2].

[1] 8083f11d84
    Display LVM2 VGNAME as the PV's mount point (#160787)

[2] f6c2f00df7
    Populate member mount point with SWRaid array device (#756829)

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood 6e990ea48a Detect busy status of mdadm started ATARAID members (#75)
This stops GParted allowing overwrite operations (such as create
partition table or format with a whole device file system) being
performed on an ATARAID member while the array is actively using the
member.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood ef6794b7de Display correct type of mdadm recognised ATARAID members (#75)
The previous commit caused mdadm recognised IMSM and DDF type ATARAID
members to be displayed as "linux-raid" (Linux Software RAID array
member).  This was because of query method 1 in detect_filesystems().

Fix this now by exposing and using the fstype of the member from the
SWRaid_Info cache.

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
Mike Fleetwood 73bf8bef62 Parse ATARAID members from mdadm output and /proc/mdstat (#75)
Since mdadm release 3.0 (2009-06-02) [1] it has also supported external
metadata formats IMSM (Intel Matrix Storage Manager) and DDF, previously
only managed by dmraid.

A number of distributions have switched to use mdadm and kernel MD
(Multiple Devices) driver for managing these Firmware / BIOS / ATARAID
arrays.  These include: Fedora >= 14 [2], RHEL / CentOS >= 6 [3],
SLES >= 12 [4], Ubuntu >= 16.04 LTS.

Therefore additionally parse members in these ATARAID arrays included in
mdadm output, and when activated using the kernel MD driver, in file
/proc/mdstat.  Add fstype to the SWRaid_Info cache records to
distinguish members apart.  So far the rest of the GParted code
continues to treat all members as FS_LINUX_SWRAID.  This will be
resolved in following commits.

Note that this in no way affects how GParted shows and partitions the
array device itself, even those managed by dmraid and using the GParted
DMRaid module.  It only affects how GParted shows the member drives
themselves.

[1] mdadm ANNOUNCE-3.0 file
    https://git.kernel.org/pub/scm/utils/mdadm/mdadm.git/tree/ANNOUNCE-3.0?h=mdadm-3.0

[2] Fedora 14, Storage Administration Guide, 12.5. Linux RAID Subsystem
    https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/raid-subsys.html
    "...  Fedora 14 uses mdraid with external metadata to access ISW /
    IMSM (Intel firmware RAID) sets.  mdraid sets are configured and
    controlled through the mdadm utility."

[3] RHEL 6, Storage Administration Guide, 17.3. Linux RAID Subsystem
    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/storage_administration_guide/raid-subsys
    "mdraid also supports other metadata formats, known as external
    metadata.  Red Hat Enterprise Linux 6 uses mdraid with external
    metadata to access ISW / IMSM (Intel firmware RAID) sets.  mdraid
    sets are configured and controlled through the mdadm utility."

[4] SUSE Linux Enterprise Server 12 Release Notes, 7.2.3 Driver for IMSM
    and DDF
    https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12/#fate-316007
    "For IMSM and DDF RAIDs the mdadm driver is used unconditionally."

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
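
As a rough illustration only (not the actual SWRaid_Info parsing code),
a scanned member line could be classified by its metadata format like
this, assuming the IMSM and DDF markers appear as metadata= tokens in
the mdadm output:

    // Illustrative only: distinguish external (firmware/BIOS) metadata
    // members from native Linux Software RAID members.
    #include <string>

    enum MemberType { MEMBER_LINUX_SWRAID, MEMBER_ATARAID };

    static MemberType classify_member_line(const std::string& line)
    {
        if (line.find("metadata=imsm") != std::string::npos ||
            line.find("metadata=ddf")  != std::string::npos)
            return MEMBER_ATARAID;
        return MEMBER_LINUX_SWRAID;
    }
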
Mike Fleetwood aea6200d5f Recognise ATARAID members (#75)
PATCHSET OVERVIEW

A user had a Firmware / BIOS / ATARAID array of 2 devices configured as
a RAID 0 (stripe) set.  On top of that was a GPT with the OS partitions.
GParted displays the following errors on initial load and subsequent
refresh:

        Libparted Error
    (-) Invalid argument during seek for read on /dev/sda
                          [ Retry ] [ Cancel ] [ Ignore ]

        Libparted Error
    (-) The backup GPT table is corrupt, but the
        primary appears OK, so that will be used.
                              [  Ok  ] [ Cancel ]

This is an Intel Software RAID array which stores metadata at the end of
each member device, and so the first 128 KiB stripe of the set is stored
in the first 128 KiB of the first member device /dev/sda which includes
the GPT for the whole RAID 0 device.  Hence when libparted reads member
device /dev/sda it finds a GPT describing a block device twice its
size, which results in the above errors when trying to read the backup GPT.

A more dangerous scenario occurs when using 2 devices configured in an
Intel Software RAID 1 (mirrored) set with GPT on top.  On refresh
GParted displays this error for both members, /dev/sda and /dev/sdb:

        Libparted Warning
    /!\ Not all of the space available to /dev/sda appears to be used,
        you can fix the GPT to use all of the space (an extra 9554
        blocks) or continue with the current setting?
                                                  [  Fix  ] [ Ignore ]

Selecting [Fix] gets libparted to re-write the backup GPT to the end of
the member device, overwriting the ISW metadata!  Do that twice and both
copies of the metadata are gone!

Worked example of this more dangerous mirrored set case.  Initial setup:

    # dmraid -s
    *** Group superset isw_caffbiaegi
    --> Subset
    name   : isw_caffbiaegi_MyMirror
    size   : 16768000
    stride : 128
    type   : mirror
    status : ok
    subsets: 0
    devs   : 2
    spares : 0

    # dmraid -r
    /dev/sda: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0
    /dev/sdb: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0

    # wipefs /dev/sda
    offset               type
    ---------------------------------------------
    0x200                gpt   [partition table]
    0x1fffffc00          isw_raid_member   [raid]

Run GParted and click [Fix] on /dev/sda.  Now the first member has gone:

    # dmraid -s
    *** Group superset isw_caffbiaegi
    --> *Inconsistent* Subset
    name   : isw_caffbiaegi_MyMirror
    size   : 16768000
    stride : 128
    type   : mirror
    status : inconsistent
    subsets: 0
    devs   : 1
    spares : 0

    # dmraid -r
    /dev/sdb: isw, "isw_caffbiaegi", GROUP, ok, 16777214 sectors, data@ 0

    # wipefs /dev/sda
    offset               type
    ---------------------------------------------
    0x200                gpt   [partition table]

Click [Fix] on /dev/sdb.  Now all members of the array are gone:

    # dmraid -s
    no raid disks

    # dmraid -r
    no raid disks

    # wipefs /dev/sdb
    offset               type
    ---------------------------------------------
    0x200                gpt   [partition table]

So GParted must not run libparted partition table scanning on the member
devices in ATARAID arrays.  Only on the array device itself.

In terms of the UI, GParted must show disks which are ATARAID members as
whole disk devices with ATARAID member content, and detect array busy
status to prevent active members from being overwritten while in use.

THIS COMMIT

Recognise ATARAID member devices and display in GParted as whole device
"ataraid" file systems.  Because they are recognised as whole device
content ("ataraid" file systems) this alone stops GParted running the
libparted partition table scanning and avoids the above errors.

The list of dmraid supported formats is matched by the signatures
recognised by blkid:

    $ dmraid -l
    asr     : Adaptec HostRAID ASR (0,1,10)
    ddf1    : SNIA DDF1 (0,1,4,5,linear)
    hpt37x  : Highpoint HPT37X (S,0,1,10,01)
    hpt45x  : Highpoint HPT45X (S,0,1,10)
    isw     : Intel Software RAID (0,1,5,01)
    jmicron : JMicron ATARAID (S,0,1)
    lsi     : LSI Logic MegaRAID (0,1,10)
    nvidia  : NVidia RAID (S,0,1,10,5)
    pdc     : Promise FastTrack (S,0,1,10)
    sil     : Silicon Image(tm) Medley(tm) (0,1,10)
    via     : VIA Software RAID (S,0,1,10)
    dos     : DOS partitions on SW RAIDs

    $ fgrep -h _raid_member util-linux/libblkid/src/superblocks/*.c
            .name           = "adaptec_raid_member",
            .name           = "ddf_raid_member",
            .name           = "hpt45x_raid_member",
            .name           = "hpt37x_raid_member",
            .name           = "isw_raid_member",
            .name           = "jmicron_raid_member",
            .name           = "linux_raid_member",
            .name           = "lsi_mega_raid_member",
            .name           = "nvidia_raid_member",
            .name           = "promise_fasttrack_raid_member",
            .name           = "silicon_medley_raid_member",
            .name           = "via_raid_member",

As they are all types of Firmware / BIOS / ATARAID arrays, report all
members as a single "ataraid" file system type.  (Except for
"linux_raid_member" in the above blkid source listing which is Linux
Software RAID).

Closes #75 - Errors with GPT on RAID 0 ATARAID array
2019-12-02 16:35:22 +00:00
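
A C++ sketch of the resulting classification, driven by the blkid
signature names listed above.  The function name and returned strings
are illustrative, not GParted's actual detection code:

    // Map a blkid *_raid_member signature to a single "ataraid" content
    // type, keeping "linux_raid_member" as Linux Software RAID.
    #include <set>
    #include <string>

    static std::string classify_raid_member(const std::string& blkid_type)
    {
        static const std::set<std::string> ataraid_types = {
            "adaptec_raid_member", "ddf_raid_member", "hpt45x_raid_member",
            "hpt37x_raid_member",  "isw_raid_member", "jmicron_raid_member",
            "lsi_mega_raid_member", "nvidia_raid_member",
            "promise_fasttrack_raid_member", "silicon_medley_raid_member",
            "via_raid_member"
        };
        if (blkid_type == "linux_raid_member")
            return "linux-raid";
        if (ataraid_types.count(blkid_type))
            return "ataraid";
        return "";  // not a RAID member signature
    }
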
Andre Klapper 79165d6918 Fix an incorrect translation in the Spanish user help
See #80
2019-12-01 17:44:46 +01:00
Andre Klapper 5def6be7a2 Fix typo in Spanish translation of user help
Fixes #80
2019-12-01 13:55:45 +01:00
Daniel Mustieles eadc85dae1 Updated Spanish translation 2019-11-28 13:48:41 +01:00
Mike Fleetwood af60f91f7f Add missing includes into jfs.cc 2019-11-14 17:12:06 +00:00
Mike Fleetwood 4b8d4be789 Remove unallocated space comment from HACKING file (!50)
The HACKING file should be hints for making changes to the code base and
associated processes.  An overview of how GParted handled unallocated
space was not that.  Also, now that the size of a JFS is accurately
calculated, using JFS as an example of a file system with intrinsic
unallocated space is no longer valid.  Therefore remove it from the
HACKING file.  Instead add the original commit message as an extended
comment to method calc_significant_unallocated_sectors().

Closes !50 - Calculate JFS size accurately
2019-11-14 17:12:06 +00:00
Mike Fleetwood 2c0572e296 Calculate mounted JFS size accurately (!50)
With the same minimum sized 16 MiB JFS used in the previous commit, now
mounted, GParted once again reports 1.20 MiB of unallocated space.  This
is because the kernel JFS driver is also just reporting the size of the
Aggregate Disk Map (dmap) as the size of the file system [1].

Fix by reading the on disk JFS superblock to calculate the size of the
file system, but query the free space from the kernel using statvfs().
The free space of a mounted JFS needs to be queried from the kernel
because the on-disk dmap is not updated immediately, so it doesn't
reflect recently used or freed disk space.

For example, start with the 16 MiB JFS empty and mounted.

    # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
    [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
    # df -k /mnt/1
    Filesystem     1K-blocks  Used Available Use% Mounted on
    /dev/sdb1          15152   136     15016   1% /mnt/1

Write 10 MiB of data to it:

    # dd if=/dev/zero bs=1M count=10 of=/mnt/1/file_10M
    10+0 records in
    10+0 records out
    10485760 bytes (10 MB, 10 MiB) copied, 0.0415676 s, 252 MB/s

Query the file system free space from the kernel and by reading the on
disk dmap figure:

    # df -k /mnt/1
    Filesystem     1K-blocks  Used Available Use% Mounted on
    /dev/sdb1          15152 10376      4776  69% /mnt/1
    # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
    [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1

    # sync
    # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
    [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1

    # umount /mnt/1
    # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1 | fgrep dn_nfree
    [2] dn_nfree:           0x000000004aa   [10] dn_agwidth:        1

The kernel reports the updated usage straight away, but the on disk dmap
record doesn't get updated even by sync, only after unmounting.

This is the same fix as was previously done for EXT2/3/4 [2].

[1] Linux jfs_statfs() function
    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/jfs/super.c?h=v3.10#n142

[2] 3828019030
    Read file system size for mounted ext2/3/4 from superblock (#683255)

Closes !50 - Calculate JFS size accurately
2019-11-14 17:12:06 +00:00
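
A minimal sketch of the free space half of the fix: query a mounted
JFS's free space from the kernel with statvfs(), while the file system
size itself comes from the on-disk superblock calculation.  The function
name and parameters are assumptions for illustration:

    #include <sys/statvfs.h>
    #include <stdint.h>
    #include <string>

    // Return the kernel's view of free space for a mounted file system,
    // because the on-disk dmap lags behind recent allocations.
    static bool mounted_fs_free_bytes(const std::string& mount_point,
                                      uint64_t& free_bytes)
    {
        struct statvfs sfs;
        if (statvfs(mount_point.c_str(), &sfs) != 0)
            return false;
        free_bytes = (uint64_t)sfs.f_bfree * sfs.f_frsize;  // free blocks * fragment size
        return true;
    }
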
Mike Fleetwood e55d10b919 Calculate unmounted JFS size accurately (!50)
Create the smallest possible JFS (16 MiB) and GParted will report
1.2 MiB of unallocated space.  This is because the size of the Aggregate
Disk Map (dmap) was used as the size of the file system.  However after
reading the source code to mkfs.jfs, it separately accounts for the size
of the Log (Journal) and the FSCK Working Space.  The size of a JFS is
the sum of these 3 components added together.

Using the minimum 16 MiB JFS as an example:

    # jfs_debugfs /dev/sdb1
    jfs_debugfs version 1.1.15, 04-Mar-2011

    Aggregate Block Size: 4096

    > superblock
    [1] s_magic:            'JFS1'          [15] s_ait2.addr1:      0x00
    [2] s_version:          1               [16] s_ait2.addr2:      0x00000018
    [3] s_size:     0x0000000000007660           s_ait2.address:    24
    [4] s_bsize:            4096            [17] s_logdev:          0x00000000
    [5] s_l2bsize:          12              [18] s_logserial:       0x00000000
    [6] s_l2bfactor:        3               [19] s_logpxd.len:      256
    [7] s_pbsize:           512             [20] s_logpxd.addr1:    0x00
    [8] s_l2pbsize:         9               [21] s_logpxd.addr2:    0x00000f00
    [9] pad:                Not Displayed        s_logpxd.address:  3840
    [10] s_agsize:          0x00002000      [22] s_fsckpxd.len:     52
    [11] s_flag:            0x10200900      [23] s_fsckpxd.addr1:   0x00
                            JFS_LINUX       [24] s_fsckpxd.addr2:   0x00000ecc
            JFS_COMMIT      JFS_GROUPCOMMIT      s_fsckpxd.address: 3788
                            JFS_INLINELOG   [25] s_time.tv_sec:     0x5dbbdfa0
                                            [26] s_time.tv_nsec:    0x00000000
                                            [27] s_fpack:           'small_jfs'
    [12] s_state:           0x00000000
                 FM_CLEAN
    [13] s_compress:        0
    [14] s_ait2.len:        4

    display_super: [m]odify or e[x]it: x
    > dmap

    Block allocation map control page at block 16

    [1] dn_mapsize:         0x00000000ecc   [9] dn_agheigth:        0
    [2] dn_nfree:           0x00000000eaa   [10] dn_agwidth:        1
    [3] dn_l2nbperpage:     0               [11] dn_agstart:        341
    [4] dn_numag:           1               [12] dn_agl2size:       13
    [5] dn_maxlevel:        0               [13] dn_agfree:         type 'f'
    [6] dn_maxag:           0               [14] dn_agsize:         8192
    [7] dn_agpref:          0               [15] pad:               Not Displayed
    [8] dn_aglevel:         0
    display_dbmap: [m]odify, [f]ree count, [t]ree, e[x]it > x
    > quit

Values of interest:
    s_size        - Aggregate size in device (s_pbsize) blocks
    s_bsize       - Aggregate block (aka file system allocation) size in
                    bytes
    s_pbsize      - Physical (device) block size in bytes
    s_logpxd.len  - Log (Journal) size in Aggregate (s_bsize) blocks
    s_fsckpxd.len - FSCK Working Space in Aggregate (s_bsize) blocks
    dn_nfree      - Number of free (s_bsize) blocks in Aggregate

Calculation:
    file system size = s_size * s_pbsize
                     + s_logpxd.len * s_bsize
                     + s_fsckpxd.len * s_bsize
                     = 30304 * 512
                     + 256 * 4096
                     + 52 * 4096
                     =  16777216
                        (Exactly 16 MiB.  The size of the partition.)
    free space = dn_nfree * s_bsize
               = 3754 * 4096
               = 15376384

Rewrite JFS usage querying code to use this updated calculation.

[1] JFS Overview / How the Journaled File System cuts system restart
    times to the quick
    http://jfs.sourceforge.net/project/pub/jfs.pdf
[2] JFS Layout / How the Journaled File systems handles the on-disk
    layout
    http://jfs.sourceforge.net/project/pub/jfslayout.pdf
[3] mkfs.jfs source code
    http://jfs.sourceforge.net/project/pub/jfsutils-1.1.15.tar.gz
    mkfs/mkfs.c
    Selected lines from mkfs/mkfs.c
        create_aggregate(..., number_of_blocks, ..., logsize, ...)
            number_of_blocks -= fsck_wspace_length;
            aggr_superblock.s_size = number_of_blocks * (aggr_block_size / phys_block_size);
            aggr_superblock.s_bsize = aggr_block_size;
            aggr_superblock.s_pbsize = phys_block_size;
            PXDlength(&aggr_superblock.s_logpxd, logsize);
            PXDlength(&aggr_superblock.s_fsckpxd, fsck_wspace_length);
        main()
            number_of_bytes = bytes_on_device;
            number_of_blocks = number_of_bytes / agg_block_size;
            logsize = logsize_in_bytes / aggr_block_size;
            number_of_blocks -= logsize;
            create_aggregate(..., number_of_blocks, ..., logsize, ...);

Closes !50 - Calculate JFS size accurately
2019-11-14 17:12:06 +00:00
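
A sketch of the calculation above in code form; the struct mirrors the
jfs_debugfs field names but is purely illustrative, not the actual
rewritten jfs.cc:

    #include <stdint.h>

    // Values of interest read from the JFS superblock and dmap.
    struct JfsSuperValues
    {
        uint64_t s_size;        // Aggregate size in s_pbsize blocks
        uint32_t s_bsize;       // Aggregate (allocation) block size in bytes
        uint32_t s_pbsize;      // Physical (device) block size in bytes
        uint32_t s_logpxd_len;  // Log (Journal) size in s_bsize blocks
        uint32_t s_fsckpxd_len; // FSCK Working Space in s_bsize blocks
        uint64_t dn_nfree;      // Free s_bsize blocks in the Aggregate
    };

    static uint64_t jfs_size_bytes(const JfsSuperValues& sb)
    {
        return sb.s_size * sb.s_pbsize
             + (uint64_t)sb.s_logpxd_len * sb.s_bsize
             + (uint64_t)sb.s_fsckpxd_len * sb.s_bsize;
        // Example above: 30304*512 + 256*4096 + 52*4096 = 16777216 bytes.
    }

    static uint64_t jfs_free_bytes(const JfsSuperValues& sb)
    {
        return sb.dn_nfree * sb.s_bsize;  // 3754*4096 = 15376384 bytes
    }
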
Mike Fleetwood 7be6d0967a Update name of the NILFS2 specific package
The upstream NILFS project calls the package nilfs-utils [1][2].  Arch Linux
/ CentOS / Fedora / OpenSUSE use the upstream name.  However Debian /
Ubuntu name it nilfs-tools [3] instead.

Document the needed software as:

    nilfs-utils / nilfs-tools

The upstream name is listed first, separated by a slash from the
alternative names that distributions use.

[1] NILFS Download page
    https://nilfs.sourceforge.io/en/download.html
[2] NILFS Public Git Repositories
    https://nilfs.sourceforge.io/en/git_repos.html
[3] Debian package: nilfs-tools
    https://packages.debian.org/sid/nilfs-tools
2019-11-09 17:18:34 +00:00
Mike Fleetwood 530e84bace Add missing libuuid-devel build dependency for Fedora into README 2019-11-09 17:18:34 +00:00
Mike Fleetwood 7a7d0a2119 Avoid crash reading JFS usage on Fedora 30 (!49)(#794947)
Running JFS read usage test on Fedora 30 fails like this:

    $ ./test_SupportedFileSystems --gtest_filter='*ReadUsage/jfs'
...
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs
    unknown file: Failure
    C++ exception with description "std::bad_alloc" thrown in the test body.
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs, where GetParam() = 17 (41833 ms)

However the same test passes on Fedora 29, Fedora 31 Beta, CentOS 7,
Debian 10 and Ubuntu 18.04 LTS.

Also running GParted on Fedora 30 crashes just the same when reading JFS
usage:

    # gparted
    GParted 1.0.0
    configuration --enable-libparted-dmraid --enable-online-resize
    libparted 3.2
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    /usr/bin/gparted: line 202: 19218 Aborted                 (core dumped) $BASE_CMD

Running jfs_debugfs to query the file system usage the same way GParted
does produces an infinite amount of repeating output:

    # echo dm | jfs_debugfs /dev/sdb1

So jfs_debugfs gets stuck in an infinite loop inside the dmap subcommand
when it encounters EOF.  GParted and the read JFS usage test read this
output until memory is exhausted and crash.  This is exactly what was
happening in closed bug 794947.  jfsutils from Fedora 29 was even
installed on Fedora 30 and vice versa; jfs_debugfs still produced an infinite
amount of output on Fedora 30 and worked correctly on Fedora 29.  So
it's not the build of jfsutils, but something in the OS that is making
the difference!

Anyway fix by providing the instruction to exit from the dmap
subcommand, and quit from jfs_debugfs itself, like this:

    # echo -e 'dmap\nx\nquit' | jfs_debugfs /dev/sdb1

Bug 794947 - gparted hangs when sees JFS partition on discovering
             partitions
Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
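
A rough sketch of driving jfs_debugfs with the explicit exit/quit
instructions, using popen() for illustration rather than GParted's
Utils::execute_command(); the function name and fixed pipeline are
assumptions:

    #include <stdio.h>
    #include <string>

    // Feed jfs_debugfs the dmap subcommand followed by exit/quit so it
    // cannot loop forever on EOF, and capture its finite output.
    static std::string jfs_dmap_output(const std::string& device)
    {
        std::string cmd = "echo -e 'dmap\\nx\\nquit' | jfs_debugfs " + device;
        std::string output;
        FILE* pipe = popen(cmd.c_str(), "r");
        if (pipe == NULL)
            return output;
        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
            output.append(buf, n);
        pclose(pipe);
        return output;
    }
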
Mike Fleetwood c15d0cd6aa Accept FS usage figures within significant unallocated threshold (!49)
So far the file system usage figures, read via the file system
interface classes using file system specific tools, have been checked to
the exact sector for:
     0 <= used <= size
     0 <= unused <= size
     unallocated = 0
     used + unused = size

However for JFS and NTFS this fails like this:

    # ./test_SupportedFileSystems --gtest_filter='*ReadUsage/*' | fgrep ' ms'
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs (335 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/exfat (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/ext2 (38 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/ext3 (131 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/ext4 (32 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/f2fs (47 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/fat16 (19 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/fat32 (48 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/hfs (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/hfsplus (0 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs, where GetParam() = 17 (73 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/linuxswap (20 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/luks (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/lvm2pv (410 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/minix (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/nilfs2 (226 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/ntfs, where GetParam() = 23 (56 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/reiser4 (49 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/reiserfs (139 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/udf (34 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/xfs (67 ms)
    [----------] 21 tests from My/SupportedFileSystemsTest (1726 ms total)
    [==========] 21 tests from 1 test case ran. (1726 ms total)

    # ./test_SupportedFileSystems --gtest_filter='*ReadUsage/jfs:*ReadUsage/ntfs'
    Running main() from test_SupportedFileSystems.cc
    Note: Google Test filter = *ReadUsage/jfs:*ReadUsage/ntfs
    [==========] Running 2 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 2 tests from My/SupportedFileSystemsTest
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs
    test_SupportedFileSystems.cc:465: Failure
    Expected equality of these values:
      m_partition.sectors_unallocated
        Which is: 2472
      0
    test_SupportedFileSystems.cc:517: Failure
    Expected equality of these values:
      m_partition.sectors_used + m_partition.sectors_unused
        Which is: 521816
      m_partition.get_sector_length()
        Which is: 524288
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs, where GetParam() = 17 (36 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUsage/ntfs
    test_SupportedFileSystems.cc:465: Failure
    Expected equality of these values:
      m_partition.sectors_unallocated
        Which is: 8
      0
    test_SupportedFileSystems.cc:517: Failure
    Expected equality of these values:
      m_partition.sectors_used + m_partition.sectors_unused
        Which is: 524280
      m_partition.get_sector_length()
        Which is: 524288
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/ntfs, where GetParam() = 23 (35 ms)
    [----------] 2 tests from My/SupportedFileSystemsTest (71 ms total)

    [----------] Global test environment tear-down
    [==========] 2 tests from 1 test case ran. (72 ms total)
    [  PASSED  ] 0 tests.
    [  FAILED  ] 2 tests, listed below:
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs, where GetParam() = 17
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/ntfs, where GetParam() = 23

     2 FAILED TESTS

So JFS is reporting 2472 unallocated sectors in a size of 524288 sectors
and NTFS is reporting 8 unallocated sectors in the same size.  This
exact issue was already solved for GParted, so that it doesn't show a
small amount of unallocated space, by commits [1][2] from Bug 499202 [3].

Fix it the same way: use the accessors to the file system usage figures,
which don't show unallocated space when it is below the significant
threshold.

[1] b5c80f18a9
    Enhance calculation of significant unallocated space (#499202)

[2] 7ebedc4bb3
    Don't show intrinsic unallocated space (#499202)

[3] Bug 499202 - gparted does not see the difference if partition size
                 differs from filesystem size
    https://bugzilla.gnome.org/show_bug.cgi?id=499202

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
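
A sketch of the kind of threshold check involved; the 5% figure is an
assumed placeholder for illustration, not the actual value used by
calc_significant_unallocated_sectors():

    #include <stdint.h>

    // Only report unallocated space when it exceeds a significant fraction
    // of the partition size; below that it is treated as intrinsic and
    // hidden from the usage figures.
    static int64_t significant_unallocated_sectors(int64_t ptn_sectors,
                                                   int64_t unallocated_sectors)
    {
        const double threshold_fraction = 0.05;  // assumed placeholder value
        int64_t threshold = (int64_t)(ptn_sectors * threshold_fraction);
        return (unallocated_sectors > threshold) ? unallocated_sectors : 0;
    }
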
Mike Fleetwood d6e8236860 Skip Check MINIX file system interface test (!49)
Checking a MINIX V3 file system fails like this:

    $ ./test_SupportedFileSystems --gtest_filter='*Check/minix'
...
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndCheck/minix
    test_SupportedFileSystems.cc:554: Failure
    Value of: m_fs_object->check_repair(m_partition, m_operation_detail)
      Actual: false
    Expected: true
    Operation details:
    mkfs.minix -3 '/home/centos/programming/c/gparted/tests/test_SupportedFileSystems.img'    00:00:00  (SUCCESS)
    87392 inodes
    262144 blocks
    Firstdatazone=5507 (5507)
    Zonesize=1024
    Maxsize=2147483647

    fsck.minix '/home/centos/programming/c/gparted/tests/test_SupportedFileSystems.img'    00:00:00  (ERROR)
    fsck.minix from util-linux 2.23.2
    bad magic number in super-block
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndCheck/minix, where GetParam() = 21 (182 ms)

fsck.minix didn't support checking MINIX V3 file systems until this
commit, first included in util-linux 2.27, released 2015-09-07.

    https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/commit/?id=86a9f3dad58addb50eca9daa9d233827a005dad7
    fsck.minix: add minix v3 support

CentOS 7 only includes util-linux 2.23.2 so it is affected by this;
however, Ubuntu 18.04 LTS includes util-linux 2.31.1 so it is not affected.

Just always skip this test for now.  Plan to re-enable later when the
oldest supported distributions and GitLab CI images include the needed
util-linux release.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 19ed25d774 Write new UUID to JFS before testing reading UUID (!49)
Testing reading the UUID from a newly created JFS was failing like this:

    $ ./test_SupportedFileSystems --gtest_filter='*ReadUUID/jfs'
...
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUUID/jfs
    test_SupportedFileSystems.cc:552: Failure
    Expected: (m_partition.uuid.size()) >= (9U), actual: 0 vs 9
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/jfs, where GetParam() = 17 (57 ms)

Mkfs.jfs creates a file system as version 1.  It does have a UUID and
blkid can report it, but jfs_tune doesn't report it.

    $ truncate -s 256M test_jfs.img
    $ mkfs.jfs -q test_jfs.img
    mkfs.jfs version 1.1.15, 04-Mar-2011

    Format completed successfully.

    262144 kilobytes total disk space.
    $ blkid test_jfs.img
    test_jfs.img: UUID="6b0bb46a-a240-47b4-89ab-1fe759aa572d" TYPE="jfs"

    $ jfs_tune -l test_jfs.img | egrep 'version|UUID'
    jfs_tune version 1.1.15, 04-Mar-2011
    JFS version:		1

    $ hexdump -C test_jfs.img
    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    00008000  4a 46 53 31 01 00 00 00  58 f6 07 00 00 00 00 00  |JFS1....X.......|
Version >---------------- ^^ ^^ ^^ ^^
...
    00008080  00 00 00 00 00 00 00 00  6b 0b b4 6a a2 40 47 b4  |........k..j.@G.|
    00008090  89 ab 1f e7 59 aa 57 2d  00 00 00 00 00 00 00 00  |....Y.W-........|
UUID >-------------------------------- ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^
              ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^

However writing a new UUID to the JFS also updates the version to 2 and
allows jfs_tune to report the UUID.

    $ jfs_tune -U random test_jfs.img
    jfs_tune version 1.1.15, 04-Mar-2011
    UUID updated successfully.

    $ blkid test_jfs.img
    test_jfs.img: UUID="6374ec58-3568-4ffb-bea9-ff76bf5c192f" TYPE="jfs"

    $ jfs_tune -l test_jfs.img | egrep 'version|UUID'
    jfs_tune version 1.1.15, 04-Mar-2011
    JFS version:            2
    File system UUID:       6374ec58-3568-4ffb-bea9-ff76bf5c192f
    External log UUID:      00000000-0000-0000-0000-000000000000

    $ hexdump -C test_jfs.img
    00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    00008000  4a 46 53 31 02 00 00 00  58 f6 07 00 00 00 00 00  |JFS1....X.......|
Version >---------------- ^^ ^^ ^^ ^^
...
    00008080  00 00 00 00 00 00 00 00  63 74 ec 58 35 68 4f fb  |........ct.X5hO.|
    00008090  be a9 ff 76 bf 5c 19 2f  00 00 00 00 00 00 00 00  |...v.\./........|
New UUID >---------------------------- ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^
              ^^ ^^ ^^ ^^ ^^ ^^ ^^ ^^

Therefore change the CreateAndReadUUID test for JFS to also write a new
UUID so that it also updates the version to 2, thus allowing jfs_tune to
report the UUID and the test to pass.

Note that GParted doesn't encounter this problem because it uses blkid
by default to report the UUID and only falls back to the file system
interface method, which calls jfs_tune, when blkid is not available.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood c60f2e43a3 Accept reading shorter UUIDs from FAT16/32 file systems (!49)
The tests were failing like this:

    $ ./test_SupportedFileSystems --gtest_filter='*CreateAndReadUUID/fat16'
....
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUUID/fat16
    test_SupportedFileSystems.cc:552: Failure
    Expected equality of these values:
      m_partition.uuid.size()
        Which is: 9
      36U
        Which is: 36
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/fat16, where GetParam() = 13 (45 ms)

This is because the test was expecting a full 36 character UUID as used
by Linux file systems.  Also accept shorter 9 character "UUID"s as used
by FAT16/32 file systems.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 8d4f9eac99 Create loop devices for NILFS2 read and write FS interface tests (!49)
For NILFS2 the read and write tests which use nilfs-tune all fail using
an image file, even when run as root; however, the other tests succeed.
Selected output from the test program:

    # ./test_SupportedFileSystems --gtest_filter='*/nilfs2' | fgrep ' ms'
    [       OK ] My/SupportedFileSystemsTest.Create/nilfs2 (22 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/nilfs2, where GetParam() = 22 (31 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadLabel/nilfs2, where GetParam() = 22 (30 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/nilfs2, where GetParam() = 22 (30 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndWriteLabel/nilfs2, where GetParam() = 22 (37 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndWriteUUID/nilfs2, where GetParam() = 22 (39 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndCheck/nilfs2 (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndRemove/nilfs2 (0 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndGrow/nilfs2 (386 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndShrink/nilfs2 (345 ms)
    [----------] 10 tests from My/SupportedFileSystemsTest (920 ms total)
    [==========] 10 tests from 1 test case ran. (920 ms total)

nilfs-tune fails like this when given an image file:
    # truncate -s 256M test.img
    # mkfs.nilfs2 test.img
    mkfs.nilfs2 (nilfs-utils 2.2.7)
    Start writing file system initial data to the device
           Blocksize:4096  Device:test.img  Device Size:268435456
    File system initialization succeeded !!
    # nilfs-tune -l test.img
    nilfs-tune 2.2.7
    nilfs-tune: test.img: cannot open NILFS
    # echo $?
    1

However using nilfs-tune via a loop device works:
    # losetup --show --find test.img
    /dev/loop0
    # nilfs-tune -l /dev/loop0
    nilfs-tune 2.2.7
    Filesystem volume name:   (none)
    Filesystem UUID:          fc49912c-4d39-4672-8610-1e1185d0db5f
    Filesystem magic number:  0x3434
    Filesystem revision #:    2.0
    Filesystem features:      (none)
    Filesystem state:         valid
    Filesystem OS type:       Linux
    Block size:               4096
...

So nilfs-tune only works with block devices.  Fix by making these tests
require a loop device and therefore making them root only.  Now these
tests are skipped as a non-root user and pass as root.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 4fcd739cee Create loop devices for online resized file system tests (!49)
File systems BTRFS, JFS, NILFS2 and XFS can only be resized while
mounted, but only root can mount file systems.  Therefore these tests
fail.  Also BTRFS resize uses 'btrfs filesystem show' to discover the
devid, which also fails as described in the previous commit message.

Note that root can mount a file system image directly, but that it
implicitly creates a loop device:
    # truncate -s 256M test.img
    # mkfs.xfs test.img
    # mount test.img /mnt/1

    # fgrep /mnt/1 /proc/mounts
    /dev/loop0 /mnt/1 xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
    # losetup -a
    /dev/loop0: [64768]:35826659 (/root/test.img)

Therefore make these tests root only and require an explicit loop
device.  Now these file system resize tests succeed as root and are
skipped as non-root.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 07ad43a107 Create loop devices for BTRFS read file system interface tests (!49)
For BTRFS the read (and resize) tests fail when using an image file;
however, the create, write and check tests pass.  Selected output from
the test program:

    $ ./test_SupportedFileSystems --gtest_filter='*/btrfs' | fgrep ' ms'
    [       OK ] My/SupportedFileSystemsTest.Create/btrfs (43 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs, where GetParam() = 7 (95 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadLabel/btrfs, where GetParam() = 7 (158 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/btrfs, where GetParam() = 7 (164 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteLabel/btrfs (164 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteUUID/btrfs (132 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndCheck/btrfs (129 ms)
    [       OK ] My/SupportedFileSystemsTest.CreateAndRemove/btrfs (0 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndGrow/btrfs, where GetParam() = 7 (155 ms)
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndShrink/btrfs, where GetParam() = 7 (97 ms)
    [----------] 10 tests from My/SupportedFileSystemsTest (1137 ms total)
    [==========] 10 tests from 1 test case ran. (1137 ms total)

The read operations fail because 'btrfs filesystem show' doesn't work on
an image file:
    $ truncate -s 256M test.img
    $ mkfs.btrfs test.img
    btrfs-progs v4.9.1
    See http://btrfs.wiki.kernel.org for more information.

    Label:              (null)
    UUID:               de1624ae-39bb-4796-aee4-7ee1fa24c06a
    Nodesize:           16384
    Sector size:        4096
    Filesystem size:    256.00MiB
    Block group profiles:
      Data:             single
      Metadata:         DUP
      System:           DUP
    SSD detected:       no
    Incompat features:  extref, skinny-metadata
    Number of devices:  1
    Devices:
        ID       SIZE  PATH
         1  256.00MiB  test.img
    $ btrfs filesystem show test.img
    ERROR: not a valid btrfs filesystem: /home/centos/programming/c/gparted/tests/test.img
    $ echo $?
    1

Querying a BTRFS image file also fails as root:
    $ su
    Password:
    # btrfs filesystem show test.img
    ERROR: not a valid btrfs filesystem: /home/centos/programming/c/gparted/tests/test.img
    # echo $?
    1

However querying the BTRFS via a loop device succeeds:
    # losetup --show --find test.img
    /dev/loop0
    # btrfs filesystem show /dev/loop0
    Label: none  uuid: de1624ae-39bb-4796-aee4-7ee1fa24c06a
            Total devices 1 FS bytes used 112.00KiB
            devid    1 size 256.00MiB used 88.00MiB path /root/test.img

There must be some kernel level BTRFS file system device discovery
happening because now after creating a loop device for the image file,
the BTRFS can be shown via the image file directly:
    # btrfs filesystem show test.img
    Label: none  uuid: de1624ae-39bb-4796-aee4-7ee1fa24c06a
            Total devices 1 FS bytes used 112.00KiB
            devid    1 size 256.00MiB used 88.00MiB path /root/test.img

Anyway, for the BTRFS reading tests, make them require a loop device and
therefore root only.  Now these tests are skipped as a non-root user and
pass as root.

Addressing BTRFS resizing test failures will be handled in a following
commit.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 268c34e398 Create loop devices for LVM2 PV file system interface tests (!49)
Creating an LVM2 PV as a non-root user on an image file fails like this:
    $ truncate -s 256M test.img
    $ lvm pvcreate `pwd`/test.img
      WARNING: Running as a non-root user. Functionality may be unavailable.
      /run/lvm/lvmetad.socket: access failed: Permission denied
      WARNING: Failed to connect to lvmetad. Falling back to device scanning.
      /run/lock/lvm/P_orphans:aux: open failed: Permission denied
      Can't get lock for orphan PVs.
    $ echo $?
    5

Trying the same as root also fails:
    # truncate -s 256M test.img
    # lvm pvcreate `pwd`/test.img
      Device /root/test.img not found.
    # echo $?
    5

LVM seems strongly predicated on only using block devices [1].  LVM can
use loop devices though, but loop devices can only be created by root.
    # truncate -s 256M test.img
    # losetup -f --show `pwd`/test.img
    /dev/loop0
    # lvm pvcreate /dev/loop0
      Physical volume "/dev/loop0" successfully created.
    # echo $?
    0

Make the LVM2 PV tests require user root and use a loop device over the
test image.  Tests for the other file system types still directly use
the image file.  This makes the LVM2 PV tests pass when run as root, or
be successfully skipped when run as non-root.

[1] lvmconfig --typeconfig default --withcomments --withspace | less
    From the "devices" section of the commented default configuration,
    LVM uses block devices found below /dev, devices provided by udev
    and/or found in sysfs.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood f2165fd44d Prevent file system tests core dumping in GitLab CI Ubuntu image (!49)
With the previous commit, execution of test_SupportedFileSystems is
failing in the GitLab CI Ubuntu image.  Fragment from file
tests/test-suite.log:

    FAIL: test_SupportedFileSystems
    ===============================

    Terminate called after throwing an instance of 'Glib::ConvertError'
    Aborted (core dumped)
    FAIL test_SupportedFileSystems (exit status: 134)

This core dump can be re-created locally by (1) removing modprobe from
the PATH, and (2) executing the test program in the C locale.

    $ LC_ALL=C ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    terminate called after throwing an instance of 'Glib::ConvertError'
    Aborted
    $ echo $?
    134

Backtrace from gdb:
    (gdb) backtrace
    #0  0x00007f4f93002337 in __GI_raise (sig=sig@entry=6)
        at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
    #1  0x00007f4f93003a28 in __GI_abort () at abort.c:90
    #2  0x00007f4f93b2e7d5 in __gnu_cxx::__verbose_terminate_handler() ()
        at ../../../../libstdc++-v3/libsupc++/vterminate.cc:95
    #3  0x00007f4f93b2c746 in __cxxabiv1::__terminate(void (*)()) (handler=<optimized out>)
        at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:38
    #4  0x00007f4f93b2c773 in std::terminate() ()
        at ../../../../libstdc++-v3/libsupc++/eh_terminate.cc:48
    #5  0x00007f4f93b2c993 in __cxxabiv1::__cxa_throw(void*, std::type_info*, void (*)(void*))
        (obj=0x260d4b0, tinfo=0x7f4f966c1930 <typeinfo for Glib::ConvertError>, dest=0x7f4f96486fa0 <Glib::ConvertError::~ConvertError()>)
        at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:87
    #6  0x00007f4f96486e27 in Glib::ConvertError::throw_func(_GError*) (gobject=0x260bf90) at convert.cc:329
    #7  0x00007f4f9649b5d7 in Glib::Error::throw_exception(_GError*) (gobject=0x260bf90) at error.cc:175
    #8  0x00007f4f964a7155 in Glib::operator<<(std::ostream&, Glib::ustring const&)
        (os=warning: RTTI symbol not found for class 'std::ostream' ..., utf8_string=...) at ustring.cc:1430
    #9  0x000000000044d66f in GParted::Utils::execute_command(Glib::ustring const&, char const*, Glib::ustring&, Glib::ustring&, bool)
        (command=..., input=input@entry=0x0, output=..., error=..., use_C_locale=use_C_locale@entry=true)
        at ../src/Utils.cc:688
    #10 0x000000000044dae9 in GParted::Utils::kernel_supports_fs(Glib::ustring const&)
        (use_C_locale=true, error=..., output=..., command=...)
        at ../src/Utils.cc:659
    #11 0x000000000044dae9 in GParted::Utils::kernel_supports_fs(Glib::ustring const&) (fs=...)
        at ../src/Utils.cc:480
    #12 0x0000000000460008 in GParted::jfs::get_filesystem_support() (this=0x25e8e60)
        at ../src/jfs.cc:59
    #13 0x00000000004464f9 in GParted::SupportedFileSystems::find_supported_filesystems() (this=0x25e8690)
        at ../src/SupportedFileSystems.cc:120
    #14 0x0000000000412360 in GParted::SupportedFileSystemsTest::setup_supported_filesystems() ()
        at test_SupportedFileSystems.cc:278
    #15 0x00000000004151b0 in GParted::SupportedFileSystemsTest::get_supported_fstypes() ()
        at test_SupportedFileSystems.cc:256
    #16 0x00000000004152c0 in GParted::gtest_MySupportedFileSystemsTest_EvalGenerator_() ()
        at test_SupportedFileSystems.cc:495
    #17 0x000000000041c7d6 in testing::internal::ParameterizedTestCaseInfo<GParted::SupportedFileSystemsTest>::RegisterTests()
        (this=0x2528ac0) at ../lib/gtest/include/gtest/internal/gtest-param-util.h:549
    #18 0x0000000000479fb5 in testing::internal::UnitTestImpl::RegisterParameterizedTests() (this=0x25288d0)
        at ./include/gtest/internal/gtest-param-util.h:709
    #19 0x0000000000479fb5 in testing::internal::UnitTestImpl::RegisterParameterizedTests()
        (this=this@entry=0x2528800) at ./src/gtest.cc:2658
    #20 0x000000000048a001 in testing::internal::UnitTestImpl::PostFlagParsingInit() (this=0x2528800)
        at ./src/gtest.cc:4980
    #21 0x000000000049e399 in testing::internal::InitGoogleTestImpl<char>(int*, char**)
        (argc=argc@entry=0x7ffe9d208a3c, argv=argv@entry=0x7ffe9d208b38) at ./src/gtest.cc:5934
    #22 0x000000000048d285 in testing::InitGoogleTest(int*, char**)
        (argc=argc@entry=0x7ffe9d208a3c, argv=argv@entry=0x7ffe9d208b38) at ./src/gtest.cc:5952
    #23 0x0000000000410404 in main(int, char**) (argc=1, argv=0x7ffe9d208b38)
        at test_SupportedFileSystems.cc:557

The test program runs when executed in my locale and produces these
messages:

    $ ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    Failed to execute child process “modprobe” (No such file or directory)
    Failed to execute child process “modprobe” (No such file or directory)
    [==========] Running 210 tests from 1 test case.
...

So the test program is aborting when trying to print the "Failed to
execute child process" message, but only in the C locale.

This doesn't affect the CentOS GitLab CI image because that installs the
kmod package with modprobe by default; however, the Ubuntu image doesn't
have the kmod package.

Fix this by explicitly installing the kmod package into both the CentOS
and Ubuntu GitLab CI images.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 8f4edb0693 Extend tests to all fully supported file systems (!49)
Extend testing to all fully supported file systems, those with an
implemented FileSystem derived class.

Note that in main() GParted threading now needs to be initialised before
InitGoogleTest() because it calls INSTANTIATE_TEST_CASE_P() which in
turn calls get_supported_fstypes() which eventually constructs all the
individual file system interface objects and discovers available
support, some of which use execute_command().  Example call chain:
    InitGoogleTest()
      INSTANTIATE_TEST_CASE_P()
        get_supported_fstypes()
          setup_supported_filesystems()
            {SupportedFileSystems}->find_supported_filesystems()
              {btrfs}->get_filesystem_support()
                Utils::execute_command()

In the CentOS 7 GitLab CI image the EPEL (Extra Packages for Enterprise
Linux) repository is added to provide f2fs-tools and ntfsprogs.

23 of 210 tests fail on CentOS 7 and 22 on Ubuntu 18.04 LTS.  The
following commits will resolve these test failures.

    $ ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    [==========] Running 210 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 210 tests from My/SupportedFileSystemsTest
...
    [----------] 210 tests from My/SupportedFileSystemsTest (11066 ms total)

    [----------] Global test environment tear-down
    [==========] 210 tests from 1 test case ran. (11067 ms total)
    [  PASSED  ] 187 tests.
    [  FAILED  ] 23 tests, listed below:
    [  FAILED  ] My/SupportedFileSystemsTest.Create/lvm2pv, where GetParam() = 20
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/btrfs, where GetParam() = 7
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/jfs, where GetParam() = 17
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/lvm2pv, where GetParam() = 20
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/nilfs2, where GetParam() = 22
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUsage/ntfs, where GetParam() = 23
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadLabel/btrfs, where GetParam() = 7
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadLabel/nilfs2, where GetParam() = 22
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/btrfs, where GetParam() = 7
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/fat16, where GetParam() = 13
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/fat32, where GetParam() = 14
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/jfs, where GetParam() = 17
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndReadUUID/nilfs2, where GetParam() = 22
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndWriteLabel/nilfs2, where GetParam() = 22
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndWriteUUID/nilfs2, where GetParam() = 22
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndCheck/lvm2pv, where GetParam() = 20
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndCheck/minix, where GetParam() = 21
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndRemove/lvm2pv, where GetParam() = 20
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndGrow/btrfs, where GetParam() = 7
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndGrow/lvm2pv, where GetParam() = 20
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndGrow/xfs, where GetParam() = 27
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndShrink/btrfs, where GetParam() = 7
    [  FAILED  ] My/SupportedFileSystemsTest.CreateAndShrink/lvm2pv, where GetParam() = 20

    23 FAILED TESTS

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
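
A sketch of the ordering constraint only: threading/Glib initialisation
must come before InitGoogleTest() because test registration already
exercises the file system interface classes.  Glib::init() here is an
assumption standing in for the test program's actual set-up:

    #include <gtest/gtest.h>
    #include <glibmm.h>

    int main(int argc, char** argv)
    {
        Glib::init();  // assumed stand-in: initialise Glib before any test registration
        testing::InitGoogleTest(&argc, argv);  // triggers parameterised test registration
        return RUN_ALL_TESTS();
    }
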
Mike Fleetwood 7b92e9343b Print file system types in parameterised test names (!49)
Until now the parameterised test values have been printed as part of the
test names as just 0, 1, etc., like this:

    $ ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    [==========] Running 20 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 20 tests from My/SupportedFileSystemsTest
    [ RUN      ] My/SupportedFileSystemsTest.Create/0
    [       OK ] My/SupportedFileSystemsTest.Create/0 (48 ms)
    [ RUN      ] My/SupportedFileSystemsTest.Create/1
    [       OK ] My/SupportedFileSystemsTest.Create/1 (11 ms)

Provide the file system types as the names for the parameterised test
values [1].  Now the test names are printed like this:

    $ ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    [==========] Running 20 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 20 tests from My/SupportedFileSystemsTest
    [ RUN      ] My/SupportedFileSystemsTest.Create/ext2
    [       OK ] My/SupportedFileSystemsTest.Create/ext2 (51 ms)
    [ RUN      ] My/SupportedFileSystemsTest.Create/linuxswap
    [       OK ] My/SupportedFileSystemsTest.Create/linuxswap (11 ms)

Also use these Google Test friendly names, containing only ASCII
alphanumeric characters, everywhere the file system type needs to be
reported in this test program.

[1] Specifying Names for Value-Parameterized Test Parameters
    https://github.com/google/googletest/blob/v1.8.x/googletest/docs/advanced.md#specifying-names-for-value-parameterized-test-parameters
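
A minimal sketch of this naming mechanism, assuming the Google Test
v1.8 API; the FSType values and the param_fsname() function below are
illustrative stand-ins, not the real test code.  The optional fourth
argument to INSTANTIATE_TEST_CASE_P() supplies the per-parameter name:

    // Link with -lgtest -lgtest_main -lpthread (gtest_main supplies main()).
    #include <gtest/gtest.h>
    #include <string>

    // Stand-in for GParted's FSType enumeration.
    enum FSType { FS_EXT2, FS_LINUX_SWAP };

    class SupportedFileSystemsTest : public ::testing::TestWithParam<FSType> {};

    TEST_P(SupportedFileSystemsTest, Create)
    {
        SUCCEED();  // a real test would create the file system for GetParam()
    }

    // Name generator: map each parameter to an ASCII alphanumeric only name,
    // so tests print as .../ext2 and .../linuxswap instead of .../0 and .../1.
    static std::string param_fsname(const ::testing::TestParamInfo<FSType>& info)
    {
        switch (info.param)
        {
            case FS_EXT2:       return "ext2";
            case FS_LINUX_SWAP: return "linuxswap";
            default:            return "unknown";
        }
    }

    INSTANTIATE_TEST_CASE_P(My,
                            SupportedFileSystemsTest,
                            ::testing::Values(FS_EXT2, FS_LINUX_SWAP),
                            param_fsname);

Google Test requires these generated names to be unique and to contain
only ASCII alphanumeric characters, which is why "linuxswap" is used
rather than "linux-swap".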

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 0a23b631c3 Add testing of linux-swap using Value-Parameterised Google Tests (!49)
Use Google Test Value-Parameterised tests to run every test for both
ext2 and linux-swap.
    https://github.com/google/googletest/blob/v1.8.x/googletest/docs/advanced.md#value-parameterized-tests
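
A minimal sketch of this pattern, assuming the Google Test v1.8 API;
the FSType values and the fixture contents below are illustrative
stand-ins rather than the real test code:

    // Link with -lgtest -lgtest_main -lpthread (gtest_main supplies main()).
    #include <gtest/gtest.h>

    // Stand-in for GParted's FSType enumeration.
    enum FSType { FS_EXT2, FS_LINUX_SWAP };

    // The fixture's parameter selects which file system a test run exercises.
    class SupportedFileSystemsTest : public ::testing::TestWithParam<FSType>
    {
    protected:
        virtual void SetUp() { m_fstype = GetParam(); }
        FSType m_fstype;
    };

    // Each TEST_P body is written once but is run for every parameter value.
    TEST_P(SupportedFileSystemsTest, Create)
    {
        // A real test would create an image file and mkfs it as m_fstype.
        EXPECT_TRUE(m_fstype == FS_EXT2 || m_fstype == FS_LINUX_SWAP);
    }

    // Instantiate every TEST_P test for both ext2 and linux-swap, producing
    // the .../0 and .../1 test names seen in the output below.
    INSTANTIATE_TEST_CASE_P(My,
                            SupportedFileSystemsTest,
                            ::testing::Values(FS_EXT2, FS_LINUX_SWAP));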

Running the test now looks like this:

    $ ./test_SupportedFileSystems
    Running main() from test_SupportedFileSystems.cc
    [==========] Running 20 tests from 1 test case.
    [----------] Global test environment set-up.
    [----------] 20 tests from My/SupportedFileSystemsTest
    [ RUN      ] My/SupportedFileSystemsTest.Create/0
    [       OK ] My/SupportedFileSystemsTest.Create/0 (97 ms)
    [ RUN      ] My/SupportedFileSystemsTest.Create/1
    [       OK ] My/SupportedFileSystemsTest.Create/1 (15 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUsage/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/0 (106 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUsage/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUsage/1 (14 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadLabel/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadLabel/0 (95 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadLabel/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadLabel/1 (23 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUUID/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUUID/0 (99 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndReadUUID/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndReadUUID/1 (22 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndWriteLabel/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteLabel/0 (102 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndWriteLabel/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteLabel/1 (22 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndWriteUUID/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteUUID/0 (101 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndWriteUUID/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndWriteUUID/1 (21 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndCheck/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndCheck/0 (153 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndCheck/1
    test_SupportedFileSystems.cc:424: Skip test.  check not supported or support not found
    [       OK ] My/SupportedFileSystemsTest.CreateAndCheck/1 (0 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndRemove/0
    test_SupportedFileSystems.cc:437: Skip test.  remove not supported or support not found
    [       OK ] My/SupportedFileSystemsTest.CreateAndRemove/0 (0 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndRemove/1
    test_SupportedFileSystems.cc:437: Skip test.  remove not supported or support not found
    [       OK ] My/SupportedFileSystemsTest.CreateAndRemove/1 (0 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndGrow/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndGrow/0 (266 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndGrow/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndGrow/1 (32 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndShrink/0
    [       OK ] My/SupportedFileSystemsTest.CreateAndShrink/0 (111 ms)
    [ RUN      ] My/SupportedFileSystemsTest.CreateAndShrink/1
    [       OK ] My/SupportedFileSystemsTest.CreateAndShrink/1 (28 ms)
    [----------] 20 tests from My/SupportedFileSystemsTest (1311 ms total)

    [----------] Global test environment tear-down
    [==========] 20 tests from 1 test case ran. (1342 ms total)
    [  PASSED  ] 20 tests.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 7c265d51c3 Switch to testing ext2 interface via SupportedFilesystems class (!49)
Replace direct use of the ext2 derived FileSystem interface class with
use of the SupportedFileSystems class.  This is a step in getting ready
to test all the GParted file system interface classes in one go.

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 6d121ebb5d Split FILESYSTEMS and FILESYSTEM_MAP into separate module (!49)
GParted_Core::FILESYSTEMS and ::FILESYSTEM_MAP and the methods that
query and manipulate them are self-contained.  Therefore move them into
a separate SupportedFileSystems module.

Also, having a single class maintain all FileSystem interface objects
will make testing all the file system types much easier, as there will
be no need to duplicate this functionality in the test.
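
A rough outline of such a module; every member and method name below is
an assumption chosen for illustration and is not copied from the real
GParted sources:

    #include <map>
    #include <vector>

    // Stand-ins for existing GParted types.
    enum FSType { FS_UNKNOWN, FS_EXT2, FS_LINUX_SWAP };
    class FileSystem { public: virtual ~FileSystem() {} };
    struct FS { FSType fstype; bool create; bool grow; bool shrink; };

    class SupportedFileSystems
    {
    public:
        ~SupportedFileSystems()
        {
            // Owns the per-type interface objects, so delete them here.
            std::map<FSType, FileSystem*>::iterator it;
            for (it = m_fs_objects.begin(); it != m_fs_objects.end(); ++it)
                delete it->second;
        }

        // Look up the interface object for one file system type.
        FileSystem* get_fs_object(FSType fstype) const
        {
            std::map<FSType, FileSystem*>::const_iterator it = m_fs_objects.find(fstype);
            return (it != m_fs_objects.end()) ? it->second : 0;
        }

        // All per-type support records (previously GParted_Core::FILESYSTEMS).
        const std::vector<FS>& get_all_fs_support() const { return m_fss; }

    private:
        std::vector<FS>               m_fss;         // was FILESYSTEMS
        std::map<FSType, FileSystem*> m_fs_objects;  // was FILESYSTEM_MAP
    };

    int main()
    {
        SupportedFileSystems supported;
        // Nothing is registered in this sketch, so the lookup returns NULL;
        // the real class would populate both containers up front.
        return supported.get_fs_object(FS_EXT2) == 0 ? 0 : 1;
    }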

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 279a9c44ed Add offline ext2 resizing tests (!49)
Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 1c6a594e8d Add simple ext2 write tests: label, UUID, check and remove (!49)
Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00
Mike Fleetwood 571525084b Reload Partition object after FS creation in read tests (!49)
Here are the errors reported in the deliberately broken
CreateAndReadLabel test from the previous commit message:

    [ RUN      ] ext2Test.CreateAndReadLabel
    test_ext2.cc:311: Failure
    Value of: m_partition.get_messages().empty()
      Actual: false
    Expected: true
    Partition messages:
    e2label: No such file or directory while trying to open /does_not_exist/test_ext2.img
    Couldn't find valid filesystem superblock.

    [  FAILED  ] ext2Test.CreateAndReadLabel (77 ms)

Even though the test was deliberately broken by setting the wrong path
for the file system image and the e2label command failed, testing for
the expected label still passed.  What happened was that the desired
"TEST_LABEL" had to be set in the Partition object before the file
system was created.  Reading the file system label then failed, but
"TEST_LABEL" was already set in the Partition object, so it still
matched.  Reading the label is unique among the read actions (usage,
label and UUID) because the other values don't need to be set in the
Partition object before the file system is created.  GParted doesn't
encounter this issue because when refreshing devices it creates new
blank Partition objects and then performs the read actions to populate
them.

Fix by resetting the Partition object back to containing only basic
information before every test which reads file system information, even
though this is only needed in the read label case.  This also better
reflects how GParted works.
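
A minimal sketch of this reset-before-read pattern; the Partition
struct, create_filesystem() and read_label() below are simplified
placeholders rather than the real GParted code:

    // Link with -lgtest -lgtest_main -lpthread (gtest_main supplies main()).
    #include <gtest/gtest.h>
    #include <string>

    // Tiny stand-in for GParted's Partition object.
    struct Partition
    {
        std::string path;
        std::string label;
    };

    // Simulated "on disk" label, standing in for a real ext2 image file.
    static std::string g_on_disk_label;

    static void create_filesystem(const Partition& p)  // mkfs.ext2 -L equivalent
    {
        g_on_disk_label = p.label;
    }

    static void read_label(Partition& p)               // e2label read equivalent
    {
        p.label = g_on_disk_label;
    }

    static Partition blank_partition(const std::string& path)
    {
        Partition p;
        p.path = path;   // basic information only, no expected label
        return p;
    }

    TEST(Ext2Sketch, CreateAndReadLabel)
    {
        const std::string fs_label = "TEST_LABEL";

        // Describe the wanted partition, including its label, and create it.
        Partition partition = blank_partition("/tmp/test_ext2.img");
        partition.label = fs_label;
        create_filesystem(partition);

        // Reset to a blank Partition before reading it back.  Without this
        // step a failed read_label() would leave the label which the test
        // itself set above, so the comparison below could never fail.
        partition = blank_partition("/tmp/test_ext2.img");

        read_label(partition);
        EXPECT_EQ(fs_label, partition.label);
    }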

Now, with the same deliberate brokenness, the test also reports that
the label does not match its expected value:

    $ ./test_ext2 --gtest_filter='ext2Test.CreateAndReadLabel'
    Running main() from test_ext2.cc
    Note: Google Test filter = ext2Test.CreateAndReadLabel
    [==========] Running 1 test from 1 test case.
    [----------] Global test environment set-up.
    [----------] 1 test from ext2Test
    [ RUN      ] ext2Test.CreateAndReadLabel
    test_ext2.cc:322: Failure
    Expected equality of these values:
      fs_label
        Which is: "TEST_LABEL"
      m_partition.get_filesystem_label().c_str()
        Which is: ""
    test_ext2.cc:272: Failure
    Value of: m_partition.get_messages().empty()
      Actual: false
    Expected: true
    Partition messages:
    e2label: No such file or directory while trying to open /does_not_exist/test_ext2.img
    Couldn't find valid filesystem superblock.

    [  FAILED  ] ext2Test.CreateAndReadLabel (70 ms)
    [----------] 1 test from ext2Test (70 ms total)

    [----------] Global test environment tear-down
    [==========] 1 test from 1 test case ran. (75 ms total)
    [  PASSED  ] 0 tests.
    [  FAILED  ] 1 test, listed below:
    [  FAILED  ] ext2Test.CreateAndReadLabel

     1 FAILED TEST

Closes !49 - Add file system interface tests
2019-11-09 17:18:34 +00:00