Expand XFS or replace a drive used by Watson Studio Local

Complete the following steps to move data onto a larger drive, or to replace a drive during a hardware upgrade. For example, you might want to increase the size of the /data or /ibm partition.

  1. Add the new drive, then partition it and format it with XFS:
     # lsblk
    NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    vda           252:0    0   250G  0 disk 
    ├─vda1        252:1    0     1G  0 part /boot
    └─vda2        252:2    0 248.9G  0 part 
      ├─rhel-root 253:0    0   241G  0 lvm  /
      └─rhel-swap 253:1    0   7.9G  0 lvm  [SWAP]
    vdb           252:16   0   500G  0 disk 
    └─vdb1        252:17   0   500G  0 part /ibm
    vdc           252:32   0   400G  0 disk 
    └─vdc1        252:33   0   400G  0 part /data    <=== File system that needs more space
    vdd           252:48   0   500G  0 disk          <=== New Drive
    
    # parted /dev/vdd --script mklabel gpt
    
    # parted /dev/vdd --script mkpart primary '0%' '100%'
    
    # mkfs.xfs -f -n ftype=1 -i size=512 -n size=8192 /dev/vdd1
    meta-data=/dev/vdd1              isize=512    agcount=4, agsize=32767872 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0, sparse=0
    data     =                       bsize=4096   blocks=131071488, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
    log      =internal log           bsize=4096   blocks=63999, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
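If you prefer to mount the new partition by PARTUUID, as the existing /ibm entry does, you can look up its identifier with blkid. A sketch using the device name from the example above (the UUID printed on your system will differ, and the fstab line shown is an illustration, not the entry used in the following step):

```shell
# Print only the PARTUUID of the new partition
uuid=$(blkid -s PARTUUID -o value /dev/vdd1)

# Assemble the corresponding fstab line; mounting by PARTUUID survives
# device renames (e.g. vdd becoming vdc after another drive is removed)
printf 'PARTUUID=%s\t/data\txfs\tdefaults,noatime\t1 2\n' "$uuid"
```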
  2. Edit /etc/fstab: comment out the existing /data entry that will be copied, and add the entry for the new partition commented out so it is not yet active:
    
    #
    # /etc/fstab
    # Created by anaconda on Fri Apr 13 17:01:52 2018
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/rhel-root   /                       xfs     defaults        0 0
    UUID=58d6fe1d-8361-4fde-b791-96734db60fc1 /boot                   xfs     defaults        0 0
    /dev/mapper/rhel-swap   swap                    swap    defaults        0 0
    PARTUUID=4ccc876a-1331-4ef4-9dda-f04e8403e3f1       /ibm              xfs     defaults,noatime    1 2
    #PARTUUID=8d93b1a2-9bf4-4604-89c9-730a00477534       /data              xfs     defaults,noatime    1 2
    #/dev/vdc1       /data              xfs     defaults,noatime    1 2
    #/dev/vdd1       /data              xfs     defaults,noatime    1 2
  3. Disable glusterd, docker, and kubelet, then reboot to clear away all mounted volumes:
    # systemctl disable glusterd docker kubelet 
    Removed symlink /etc/systemd/system/multi-user.target.wants/docker.service.
    Removed symlink /etc/systemd/system/multi-user.target.wants/kubelet.service.
    Removed symlink /etc/systemd/system/multi-user.target.wants/glusterd.service.
    
    # systemctl stop glusterd docker kubelet 
    # reboot
  4. Copy the XFS data to the new partition. Use xfs_copy rather than rsync: copying at the file level while the volume is active could corrupt the data, especially the raw gluster bricks in /data. Note that by default xfs_copy gives the destination filesystem a new UUID (pass -d to duplicate the source UUID); the fstab entries here reference device paths, so that does not matter in this procedure.
    # lsblk
    NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    vda           252:0    0   250G  0 disk 
    ├─vda1        252:1    0     1G  0 part /boot
    └─vda2        252:2    0 248.9G  0 part 
      ├─rhel-root 253:0    0   241G  0 lvm  /
      └─rhel-swap 253:1    0   7.9G  0 lvm  [SWAP]
    vdb           252:16   0   500G  0 disk 
    └─vdb1        252:17   0   500G  0 part /ibm
    vdc           252:32   0   400G  0 disk 
    └─vdc1        252:33   0   400G  0 part        <=== source
    vdd           252:48   0   500G  0 disk 
    └─vdd1        252:49   0   500G  0 part        <=== destination
    
    # xfs_copy /dev/vdc1 /dev/vdd1
     0%  ... 10%  ... 20%  ... 30%  ... 40%  ... 50%  ... 60%  ... 70%  ... 80%  ... 90%  ... 100%
    
    All copies completed.
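Before switching /etc/fstab over, it can be worth sanity-checking the copy. A hedged sketch: xfs_repair in no-modify mode scans the still-unmounted destination and exits non-zero if it finds damage.

```shell
# Read-only consistency check of the copied filesystem (must be unmounted);
# -n reports problems without modifying anything
if xfs_repair -n /dev/vdd1; then
    echo "copy looks consistent"
else
    echo "problems found - do not switch fstab yet" >&2
fi
```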
  5. Modify /etc/fstab by un-commenting the entry for the new, larger partition:
    #/dev/vdc1       /data              xfs     defaults,noatime    1 2
    /dev/vdd1       /data              xfs     defaults,noatime    1 2
  6. Mount the new drive and check the size. The file system will still report the same size as the source drive; because the new partition is larger, you can then use xfs_growfs to claim the additional capacity:
    # mount /data
    
    # df -h 
    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/rhel-root  241G  2.1G  239G   1% /
    devtmpfs                12G     0   12G   0% /dev
    tmpfs                   12G     0   12G   0% /dev/shm
    tmpfs                   12G  564K   12G   1% /run
    tmpfs                   12G     0   12G   0% /sys/fs/cgroup
    /dev/vdb1              500G  185G  316G  37% /ibm
    /dev/vda1             1014M  208M  807M  21% /boot
    tmpfs                  2.4G     0  2.4G   0% /run/user/0
    /dev/vdd1              400G   76G  324G  20% /data     <=== The copied xfs data will be the same size as source
  7. Grow the file system to use the full capacity of the partition (depending on your xfsprogs version, xfs_growfs may expect the mount point, /data, rather than the device):
    # xfs_growfs /dev/vdd1
    meta-data=/dev/vdd1              isize=512    agcount=4, agsize=26214272 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0 spinodes=0
    data     =                       bsize=4096   blocks=104857088, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=51199, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 104857088 to 131071488
    
    # df -h 
    Filesystem             Size  Used Avail Use% Mounted on
    /dev/mapper/rhel-root  241G  2.1G  239G   1% /
    devtmpfs                12G     0   12G   0% /dev
    tmpfs                   12G     0   12G   0% /dev/shm
    tmpfs                   12G  564K   12G   1% /run
    tmpfs                   12G     0   12G   0% /sys/fs/cgroup
    /dev/vdb1              500G  185G  316G  37% /ibm
    /dev/vda1             1014M  208M  807M  21% /boot
    tmpfs                  2.4G     0  2.4G   0% /run/user/0
    /dev/vdd1              500G   76G  424G  16% /data     <=== Now we see the full size of the partition
  8. Enable and start the services:
    # systemctl enable glusterd docker kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    
     # systemctl start glusterd docker kubelet
  9. Ensure gluster is healthy:
    # gluster volume status | grep ' N '
    # 
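The grep works because gluster volume status prints an Online column containing Y or N for every brick; filtering for ' N ' surfaces only offline bricks, so empty output means all bricks are online. An illustrative sketch with fabricated sample lines (not real cluster output):

```shell
# Two sample status lines: one brick online (Y), one offline (N)
sample='Brick node1:/data/brick1  49152  0  Y  12345
Brick node2:/data/brick1  N/A    N/A  N  N/A'

# Filtering for ' N ' keeps only the offline brick
printf '%s\n' "$sample" | grep ' N '
```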
  10. Make sure all nodes are ready and pods are up:
    # kubectl get no
    NAME                              STATUS    ROLES     AGE       VERSION
    mvdsxdata-master-1.fyre.ibm.com   Ready     <none>    2h        v1.9.2-dirty
    mvdsxdata-master-2.fyre.ibm.com   Ready     <none>    2h        v1.9.2-dirty
    mvdsxdata-master-3.fyre.ibm.com   Ready     <none>    2h        v1.9.2-dirty
    
    # kubectl get po --all-namespaces | grep 0/
    #
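Similarly, the READY column of kubectl get po reads ready/total, so grepping for 0/ lists any pod with zero ready containers; no output means everything is up. A sketch with fabricated sample lines (not real output):

```shell
# Illustrative pod listing: one healthy pod, one failing
pods='kube-system  pod-a  1/1  Running           0  2h
kube-system  pod-b  0/1  CrashLoopBackOff  5  2h'

# '0/' matches the failing pod's READY column
printf '%s\n' "$pods" | grep 0/
```

Note that a READY value such as 10/12 also contains 0/ and would be a false positive; grep ' 0/' is stricter if your pod names or counts make that a concern.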