SERVERware 4 Cluster Mirror Expanding Storage Pool for NVMe
Let's start with the hardware first: we need to insert two identical storage devices into empty bays on the storage hosts (one in each server). Because we want to keep storage redundancy, we always add two identical storage devices that will expand the storage as a redundant array.
NOTE: In case of an active hardware RAID on the storage host, we have to add each new device as a RAID "0" volume using remote management (iLO, iDRAC, IPMI), or directly on the hardware if we have physical access to it.
Now to the software part:
Use the following command to see the existing configuration of the storage pool:
~# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec  6 16:13:09 2016
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0

errors: No known data errors
In the example system, we have two storage devices in the NETSTOR pool. The storage devices are in a mirror configuration (mirror-0).
Now that we have the information about the existing storage devices, we can proceed to partition and format the new drives. We need to create a partition table on the new storage devices, which will then be combined into a new mirror (mirror-1) in the same NETSTOR pool.
To find out which block device name the system has assigned to the new storage device, use the following command:
~# ls -lah /dev/disk/by-id
drwxr-xr-x 2 root root 400 Jul 24 13:42 .
drwxr-xr-x 8 root root 160 Jul 24 13:28 ..
lrwxrwxrwx 1 root root   9 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN -> ../../sda
lrwxrwxrwx 1 root root  10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Jul 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part9 -> ../../sda9
lrwxrwxrwx 1 root root   9 Jul 24 13:30 ata-ST31000520AS_5VX0BZPV -> ../../sdb
lrwxrwxrwx 1 root root  10 Jul 24 13:30 ata-ST31000520AS_5VX0BZPV-part1 -> ../../sdb1
lrwxrwxrwx 1 root root   9 Jul 24 13:42 ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX -> ../../sdd
lrwxrwxrwx 1 root root   9 Jul 24 13:30 scsi-360000000000000000e00000000010001 -> ../../sdc
lrwxrwxrwx 1 root root  10 Jul 24 13:30 scsi-360000000000000000e00000000010001-part1 -> ../../sdc1
lrwxrwxrwx 1 root root   9 Jul 24 13:42 wwn-0x11769037186453098497x -> ../../sdd
lrwxrwxrwx 1 root root   9 Jul 24 13:30 wwn-0x3623791645033518541x -> ../../sda
lrwxrwxrwx 1 root root  10 Jul 24 13:30 wwn-0x3623791645033518541x-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Jul 24 13:30 wwn-0x3623791645033518541x-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Jul 24 13:30 wwn-0x3623791645033518541x-part9 -> ../../sda9
lrwxrwxrwx 1 root root   9 Jul 24 13:30 wwn-0x60000000000000000e00000000010001 -> ../../sdc
lrwxrwxrwx 1 root root  10 Jul 24 13:30 wwn-0x60000000000000000e00000000010001-part1 -> ../../sdc1
lrwxrwxrwx 1 root root   9 Jul 24 13:30 wwn-0x9104338722358317056x -> ../../sdb
lrwxrwxrwx 1 root root  10 Jul 24 13:30 wwn-0x9104338722358317056x-part1 -> ../../sdb1
Now that we have the block device name, we can create the partition table, partition the drive, and prepare it for use.
NOTE: In the example above the storage devices are displayed without hardware RAID enabled. When this is the case, it is easy to identify the new device by its id "68N6GN0_WD-WXK1E6458WKX"; also, "zpool status -L NETSTOR" will display the pool with block devices instead of the partition labels.
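To see that mapping directly (output will vary with your pool layout):

~# zpool status -L NETSTOR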
An alternative option is to read the serial number from the KVM console on the server, which also shows the bay used by the device.
Assuming you know the serial number of the new storage device, you can use the following command from the console to get a block device's serial number and compare (replace sdx with the device you are checking):

~# udevadm info --query=all --name=/dev/sdx | grep ID_SERIAL
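If you would rather compare all drives at once, a small shell loop over the block devices does the same job (a sketch; adjust the /dev/sd? glob to your device naming):

~# for d in /dev/sd?; do echo -n "$d: "; udevadm info --query=property --name=$d | grep ^ID_SERIAL=; done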
The SERVERware setup wizard also shows array information, which includes the serial numbers of the storage devices already used in the pools.
Now we will create the partition table and label. Use parted to create a partition table on the new drive (fill in the block device name after /dev/):
~# parted /dev/ --script -- mktable gpt
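Assuming the new drive is /dev/sdd, as in the listing above, the command would be:

~# parted /dev/sdd --script -- mktable gpt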
And create a new label.
IMPORTANT: The label must be named in the following format: SW3-NETSTOR-SRVx-y, where "SRVx" comes from the server number and "-y" is the disk number.
- SW3-NETSTOR-SRV1 - the virtual disk is on SERVER 1
- -2 - the number of the disk (disk 2)
Now add a label to the new drive.
~# parted /dev/ --script -- mkpart "SW3-NETSTOR-SRV1-2" 1 -1
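Again assuming /dev/sdd, followed by a check that the new partition label has appeared:

~# parted /dev/sdd --script -- mkpart "SW3-NETSTOR-SRV1-2" 1 -1
~# ls -l /dev/disk/by-partlabel/ | grep SW3-NETSTOR-SRV1-2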
We have to update the configuration file so SERVERware knows which block device to use. Edit the configuration file to add the location of the secondary disk:
~# nano /etc/sysmonit/mirror.cfg
Add the new storage name, comma separated, after the existing storage name:
"SW3-NETSTOR-SRV1-1", "SW3-NETSTOR-SRV1-2"
Edit the file and apply the change so it looks like this:
{ "name": "ComaA", "timeout": 15, "virtaul_ifaces": [ { "name": "br0", "address": "192.168.200.22" }, { "name": "bondSAN", "address": "192.168.20.22" } ], "storage": { "pool_name": "NETSTOR", "nodes": [ { "id": "9b41c9b2ee1cb5eb47917f4d301cf9aa", "address": "2.2.2.20", "port": 4420, "nvme_targets": [ "SW3-NETSTOR-SRV1-1",
"SW3-NETSTOR-SRV1-2"
], "subsystem": "sw-mirror" }, { "id": "13da151e3e1e486959db9c08dcb76458", "address": "2.2.2.21", "port": 4420, "nvme_targets": [ "SW3-NETSTOR-SRV2-1", "SW3-NETSTOR-SRV2-2"
"SW3-NETSTOR-SRV2-2"
], "subsystem": "sw-mirror" } ] }
Save the file and exit.
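A stray comma or bracket in mirror.cfg can break the mirror monitor, so it is worth validating the syntax after editing. Assuming the file is strict JSON (as the example above suggests) and python3 is available on the host, any JSON parser will do, for example:

~# python3 -m json.tool /etc/sysmonit/mirror.cfg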
The sw-nvme commands introduced with NVMe support are listed below:

Command | Description
---|---
sw-nvme list | Lists all connected devices with /dev/nvme-fabrics
sw-nvme discover | Displays all devices exported on the remote host with the given IP and port
sw-nvme connect | Imports a remote device from the given IP, port and nqn
sw-nvme disconnect | Removes the imported device from the host
sw-nvme disconnect-all | Removes all imported devices from the host
sw-nvme import | For a given file in proper JSON format, imports remote devices
sw-nvme reload-import | For a given file in proper JSON format, imports remote devices after disconnecting all current imports
sw-nvme enable-modules | Enables the necessary kernel modules for NVMe/TCP
sw-nvme enable-namespace | Enables the namespace with the given id
sw-nvme disable-namespace | Disables the namespace with the given id
sw-nvme load | For a given file in proper JSON format, exports remote devices
sw-nvme store | If devices are exported manually, saves the system configuration in proper JSON format
sw-nvme clear | Removes an exported device from the system configuration. If specified with 'all', removes all configurations
sw-nvme export | For a given URL parameter, exports a device on a port with an nqn
sw-nvme export-stop | Removes a device being exported on a port with the given id
sw-nvme reload-configuration | For a given file in proper JSON format, exports remote devices after removing all current exports
sw-nvme replace-disk | Combines 'clear all' and reload-configuration for an easier disk replacement procedure on SERVERware
sw-nvme expand-pool | Updates the export configuration and adds a new namespace into the sw-mirror subsystem for SERVERware
Now we have to export the newly added storage to the network, so the primary server can see the new device:
~# sw-nvme expand-pool --path /dev/disk/by-id/ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX
This is all that's needed on the secondary server to export the new device.
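To double-check the result, sw-nvme list (from the table above) shows the devices connected over the NVMe fabric; run it on the host that imports the devices and the new target should appear. The exact output will vary:

~# sw-nvme list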
NOTE: All steps from the beginning of this how-to up to this point should also be performed on the primary server, to prevent loss of redundancy in the case of a failover.
Once the same steps have been completed on the primary storage server, we can continue and add the new storage to the NETSTOR pool.
We can check the zpool status:
~# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: scrub in progress since Wed Dec  9 16:08:22 2020
        1,72G scanned at 587M/s, 28,5K issued at 9,50K/s, 114G total
        0B repaired, 0,00% done, no estimated completion time
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0

errors: No known data errors
Now we need to expand the pool with the new logical drives. Be careful with this command: double-check the names of the logical drives to make sure you have the right ones.
~# zpool add NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2 -f
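The added mirror becomes available immediately, so a quick capacity check is a simple way to confirm the expansion took effect:

~# zpool list NETSTOR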
Now in the zpool status we should see the newly added mirror (mirror-1):
~# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec  6 16:13:09 2016
config:

        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0
          mirror-1              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-2  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-2  ONLINE       0     0     0

errors: No known data errors
You need to wait for zpool to finish resilvering.
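Progress is reported in the scan line of zpool status; to follow it without retyping the command, something like this works:

~# watch -n 10 zpool status NETSTOR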
This concludes the pool expansion procedure.