
SERVERware 3 Cluster Mirror Expanding Storage Pool

Insert 2 new (identical) disks into empty bays of the storage servers (1 in each server). Because we want to keep storage redundancy, we always add 2 identical disks so that the added capacity is itself a redundant (mirrored) pair.

We can use the following command to see the existing configuration of the storage pool: zpool status.

~# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec  6 16:13:09 2016
config:
        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0
          
errors: No known data errors

In the example system, we have 2 disks in the mirror pool NETSTOR.


We can see one mirror made from the existing disks (mirror-0).
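
Before expanding, it is also worth noting the current pool capacity so we can confirm the growth afterwards. A quick check, using the pool name from this example:

~# zpool list NETSTOR

The SIZE and FREE columns show the current capacity of the pool.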


We need to create a partition table on the new drives.


To find out which block device name the system has assigned to the new disk, use the following command:

~# ls -lah /dev/disk/by-id

drwxr-xr-x 2 root root 400 Srp 24 13:42 .
drwxr-xr-x 8 root root 160 Srp 24 13:28 ..
lrwxrwxrwx 1 root root   9 Srp 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN -> ../../sda
lrwxrwxrwx 1 root root  10 Srp 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Srp 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Srp 24 13:30 ata-INTEL_SSDSC2BB080G4_BTWL3405084Y080KGN-part9 -> ../../sda9
lrwxrwxrwx 1 root root   9 Srp 24 13:30 ata-ST31000520AS_5VX0BZPV -> ../../sdb
lrwxrwxrwx 1 root root  10 Srp 24 13:30 ata-ST31000520AS_5VX0BZPV-part1 -> ../../sdb1
lrwxrwxrwx 1 root root   9 Srp 24 13:42 ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX -> ../../sdd
lrwxrwxrwx 1 root root   9 Srp 24 13:30 scsi-360000000000000000e00000000010001 -> ../../sdc
lrwxrwxrwx 1 root root  10 Srp 24 13:30 scsi-360000000000000000e00000000010001-part1 -> ../../sdc1
lrwxrwxrwx 1 root root   9 Srp 24 13:42 wwn-0x11769037186453098497x -> ../../sdd
lrwxrwxrwx 1 root root   9 Srp 24 13:30 wwn-0x3623791645033518541x -> ../../sda
lrwxrwxrwx 1 root root  10 Srp 24 13:30 wwn-0x3623791645033518541x-part1 -> ../../sda1
lrwxrwxrwx 1 root root  10 Srp 24 13:30 wwn-0x3623791645033518541x-part2 -> ../../sda2
lrwxrwxrwx 1 root root  10 Srp 24 13:30 wwn-0x3623791645033518541x-part9 -> ../../sda9
lrwxrwxrwx 1 root root   9 Srp 24 13:30 wwn-0x60000000000000000e00000000010001 -> ../../sdc
lrwxrwxrwx 1 root root  10 Srp 24 13:30 wwn-0x60000000000000000e00000000010001-part1 -> ../../sdc1
lrwxrwxrwx 1 root root   9 Srp 24 13:30 wwn-0x9104338722358317056x -> ../../sdb
lrwxrwxrwx 1 root root  10 Srp 24 13:30 wwn-0x9104338722358317056x-part1 -> ../../sdb1
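
If the by-id listing is hard to read, lsblk can also help identify the new disk; a freshly inserted drive should show up with no partitions under it (in this example it is sdd):

~# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT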

Now that we have the block device name, we can create the partition table and partition, and prepare the drive for use.


Use parted to create a partition table on the new logical drive.

~# parted /dev/<name of your drive> --script -- mktable gpt
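
To confirm that the GPT partition table was created, you can print the drive information (same placeholder device name as above):

~# parted /dev/<name of your drive> --script -- print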

Create a new label.


IMPORTANT: the label must be named in the following format: SW3-NETSTOR-SRVx-y, where "x" is the server number and "y" is the disk number.


So, in our example (SW3-NETSTOR-SRV1-2):


1. SW3-NETSTOR-SRV1 - this means virtual disk on SERVER 1

2. -2 - this is the number of the disk (disk 2)


Now add a label to the new drive.

~# parted /dev/<name of your drive> --script -- mkpart "SW3-NETSTOR-SRV1-2" 1 -1
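
To verify that the partition and its label were created, you can list partitions by label; the new SW3-NETSTOR-SRV1-2 entry should appear:

~# ls -lah /dev/disk/by-partlabel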

We have to update the configuration file so that SERVERware knows which block device to use.


We can get this information by listing devices by-id:

~# ls -lah /dev/disk/by-id
lrwxrwxrwx 1 root root   9 Srp 24 13:53 ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX -> ../../sdd

Now copy disk ID ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX and edit the configuration file:

~# nano /etc/tgt/mirror/SW3-NETSTOR-SRV1.conf


The file should look like this:

<target SW3-NETSTOR-SRV1-1>
        <direct-store /dev/disk/by-id/scsi-3600508b1001cb960a9daa8733452c470>
                write-cache on
                bs-type rdwr
        </direct-store>
        initiator-address 192.168.1.46
</target>

Add one more <target> block to the configuration file for the new target:


<target SW3-NETSTOR-SRV1-2>


and its disk ID:


<direct-store /dev/disk/by-id/ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX>


After editing, the configuration file should look like this:

<target SW3-NETSTOR-SRV1-1>
        <direct-store /dev/disk/by-id/scsi-3600508b1001cb960a9daa8733452c470>
                write-cache on
                bs-type rdwr
        </direct-store>
        initiator-address 192.168.1.46
</target>

<target SW3-NETSTOR-SRV1-2>
        <direct-store /dev/disk/by-id/ata-WDC_WD10JFCX-68N6GN0_WD-WXK1E6458WKX>
                write-cache on
                bs-type rdwr
        </direct-store>
        initiator-address 192.168.1.46
</target>

Save file and exit.


We need to edit one more configuration file to add the new (second) disk to the mirror configuration:

~# nano /etc/sysmonit/mirror.cfg 

Add the new target name after the existing one, separated by a comma.

"SW3-NETSTOR-SRV1-1",
"SW3-NETSTOR-SRV1-2"

After the change, the file should look like this:

{
    "name": "HydraA",
    "timeout": 15,
    "virtaul_ifaces": [
        {
            "name": "br0",
            "address": "10.1.10.48"
        },
        { 
            "name": "bondSAN",
            "address": "192.168.2.48"
        }
    ],
    "storage": {
        "pool_name": "NETSTOR",
        "nodes": [
            {
                "id": "1461ffc0657f8c9798cb17981e832d04",
                "address": "192.168.1.46",
                "port": 3259,
                "iscsi_targets": [
                    "SW3-NETSTOR-SRV1-1",
                    "SW3-NETSTOR-SRV1-2"
                ]
            },
            { 
                "id": "9b04dd7a6f24265d46b8d37c9bae95ac",
                "address": "192.168.1.47",
                "port": 3259,
                "iscsi_targets": [
                    "SW3-NETSTOR-SRV2-1",
                    "SW3-NETSTOR-SRV2-2"
                ]
            }
        ]
    }
}

Save file and exit.
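
Because mirror.cfg is plain JSON, a stray comma or bracket will prevent it from being parsed. If python3 is available on the storage server (an assumption, it is not part of this guide), you can check the syntax after saving:

~# python3 -m json.tool /etc/sysmonit/mirror.cfg

If the file prints back without errors, the JSON is valid.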

Now repeat all these steps on the second server.

After all these steps are done on both servers, we need to link storage from the secondary server to the zpool on the primary server.

Connect via SSH to the secondary server and export the target to iSCSI using the tgt-admin tool:

~# tgt-admin -C 2 --update ALL -c /etc/tgt/mirror.conf -v
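
To confirm that both targets are now exported, you can list the targets known to tgtd (assuming the same control port 2 used in the command above):

~# tgtadm -C 2 --lld iscsi --mode target --op show

Both SW3-NETSTOR-SRV2-1 and SW3-NETSTOR-SRV2-2 should be listed.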

This ends our procedure on the secondary server.

Connect via SSH to the primary server and use iscsiadm discovery to find the new logical disk we have exported on the secondary server.


First, find out the network address of the secondary storage server:

~# iscsiadm -m session
tcp: [1] 192.168.1.46:3259,1 SW3-NETSTOR-SRV2-1

In the output, we will get the information we need for the discovery command, 192.168.1.46:3259 (IP address and port).

Now use iscsiadm discovery to find the new logical drive:

~# iscsiadm -m discovery -t st -p 192.168.1.46:3259 
192.168.1.46:3259,1 SW3-NETSTOR-SRV2-1
192.168.1.46:3259,1 SW3-NETSTOR-SRV2-2


Now log in to the exported iSCSI target. Use this command:

~# iscsiadm -m node -T SW3-NETSTOR-SRV2-2 --login

logging in to [iface: default, target: SW3-NETSTOR-SRV2-2, portal: 192.168.1.46,3259] (multiple)
Login to [iface: default, target: SW3-NETSTOR-SRV2-2, portal: 192.168.1.46,3259] successful.
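
To confirm the new session is established, list the active sessions again; the new target should now appear alongside the existing one:

~# iscsiadm -m session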

To see the newly added logical drives, use:

~# ls -lah /dev/disk/by-partlabel
.
.
lrwxrwxrwx 1 root root  10 Pro  7 08:48 SW3-NETSTOR-SRV1-1 -> ../../sdc1
lrwxrwxrwx 1 root root  10 Pro  7 09:21 SW3-NETSTOR-SRV1-2 -> ../../sdf1
lrwxrwxrwx 1 root root  10 Pro  7 08:52 SW3-NETSTOR-SRV2-1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Pro  7 08:58 SW3-NETSTOR-SRV2-2 -> ../../sde1
.
.

Now we need to expand the pool with the new logical drives. Be careful with this command: check the names of the logical drives to make sure you are adding the right ones.
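
If you want to double-check first, zpool add accepts a dry-run flag that only prints the resulting pool layout without changing anything (same device paths as in the command below):

~# zpool add -n NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2

If the preview shows the expected new mirror of SRV1-2 and SRV2-2, run the actual command: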

~# zpool add NETSTOR mirror /dev/disk/by-partlabel/SW3-NETSTOR-SRV1-2 /dev/disk/by-partlabel/SW3-NETSTOR-SRV2-2 -f

Now in the zpool status, we should see the newly added mirror:

~# zpool status
  pool: NETSTOR
 state: ONLINE
  scan: resilvered 728M in 0h0m with 0 errors on Tue Dec  6 16:13:09 2016
config:
        NAME                    STATE     READ WRITE CKSUM
        NETSTOR                 ONLINE       0     0     0
          mirror-0              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-1  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-1  ONLINE       0     0     0
          mirror-1              ONLINE       0     0     0
            SW3-NETSTOR-SRV1-2  ONLINE       0     0     0
            SW3-NETSTOR-SRV2-2  ONLINE       0     0     0

errors: No known data errors
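
To confirm that the usable capacity actually grew, compare zpool list with the value noted before the expansion:

~# zpool list NETSTOR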

Now restart the swhspared daemon to update the GUI information.

~# /etc/init.d/swhspared restart 

This is the end of our storage expansion procedure.

