Design & Build

Data Center Solution, Storage-Validation Test: Brocade VCS Fabric and Tegile HA2100EP Flash Storage Array

 

 

Preface

 

Overview

The Solid State Ready (SSR) program is a comprehensive testing and configuration initiative to validate the interoperability of Fibre Channel and IP flash storage with a Brocade network infrastructure. This program provides testing of multiple fabrics, heterogeneous servers, NICs and HBAs in large port-count Brocade environments. 

 

The SSR qualification program will help verify seamless interoperability and optimum performance of solid state storage in Brocade FC and Ethernet fabrics.

 

Purpose of This Document

The goal of this document is to demonstrate the compatibility of Tegile HA2100EP arrays in a Brocade Ethernet fabric. This document provides a test report on the SSR qualification test plan executed on the Tegile HA2100EP array.

 

Audience

The target audience for this document includes storage administrators, solution architects, system engineers, and technical development representatives.

 

Objective

  1. Test the Tegile HA2100EP array with the Brocade VCS Ethernet fabric, for different stress and error recovery scenarios, to validate the interoperability and integration of the Tegile array with Brocade VCS fabric.
  2. Validate the performance of the Brocade VCS fabric in a solid state storage environment for high throughput and low latency applications.

Test Conclusions

  1. Achieved a 100% pass rate on all test cases in the SSR qualification test plan. The network and the storage were able to handle the various stress and error recovery scenarios without any issues.
  2. Different I/O workload scenarios were simulated using Medusa and VMware IOAnalyzer tools and sustained performance levels were achieved across all workload types. The Brocade VCS fabric handled both the low latency and high throughput I/O workloads with equal efficiency without any I/O errors or packet drops.
  3. The results confirm that the Tegile HA2100EP array interoperates seamlessly with the Brocade VCS fabric and demonstrates high availability and sustained performance.
  4. For optimal availability and performance, consider configuring host link aggregation as a vLAG with the Brocade VCS fabric.
  5. Host multipathing tools should be used when connecting to iSCSI target devices to discover and utilize multiple paths on the storage target.
  6. The switches in the VCS fabric should have enough ISLs, with multiple uplinks, to provide sufficient bandwidth and redundancy.

Related Documents

References

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Test Architect: Mike Astry, Patrick Stander
  • Test Engineer: Subhish Pillai

About Brocade

Brocade networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One™ strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

About Tegile

Tegile Systems™ is pioneering a new generation of flash-driven enterprise storage arrays that balance performance, capacity, features and price for virtualization, file services and database applications.

 

Our hybrid arrays are significantly faster than legacy arrays and significantly less expensive than all solid-state disk-based arrays. Featuring both NAS and SAN connectivity, these virtual data storage systems are easy-to-use, fully redundant, and highly scalable. Additionally, they come complete with built-in snapshot, replication, near-instant recovery, and virtualization management features.

 

Tegile’s patented IntelliFlash™ technology accelerates performance to solid state speeds without sacrificing the capacity or cost advantage of hard disk storage. Additionally, it enables on-the-fly de-duplication and compression, so usable capacity is far greater than raw capacity.

 

Document History

 

Date                            Version              Description

2014-08-20                   1.0                      Initial Release

 

Test Plan

 

Scope

Testing is performed with GA versions of Brocade’s Network Operating System (NOS) running on Brocade VDX switches configured to form a Brocade VCS Fabric.

 

Testing is at the system level, including interoperability of Storage Devices with the Brocade VDX Switches. Performance is observed within the context of best-practice fabric configuration; however, absolute maximum benchmark reporting of storage performance is beyond the scope of this test.

 

Details of the test steps are covered in the “Test Case Descriptions” section. Standard test equipment includes IBM/HP/Dell chassis server hosts with Brocade/QLogic/Emulex/Intel/Broadcom CNAs and NICs, with two uplinks from every host to the Brocade VCS Fabric. The 10GbE ports on the Tegile flash array’s “active” and “standby” controllers form link aggregation groups using VCS Fabric vLAGs.

 

Test Configuration

The following shows the configuration of the DUT and the network topology.                                                                                                      

 SSR-VDX-Topology_Tegile_2.jpg

   Test Configuration

 

DUT Descriptions

The following lists the devices under test (DUT) and the test equipment used.

 

Storage Array

DUT ID:        Tegile HA2100EP

Model:         HA2100EP

Vendor:        Tegile

Description:   Dual-controller array with support for the FC, iSCSI, NFS, and CIFS protocols.

Notes:         Each controller has 2x8Gb FC ports and 2x10GbE ports in an active-standby configuration.

 

Switches

DUT ID             Model          Vendor     Description                          Notes

VDX 6740_1/2       VDX 6740       Brocade    48x10GbE and 4x40GbE QSFP+ ports     Supports Auto-NAS

VDX 6730-32_1/2    VDX 6730-32    Brocade    24x10GbE and 8x8Gbps FC ports

VDX 6730-76_1/2    VDX 6730-76    Brocade    60x10GbE and 16x8Gbps FC ports

VDX 6720-24_1      VDX 6720-24    Brocade    24x10GbE ports

VDX 6720-60_1      VDX 6720-60    Brocade    60x10GbE ports

 

 

DUT Specifications

Device             Release                 Configuration Options

HA2100EP Array     2.1.1.3(140308)-1473    Setup HA resource groups for disk pool and iSCSI/NAS IP groups.

VDX 6740           NOS v4.1.2              VCS Fabric License

VDX 6730-32/76     NOS v4.1.2              VCS Fabric License

VDX 6720-24/60     NOS v4.1.2              VCS Fabric License

 

 

Test Equipment Specifications

Device Type        Model

Server (SRV1-8)    HP DL380p G8, HP DL360 G7, IBM x3630M4, IBM x3650M4, IBM x3550M3, Dell R710, Dell R810

CNA                QLogic (Brocade) 1860-CNA, Brocade 1020-CNA, Emulex OCe14102-UM, Intel X520-SR2, Broadcom NetXtreme II (BCM57711, BCM57810)

Analyzer/Jammer    JDSU Xgig

I/O Generator      Medusa v6.0, VMware IOAnalyzer

 

Configure DUT and Test Equipment

The following summarizes the configuration of the DUT and test equipment prior to conducting the test cases.

 

Step 1. Brocade VCS Fabric Configuration

1. The Brocade VDX switches are configured to form a Brocade VCS fabric in Logical Chassis cluster mode. Refer to the Brocade Network OS Administrator’s Guide (see the References section above) for how to configure a VCS Fabric.

 

2. Configure two VLANs (one for iSCSI and one for NAS) on the VCS fabric. In this setup, VLAN 7 is used for iSCSI and VLAN 8 for NAS.
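
As an illustrative sketch only (prompts are indicative and may vary by NOS release), the VLANs could be created from the NOS CLI and assigned to a host-facing port as follows:

<==========>

sw0(config)# interface Vlan 7

sw0(config-Vlan-7)# exit

sw0(config)# interface Vlan 8

sw0(config-Vlan-8)# exit

sw0(config)# interface TenGigabitEthernet 75/0/18

sw0(conf-if-te-75/0/18)# switchport

sw0(conf-if-te-75/0/18)# switchport access vlan 7

<==========>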

 

3. Enable “Auto-NAS” on the VCS fabric and set up a NAS server IP. Enabling Auto-NAS auto-configures the CEE map, assigning NAS traffic a CoS value of “2”.

 

<==========>

sw0(config)# nas auto-qos

sw0(config)# nas server-ip 192.168.8.1/32 vlan 8

sw0(config)# show running-config cee-map

cee-map default

 precedence 1

 priority-group-table 1 weight 40 pfc on

 priority-group-table 15.0 pfc off

 priority-group-table 15.1 pfc off

 priority-group-table 15.2 pfc off

 priority-group-table 15.3 pfc off

 priority-group-table 15.4 pfc off

 priority-group-table 15.5 pfc off

 priority-group-table 15.6 pfc off

 priority-group-table 15.7 pfc off

 priority-group-table 2 weight 40 pfc off

 priority-group-table 3 weight 20 pfc off

 priority-table 2 2 3 1 2 2 2 15.0

 remap fabric-priority priority 0

 remap lossless-priority priority 0

<==========>

 

===========

NOTE:

Auto QoS is supported only on Brocade VDX 8770-series and VDX 6740-series platforms. While Auto QoS is not supported on the VDX 6710, VDX 6720, and VDX 6730 switches (the VDX 6700 platforms other than the VDX 6740 series), these platforms can act as a pass-through entity for Auto QoS within VCS fabrics in either:

‐ Logical chassis cluster mode without any extra configuration

‐ Fabric cluster mode with the proper Converged Enhanced Ethernet (CEE) map configuration

 

Refer to the Brocade Network OS Administrator’s Guide (see the References section above) for configuration steps and restrictions on using the Auto-NAS feature.

===========

 

Step 2. Configure Tegile Flash Array

The HA2100EP model array has an active-standby controller architecture. Configuration includes the following.

  • Configure the disk pool and management IP groups per Tegile’s standard best practices.
  • Configure the iSCSI LUNs and NAS shares.
  • Configure the network as a link aggregation between the array and the Brocade VCS fabric using a vLAG.

1. Create an aggregated network interface on each controller with the two 10GbE ports as members. The settings for the aggregated interface are shown below.

 

Tegile Aggregated Link Interface Configuration.jpg 

   Tegile Aggregated Link Interface Configuration

 

2. The screen below shows the Tegile network interface configuration settings after the following configuration steps.

 

a. Add two VLAN interfaces to the aggregated interface. VLANs are used to separate the iSCSI and NAS protocol traffic.

 

b. Create an IP group for each VLAN interface.

 

Tegile Network Interface Settings.jpg

   Tegile Network Interface Settings

 

3. Repeat steps “1” and “2” above for the “standby” controller.

 

4. Set up a Floating IP address for the NAS VLAN. Associate the respective IP group from both controllers.

 

5. Set up two Floating IP addresses for the iSCSI VLAN and associate the respective IP groups from both controllers. This allows the host multipathing tools to see two paths and distribute traffic better.

 

Tegile HA Floating IP setting for iSCSI and NAS.jpg

  Tegile HA Floating IP setting for iSCSI and NAS

 

6. Bind the two Floating IPs for the iSCSI network to the iSCSI target device.

 

Tegile Floating IP Address to SCSI Target Binding.jpg 

   Tegile Floating IP Address to SCSI Target Binding

 

7. Configure the corresponding VDX switch ports connected to the active and standby array controllers in their respective port-channel groups.

 

Repeat these steps with a second port-channel (for example, channel-group 2) for the standby controller ports.

 

<==========>

interface TenGigabitEthernet 111/0/17

 fabric isl enable

 fabric trunk enable

 channel-group 1 mode active type standard

 lacp timeout long

 no shutdown

!

interface TenGigabitEthernet 112/0/17

 description Tegile_Ct0_ixgbe3

 fabric isl enable

 fabric trunk enable

 channel-group 1 mode active type standard

 lacp timeout long

 no shutdown

!

interface Port-channel 1

 vlag ignore-split

 mtu 9216

 switchport

 switchport mode trunk

 switchport trunk allowed vlan all

 switchport trunk tag native-vlan

 spanning-tree shutdown

 no shutdown

!

sw0# show port-channel 1

 LACP Aggregator: Po 1 (vLAG)

 Aggregator type: Standard

 Ignore-split is enabled

  Member rbridges:

    rbridge-id: 111 (1)

    rbridge-id: 112 (1)

  Admin Key: 0001 - Oper Key 0001

  Partner System ID - 0x1000,00-e0-ed-29-94-79

  Partner Oper Key 1004

 Member ports on rbridge-id 111:

   Link: Te 111/0/17 (0x6F18088010) sync: 1   *

 Member ports on rbridge-id 112:

   Link: Te 112/0/17 (0x7018088010) sync: 1

<==========>

 

Step 3. iSCSI Multipath Configuration and Discovery

 

For Windows Servers

  1. Enable MPIO support for iSCSI devices by checking the “Add support for iSCSI devices” box under the “Discover Multi-Paths” tab in the MPIO settings.

Windows MPIO Properties.jpg 

   Windows MPIO Properties

 

  2. Use the Microsoft iSCSI Initiator tool to connect to the target and add paths to the iSCSI target device.
  3. Create each session to the target individually for each Target Portal IP, with the “Enable Multi-Path” box checked.
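
As an alternative to the GUI, on Windows Server 2012 or later the MPIO claim for iSCSI devices can typically be made from an elevated prompt, using either the mpclaim utility or the MPIO PowerShell module; the commands below are a sketch, and a reboot is required for the claim to take effect.

<==========>

C:\> mpclaim.exe -r -i -d "MSFT2005iSCSIBusType_0x9"

PS C:\> Enable-MSDSMAutomaticClaim -BusType iSCSI

<==========>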

Window Multipath Disk Device Properties.jpg

   Window Multipath Disk Device Properties

 

For Linux Servers

Install the required iSCSI Initiator and Multipath tools.
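
As a sketch, on 2014-era distributions the initiator and multipath packages and services could be installed and started as follows (package names vary by distribution):

<==========>

RHEL/CentOS:

# yum install iscsi-initiator-utils device-mapper-multipath

Debian/Ubuntu:

# apt-get install open-iscsi multipath-tools

Start the services:

# service iscsid start

# service multipathd start

<==========>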

 

  1. Add the following to /etc/multipath.conf file.

<==========>

devices {

      device {

                vendor "TEGILE"

                product "ZEBI-ISCSI"

                path_checker tur

                path_grouping_policy multibus

                path_selector "round-robin 0"

                no_path_retry 10

    }

}

<==========>

 

  2. Discover the Tegile iSCSI target using the “iscsiadm” utility.

<==========>

iscsiadm -m discovery -t st -p 192.168.7.81

iscsiadm -m discovery -t st -p 192.168.7.82

iscsiadm -m node -L all

<==========>

 

Linux Server Example Configuration:

<==========>

# iscsiadm -m node

192.168.7.81:3260,2 iqn.2014-04.com.tegile.iscsi:test

192.168.7.82:3260,3 iqn.2014-04.com.tegile.iscsi:test

 

# iscsiadm -m session

tcp: [15] 192.168.7.81:3260,2 iqn.2014-04.com.tegile.iscsi:test

tcp: [16] 192.168.7.82:3260,3 iqn.2014-04.com.tegile.iscsi:test

 

# multipath -ll

3600144f0ec770e00000053ed0f90004c dm-7 TEGILE  ,ZEBI-ISCSI

size=50G features='1 queue_if_no_path' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 21:0:0:38 sdo 8:224 active ready running

  `- 22:0:0:38 sdn 8:208 active ready running

<==========>

 

For VMware Servers

1. Run the following command from the VMware host CLI to set up a default round-robin “Path Selection” policy for Tegile iSCSI targets.

 

<==========>

# esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -V TEGILE -M "ZEBI-ISCSI" -P VMW_PSP_RR -e "Tegile Zebi iSCSI"

 

~ # esxcli storage nmp satp rule list | grep -i TEGILE

VMW_SATP_DEFAULT_AA          TEGILE   ZEBI-ISCSI          user    VMW_PSP_RR         Tegile Zebi iSCSI

<==========>

 

2. Create a VMkernel port bound to a physical adapter, set up IP connectivity to the Tegile iSCSI target IP, and bind the VMkernel port to the “iSCSI Software Adapter”.

 

VMware iSCSI Initiator Configuration.jpg

   VMware iSCSI Initiator Configuration
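
If the ESXi CLI is preferred, the VMkernel port binding in step 2 can typically be done with esxcli as sketched below; the vmhba and vmk names are placeholders for this setup.

<==========>

~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

~ # esxcli iscsi networkportal list --adapter=vmhba33

<==========>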

 

3. Add both discovery IP addresses under the “Dynamic Discovery” tab and rescan the adapter to discover and list the iSCSI devices.

 

VMware Dynamic Discovery Configuration.jpg

   VMware Dynamic Discovery Configuration
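
The equivalent dynamic discovery and rescan from the ESXi CLI might look like the sketch below; the adapter name is a placeholder, and the addresses are the iSCSI Floating IPs configured earlier.

<==========>

~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.7.81:3260

~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.7.82:3260

~ # esxcli storage core adapter rescan --adapter=vmhba33

<==========>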

 

 

VMware iSCSI Adaptors Found.jpg

   VMware iSCSI Multipath Configuration

 

Step 4. Tegile NAS Configuration and Setup

 

For Windows Servers

  1. Setup host IP connectivity to the NAS IP address on the Tegile storage.
  2. Provision CIFS shares on the Tegile array with Read/Write access by the host IP address.
  3. Mount the CIFS share as a network drive on the host.
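
For reference, step 3 can also be done from the Windows command line; the share name below is a placeholder.

<==========>

C:\> net use Z: \\192.168.8.81\cifs_share /persistent:yes

C:\> net use

<==========>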

Windows CIFS Network Drive Mapping.jpg

   Windows CIFS Network Drive Mapping

 

For Linux Servers

  1. Setup host IP connectivity to the NAS IP address on the Tegile storage.
  2. Provision NFS shares on the Tegile array with Read/Write access by the host IP address.
  3. Mount the NFS share to a local mount point on the host.

<==========>

# mount 192.168.8.81:/export/hb067164_nfs/vol1 /nfs

 

# mount

192.168.8.81:/export/hb067164_nfs/vol1 on /nfs type nfs (rw,addr=192.168.8.81)

<==========>

 

For VMware Servers
  1. Create a VMkernel port bound to a physical adapter port
  2. Setup host IP connectivity to the NAS IP address on the Tegile storage.
  3. Provision NFS shares on the Tegile array with Read/Write access by the VMkernel IP address.
  4. Add the storage as an NFS datastore.
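
As a CLI sketch of step 4 (the export path is reused from the Linux NFS example above, and the datastore name is a placeholder):

<==========>

~ # esxcli storage nfs add --host=192.168.8.81 --share=/export/hb067164_nfs/vol1 --volume-name=tegile_nfs_ds

~ # esxcli storage nfs list

<==========>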

VMware NAS Share Configuration.jpg

   VMware NAS Share Configuration

 

Step 5. Other Configuration Settings

  1. Configure host LACP link aggregation, forming a vLAG across multiple switches in the VCS fabric. This provides fault tolerance at the host and allows better utilization of the available links. (A host-side configuration sketch follows the CEE map example below.)
  2. Enable Jumbo Frames (MTU=9000) on the host interfaces and the corresponding VDX switch port in the VCS Fabric.
  3. For end-to-end Auto-NAS QoS in the VCS Fabric, both the host and target ports need to be connected to the VDX 6740 switches.
  4. For host adapters supporting Data Center Bridging (DCB) protocol for iSCSI, DCB QoS can be configured on the VCS Fabric. Below is an example CEE map with iSCSI CoS value “4” configured and Auto-NAS enabled.

 

 <==========>

# show running-config protocol lldp

protocol lldp

 advertise dcbx-iscsi-app-tlv

  

# show running-config cee-map

cee-map default

 precedence 1

 priority-group-table 1 weight 40 pfc on

 priority-group-table 15.0 pfc off

 priority-group-table 15.1 pfc off

 priority-group-table 15.2 pfc off

 priority-group-table 15.3 pfc off

 priority-group-table 15.4 pfc off

 priority-group-table 15.5 pfc off

 priority-group-table 15.6 pfc off

 priority-group-table 15.7 pfc off

 priority-group-table 2 weight 20 pfc off

 priority-group-table 3 weight 20 pfc off    <-- NAS

 priority-group-table 4 weight 20 pfc on     <-- iSCSI

 priority-table 2 2 3 1 4 2 2 15.0    

 remap fabric-priority priority 0

 remap lossless-priority priority 0

 

# sh run int te 75/0/18

interface TenGigabitEthernet 75/0/18

 cee default

 mtu 9216

 fabric isl enable

 fabric trunk enable

 switchport

 switchport mode access

 switchport access vlan 7

 spanning-tree shutdown

 no shutdown

<==========>
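
For items 1 and 2 in the list above, a host-side sketch on Linux using iproute2 is shown below; the interface names are placeholders, and the switch-side vLAG mirrors the port-channel configuration shown earlier for the array ports.

<==========>

# ip link add bond0 type bond mode 802.3ad

# ip link set eth2 down && ip link set eth2 master bond0

# ip link set eth3 down && ip link set eth3 master bond0

# ip link set bond0 mtu 9000

# ip link set eth2 up && ip link set eth3 up && ip link set bond0 up

<==========>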

 

Test Cases

The following table summarizes the testing use cases.

 

1.0

FABRIC INITIALIZATION – BASE FUNCTIONALITY

1.0.1

Storage Device – Physical and Logical Login with Speed Negotiation

1.0.2

iSCSI LUN Mapping

1.0.3

NAS Connectivity

1.0.4

vLAG Configuration

1.0.5

Storage Device Multipath Configuration – iSCSI Path integrity

1.1

ETHERNET STORAGE – ADVANCED FUNCTIONALITY

1.1.1

Storage Device – Jumbo Frame/MTU Size Validation

1.1.2

iSCSI Bandwidth Validation

1.1.3

NAS Bandwidth Validation

1.1.4

Storage Device – w/Congested Fabric

1.1.5

Storage Device – iSCSI Protocol Jammer Test Suite

1.1.6

Storage Device – NAS/CIFS Protocol Jammer Test Suite

1.2

STRESS & ERROR RECOVERY

1.2.1

Storage Device Fabric IO Integrity – Congested Fabric

1.2.2

Storage Device Integrity – Device Recovery from Port Toggle – Manual Cable Pull

1.2.3

Storage Device Integrity – Device Recovery from Device Relocation

1.2.4

Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run

1.2.5

Storage Device Recovery – ISL Port Toggle – Extended Run

1.2.6

Storage Device Recovery – ISL Port Toggle (entire switch)

1.2.7

Storage Device Recovery – Switch Offline

1.2.8

Storage Device Recovery – Switch Firmware Download HCL (where applicable)

1.2.9

Workload Simulation Test Suite

 

Test Case Descriptions

 

1.0 FABRIC INITIALIZATION – BASE FUNCTIONALITY

 

1.0.1 Storage Device – Physical and Logical Login with Speed Negotiation

 

Test Objective

  1. Verify device login to the VDX switch with all supported speed settings.
    a. Configure the VDX switch for Auto-NAS.
    b. Configure the Storage Port for iSCSI connectivity. Validate login & base connectivity.
    c. Configure the Storage Port for NAS connectivity. Validate login & base connectivity.

 

Procedure

  1. Enable Auto-NAS on the VCS fabric and set NAS Server-IP

<==========>

sw0(config)# nas auto-qos

sw0(config)# nas server-ip 192.168.8.1/32 vlan 8

<==========>

  2. Change the switch port speed to Auto and 10G. [Setting the speed to 1G requires a supported SFP.]
  3. Validate link states on the array and IP connectivity between the array and hosts.

Result

1. PASS. IP connectivity verified.

 

1.0.2 iSCSI LUN Mapping

 

Test Objective

  1. Verify host to LUN access with each mapped OS-type.

Procedure

  1. Establish IP connectivity between host and array.
  2. Create Host Groups and LUNs on the array with access to iSCSI initiator IQN.
  3. Verify host login to target and read/write access to LUNs

Result

1. PASS. Able to perform read/write operations to LUNs.

 

1.0.3 NAS Connectivity

 

Test Objective

  1. Verify host to File Share connectivity with CIFS & NFS with multiple simultaneous connections.

Procedure

  1. Establish IP connectivity between host and array.
  2. Create Host Groups and Shares on the array with read/write access to host IP.
  3. Verify host can connect to share and has read/write access.
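
A simple read/write spot check from a Linux host might look like the following sketch (the /nfs mount point matches the NFS mount shown in Step 4 of the configuration section; the file name is a placeholder):

<==========>

# dd if=/dev/zero of=/nfs/testfile bs=1M count=1024 oflag=direct

# dd if=/nfs/testfile of=/dev/null bs=1M iflag=direct

<==========>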

Result

1. PASS. Able to perform read/write operations to the share. Verified auto-NAS is working.

 

<==========>

# show nas statistics server-ip 192.168.8.81/32 vlan 8

Rbridge 111

-----------

Server ip 192.168.8.81/32 vlan 8

matches 189375 packets 0 bytes

 

Rbridge 112

-----------

Server ip 192.168.8.81/32 vlan 8

matches 2036066 packets 0 bytes

<==========>

 

1.0.4 vLAG Configuration

 

Test Objective

  1. Configure vLAG connectivity from Storage Ports to 2 separate VDX switches.
  2. Verify data integrity through vLAG.

Procedure

  1. Create vLAG between VDX switches and storage ports.
  2. Verify connectivity and storage access between host and array.

Result

1. PASS. vLAG formed successfully. IP connectivity verified.

 

<==========>

# show port-channel 1

 LACP Aggregator: Po 1 (vLAG)

 Aggregator type: Standard

 Ignore-split is enabled

  Member rbridges:

    rbridge-id: 111 (1)

    rbridge-id: 112 (1)

  Admin Key: 0001 - Oper Key 0001

  Partner System ID - 0x1000,00-e0-ed-29-94-79

  Partner Oper Key 1004

 Member ports on rbridge-id 111:

   Link: Te 111/0/17 (0x6F18088010) sync: 1   *

 Member ports on rbridge-id 112:

   Link: Te 112/0/17 (0x7018088010) sync: 1

<==========>

 

1.0.5 Storage Device Multipath Configuration – iSCSI Path Integrity

 

Test Objective

  1. Verify multi-path configures successfully.
  2. Configure each adapter and storage port in different VDX switches.
  3. For all device paths, consecutively isolate individual paths and validate IO integrity and path recovery.

Procedure

  1. Setup host with at least 2 initiator ports. (Create a LAG on the host, or assign IP addresses to each port in the same subnet to access the target.)
  2. Setup multipath on host.
  3. Establish iSCSI target sessions on both the target IP addresses.
  4. Start I/O
  5. Perform sequential port toggles across initiator and target switch ports to isolate paths.

Result

1. PASS. I/O failed over to remaining available paths and recovered when disrupted path was restored.

 

1.1 ETHERNET STORAGE – ADVANCED FUNCTIONALITY

 

1.1.1 Storage Device – Jumbo Frame/MTU Size Validation

 

Test Objective

  1. Perform IO validation testing while incrementing MTU Size from minimum to maximum with reasonable increments.
  2. Include Jumbo Frame size as well as maximum negotiated/supported between device and switch.

Procedure

  1. Set the MTU on the storage interfaces to 1500, 3000, 6000, and 9000.
  2. Verify I/O operations complete at all MTU sizes (a host-side example follows this procedure).
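
A host-side sketch of the MTU change and verification on Linux (the interface name is a placeholder; 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP headers fills a 9000-byte MTU):

<==========>

# ip link set eth2 mtu 9000

# ping -M do -s 8972 192.168.7.81

<==========>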

Result

1. PASS. Verified I/O completed without issues. Verified I/O size adapts to the changed MTU value.

 

1.1.2 iSCSI Bandwidth Validation

 

Test Objective

  1. Validate maximum sustained bandwidth to storage port via iSCSI.
  2. After 15 minutes, verify IO completes error free.

Procedure

  1. Start iSCSI I/O to the storage array from multiple iSCSI initiators.
  2. Verify I/O runs without errors.

Result

1. PASS. All I/O operations completed without errors. 

 

1.1.3 NAS Bandwidth Validation

 

Test Objective

  1. Validate maximum sustained bandwidth to storage port via NAS/CIFS.
  2. After 15 minutes, verify IO completes error free.

Procedure

  1. Start NFS/CIFS I/O to the storage array from multiple connected hosts.
  2. Verify I/O runs without errors.

Result

1. PASS. All I/O operations completed without errors. 

 

1.1.4 Storage Device – w/Congested Fabric

 

Test Objective

  1. Create network bottleneck through a single Fabric ISL.
  2. Configure multiple ‘iSCSI/NAS to host’ data streams sufficient to saturate the ISL’s available bandwidth for 30 minutes. 
  3. Verify IO completes error free.

Procedure

  1. Start NFS/CIFS and iSCSI I/O to the storage array from multiple hosts.
  2. Disable redundant ISL links in the VCS fabric to isolate a single ISL.
  3. Verify I/O runs without errors.

Result

1. PASS. I/O completed successfully on all hosts.

 

1.1.5 Storage Device – iSCSI Protocol Jammer Test Suite

 

Test Objective

  1. Perform Protocol Jammer Tests including areas such as:
    - CRC corruption,
    - packet corruption,
    - missing frame,
    - host error recovery,
    - target error recovery

Procedure

  1. Insert Jammer device in the I/O path on the storage end.
  2. Execute the following Jammer scenarios
    - CRC corruption
    - Drop packets to and from the target.
    - Replace IDLE with Pause Frame
  3. Verify Jammer operations and recovery with Analyzer.

 

Result

1. PASS. I/O recovered in all instances after the jammer operations.

 

1.1.6 Storage Device – NAS/CIFS Protocol Jammer Test Suite

 

Test Objective

  1. Perform Protocol Jammer Tests including areas such as:
    - CRC corruption,
    - packet corruption,
    - missing frame,
    - host error recovery,
    - target error recovery

Procedure

  1. Insert Jammer device in the I/O path on the storage end.
  2. Execute the following Jammer scenarios:
    - CRC corruption
    - Drop packets to and from the target.
    - Replace IDLE with Pause Frame
  3. Verify Jammer operations and recovery with Analyzer.       

Result

1. PASS. I/O recovered in all instances after the jammer operations.

 

1.2 STRESS & ERROR RECOVERY

 

1.2.1 Storage Device Fabric IO Integrity – Congested Fabric

 

Test Objective

  1. From all available initiators, start a mixture of READ/WRITE/VERIFY traffic with random data patterns continuously to all their targets overnight. 
  2. Verify no host application failover or unexpected change in I/O throughput occurs.
  3. Configure fabric & devices for maximum link & device saturation.
  4. Include both iSCSI & NAS/CIFS traffic. (if needed -- add L2 Ethernet traffic to fill available bandwidth)

Procedure

  1. Start NFS/CIFS and iSCSI I/O to the storage array from multiple hosts.
  2. Setup a mix of READ/WRITE traffic.
  3. Verify all I/O complete without issues.

Result

1. PASS. All I/O completed without errors.

 

1.2.2 Storage Device Integrity – Device Recovery from Port Toggle and Manual Cable Pull

 

Test Objective

  1. With I/O running, perform a quick port toggle on every Storage Device & Adapter port.
  2. Verify host I/O will recover.
  3. Perform sequentially for each Storage Device & Adapter port.

Procedure

  1. Setup multipath on host and start I/O
  2. Perform multiple iterations of sequential port toggles across initiator and target switch ports.

Result

1. PASS. I/O failed over and recovered successfully.

 

1.2.3 Storage Device Integrity – Device Recovery from Device Relocation

 

Test Objective

  1. With I/O running, manually disconnect and reconnect port to different switch in same fabric.
  2. Verify host I/O will failover to alternate path and toggled path will recover.
  3. Perform sequentially for each Storage Device & Adapter port.
  4. Repeat test for all switch types.

Procedure

  1. Setup multipath on host and start I/O
  2. Move storage target ports to different switch ports in the fabric.

Result

1. PASS. I/O failed over and recovered successfully.

 

1.2.4 Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run

 

Test Objective

  1. Sequentially toggle each Initiator and Target port in the fabric.
  2. Verify host I/O will recover to alternate path and toggled path will recover.
  3. Run for 24 hours.

Procedure

  1. Setup multipath on host and start I/O
  2. Perform multiple iterations of sequential port toggles across initiator and target switch ports.

Result

1. PASS. I/O failed over and recovered successfully.

 

1.2.5 Storage Device Recovery – ISL Port Toggle – Extended Run

 

Test Objective

  1. Sequentially toggle each ISL path on all switches.  Host I/O may pause, but should recover.
  2. Verify fabric ISL path redundancy between hosts & storage devices.
  3. Verify host I/O throughout test.

Procedure

  1. Setup host multipath with links on different switches in the VCS fabric and start I/O.
  2. Perform multiple iterations of sequential ISL toggles across the fabric.

Result

1. PASS. I/O re-routes to available paths in the VCS fabric and recovers when the link is restored.

 

1.2.6 Storage Device Recovery – ISL Port Toggle (Entire Switch)

 

Test Objective

  1. Sequentially, and for all switches, disable all ISLs on the switch under test.
  2. Verify fabric switch path redundancy between hosts & storage devices.
  3. Verify switch can merge back in to the fabric.
  4. Verify host I/O path throughout test.

Procedure

  1. Setup host multipath with links on different switches in the VCS fabric and start I/O.
  2. Perform multiple iterations of sequentially disabling all ISLs on a switch in the fabric

Result

1. PASS. I/O failed over to alternate path and recovered once the switch merged back in the fabric.

 

1.2.7 Storage Device Recovery – Switch Offline

 

Test Objective

  1. Toggle each switch in sequential order. 
  2. Include switch enable/disable, power on/off, and reboot testing.

Procedure

  1. Setup host multipath with links on different switches in the VCS fabric and start I/O.
  2. Perform multiple iterations of sequential disable/enable, power on/off and reboot of all the switches in the fabric.

Result

1. PASS. I/O failed over to alternate path and recovered once the switch merged back in the fabric.

 

1.2.8 Storage Device Recovery – Switch Firmware Download HCL (Where Applicable)

 

Test Objective

  1. Sequentially perform firmware maintenance procedure on all device connected switches under test.
  2. Verify Host I/O will continue (with minimal disruption) through the “firmware download” and device pathing will remain consistent.

Procedure

  1. Setup host multipath with links on different switches in the VCS fabric and start I/O.
  2. Sequentially perform firmware upgrades on all switches in the fabric.

Result

1. PASS. I/O failed over during the switch reloads. All switches need to be at the same code level to rejoin the fabric.

 

1.2.9 Workload Simulation Test Suite

 

Test Objective

  1. Validate Storage/Fabric behavior while running a workload simulation test suite.
  2. Areas of focus may include VM environments, de-duplication/compression data patterns, and database simulation.

Procedure

  1. Set up four standalone hosts for iSCSI and four for NAS (two for CIFS and two for NFS).
  2. Use the Medusa I/O tool for generating I/O and simulating workloads (an illustrative command-line sketch follows this procedure).
    a. Run random and sequential I/O in a loop at block transfer sizes of 512, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m.
    b. Include a nested loop of 100% read, 100% write, and 50% read/write.
  3. Run File Server simulation workload
    a. Run Microsoft Exchange Server simulation workload
  4. Set up an ESX cluster of two hosts with four worker VMs per host. Use the VMware IOAnalyzer tool for generating I/O and simulating workloads.
    - Run random and sequential IO at large and small block transfer sizes.
    - Run SQL Server simulation workload
    - Run OLTP simulation workload
    - Run Web Server simulation workload
    - Run Video on Demand simulation workload
    - Run Workstation simulation workload
    - Run Exchange server simulation workload

Result

1. PASS. All workload runs were monitored at the host, storage, and fabric, and were verified to complete without any I/O errors or faults.