
Data Center Solution, Storage-Validation Test: Brocade VCS Fabric and Violin Memory 6232 Flash Storage Array

 

 

Preface

 

Overview

The Solid State Ready (SSR) program is a comprehensive testing and configuration initiative to provide Fibre Channel SAN and IP interoperability with flash storage. This program provides testing of multiple fabrics, heterogeneous servers, NICs, and HBAs in large port-count Brocade environments.

 

The SSR qualification program will help verify seamless interoperability and optimum performance with solid state storage in Brocade SAN fabrics.

 

Purpose of This Document

This document provides the validation of Brocade VCS fabric technology with the Violin 6232 all-flash storage array, using multiple switch platforms, HBAs, and server operating systems. This validation shows that the Violin 6232 interoperates properly within a Brocade Ethernet fabric while supporting the performance and low latency associated with solid state storage.

 

Audience

The content in this document is written for a technical audience, including solution architects, system engineers, and technical development representatives.

 

Objectives

  1. Test the Violin 6232 array with the Brocade Ethernet fabric under different stress and error recovery scenarios to validate the interoperability and integration of the array with Brocade Ethernet fabrics.
  2. Validate the performance of the IP fabric in a solid state storage environment for high-throughput and low-latency applications.

 

Test Conclusions

  1. Achieved a 100% pass rate on all test cases in the SSR qualification test plan. The network and the storage were able to handle the various stress and error recovery scenarios without issue.
  2. Different I/O workload scenarios were simulated using the Medusa, Vdbench, and VMware IOAnalyzer tools, and sustained performance levels were achieved across all workload types. The results confirm that the Violin Memory 6232 array interoperates seamlessly with the Brocade Ethernet fabric, and together they demonstrate high availability, performance, and low latency.
  3. For optimal availability and performance, consideration should be given to the multipath configuration on the host side. Windows Server 2008 and 2012 provide round-robin behavior by default, Linux systems benefit from adding a custom entry to /etc/multipath.conf, and VMware hosts should be changed from the default ‘Most Recently Used (VMware)’ setting to ‘Round Robin (VMware)’. Actively using all available paths provides a significant improvement in throughput.
  4. The results confirm that the Violin Memory 6232 array interoperates seamlessly with the Brocade VCS fabric, and the configuration demonstrated high availability and sustained performance.
  5. The switches in the VCS fabric should have a sufficient number of ISLs, with multiple uplinks, to provide adequate bandwidth and redundancy.

 

Related Documents

 

References

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Test Architects: Mike Astry, Patrick Stander
  • Test Engineer: Randy Lodes

 

About Brocade

Brocade networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One™ strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

About Violin Memory

Business in a Flash. Violin Memory transforms the speed of business with high performance, always available, low cost management of critical business information and applications.

 

Violin’s All-Flash optimized solutions accelerate breakthrough CAPEX and OPEX savings for building the next generation data center. Violin’s Flash Fabric Architecture (FFA) speeds data delivery with chip-to-chassis performance optimization that achieves lower consistent latency and cost per transaction for Cloud, Enterprise and Virtualized mission-critical applications. Violin’s All-Flash Arrays and Appliances, and enterprise data management software solutions enhance agility and mobility while revolutionizing datacenter economics.

 

Founded in 2005, Violin Memory is headquartered in Santa Clara, California.

 

Document History

Date                  Version        Description

8-19-2014           1.0                Initial Version

 

Test Plan

The storage array is connected via a Brocade VCS Ethernet fabric to multiple server hosts to drive IO in a multipath configuration. Error injection is introduced, and failover & recovery behaviors are observed. IO performance is observed across different workload configurations.  

 

Scope

Testing will be performed with a mix of GA and development versions of Brocade’s Network Operating System (NOS) running on Brocade VDX switches configured to form a Brocade VCS Ethernet fabric.

 

Testing is centered on interoperability and optimal configuration. Performance is observed within the context of best-practice fabric configuration; however, absolute maximum benchmark reporting of storage performance is beyond the scope of this test.

 

Details of the test steps are covered in the “Test Case Descriptions” section. The standard test bed setup includes IBM/HP/Dell chassis server hosts with Brocade/Intel/Emulex NICs and two uplinks from every host to a Brocade VCS fabric. IO generators include Medusa Labs Test Tools, Vdbench, Iometer, and VMware IOAnalyzer.

 

Test Configuration

The following shows the test configuration and network topology.

 

 

Test Configuration.jpg

   Test Configuration

 

DUT Descriptions

The following tables provide details about the devices under test (DUT) and the test equipment.

 

Storage Array

DUT ID: Violin Memory 6232
Model: 6232
Vendor: Violin Memory Systems
Description: The Violin Memory 6232 flash storage array is an all-flash array that supports up to 64 MLC VIMMs (Violin Intelligent Memory Modules). The unit under test is populated with 32 VIMMs. Each controller supports 4x 10Gb Ethernet connections.

 

Brocade Ethernet Fabric Switches

DUT ID   Model        Vendor    Description
VDX-1    VDX6730-32   Brocade   32-port 10Gb switch (24x10GbE / 8x8Gb FC)
VDX-2    VDX6730-76   Brocade   76-port 10Gb switch (60x10GbE / 16x8Gb FC)
VDX-3    VDX6720-24   Brocade   24-port 10Gb Ethernet fabric switch
VDX-4    VDX6740      Brocade   48-port 10Gb switch (48x10Gb / 4x40Gb)
VDX-5    VDX6740      Brocade   48-port 10Gb switch (48x10Gb / 4x40Gb)
VDX-6    VDX6730-32   Brocade   32-port 10Gb switch (24x10GbE / 8x8Gb FC)
VDX-7    VDX6730-76   Brocade   76-port 10Gb switch (60x10GbE / 16x8Gb FC)
VDX-8    VDX6720-60   Brocade   60-port 10Gb Ethernet fabric switch

 

DUT Specifications

 

Storage                                Version
Violin Memory 6232 array               V6.3.1

Brocade switches                       Version
VDX 6740 w/ VCS Fabric License         NOS 4.1.2
VDX 6730 w/ VCS Fabric License         NOS 4.1.2
VDX 6720 w/ VCS Fabric License         NOS 4.1.2

Adapters                               Version
Brocade/QLogic 1020 2-port 10Gb CNA    3.2.4.0
Emulex OCe14102-UM 2-port 10Gb CNA     10.0.803.23
Intel X520-2 2-port 10Gb CNA           3.21.2
Broadcom BCM57810 10Gb NIC             1.78.17-0

 

DUT ID   Server                  RAM     Processor            OS
SRV-1    HP ProLiant DL380p G8   160GB   Intel Xeon E5-2640   VMware ESXi 5.5 [cluster]
SRV-2    HP ProLiant DL380p G8   160GB   Intel Xeon E5-2640   VMware ESXi 5.5 [cluster]
SRV-3    IBM System x3630 M4     24GB    Intel Xeon E5-2420   RHEL 6.5 x86_64
SRV-4    Dell PowerEdge R720     64GB    Intel Xeon E5-2640   Windows Server 2012
SRV-5    HP ProLiant DL380p G8   32GB    Intel Xeon E5-2690   Windows Server 2012 R2
SRV-6    HP ProLiant DL385p G8   16GB    AMD Opteron 6212     Windows Server 2008 R2
SRV-7    Dell PowerEdge R720     16GB    Intel Xeon E5-2620   SLES 11.3 x86_64
SRV-8    Dell PowerEdge R720     16GB    Intel Xeon E5-2620   RHEL 6.5 x86_64

 

Test Equipment                   Version
Finisar 10Gb Analyzer/Jammer     Xgig-B2100C
Medusa Labs Test Tools           6.0
Vdbench                          5.0401
Iometer                          1.1.0-rc1
VMware IOAnalyzer                1.6.0

 

Configure DUT and Test Equipment

These are the steps for configuring the DUT and the test equipment.

 

Step 1. Configure VCS Fabric

The Brocade VDX switches are configured to form a Brocade VCS fabric in a Logical Chassis cluster mode. Refer to the Brocade Network OS Administrator’s Guide for details regarding VCS fabric configuration.
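For reference only, converting standalone NOS 4.x switches into a logical chassis cluster generally resembles the following sketch; the VCS ID and RBridge ID values are illustrative, and the exact syntax should be confirmed against the Network OS Administrator’s Guide for the release in use.

< ========== >
VDX6730_066_075# vcs vcsid 44 rbridge-id 101 logical-chassis enable
VDX6730_066_075# show vcs
< ========== >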

 

Step 2. Configure VLANs on VDX

Create two VLANs, as shown below.

 

< ========== >

VDX6730_066_075# conf t

VDX6730_066_075(config)# interface Vlan 7

VDX6730_066_075(config-Vlan-7)# exit

VDX6730_066_075(config)# interface Vlan 8

VDX6730_066_075(config-Vlan-8)# CTRL-Z

< ========== >

 

Step 3. Configure Host Network Interfaces

a. Both interfaces are used from a dual-port 10GE NIC, configured in different subnets. Jumbo frames are configured on all host ports and switch ports. An example of a Linux host network configuration is shown below.

 

< ========== >

hb067176:~ # ifconfig eth2

eth2      Link encap:Ethernet  HWaddr 3C:D9:2B:F6:DF:B8

          inet addr:192.168.9.176  Bcast:192.168.9.255  Mask:255.255.255.0

          inet6 addr: fe80::3ed9:2bff:fef6:dfb8/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

hb067176:~ # ifconfig eth3

eth3      Link encap:Ethernet  HWaddr 3C:D9:2B:F6:DF:BC

          inet addr:192.168.10.176  Bcast:192.168.10.255  Mask:255.255.255.0

          inet6 addr: fe80::3ed9:2bff:fef6:dfbc/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1

< ========== >
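To make the jumbo frame setting persistent across reboots on a Linux host, the MTU can also be set in the interface configuration file. A minimal sketch, assuming a RHEL-style ifcfg file for the eth2 interface shown above (the file path and key names differ on SLES):

< ========== >
# /etc/sysconfig/network-scripts/ifcfg-eth2 (RHEL-style; illustrative)
DEVICE=eth2
BOOTPROTO=static
IPADDR=192.168.9.176
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes
< ========== >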

 

b. Below is an example of configuring the network interface on a VMware host.

 

VMware Host Network Interface Configuration.jpg 

   VMware Host Network Interface Configuration

 

Step 4. Configure VDX Switch Port Settings

a. The host interfaces on the 192.168.9.0 subnet are placed in VLAN 7, and the 192.168.10.0 interfaces are placed in VLAN 8. Below is an example of the syntax for configuring a VDX switch port.

 

< ========== >

VDX6730_066_075# conf t

VDX6730_066_075# int te 101/0/5

VDX6730_066_075(conf-if-te-101/0/5)# mtu 9216

VDX6730_066_075(conf-if-te-101/0/5)# no fabric isl enable

VDX6730_066_075(conf-if-te-101/0/5)# no fabric trunk enable

VDX6730_066_075(conf-if-te-101/0/5)# switchport

VDX6730_066_075(conf-if-te-101/0/5)# switchport mode access

VDX6730_066_075(conf-if-te-101/0/5)# switchport access vlan 7

VDX6730_066_075(conf-if-te-101/0/5)# spanning-tree shutdown

VDX6730_066_075(conf-if-te-101/0/5)# no shutdown

< ========== >

 

b. The storage ports are placed in two Link Aggregation Groups (LAG); two ports on controller A in the first group, and two ports on controller B in the second group.

 

This shows how to configure the two LAG groups on a VDX switch.

< ========== >

VDX6730_066_075# conf t

VDX6730_066_075(config)# interface Port-channel 7

VDX6730_066_075(config-Port-channel-7)# vlag ignore-split

VDX6730_066_075(config-Port-channel-7)# mtu 9216

VDX6730_066_075(config-Port-channel-7)# switchport

VDX6730_066_075(config-Port-channel-7)# switchport mode access

VDX6730_066_075(config-Port-channel-7)# switchport access vlan 7

VDX6730_066_075(config-Port-channel-7)# spanning-tree shutdown

VDX6730_066_075(config-Port-channel-7)# no shutdown

VDX6730_066_075(config-Port-channel-7)# exit

VDX6730_066_075(config)# interface Port-channel 8

VDX6730_066_075(config-Port-channel-8)# vlag ignore-split

VDX6730_066_075(config-Port-channel-8)# mtu 9216

VDX6730_066_075(config-Port-channel-8)# switchport

VDX6730_066_075(config-Port-channel-8)# switchport mode access

VDX6730_066_075(config-Port-channel-8)# switchport access vlan 8

VDX6730_066_075(config-Port-channel-8)# spanning-tree shutdown

VDX6730_066_075(config-Port-channel-8)# no shutdown

VDX6730_066_075(config-Port-channel-8)# CTRL-Z

< ========== >

 

This shows how switch ports connected to controller A are configured for the first port group.

< ========== >

VDX6730_066_075# config t

VDX6730_066_075(config)# int te 112/0/7

VDX6730_066_075(conf-if-te-112/0/7)# no fabric isl enable

VDX6730_066_075(conf-if-te-112/0/7)# no fabric trunk enable

VDX6730_066_075(conf-if-te-112/0/7)# channel-group 7 mode active type standard

VDX6730_066_075(conf-if-te-112/0/7)# lacp timeout long

VDX6730_066_075(conf-if-te-112/0/7)# no shutdown

VDX6730_066_075(conf-if-te-112/0/7)# CTRL-Z

< ========== >

 

This shows how switch ports connected to controller B are configured for the second port group.

< ========== >

VDX6730_066_075# config t

VDX6730_066_075(config)# int te 111/0/7

VDX6730_066_075(conf-if-te-111/0/7)# no fabric isl enable

VDX6730_066_075(conf-if-te-111/0/7)# no fabric trunk enable

VDX6730_066_075(conf-if-te-111/0/7)# channel-group 8 mode active type standard

VDX6730_066_075(conf-if-te-111/0/7)# lacp timeout long

VDX6730_066_075(conf-if-te-111/0/7)# no shutdown

VDX6730_066_075(conf-if-te-111/0/7)# CTRL-Z

< ========== >

 

Step 5. Configure Violin Array iSCSI targets and IP Addressing

a. The following enables iSCSI on the Violin master array controller Memory Gateway (MG) CLI.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] > en

Brocade-Array-mg-a [Brocade-Array: master] # conf t

Brocade-Array-mg-a [Brocade-Array: master] (config) # iscsi enable global

Brocade-Array-mg-a [Brocade-Array: master] (config) # wr mem

< ========== >

 

b. Using the iscsi bond command, create a bonded interface consisting of two interfaces from each controller.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] (config) # iscsi bond iscsi-bond0 interface eth6 interface eth7 mode link-agg

< ========== >

 

c. Configure the IP address for the bonded interface.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] (config) # interface iscsi-bond0 ip address 192.168.9.1 255.255.255.0

Brocade-Array-mg-a [Brocade-Array: master] (config) # mtu 9000

< ========== >

 

d. Create a new iSCSI target on the Violin master MG.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] (config) # iscsi target create iscsi9

< ========== >

 

e. Bind the bonded interface to the new iSCSI target on the Violin master MG.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] (config) # iscsi target bind iscsi9 to 192.168.9.1

< ========== >

 

f. Save the changes on the Violin master MG.

< ========== >

Brocade-Array-mg-a [Brocade-Array: master] (config) # write memory

< ========== >

 

g. Configure the IP address for the bonded interface on the Violin standby MG (controller B).

< ========== >

Brocade-Array-mg-b [Brocade-Array: standby] (config) # interface iscsi-bond0 ip address 192.168.10.1 255.255.255.0

Brocade-Array-mg-b [Brocade-Array: standby] (config) # mtu 9000

< ========== >

 

h. Bind the bonded interface to the iSCSI target on the Violin standby MG.

< ========== >

Brocade-Array-mg-b [Brocade-Array: standby] (config) # iscsi target bind iscsi9 to 192.168.10.1

Brocade-Array-mg-b [Brocade-Array: standby] (config) # wr mem

< ========== >

 

i. On the Brocade VDX switch, confirm that the Link Aggregation has negotiated successfully and links are up.

< ========== >

VDX6730_066_075# show port-channel 7

 LACP Aggregator: Po 7 (vLAG)

 Aggregator type: Standard

 Ignore-split is enabled

  Member rbridges:

    rbridge-id: 111 (1)

    rbridge-id: 112 (1)

  Admin Key: 0007 - Oper Key 0007

  Partner System ID - 0xffff,00-13-cc-02-0e-cf

  Partner Oper Key 0033

 Member ports on rbridge-id 111:

   Link: Te 111/0/8 (0x6F18040007) sync: 1

 

 Member ports on rbridge-id 112:

   Link: Te 112/0/7 (0x7018038006) sync: 1   *

 

VDX6730_066_075# show port-channel 8

 LACP Aggregator: Po 8

 Aggregator type: Standard

 Ignore-split is enabled

  Admin Key: 0008 - Oper Key 0008

  Partner System ID - 0xffff,00-13-cc-02-2c-29

  Partner Oper Key 0033

 Member ports on rbridge-id 111:

   Link: Te 111/0/7 (0x6F18038006) sync: 1   *

 

 Member ports on rbridge-id 112:

   Link: Te 112/0/8 (0x7018040007) sync: 1 

< ========== >

 

Step 6. Configure iSCSI LUNs

a. Determine the host IQN. This example is for RedHat Linux.

 

< ========== >

hb067176:~ # cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.1994-05.com.redhat:1471d387fb91

< ========== >

 

b. Add the host IQN to the Violin array.

 

Violin Memory Configuring Host IQN.jpg 

 

   Violin Memory Configuring Host IQN

 

c. Eight 5GB LUNs are presented to the host. Below is an example from the Violin Memory array configuration tool showing the LUNs.

 

Violin Memory LUN Presentation.jpg 

   Violin Memory LUN Presentation

 

Step 7. Discover iSCSI Targets

a. On a Linux host, use the iscsiadm command set to discover, log into and verify the iSCSI targets.

 

- Discover targets.

< ========== >

hb067176:~ # iscsiadm -m discovery -t sendtargets -p 192.168.9.1

192.168.9.1:3260,1 iqn.2004-02.com.vmem:Brocade-Array-mg-a:iscsi9

192.168.8.1:3260,1 iqn.2010-06.com.purestorage:flasharray.39cc4997ca5ea229

hb067176:~ # iscsiadm -m discovery -t sendtargets -p 192.168.10.1

192.168.10.1:3260,1 iqn.2004-02.com.vmem:Brocade-Array-mg-b:iscsi9

< ========== >

 

- Log in to targets.

< ========== >

hb067176:~ # iscsiadm -m node -L all

< ========== >

 

- Verify targets.

< ========== >

hb067176:~ # iscsiadm -m session

tcp: [1] 192.168.9.1:3260,1 iqn.2004-02.com.vmem:Brocade-Array-mg-a:iscsi9

tcp: [2] 192.168.10.1:3260,1 iqn.2004-02.com.vmem:Brocade-Array-mg-b:iscsi9

< ========== >

 

b. On a VMware host, target information is entered into ‘iSCSI Adapter Software -> Properties’ as shown below.

 

VMware iSCSI Target Information.jpg 

   VMware iSCSI Target Information

 

c. On Windows Server hosts, discovery is completed through the ‘iSCSI Initiator Properties’ dialog as shown below.

 

Windows Server iSCSI Targets.jpg

   Windows Server iSCSI Targets

 

Step 8. Configure Multipathing on Each Host

a. For Linux hosts, this configuration allows all paths to be used in a round-robin fashion. This provides superior performance to the default Linux settings, which use only a single active path per LUN. Below is the recommended /etc/multipath.conf entry on Linux systems.

 

<==========>

devices {

    device {

        vendor                  "VIOLIN"

        path_selector           "round-robin 0"

        path_grouping_policy    multibus

        rr_min_io               1

        path_checker            tur

        fast_io_fail_tmo        10

        dev_loss_tmo            30

    }

}

<==========>
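After editing /etc/multipath.conf, the multipath maps can be reloaded so that the round-robin policy takes effect without a reboot. A brief sketch (service management commands vary by distribution):

<==========>
hb067176:~ # service multipathd reload   # re-read /etc/multipath.conf
hb067176:~ # multipath -r                # rebuild the multipath device maps
<==========>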

 

This shows the multipath configuration on a Linux host.

< ========== >

hb067176:~ # multipath -ll

mpathag (SVIOLIN_SAN_ARRAY_B716B17947ED61CC) dm-10 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:8  sdi 8:128 active ready running

  `- 10:0:0:8 sdq 65:0  active ready running

mpathz (SVIOLIN_SAN_ARRAY_B716B1794F22910A) dm-9 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:1  sdb 8:16  active ready running

  `- 10:0:0:1 sdj 8:144 active ready running

mpathaf (SVIOLIN_SAN_ARRAY_B716B17984A44ACE) dm-3 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:7  sdh 8:112 active ready running

  `- 10:0:0:7 sdp 8:240 active ready running

mpathae (SVIOLIN_SAN_ARRAY_B716B179C97913F5) dm-8 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:6  sdg 8:96  active ready running

  `- 10:0:0:6 sdo 8:224 active ready running

mpathad (SVIOLIN_SAN_ARRAY_B716B179031B9AE1) dm-4 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:5  sdf 8:80  active ready running

  `- 10:0:0:5 sdn 8:208 active ready running

mpathac (SVIOLIN_SAN_ARRAY_B716B1797FDB343B) dm-7 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:4  sde 8:64  active ready running

  `- 10:0:0:4 sdm 8:192 active ready running

mpathab (SVIOLIN_SAN_ARRAY_B716B17953881EBB) dm-5 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:3  sdd 8:48  active ready running

  `- 10:0:0:3 sdl 8:176 active ready running

mpathaa (SVIOLIN_SAN_ARRAY_B716B1790D302D83) dm-6 VIOLIN,SAN ARRAY

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=1 status=active

  |- 9:0:0:2  sdc 8:32  active ready running

  `- 10:0:0:2 sdk 8:160 active ready running

< ========== >

 

b. For VMware hosts, the path selection policy is changed from the default ‘Most Recently Used (VMware)’ to ‘Round Robin (VMware)’, as shown below. This allows all paths to be used in a round-robin fashion and provides superior performance to the default setting, which uses only a single active path per LUN.

 

VMware Multipath Configuration.jpg

   VMware Multipath Configuration
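The same policy change can also be made from the ESXi command line rather than the vSphere Client. A minimal sketch, assuming ESXi 5.5; the device identifier shown is illustrative:

< ========== >
# List devices with their current path selection policy
~ # esxcli storage nmp device list
# Set Round Robin on a specific device (naa identifier is illustrative)
~ # esxcli storage nmp device set --device naa.600144f0d4434f45 --psp VMW_PSP_RR
< ========== >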

 

Step 9. Apply Additional Host Tuning

a. The first change selects the ‘noop’ I/O scheduler, which has been shown to provide better performance with lower CPU overhead than the default schedulers (usually ‘deadline’ or ‘cfq’).

 

b. The second change eliminates the collection of entropy for the kernel random number generator, which has high CPU overhead when enabled for devices supporting high IOPS.

 

c. The third change adjusts CPU affinity, reducing CPU load by redirecting I/O completions to the originating CPU.

 

The following shows the rules, applied at boot from the /etc/udev/rules.d/99-violin-storage.rules file.

<==========>

# Use noop scheduler for high-performance solid-state storage

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="VIOLIN", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="VIOLIN", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="VIOLIN", ATTR{queue/rq_affinity}="2"

<==========>
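The rules take effect automatically at boot. To apply them to devices that are already present without rebooting, the udev rules can be reloaded and re-triggered, as in this brief sketch:

<==========>
hb067176:~ # udevadm control --reload-rules
hb067176:~ # udevadm trigger --subsystem-match=block
<==========>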

 

d. Configuring these parameters on the host can ensure that the I/O pause time on path failure is reduced to an appropriate value. The configuration file is /etc/iscsi/iscsid.conf. The parameters that need to be changed are listed below, followed by a brief example.

 

  • node.session.timeo.replacement_timeout = 10
  • node.session.initial_login_retry_max = 4
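For reference, a brief sketch of how these entries appear in /etc/iscsi/iscsid.conf. Records for targets that have already been discovered can also be updated with iscsiadm; the target name below is taken from the discovery output in Step 7:

<==========>
# /etc/iscsi/iscsid.conf (excerpt) - applies to newly discovered targets
node.session.timeo.replacement_timeout = 10
node.session.initial_login_retry_max = 4

# Update an already-discovered node record (illustrative)
hb067176:~ # iscsiadm -m node -T iqn.2004-02.com.vmem:Brocade-Array-mg-a:iscsi9 -o update -n node.session.timeo.replacement_timeout -v 10
<==========>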

 

Step 10. Setup Workload Generators

We installed several different workload generators to get a variety of IO coverage. On Windows and Linux systems, Medusa Labs Test Tools and Vdbench are installed; on VMware systems, VMware’s IOAnalyzer is installed.
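As an illustration only (not taken from the test plan), a minimal Vdbench parameter file for driving raw-device IO against one of the multipath devices might look like the following; the device path, transfer size, read percentage, and run time are hypothetical:

<==========>
# Vdbench parameter file (illustrative)
sd=sd1,lun=/dev/mapper/mpathaa,openflags=o_direct,threads=8
wd=wd1,sd=sd1,xfersize=8k,rdpct=70,seekpct=100
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5
<==========>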

 

Step 11. Configure QoS

For host adapters supporting Data Center Bridging protocol for iSCSI, DCB QoS can be configured on the VCS fabric.

 

a. Here is an example CEE map with iSCSI CoS value “4” configured in addition to the Auto-NAS configuration:

 

< ========== >

VDX6730_066_075# show running-config cee-map

cee-map default

precedence 1

priority-group-table 1 weight 30 pfc on

priority-group-table 15.0 pfc off

priority-group-table 15.1 pfc off

priority-group-table 15.2 pfc off

priority-group-table 15.3 pfc off

priority-group-table 15.4 pfc off

priority-group-table 15.5 pfc off

priority-group-table 15.6 pfc off

priority-group-table 15.7 pfc off

priority-group-table 2 weight 20 pfc off

priority-group-table 3 weight 20 pfc off

priority-group-table 4 weight 30 pfc on

priority-table 2 2 3 1 4 2 2 15.0

remap fabric-priority priority 2

remap lossless-priority priority 2

!

< ========== >

 

b. On switch ports which are attached to storage interconnects, the default CoS is set to 4 for iSCSI traffic.

< =========== >

VDX6730_066_075(conf-if-te-111/0/1)# qos cos 4

< =========== >

 

c. On switch ports which are attached to host adapters, the default CEE map is applied.

< =========== >

VDX6730_066_075(conf-if-te-75/0/10)# cee default

< =========== >

 

 

Test Cases

 

1.1     FABRIC INITIALIZATION – BASE FUNCTIONALITY
        Confirm basic IP Fabric functionality of the storage array
1.1.1   Storage Device – Physical and Logical Login with Speed Negotiation
1.1.2   iSCSI LUN Mapping
1.1.3   vLAG Configuration
1.1.4   Storage Device Multipath Configuration – iSCSI Path Integrity

1.2     ETHERNET STORAGE – ADVANCED FUNCTIONALITY
        Examine the storage behavior – bandwidth validation, traffic congestion, and protocol recovery
1.2.1   iSCSI Bandwidth Validation
1.2.2   Storage Device – w/ Congested Fabric
1.2.3   Storage Device – iSCSI Protocol Jammer Test Suite

1.3     STRESS & ERROR RECOVERY
        Confirm device integrity and recovery during congestion and error injection conditions
1.3.1   Storage Device Fabric IO Integrity – Congested Fabric
1.3.2   Storage Device Nameserver Integrity – Device Recovery with Port Toggle
1.3.3   Storage Device Nameserver Integrity – Device Recovery with Device Relocation
1.3.4   Storage Device Nameserver Stress – Device Recovery with Device Port Toggle
1.3.5   Storage Device Recovery – ISL Port Toggle
1.3.6   Storage Device Recovery – ISL Port Toggle (entire switch)
1.3.7   Storage Device Recovery – Switch Offline
1.3.8   Storage Device Recovery – Switch Firmware Download
1.3.9   Workload Simulation Test Suite

 

Test Case Descriptions

 

1.1 Fabric Initialization – Base Functionality

 

1.1.1 Storage Device – Physical and Logical Login with Speed Negotiation

 

Test Objective

  1. Verify device login to VDX switch with all supported speed settings.
  2. Configure the VDX switch for Auto-NAS.
  3. Configure the storage port for iSCSI connectivity. Validate login and base connectivity.

Procedure

1. Set the switch port speed settings to 10Gb and to auto-negotiate. Confirm the link is up.

<==========>

VDX6730_066_075(conf-if-te-101/0/5)# speed [1000 | auto]

<==========>

 

2. On the VDX 6740 switch, configure Auto-NAS.

<==========>

VDX6730_066_075(config)# nas auto-qos

VDX6730_066_075(config)# nas server-ip 192.168.7.4 vlan 7

<==========>

 

3. Configuring the storage port for iSCSI is detailed above in ‘Configuration Steps’.

 

Result

1. PASS. Storage negotiates with the fabric successfully at 10Gb and auto speeds. Verified with IO.

 

1.1.2 iSCSI LUN Mapping

 

Test Objective

  1. Verify host to LUN access with each mapped OS-type.

Procedure

  1. Configure a LUN on each OS platform (see ‘Configuration Steps’ above for detail) and verify access.

Result

1. PASS. LUNs are presented and verified with IO.

 

1.1.3 vLAG Configuration

 

Test Objective

  1. Configure vLAG connectivity from Storage Ports to 2 separate VDX switches.
  2. Verify data integrity through vLAG.

Procedure

  1. Configure LAG groups on the storage and VDX switches (see ‘Configuration Steps’ above for detail) and verify access.

Result

1. PASS. vLAG links are formed properly and support failover; confirmed with IO.

 

1.1.4 Storage Device Multipath Configuration – iSCSI Path integrity

 

Test Objective

  1. Verify multi-path configures successfully.
  2. Each adapter and storage port resides on a different switch.
  3. For all device paths, consecutively isolate individual paths and validate IO integrity and path recovery.

Procedure

  1. Configure multipath for each host (see ‘Configuration Steps’ above for detail). Isolate paths and verify failover.

Result

1. PASS. Multipath failover is successful; verified with IO.

 

1.2 Ethernet Storage - Advanced Functionality

 

1.2.1 iSCSI Bandwidth Validation

 

Test Objective

  1. Validate maximum sustained bandwidth to storage port via iSCSI.
  2. After 15 minutes, verify IO completes error-free.

Procedure

  1. Run large-block IO from multiple hosts to storage to saturate storage ports.

Result

1. PASS. IO completes successfully with no errors.

 

1.2.2 Storage Device – w/Congested Fabric

 

Test Objective

  1. Create network bottleneck through a single Fabric ISL. Configure multiple ‘iSCSI to host’ data streams sufficient to saturate the ISL’s available bandwidth for 30 minutes. 
  2. Verify IO completes error free.

Procedure

  1. Isolate a single ISL and saturate it with IO for 30 minutes.

Result

1. PASS. IO completes successfully with no errors.

 

1.2.3 Storage Device – iSCSI Protocol Jammer Test Suite

 

Test Objective

  1. Perform Protocol Jammer Tests including areas such as:
    - CRC corruption,
    - packet corruption,
    - missing frame,
    - host error recovery,
    - target error recovery

Procedure

1. Insert Finisar Jammer/Analyzer between switch and storage port.

2. Execute Jammer alterations in both directions (storage <-> host).

3. Verify recovery.

 

Result

1. PASS.

 

1.3 Stress and Error Recovery

 

1.3.1 Storage Device Fabric IO Integrity – Congested Fabric

 

Test Objective

  1. Configure fabric & devices for maximum link & device saturation.
  2. From all available initiators start a mixture of READ/WRITE/VERIFY traffic with random data patterns continuously to all their targets overnight. 
  3. Verify no host application failover or unexpected change in I/O throughput occurs.
  4. If needed, add L2 Ethernet traffic to fill available bandwidth.

Procedure

  1. Run large-block read/write IO from multiple hosts and confirm saturation on the links.

Result

1. PASS. IO completes error-free.

 

1.3.2 Storage Device Integrity – Device Recovery from Port Toggle – Manual Cable Pull

 

Test Objective

  1. Sequentially performed for each Storage Device & Adapter port.
  2. With I/O running, perform a quick port toggle on every Storage Device & Adapter port. 
  3. Verify host I/O will recover.

Procedure

  1. While running IO, perform a manual cable pull on host and storage ports.

 

Result

1. PASS.

 

1.3.3 Storage Device Integrity – Device Recovery from Device Relocation

 

Test Objective

  1. Sequentially performed for each Storage Device & Adapter port.
  2. With I/O running, manually disconnect and reconnect port to different switch in same fabric.
  3. Verify host I/O will failover to alternate path and toggled path will recover.  
  4. Repeat test for all switch types.

Procedure

  1. Configure an alternate switch port with same settings as port under test.
  2. Move cable to alternate port while running IO and verify recovery.

Result

1. PASS.

 

1.3.4 Storage Device Stress – Device Recovery from Device Port Toggle – Extended Run

 

Test Objective

  1. Sequentially toggle each Initiator and Target port in the fabric.
  2. Verify host I/O will recover to alternate path and toggled path will recover.
  3. Run for 24 hours.

Procedure

  1. Run IO for 24 hours and:
    a. Toggle all storage and host ports sequentially in a loop.
    b. Verify paths recover and IO completes error-free.

Result

1. PASS. Paths fail over and IO continues error-free.

 

1.3.5 Storage Device Recovery – ISL Port Toggle – Extended Run

 

Test Objective

  1. Verify fabric ISL path redundancy between hosts & storage devices.
  2. Sequentially toggle each ISL path on all switches. 
  3. Host I/O may pause, but should recover. 
  4. Verify host I/O throughout test.

Procedure

  1. Run IO and disable a single ISL port (see the sketch below).
  2. Allow IO to recover, then enable ISL port.
  3. Repeat sequentially for all ISL ports on all switches.
  4. Verify paths recover and IO completes error-free.
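A minimal sketch of toggling a single ISL port from the NOS CLI; the interface number is illustrative:

< ========== >
VDX6730_066_075# conf t
VDX6730_066_075(config)# int te 111/0/20
VDX6730_066_075(conf-if-te-111/0/20)# shutdown
! allow IO to fail over and stabilize, then re-enable the port
VDX6730_066_075(conf-if-te-111/0/20)# no shutdown
< ========== >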

Result

1. PASS. IO recovers and continues error-free.

 

1.3.6 Storage Device Recovery – ISL Port Toggle (entire switch)

 

Test Objective

  1. Verify fabric switch path redundancy between hosts & storage devices.
  2. Sequentially, and for all switches, disable all ISLs on the switch under test.
    a. For switches containing the device under test (i.e., switches A, B, E, and F in the diagram), IO will pause and then resume after the switch comes back online.
    b. For intermediate switches (i.e., switches C and D in the diagram), IO will pause and then resume.

Procedure

  1. Run IO and disable all ISL ports on single switch.
  2. Allow IO to recover; enable all ISL ports.
  3. Repeat sequentially for all switches.
  4. Verify paths recover and IO completes error-free.

Result

1. PASS. IO recovers and continues error-free.

 

1.3.7 Storage Device Recovery – Switch Offline

 

Test Objective

  1. Toggle each switch in sequential order using a mix of switchDisable, reboot, and power-cycle.
  2. For switches containing the device under test, IO will pause and then resume after the switch comes back online.
    a. For intermediate switches, IO will pause and then resume.

Procedure

  1. Run IO. Disable each switch with ‘switchDisable’ command.
    a. Repeat sequentially for all switches.
    b. Verify paths recover and IO completes error-free.
  2. Reboot each switch with ‘reboot’ command.
    a. Repeat sequentially for all switches.
    b. Verify paths recover and IO completes error-free.
  3. Manually power cycle each switch.
    a. Repeat sequentially for all switches.
    b. Verify paths recover and IO completes error-free.

Result

1. PASS. switchDisable command.

2. PASS. reboot command.

3. PASS. Power cycle.

 

1.3.8 Storage Device Recovery – Switch Firmware Download HCL (where applicable)

 

Test Objective

  1. Sequentially perform the firmware maintenance procedure on all device-connected switches under test.
  2. Verify host I/O continues (with minimal disruption) through the firmware download and device pathing remains consistent.

Procedure

  1. Download Brocade NOS firmware to all switches in fabric
  2. Activate each switch sequentially.
  3. Confirm that IO and multipathing continue error-free.

Result

1. PASS. Updated the NOS version from 4.1.2 to 4.1.2a on each switch.

2. PASS. Multipathing and IO recovered error-free.

 

1.3.9 Workload Simulation Test Suite (Optional)

 

Test Objective

  1. Validate Storage/Fabric behavior while running a workload simulation test suite.
  2. Areas of focus may include VM environments, de-duplication/compression data patterns, and database simulation.

Procedure

  1. Set up 4 standalone hosts for iSCSI. Use the Medusa I/O tool for generating I/O and simulating workloads.
    a. Run random and sequential I/O in a loop at block transfer sizes of 512, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m.
    b. Include a nested loop of 100% read, 100% write, and 50% read/write.
    c. Run File Server simulation workload
    d. Run Microsoft Exchange Server simulation workload
  2. Set up an ESX cluster of 2 hosts with 4 worker VMs per host. Use the VMware IOAnalyzer tool for generating I/O and simulating workloads.
    a. Run random and sequential IO at large and small block transfer sizes.
    b. Run SQL Server simulation workload
    c. Run OLTP simulation workload
    d. Run Web Server simulation workload
    e. Run Video on Demand simulation workload
    f. Run Workstation simulation workload
    g. Run Exchange server simulation workload

Result

1. PASS. Workloads complete successfully, error-free, and with expected throughput.