Design & Build

Data Center Solution, Storage-Validation Test: Brocade Gen 5 Fibre Channel and Tegile Flash Array

 

 

Preface

 

Overview

The Solid State Ready (SSR) program is a comprehensive testing and configuration initiative to provide Fibre Channel SAN and IP interoperability with flash storage. This program provides testing of multiple fabrics, heterogeneous servers, NICs, and HBAs in large port-count Brocade environments.

 

The SSR qualification program will help verify seamless interoperability and optimum performance with solid state storage in Brocade SAN fabrics.

 

Purpose of This Document

The goal of this document is to demonstrate the compatibility of Tegile HA2100EP arrays in a Brocade FC SAN fabric. This document provides a test report on the SSR qualification test plan executed on the Tegile HA2100EP array.

 

Audience

The target audience for this document includes storage administrators, solution architects, system engineers, and technical development representatives.

 

Objective

  1. Test the Tegile HA2100EP array with Brocade FC fabrics, in single and routed configurations for different stress and error recovery scenarios, to validate the interoperability and integration of the Tegile array with Brocade FC fabrics.
  2. Validate the performance of the FC fabric in a solid state storage environment for high-throughput and low-latency applications.

 

Test Conclusions

  1. Achieved a 100% pass rate on all test cases in the SSR qualification test plan. The network and the storage were able to handle the various stress and error recovery scenarios without any issues.
  2. Different I/O workload scenarios were simulated using the Medusa, Vdbench, and VMware IOAnalyzer tools, and sustained performance levels were achieved across all workload types. The Brocade fabric handled both low-latency and high-throughput I/O workloads with equal efficiency, without any I/O errors or packet drops.
  3. The results confirm that the Tegile HA2100EP array interoperates seamlessly with Brocade FC fabrics and demonstrates high availability and sustained performance.
  4. For optimal availability and performance, consideration should be given to the multipath configuration on the host side. While Windows hosts provide round-robin behavior by default, Linux and VMware systems require a custom entry, provided by Tegile, in the multipath configuration settings to efficiently use all available paths and provide high availability.
  5. A sufficient number of ISLs needs to be provisioned when using 8Gb FC switches in the fabric to prevent bottlenecks. Using Gen 5 16Gb FC switches required fewer ISL connections.
  6. We recommend enabling “Bottleneck Detection” on the switches in the FC fabric to proactively monitor fabric performance and best leverage the investment in high-performance, low-latency storage.

 


 

Document History

Date       | Version | Description
2014-07-21 | 1.0     | Initial Publication

 

About Brocade

Brocade networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One™ strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Test Architects: Mike Astry, Patrick Stander
  • Test Engineer: Subhish Pillai

 

About Tegile

Tegile Systems™ is pioneering a new generation of flash-driven enterprise storage arrays that balance performance, capacity, features and price for virtualization, file services and database applications.

 

Our hybrid arrays are significantly faster than legacy arrays and significantly less expensive than all-solid-state disk-based arrays. Featuring both NAS and SAN connectivity, these virtual data storage systems are easy to use, fully redundant, and highly scalable. Additionally, they come complete with built-in snapshot, replication, near-instant recovery, and virtualization management features.

 

Tegile’s patented IntelliFlash™ technology accelerates performance to solid state speeds without sacrificing the capacity or cost advantage of hard disk storage. Additionally, it enables on-the-fly deduplication and compression, so usable capacity is far greater than raw capacity.

 

Test Plan

 

Scope

Testing is performed with a mix of released and development versions of Brocade’s Fabric Operating System (FOS) in a heterogeneous environment. Test devices include Brocade directors and switches configured in routed and non-routed fabric configurations.

 

Testing is at the system level, including interoperability of storage devices with the Brocade fabric switches. Performance is observed in the context of best-practice fabric configuration; however, absolute maximum benchmark reporting of storage performance is beyond the scope of this test.

 

Details of the test procedure are covered in the “Test Cases” section. The standard test configuration includes IBM, HP, and Dell chassis server hosts with QLogic and Emulex HBAs. There are two uplinks from each host to a Brocade FC switch. The Tegile array “active” and “backup” controller HBA ports are balanced between the Brocade FC fabrics.

 

Test Topology

The following diagram shows the test topology and devices configured for this validation test.

 

[Figure: Test Topology (Tegile_Testbed_CPIL_edit_2.jpg)]

 

DUT Descriptions

The following describes the devices under test (DUTs).

 

Storage Array

DUT ID          | Model    | Vendor | Description                                                           | Notes
Tegile HA2100EP | HA2100EP | Tegile | Dual-controller array with FC, iSCSI, NFS, and CIFS protocol support | Each controller has 2x8Gb FC ports and 2x10GbE ports in an active-passive configuration

 

Switches

DUT ID     | Model      | Vendor  | Description
6510-1/2/3 | BR-6510    | Brocade | 48-port 16Gb FC switch
5100-1/2/3 | BR-5100    | Brocade | 40-port 8Gb FC switch
DCX-3      | DCX        | Brocade | 8-slot 8Gb FC chassis
DCX-4      | DCX-4S     | Brocade | 4-slot 8Gb FC chassis
DCX-2      | DCX 8510-8 | Brocade | 8-slot 16Gb FC chassis
DCX-1      | DCX 8510-4 | Brocade | 4-slot 16Gb FC chassis
VDX-1/2    | VDX 6730   | Brocade | 60x10GbE and 16x8Gb FC port switch

 

 

DUT Specifications

The following provides details about the specifications of each DUT.

 

Device         | Release                   | Configuration Options                             | Notes
HA2100EP Array | 2.1.1.3(140308)-1473      | Setup HA resource groups for disk pool            |
BR-6510        | FOS v7.2.1 and v7.3.0_dev | Fabric Vision License, Integrated Routing License |
BR-5100        | FOS v7.2.1 and v7.3.0_dev | Fabric Vision License                             |
DCX            | FOS v7.3.0_dev            |                                                   | Requires 8Gb FC blade
DCX-4S         | FOS v7.3.0_dev            |                                                   | Requires 8Gb FC blade
DCX 8510-8     | FOS v7.3.0_dev            |                                                   | Requires 16Gb FC blade
DCX 8510-4     | FOS v7.3.0_dev            |                                                   | Requires 16Gb FC blade
VDX 6730       | NOS v4.1.1                | VCS Fabric License                                |

 

 

Test Equipment

The following test equipment was used for this validation test.

 

Device Type     | Model
Server (SRV1-8) | HP DL380p Gen8, HP DL360p Gen8, IBM x3630M4, IBM x3650M4
HBA             | QLogic 2600, QLogic (Brocade) 1860-FC, Emulex LPe12000
CNA             | QLogic (Brocade) 1860-CNA, Emulex OCe14102-UM
Analyzer/Jammer | JDSU Xgig
I/O Generator   | Medusa v6.0, Vdbench v5.04, VMware IOAnalyzer

 

DUT Configuration Procedures

Required and recommended configurations for the DUT are provided below.

 

1.0 Multipath Settings on Host

Configuring the multipath settings allows for proper failover and load balancing across the available links. Multipath settings for Linux and VMware as recommended by Tegile are provided here.

 

For Windows, the native MPIO settings are used and no special configuration is necessary.

 

1.1 Multipath for Linux

Add the following to /etc/multipath.conf

 

< =========== >

devices {
    device {
        vendor                  "TEGILE"
        product                 "ZEBI-FC"
        hardware_handler        "1 alua"
        path_selector           "round-robin 0"
        path_grouping_policy    "group_by_prio"
        no_path_retry           10
        dev_loss_tmo            50
        path_checker            tur
        prio                    alua
        failback                30
    }
}

< =========== >
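
After editing /etc/multipath.conf, the maps can typically be reloaded and verified with the standard device-mapper-multipath tools. This is a minimal sketch assuming a RHEL-style host; adjust service names for your distribution.

< =========== >
# reload multipathd so the new TEGILE device section takes effect
service multipathd reload
# re-scan/reload the multipath maps
multipath -r
# verify the TEGILE LUNs show the expected priority groups and path states
multipath -ll
< =========== >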

 

Sample output:

 

< =========== >

3600144f0ec770e000000536d06730008 dm-2 TEGILE  ,ZEBI-FC
size=10G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 5:0:2:15 sdm  8:192  active ready running
| |- 5:0:0:15 sdr  65:16  active ready running
| |- 6:0:1:15 sdad 65:208 active ready running
| `- 6:0:2:15 sdai 66:32  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 5:0:3:15 sdd  8:48   active ready running
  |- 6:0:3:15 sde  8:64   active ready running
  |- 6:0:0:15 sdx  65:112 active ready running
  `- 5:0:1:15 sdw  65:96  active ready running

< =========== >

 

1.2 Multipath for VMware

Run the following command from the VMware host CLI to set up a round-robin multipathing rule for Tegile arrays:

 

< =========== >

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V "TEGILE" -M "ZEBI-FC" -c "tpgs_on" -P VMW_PSP_RR -e "Tegile Zebi FC"

< =========== >
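
The rule applies when devices are claimed, so it is typically added before the LUNs are presented (or the devices are reclaimed, or the host rebooted, afterward). A quick verification sketch using standard esxcli namespaces:

< =========== >
# confirm the Tegile SATP rule is present
esxcli storage nmp satp rule list | grep -i TEGILE
# confirm the Tegile devices are claimed with VMW_SATP_ALUA / VMW_PSP_RR
esxcli storage nmp device list
< =========== >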

 

2.0 Brocade FC Fabric settings

Some of the recommended settings on the Brocade switches in the FC fabric are covered here.

 

2.1 Enable Bottleneck Detection

Enabling bottleneck detection allows administrators to identify the high-congestion and high-latency points in the fabric and take corrective action if necessary.

 

  1. Use the “bottleneckmon --show” command to view the number of bottlenecked ports on the switch.
  2. Use the “errdump” command to view the logs and determine the ports under congestion.
  3. Configure the appropriate latency and congestion alert thresholds as desired; these control the log trapping for bottleneck detection (a sample enable and configuration sequence is shown below).
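
Bottleneck detection must be enabled on each switch before it can be monitored. A sample enable sequence is sketched below; the flag names follow FOS 7.x bottleneckmon syntax as we understand it, and the threshold values simply mirror the sample output that follows, so verify both against your FOS release before use.

< =========== >
(flag syntax assumed from FOS 7.x; confirm with "bottleneckmon --help")
root> bottleneckmon --enable -alert -lthresh 0.2 -cthresh 0.7 -time 150 -qtime 150
root> bottleneckmon --status
root> bottleneckmon --show
< =========== >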

 

Sample output:

 

< =========== >

root> bottleneckmon --status
Bottleneck detection - Enabled
==============================

Switch-wide sub-second latency bottleneck criterion:
====================================================
Time threshold                  - 0.800
Severity threshold              - 50.000

Switch-wide alerting parameters:
================================
Alerts                          - Yes
Latency threshold for alert     - 0.200
Congestion threshold for alert  - 0.700
Averaging time for alert        - 150 seconds
Quiet time for alert            - 150 seconds

< =========== >

 

2.2 Configure “fill word” Setting on the Switch Ports Connecting to the Array

The default setting of “IDLE-IDLE” results in “Invalid Ordered Set” errors on the switch port.

 

Set the fill word setting on the switch port to “Mode 3” to avoid these errors.

 

< =========== >

root> portcfgfillword -h
Usage: portCfgFillWord PortNumber  Mode  [Passive]
Mode: 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff  - IDLE  in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia  - If ARBFF/ARBFF failed, then do IDLE/ARBFF
Passive: 0/1

< =========== >
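
For example, to set mode 3 on a hypothetical array-facing port 10 and confirm that the invalid ordered set counter stops incrementing (use slot/port notation on directors):

< =========== >
root> portcfgfillword 10 3
root> portcfgshow 10      (confirm the Fill Word setting)
root> portstatsshow 10    (er_bad_os should no longer increment)
< =========== >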

 

3.0 Tegile Array settings

Standard array setup with High-Availability configured between the dual controllers for the storage pools.

 

Test Cases

The following table summarizes the test cases performed.

 

1.0     FC FABRIC AND STORAGE DEVICE INTEGRATION TESTS
1.1     FABRIC INITIALIZATION – BASE FUNCTIONALITY
1.1.1   Storage Device – Physical and Logical Login with Speed Negotiation
1.1.2   Zoning and LUN Mapping
1.1.3   Storage Device Fabric IO Integrity
1.1.4   Storage Device – Portcfgfillword Compatibility
1.1.5   Storage Device Multipath Configuration – Path Integrity
1.2     FABRIC – ADVANCED FUNCTIONALITY
1.2.1   Storage Device Bottleneck Detection – w/Congested Host
1.2.2   Storage Device Bottleneck Detection – w/Congested Fabric
1.2.3   Storage Device – QOS Integrity
1.2.4   Storage Device – FC Protocol Jammer Test Suite
1.3     STRESS AND ERROR RECOVERY WITH DEVICE MULTI-PATH
1.3.1   Storage Device Fabric IO Integrity – Congested Fabric
1.3.2   Storage Device Nameserver Integrity – Device Recovery with Port Toggle
1.3.3   Storage Device Nameserver Integrity – Device Recovery with Device Relocation
1.3.4   Storage Device Nameserver Stress – Device Recovery with Device Port Toggle
1.3.5   Storage Device Recovery – ISL Port Toggle
1.3.6   Storage Device Recovery – ISL Port Toggle (entire switch)
1.3.7   Storage Device Recovery – Director Blade Maintenance
1.3.8   Storage Device Recovery – Switch Offline
1.3.9   Storage Device Recovery – Switch Firmware Download
1.4     STORAGE DEVICE – FIBRE CHANNEL ROUTING (FCR) INTERNETWORKING TESTS
1.4.1   Storage Device InterNetworking Validation w/FC Host
1.4.2   Storage Device InterNetworking Validation w/FCoE Test
1.4.3   Storage Device Edge Recovery after FCR Disruptions
1.4.4   Storage Device BackBone Recovery after FCR Disruptions
2.0     I/O WORKLOAD TESTS
2.0.1   (Single host) x (2 initiator ports) to All Target ports
2.0.2   (Multi host) x (2 initiator ports) to All Target ports
2.0.3   VMware Cluster I/O Tests
2.0.4   Application Specific I/O Tests

 

1.0 FC Fabric and Storage Device Integration Tests

The tests under this section cover the Fabric Initialization, QoS, Error Injection and Recovery, FC Routing and I/O Integrity testing scenarios.

 

1.1 Fabric Initialization – Base Functionality

 

1.1.1 Storage Device – Physical and Logical Login with Speed Negotiation

Test Objective

Verify device login to switch and name-server with all supported speed settings.

Procedure

Set switch ports to 2/4/8/Auto_Negotiate speed settings.

        portcfgspeed <port> [2/4/8/0]
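
For example, a hypothetical switch port 10 can be cycled through fixed and auto-negotiated speeds, confirming the login after each change:

< =========== >
root> portcfgspeed 10 4   (lock to 4Gb)
root> portcfgspeed 10 8   (lock to 8Gb)
root> portcfgspeed 10 0   (auto-negotiate)
root> switchshow          (confirm F-Port login and negotiated speed)
root> nsshow              (confirm the target port is registered in the name server)
< =========== >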

Expected Result

Storage target ports should negotiate the port speed and login with the corresponding speed settings.

Actual Result

Pass

 

1.1.2 Zoning and LUN Mapping

Test Objective

Verify host to LUN access exists with valid zoning.

Procedure

  1. Create an FC zone on the fabric with the initiator and target WWNs (a sample zoning sequence is shown below).
  2. Create Host Groups and LUNs on the array with access to the initiator WWN.
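
A minimal WWN-based zoning sequence on the Brocade fabric might look like the following; the zone name, configuration name, and WWNs are placeholders:

< =========== >
root> zonecreate "zn_srv1_tegile", "<initiator WWN>; <target WWN>"
root> cfgcreate "ssr_cfg", "zn_srv1_tegile"
root> cfgsave
root> cfgenable "ssr_cfg"
< =========== >

If a zone configuration already exists, cfgadd is used instead of cfgcreate.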

Expected Result

Host has read/write access to presented LUNs

Actual Result

Pass

 

1.1.3 Storage Device Fabric IO Integrity

Test Objective

Validate single path host-to-LUN IO with write/read/verify testing. Include short device cable pulls/port-toggle to validate device recovery.

Procedure

  1. Setup read/write I/O to LUN using Medusa/vdbench
  2. Perform link disruptions by port-toggles, cable pulls.
  3. Verify I/O recovers after short downtime.

Expected Result

I/O will resume after a short recovery period.

Actual Result

Pass

 

1.1.4 Storage Device – Portcfgfillword Compatibility

Test Objective

Validate all portcfgfillword settings with I/O and determine the optimal setting.

Procedure

  1. Set switch ports connecting to array target ports to different settings and verify target port operation.

< =========== >

portCfgFillWord PortNumber  Mode
 Mode: 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)
       1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
       2/-idle-arbff  - IDLE  in Link Init, ARBFF as fill word (SW)
       3/-aa-then-ia  - If ARBFF/ARBFF failed, then do IDLE/ARBFF

< =========== >

 

  2. Monitor “er_bad_os – Invalid Ordered Set” with portstatsshow

Expected Result

“er_bad_os” will increment if an incompatible mode is selected. Determine the compatible ordered set mode.

Actual Result

The default mode 0 (idle-idle) results in the error count incrementing.

Setting the fill word to mode 1, 2, or 3 does not result in errors.

Setting mode 3 is recommended.

 

1.1.5 Storage Device Multipath Configuration – Path integrity

Test Objective

Verify multipath configures successfully. Each adapter and storage port resides on a different switch. For all device paths, consecutively isolate individual paths and validate I/O integrity and path recovery.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform sequential port toggles across initiator and target switch ports to isolate paths (a sample toggle sequence is shown below).
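
One way to isolate a single path is to toggle the corresponding switch port while watching the multipath state on the host; port 10 below is a placeholder:

< =========== >
root> portdisable 10      (path goes down; host I/O should fail over)
root> portenable 10       (path should rejoin the multipath group)
< =========== >

On a Linux host the path states can be watched with "multipath -ll"; on VMware, "esxcli storage nmp path list" shows the equivalent per-path state.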

Expected Result

I/O should failover to remaining available paths and recover when disrupted path is restored.

Actual Result

Pass

 

1.2 Fabric – Advanced Functionality

 

1.2.1 Storage Device Bottleneck Detection – w/Congested Host

Test Objective

Enable Bottleneck Detection in fabric. Create congestion on host adapter port. Verify Storage Device and switch behavior.

Procedure

  1. Enable bottleneck detection on all switches. Fabric Vision license required.
  2. Start I/O from single host initiator to multiple targets.
  3. Monitor switch logs for Congestion and Latency bottleneck warnings.
  4. Use “bottleneckmon --show” to monitor bottlenecked ports.

Expected Result

Switch should log the bottlenecked ports. Bottleneck should clear after I/O stops.

Actual Result

Pass

 

1.2.2 Storage Device Bottleneck Detection – w/Congested Fabric

Test Objective

Enable Bottleneck Detection in fabric. Create congestion on switch ISL port. Verify Storage Device and switch behavior.

Procedure

  1. Enable bottleneck detection on all switches. Fabric Vision license required.
  2. Isolate single ISL in the fabric.
  3. Start I/O from multiple host initiators to multiple targets.
  4. Monitor switch logs for Congestion and Latency bottleneck warnings.
  5. Use “bottleneckmon --show” to monitor bottlenecked ports.

Expected Result

Switch should log the bottlenecked ports. Bottleneck should clear after I/O stops.

Actual Result

Pass

 

1.2.3 Storage Device – QOS Integrity

Test Objective

Enable QOS for devices under test. Verify device behavior and validate traffic characteristics.

Procedure

  1. Setup initiator-target pairs with Low/Medium/High QoS zones in the fabric (a sample QoS zone-naming sequence is shown below).
  2. Start I/O across all pairs and verify I/O statistics.
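
In Brocade FOS, QoS priority is assigned through the zone-name prefix (QOSH_, QOSM_, QOSL_). A sample sequence for three initiator-target pairs is sketched below; zone names and WWNs are placeholders, and QoS licensing and port requirements should be confirmed for your platform.

< =========== >
(zone-name prefixes QOSH_/QOSM_/QOSL_ set High/Medium/Low priority)
root> zonecreate "QOSH_srv1_tegile", "<initiator1 WWN>; <target WWN>"
root> zonecreate "QOSM_srv2_tegile", "<initiator2 WWN>; <target WWN>"
root> zonecreate "QOSL_srv3_tegile", "<initiator3 WWN>; <target WWN>"
root> cfgadd "ssr_cfg", "QOSH_srv1_tegile; QOSM_srv2_tegile; QOSL_srv3_tegile"
root> cfgsave
root> cfgenable "ssr_cfg"
< =========== >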

Expected Result

Hosts should be able to access the array under all QoS states. Verify I/O completes successfully without any errors.

Actual Result

Pass

 

1.2.4 Storage Device – FC Protocol Jammer Test Suite

Test Objective

Perform FC Jammer Tests including areas such as: CRC corruption, packet corruption, missing frame, host error recovery, target error recovery

Procedure

  1. Insert Jammer device in the I/O path on the storage end.
  2. Execute the following Jammer scenarios:
     • Delete one frame
     • Delete R_RDY
     • Replace CRC of data frame
     • Replace EOF of data frame
     • Replace “good status” with “check condition”
     • Replace IDLE with LR
     • Truncate frame
     • Create S_ID/D_ID error of data frame
  3. Verify Jammer operations and recovery with Analyzer.

Expected Result

Host and target should be able to recover from the errors and continue I/O operations.

Actual Result

Pass

 

1.3 Stress and Error Recovery with Device Multi-Path

1.3.1 Storage Device Fabric IO Integrity – Congested Fabric

Test Objective

From all initiators start a mixture of READ, READ/WRITE, and WRITE traffic continuously to all their targets for a 60 hour period.  Verify no host application failover or unexpected change in I/O throughput occurs.

Procedure

Setup multiple host initiators with array target ports and run Read, Read-Write Mix and Write I/O at different block sizes for a long run.

Expected Result

I/O should complete successfully without any errors or faults.

Actual Result

Pass

 

1.3.2 Storage Device Nameserver Integrity – Device Recovery with Port Toggle

Test Objective

Sequentially, manually toggle every adapter/device port.  Verify host I/O will failover to alternate path and toggled path will recover.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequential port toggles across initiator and target switch ports.

Expected Result

I/O should failover to remaining available paths and recover when disrupted path is restored.

Actual Result

Pass

 

1.3.3 Storage Device Nameserver Integrity – Device Recovery with Device Relocation

Test Objective

Sequentially performed for each Storage Device port. Disconnect and reconnect port to different switch in same fabric. Verify host I/O will failover to alternate path and toggled path will recover.

Repeat the test to validate behavior in all ASIC types.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Move storage target ports to different switch port in the fabric.

Expected Result

I/O should failover to remaining available paths and recover when disrupted path is restored.

Actual Result

Pass

 

1.3.4 Storage Device Nameserver Stress – Device Recovery with Device Port Toggle

Test Objective

For extended time run. Sequentially Toggle each Initiator and Target ports in fabric.  Verify host I/O will failover to alternate path and toggled path will recover.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequential port toggles across initiator and target ports on the host and array.

Expected Result

I/O should failover to remaining available paths and recover when disrupted path is restored.

Actual Result

Pass

1.3.5 Storage Device Recovery – ISL Port Toggle

Test Objective

For extended time run. Sequentially toggle each ISL path on all switches.  Host I/O may pause, but should recover.  Verify host I/O throughout test.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequential ISL toggles across the fabric.

Expected Result

I/O should failover to remaining available ISLs and recover when disrupted ISL is restored.

Actual Result

Pass

 

1.3.6 Storage Device Recovery – ISL Port Toggle (entire switch)

Test Objective

For extended time run. Sequentially toggle ALL ISL paths on a switch isolating switch from fabric. Verify host I/O will failover to alternate path and toggled path will recover.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequentially disabling all ISLs on a switch in the fabric.

Expected Result

I/O should failover to available switch path and recover when disrupted switch is restored.

Actual Result

Pass

 

1.3.7 Storage Device Recovery – Director Blade Maintenance

Test Objective

For extended time run. Verify device connectivity to DCX blades.

Sequentially toggle each DCX blade. Verify host I/O will failover to alternate path and toggled path will recover. Include blade disable/enable, blade power on/off.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequential disable/enable, power on/off of the DCX blades in the fabric.

Expected Result

I/O should failover to available switch path and recover when disrupted blade is restored.

Actual Result

Pass

 

1.3.8 Storage Device Recovery – Switch Offline

Test Objective

Toggle each switch in sequential order. Host I/O will failover to redundant paths and recover upon switch being enabled. Include switch enable/disable, power on/off, and reboot testing.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Perform multiple iterations of sequential disable/enable, power on/off and reboot of all the switches in the fabric.

Expected Result

Host I/O will failover to redundant paths and recover upon switch being enabled.

Actual Result

Pass

 

1.3.9 Storage Device Recovery – Switch Firmware Download

Test Objective

Sequentially perform firmware maintenance procedure on all device connected switches under test. Verify Host I/O will continue (with minimal disruption) through firmware download and device pathing will remain consistent.

Procedure

  1. Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2. Setup multipath on host
  3. Start I/O
  4. Sequentially perform firmware upgrades on all switches in the fabric.

Expected Result

Host I/O will continue with minimal disruption during the upgrade process.

Actual Result

Pass

 

1.4 Storage Device – Fibre Channel Routing (FCR) InterNetworking Tests

 

1.4.1 Storage Device InterNetworking Validation w/FC host

Test Objective

Configure two FC fabrics with FCR. Verify that edge devices are imported into adjacent name servers and hosts have access to their routed targets after FC routers are configured.

Procedure

  1. Setup FCR in an Edge-Backbone-Edge configuration.
  2. Setup LSAN zoning (a sample LSAN zone is shown below).
  3. Verify the name server and FCR fabric state (fcrproxydevshow, fabricshow).
  4. Verify host access to targets.
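
LSAN zones use the normal zoning commands with an “lsan_” name prefix, and the same zone (with matching WWN members) is defined in each edge fabric. A sketch with placeholder names and WWNs:

< =========== >
root> zonecreate "lsan_srv1_tegile", "<initiator WWN>; <target WWN>"
root> cfgadd "edge_cfg", "lsan_srv1_tegile"
root> cfgsave
root> cfgenable "edge_cfg"

root> fcrproxydevshow     (on the FC router: confirm proxy devices are imported)
root> fabricshow          (confirm the edge and backbone fabric view)
< =========== >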

Expected Result

Both Edge fabrics should have the corresponding proxy name server entries for the host and target ports.

Actual Result

Pass

 

1.4.2 Storage Device InterNetworking Validation w/FCoE Test

Test Objective

Configure an FC fabric with FCR while connected to an FCoE fabric. Verify that edge devices are imported into adjacent name servers and hosts have access to their routed targets after FC routers are configured.

Procedure

  1. Add FCoE VCS fabric to FCR setup.
  2. Setup LSAN zoning.
  3. Verify the name server and FCR fabric state (fcrproxydevshow, fabricshow).
  4. Verify host access to targets.

Expected Result

Both Edge fabrics should have the corresponding proxy name server entries for the host and target ports.

Actual Result

Pass

 

1.4.3 Storage Device Edge Recovery after FCR Disruptions

Test Objective

Configure FCR for Edge-Backbone-Edge configuration. With IO running, validate device access and pathing. Perform reboots, switch disables, and port-Toggles on Backbone connections to disrupt device pathing and IO. Verify path and IO recovery once switches and ports recover.

Procedure

  1. Setup FCR in an Edge-Backbone-Edge configuration.
  2. Setup LSAN zoning.
  3. Start I/O
  4. Perform sequential reboots, switch disables and ISL port toggles on the switches in the backbone fabric.

Expected Result

I/O should failover to available switch path and recover when disrupted switch is restored.

Actual Result

Pass

 

1.4.4 Storage Device BackBone Recovery after FCR Disruptions

Test Objective

Configure FCR for Edge-Backbone configuration. With IO running, validate device access and pathing. Perform reboots, switch disables, and port-Toggles on Backbone connections to disrupt device pathing and IO. Verify path and IO recovery once switches and ports recover.

Procedure

  1. Connect array target ports to backbone fabric in an Edge-Backbone configuration.
  2. Setup LSAN zoning.
  3. Start I/O
  4. Perform sequential reboots, switch disables and ISL port toggles on the switches in the backbone fabric.

Expected Result

I/O should failover to available switch path and recover when disrupted switch is restored.

Actual Result

Pass

 

2.0 I/O Workload Tests

I/O workload testing is performed to study the overall fabric behavior under different workloads for short and long durations. Simulated and synthetic I/O workloads are generated with the Medusa, Vdbench, and VMware IOAnalyzer tools.

 

2.0.1 (Single host) x (2 initiator ports) - All Target ports

Test Objective

Run multipath IO from two initiator ports on one host to all target ports and verify performance characteristics are as expected.

Procedure

  1. Configure paths from two host initiator ports to all target ports (2 active – 2 passive).
  2. Run random and sequential I/O in a loop at block transfer sizes of 512, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write (a sample Vdbench parameter file is shown below).
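
A minimal Vdbench parameter file covering these transfer sizes and read/write mixes might look like the sketch below; the LUN path and run length are placeholders, and equivalent Medusa command lines can be used the same way.

< =========== >
# one multipathed Tegile LUN presented to the host (path is hypothetical)
sd=sd1,lun=/dev/mapper/mpatha,openflags=o_direct

# random and sequential workload definitions
wd=wd_rand,sd=sd1,seekpct=100
wd=wd_seq,sd=sd1,seekpct=0

# loop over transfer sizes and read percentages (100% read, 50/50 mix, 100% write)
rd=run1,wd=wd_*,iorate=max,elapsed=600,interval=5,forxfersize=(512,4k,8k,16k,32k,64k,128k,256k,512k,1m),forrdpct=(100,50,0)
< =========== >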

Expected Result

All workload runs are monitored at the host, storage, and fabric and must complete without any I/O errors or faults.

Actual Result

Pass

 

2.0.2 (Multi host) x (2 initiator ports) - All Target ports

Test Objective

Run multipath IO from two initiator ports on each host to all target ports and verify performance characteristics are as expected.

Procedure

  1. Setup 4 hosts with 2 initiator ports per host to all target ports (2 active – 2 passive)
  2. Run random and sequential I/O in a loop at block transfer sizes of 512, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.

Expected Result

All workload runs are monitored at the host, storage, and fabric and must complete without any I/O errors or faults.

Actual Result

Pass

 

2.0.3 VMware Cluster I/O tests

Test Objective

Run multipath IO from a VMware cluster with multiple initiator ports to all target ports.

Procedure

  1. Configure a 2-host ESX cluster with 2 initiator ports per host connecting to all target ports with 8 worker VMs on the cluster.
  2. Use VMware IOAnalyzer to create the worker VMs and drive the workload. Run IO at large and small block transfer sizes.

Expected Result

All workload runs are monitored at the host, storage, and fabric and must complete without any I/O errors or faults.

Actual Result

Pass

 

2.0.4 Application Specific IO Tests

Test Objective

Run real-world application based simulated I/O using Medusa and VMware IOAnalyzer tools.

Procedure

  1. Use the multi host setup from test 2.0.2 for running Medusa tests.
  2. Use the ESX cluster setup from test 2.0.3 for running IOAnalyzer tests.
  3. Test the following workloads:
     • File Server simulation with Medusa
     • Microsoft Exchange Server simulation with Medusa and IOAnalyzer
     • SQL Server simulation with IOAnalyzer
     • OLTP simulation with IOAnalyzer
     • Web Server simulation with IOAnalyzer
     • Video On Demand simulation with IOAnalyzer
     • Workstation simulation with IOAnalyzer

Expected Result

All workload runs are monitored at the host, storage, and fabric and must complete without any I/O errors or faults.

Actual Result

Pass