Design & Build

Data Center Solution, Storage-Validation Test: Brocade Gen 5 Fibre Channel and Pure Storage FA-420 Flash Array

 

 

Preface

Overview - PDF Download Available Below

The Brocade Solid State Ready (SSR) program uses comprehensive test and configuration procedures to demonstrate Brocade Fibre Channel SAN and IP network interoperability with flash storage or SSD arrays. The test configuration includes multiple fabric topologies, heterogeneous servers, NICs, and HBAs in a large port-count test environment.

 

The SSR program ensures seamless interoperability with optimum performance of solid state disk (SSD) in Brocade Fibre Channel SAN fabrics and IP networks.

 

Purpose of This Document

This document validates Brocade SAN fabrics with the Pure Storage FA-420 all-flash SSD array. The test configuration includes multiple Brocade SAN switch platforms with partner HBAs and server operating systems. The goal is to ensure that the Pure Storage FA-420 array interoperates properly within a Brocade Fibre Channel fabric, delivering the high performance and low latency associated with SSD storage.

 

Audience

This document is written for a technical audience, including solution architects, system engineers, and technical development engineers.

 

Objectives

Test the Pure FA-420 array with Brocade Fibre Channel SAN fabrics, in single fabric and routed multi-fabric configurations. Test cases include different stress and error recovery conditions that validate the interoperability and demonstrate the integration of the FA-420 with Brocade Fibre Channel switches.

 

Validate the performance of a Brocade Fibre Channel SAN with the FA-420 SSD array to determine throughput and latency under different workloads, varying block sizes and I/Os per second (IOPS).

 

Test Conclusions

All test cases in the SSR qualification test plan achieved a 100% pass rate. The network and the storage were able to handle the various stress and error recovery scenarios without any issues.

 

Different I/O workload scenarios were simulated using the Medusa, Vdbench, and VMware IOAnalyzer tools, and sustained performance levels were achieved across all workload types. The results confirm that the Pure Storage FA-420 array interoperates seamlessly with Brocade Fibre Channel fabrics, and together they demonstrate high availability, high performance, and low latency.

 

For optimal availability and performance, consideration should be given to the multipath configuration on the host side. While Windows 2008 and 2012 provide Round-Robin behavior by default, Linux systems benefit from adding a custom entry to /etc/multipath.conf, and VMWare host systems should be changed from the default "Most Recently Used (VMWare)" setting to "Round-Robin (VMWare)". Actively using all available paths provides a significant improvement in throughput.

 

Bottleneck Detection is a recommended tool to proactively monitor fabric performance and best leverage the investment in high performance low-latency storage.

 

Related Documents

 

References

Fabric OS Administrator's Guide

 Brocade SAN Design and Best Practices

Brocade SAN Fabric Administration Best Practices Guide

 

Document History

Date           Release     Description

2014-07-15  1.0              Initial Release

 

About Brocade

Brocade networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

About Pure Storage

Pure Storage has a simple mission: to enable the widespread adoption of flash in the enterprise data center. We're a team of some of the brightest minds in storage, flash memory and related technical industries. Founded on principles of teamwork and knowledge sharing, we focus on producing an exceptional result while we transform the landscape of the enterprise storage market (and have some fun along the way).

 

Key Contributors

The content in this guide was provided by the following key contributors.

  • Test Architects: Mike Astry, Patrick Stander
  • Test Engineer: Randy Lodes

 

Test Plan

The storage array is connected to two SAN fabrics and multiple server hosts to drive IO in a multipath configuration. Error injection is introduced, and failover and recovery behaviors are observed. IO performance is measured across different workload configurations.

 

Scope

Testing will be performed with a mix of GA and development versions of Brocade's Fabric Operating System (FOS) in a heterogeneous environment. Test beds will include Brocade directors and switches configured in routed and non-routed fabric configurations.

 

Testing is centered on interoperability and optimal configuration. Performance is observed within the context of best-practice fabric configuration; however, absolute maximum benchmark reporting of storage performance is beyond the scope of this test.

 

Details of the test steps are covered in the "Test Cases" section. The standard test bed setup includes IBM/HP/Dell chassis server hosts with Brocade/QLogic/Emulex HBAs and two uplinks from every host to a Brocade FC fabric. IO generators included Medusa Labs Test Tools, vdbench, Iometer, Oracle ORION, and VMWare IOAnalyzer.

 

Test Configuration

The diagram below shows the test configuration. A variety of Brocade Fibre Channel switches are used and Fibre Channel Routing is available to test traffic flows between independent Fibre Channel fabrics.

 

image1.jpg

Test Equipment Configuration Diagram

 

DUT Descriptions

The following tables provide details about the devices under test (DUT).

 

Storage Array

DUT ID                 Model     Vendor         Description
Pure Storage FA-420    FA-420    Pure Storage   Pure Storage FA-420

The Pure Storage FA-420 flash storage array is an all-flash array that supports 11-35 TB raw capacity. Each controller supports 4x 8Gb Fibre Channel connections, 2x 10Gb iSCSI connections, and 2x Infiniband connections.

 

Switches

DUT ID        Model        Vendor    Description
6510-1,2,3    BR-6510      Brocade   48 port 16Gb FC switch
5100-1,2,3    BR-5100      Brocade   40 port 8Gb FC switch
DCX-3         DCX          Brocade   8 slot 8Gb FC chassis
DCX-4         DCX-4S       Brocade   4 slot 8Gb FC chassis
DCX-2         DCX 8510-8   Brocade   8 slot 16Gb FC chassis
DCX-1         DCX 8510-4   Brocade   4 slot 16Gb FC chassis
VDX-1,2       VDX 6730     Brocade   60x 10GbE ports and 16x 8Gb FC port switch

 

DUT Specifications

 

Storage Array                                    Version

Pure Storage FA-420 solid state flash array      Purity version 3.4.0

 

Brocade Switches                                        Version

DCX-4S                                                  FOS 7.3.0 development
DCX                                                     FOS 7.3.0 development
DCX 8510-8                                              FOS 7.3.0 development
DCX 8510-4                                              FOS 7.3.0 development
6510 + Integrated Routing, Fabric Vision Licenses       FOS 7.2.1 release and FOS 7.3.0 development
5100 + Integrated Routing, Fabric Vision Licenses       FOS 7.2.1 release and FOS 7.3.0 development
VDX 6730                                                NOS 4.1.1

 

Adapters                                 Version

Brocade 1860 2-port 16Gb FC HBA          driver & firmware version 3.2.4.0
QLogic QLE2672 2-port 16Gb FC HBA        driver 8.06.00.10.06.0-k, firmware 6.06.03
Emulex LPE 12002 2-port 8Gb FC HBA       driver 10.0.100.1, firmware 1.00A9
Brocade 1020 2-port CNA adapter          driver & firmware version 3.2.4.0

 

Servers

Server Model               DUT ID    RAM      Processor            OS
HP Proliant DL380P G8      SRV-1     160GB    Intel Xeon E5-2640   VMWare 5.5 [cluster]
HP Proliant DL380P G8      SRV-2     160GB    Intel Xeon E5-2640   VMWare 5.5 [cluster]
IBM System x3630 M4        SRV-3     24GB     Intel Xeon E5-2420   VMWare 5.1u2
Dell Poweredge R720        SRV-4     64GB     Intel Xeon E5-2640   Windows Server 2012
Dell Poweredge R720        SRV-5     160GB    Intel Xeon E5-2640   RHEL 6.4 x86_64
HP Proliant DL385p G8      SRV-6     16GB     AMD Opteron 6212     Windows Server 2008R2
Dell Poweredge R720        SRV-7     16GB     Intel Xeon E5-2620   SLES 11.3 x86_64
Dell Poweredge R720        SRV-8     16GB     Intel Xeon E5-2620   RHEL 6.5 x86_64

 

Test Equipment                  Version

Finisar 16Gb Analyzer/Jammer    XGIG5K2001153
Medusa Labs Test Tools          6.0
Vdbench                         5.0401
Iometer                         1.1.0-rc1
VMWare IOAnalyzer               1.6.0
Oracle ORION                    11.1.0.7.0

 

Test Configuration Procedures

The following steps show how this test is configured.

 

  1. Create zones for each host initiator group
  2. Present LUNs for each initiator group – 8 x 5GB LUNs presented to two initiators from host
  3. Configure multi-pathing on each host
  4. Apply any additional host tuning
  5. Setup workload generators
  6. Configure Brocade Fibre Channel Routing
  7. Enable Bottleneck Detection on switches
  8. Configure Fibre Channel Fill Word value
  9. Configure zones for VDX Switches

 

1. Create Zones for Each Host Initiator Group

 

Example zone:

<==========>

B5100_066_084:root> zoneshow hb067168_pure

 zone:  hb067168_pure

                10:00:8c:7c:ff:24:a0:00; 10:00:8c:7c:ff:24:a0:01;

                52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:01;

<==========>
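
For reference, a zone of this kind can be created and activated from the FOS CLI. The following is a minimal sketch; the configuration name "ssr_cfg" is illustrative and should be replaced with the fabric's active zone configuration.

<==========>

> zonecreate "hb067168_pure", "10:00:8c:7c:ff:24:a0:00; 10:00:8c:7c:ff:24:a0:01; 52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:01"

> cfgadd "ssr_cfg", "hb067168_pure"

> cfgsave

> cfgenable "ssr_cfg"

<==========>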

 

2. Present LUNs for Each Initiator Group

 

The following sets up 8 x 5GB LUNs presented to two initiators from the host, as shown below.

 

image2.jpg

   Pure Storage LUN Configuration

 

3. Configure Multi-pathing on Each Host

This configuration allows all paths to be used in a round-robin fashion. This provides superior performance to the default Linux settings which would only use a single active path per LUN.

Recommended /etc/multipath.conf entry on Linux systems:

 

<==========>

devices {

    device {

        vendor                "PURE"

        path_selector         "round-robin 0"

        path_grouping_policy  multibus

        rr_min_io             1

        path_checker          tur

        fast_io_fail_tmo      10

        dev_loss_tmo          30

    }

}

<==========>
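
After editing /etc/multipath.conf, the multipath maps can be reloaded without a reboot. A minimal sketch for RHEL/SLES hosts (device names will differ per host):

<==========>

# service multipathd reload

# multipath -r

# multipath -ll

<==========>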

 

3.a Example of Running Multi-path Configuration:

 

<==========>

# multipath -l

mpathe (3624a9370a15a66e949f7d1440001003d) dm-3 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:13 sdac 65:192 active undef running

  |- 1:0:1:13 sdm  8:192  active undef running

  |- 1:0:2:13 sdu  65:64  active undef running

  |- 1:0:3:13 sde  8:64   active undef running

  |- 2:0:0:13 sdas 66:192 active undef running

  |- 2:0:1:13 sdbi 67:192 active undef running

  |- 2:0:2:13 sdba 67:64  active undef running

  `- 2:0:3:13 sdak 66:64  active undef running

mpathd (3624a9370a15a66e949f7d1440001003c) dm-2 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:12 sdab 65:176 active undef running

  |- 1:0:1:12 sdl  8:176  active undef running

  |- 1:0:2:12 sdt  65:48  active undef running

  |- 1:0:3:12 sdd  8:48   active undef running

  |- 2:0:0:12 sdar 66:176 active undef running

  |- 2:0:1:12 sdbh 67:176 active undef running

  |- 2:0:2:12 sdaz 67:48  active undef running

  `- 2:0:3:12 sdaj 66:48  active undef running

mpathc (3624a9370a15a66e949f7d1440001003b) dm-1 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:11 sdaa 65:160 active undef running

  |- 1:0:1:11 sdk  8:160  active undef running

  |- 1:0:2:11 sds  65:32  active undef running

  |- 1:0:3:11 sdc  8:32   active undef running

  |- 2:0:0:11 sdaq 66:160 active undef running

  |- 2:0:1:11 sdbg 67:160 active undef running

  |- 2:0:2:11 sday 67:32  active undef running

  `- 2:0:3:11 sdai 66:32  active undef running

mpathb (3624a9370a15a66e949f7d1440001003a) dm-0 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:10 sdz  65:144 active undef running

  |- 1:0:1:10 sdj  8:144  active undef running

  |- 1:0:2:10 sdr  65:16  active undef running

  |- 1:0:3:10 sdb  8:16   active undef running

  |- 2:0:0:10 sdap 66:144 active undef running

  |- 2:0:1:10 sdbf 67:144 active undef running

  |- 2:0:2:10 sdax 67:16  active undef running

  `- 2:0:3:10 sdah 66:16  active undef running

mpathi (3624a9370a15a66e949f7d14400010070) dm-7 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:17 sdag 66:0   active undef running

  |- 1:0:1:17 sdq  65:0   active undef running

  |- 1:0:2:17 sdy  65:128 active undef running

  |- 1:0:3:17 sdi  8:128  active undef running

  |- 2:0:0:17 sdaw 67:0   active undef running

  |- 2:0:1:17 sdbm 68:0   active undef running

  |- 2:0:2:17 sdbe 67:128 active undef running

  `- 2:0:3:17 sdao 66:128 active undef running

mpathh (3624a9370a15a66e949f7d1440001006f) dm-6 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:16 sdaf 65:240 active undef running

  |- 1:0:1:16 sdp  8:240  active undef running

  |- 1:0:2:16 sdx  65:112 active undef running

  |- 1:0:3:16 sdh  8:112  active undef running

  |- 2:0:0:16 sdav 66:240 active undef running

  |- 2:0:1:16 sdbl 67:240 active undef running

  |- 2:0:2:16 sdbd 67:112 active undef running

  `- 2:0:3:16 sdan 66:112 active undef running

mpathg (3624a9370a15a66e949f7d1440001006e) dm-5 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:15 sdae 65:224 active undef running

  |- 1:0:1:15 sdo  8:224  active undef running

  |- 1:0:2:15 sdw  65:96  active undef running

  |- 1:0:3:15 sdg  8:96   active undef running

  |- 2:0:0:15 sdau 66:224 active undef running

  |- 2:0:1:15 sdbk 67:224 active undef running

  |- 2:0:2:15 sdbc 67:96  active undef running

  `- 2:0:3:15 sdam 66:96  active undef running

mpathf (3624a9370a15a66e949f7d1440001006d) dm-4 PURE    ,FlashArray

size=5.0G features='0' hwhandler='0' wp=rw

`-+- policy='round-robin 0' prio=0 status=active

  |- 1:0:0:14 sdad 65:208 active undef running

  |- 1:0:1:14 sdn  8:208  active undef running

  |- 1:0:2:14 sdv  65:80  active undef running

  |- 1:0:3:14 sdf  8:80   active undef running

  |- 2:0:0:14 sdat 66:208 active undef running

  |- 2:0:1:14 sdbj 67:208 active undef running

  |- 2:0:2:14 sdbb 67:80  active undef running

  `- 2:0:3:14 sdal 66:80  active undef running

<==========>

 

3.b Recommended Multipath Configuration on VMWare Systems:

This configuration allows all paths to be used in a round-robin fashion. This provides superior performance to the default VMWare "Most Recently Used" setting, which uses only a single active path per LUN.

 

image3.jpg

Pure Storage SSD Array Multi-path Selection Policy
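
The policy change can also be scripted from the ESXi shell rather than set per LUN in the vSphere Client. A minimal sketch, where the device ID (taken from the Linux multipath example above) stands in for each Pure Storage LUN:

<==========>

# esxcli storage nmp device list

# esxcli storage nmp device set --device naa.624a9370a15a66e949f7d1440001003d --psp VMW_PSP_RR

<==========>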

 

4. Apply Additional Host Tuning

The first FA-420 array tuning choice selects the 'noop' I/O scheduler, which has been shown to deliver better performance with lower CPU overhead than the default schedulers (usually 'deadline' or 'cfq'). The second change eliminates the collection of entropy for the kernel random number generator, which has high CPU overhead when enabled for devices supporting high IOPS. The third rule spreads CPU load by redirecting I/O completions to the originating CPU.

 

Rules applied at boot in /etc/udev/rules.d/99-pure-storage.rules:

 

<==========>

# Recommended settings for Pure Storage FlashArray.

#

# Use noop scheduler for high-performance solid-state storage

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

#

# Reduce CPU overhead due to entropy collection

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

#

# Spread CPU load by redirecting completions to originating CPU

ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

<==========>
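
The rules file takes effect at boot; to apply it to a running system, the udev rules can be reloaded and the active scheduler verified (the active scheduler appears in square brackets). A minimal sketch, with sdb as an example Pure device:

<==========>

# udevadm control --reload-rules

# udevadm trigger --subsystem-match=block

# cat /sys/block/sdb/queue/scheduler

<==========>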

 

5. Setup Workload Generators

The following workload generators are used to get a variety of IO coverage.

On Windows and Linux systems:

  • Medusa Labs Test Tools,
  • vdbench,
  • Iometer, and
  • Oracle Orion

 

On VMWare systems, VMWare’s IOAnalyzer is used.
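
As an illustration of how block-size and read/write sweeps can be parameterized, the following is a minimal vdbench parameter file sketch; the device paths and run values are examples only, not the exact files used in this test.

<==========>

sd=sd1,lun=/dev/mapper/mpathb,openflags=o_direct

sd=sd2,lun=/dev/mapper/mpathc,openflags=o_direct

wd=wd1,sd=sd*,xfersize=8k,rdpct=50,seekpct=100

rd=run1,wd=wd1,iorate=max,elapsed=600,interval=5

<==========>

The file is then run with ./vdbench -f <paramfile>.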

 

6. Configure Brocade Fibre Channel Routing

Recommend settings on the Brocade switches in the SAN fabric are shown below.

 

<==========>

> fcrconfigure --bbfid 100

> fosconfig --enable fcr

> portcfgexport [p#] -a1 -m0 -p 10

 

Example routed zone prepended with ‘lsan’

> zoneshow lsan_hb067166_pure

 zone:  lsan_hb067166_pure

                10:00:8c:7c:ff:23:b7:00; 10:00:8c:7c:ff:23:b7:01;

                52:4a:93:7d:f3:5f:61:00; 52:4a:93:7d:f3:5f:61:11

 

Example output of exported devices

> fcrproxydevshow

  Proxy           WWN             Proxy      Device   Physical    State

 Created                           PID       Exists     PID

in Fabric                                   in Fabric

----------------------------------------------------------------------------

    10   21:00:00:24:ff:48:b9:6a  02f001       20      551a00   Imported

    10   21:00:00:24:ff:48:b9:6b  02f101       20      541e00   Imported

    10   52:4a:93:7d:f3:5f:61:00  02f201       20      550e00   Imported

    10   52:4a:93:7d:f3:5f:61:01  02f401       20      540400   Imported

<==========>

 

7. Enable Bottleneck Detection on Switches

This will enable reporting of latency and congestion alerts on each switch.

 

<==========>

> bottleneckmon --enable -alert

> bottleneckmon --config -alert -time 150 -qtime 150 -cthresh 0.7 -lthresh 0.2

root> bottleneckmon --status

Bottleneck detection - Enabled

==============================

 

Switch-wide sub-second latency bottleneck criterion:

====================================================

Time threshold                 - 0.800

Severity threshold             - 50.000

 

Switch-wide alerting parameters:

================================

Alerts                         - Yes

Latency threshold for alert    - 0.200

Congestion threshold for alert - 0.700

Averaging time for alert       - 150 seconds

Quiet time for alert           - 150 seconds

<==========>

 

8. Set Fibre Channel Fill Word Value

The fill word on Brocade Condor2 ASIC 8 Gbps switch platforms should be set to "3" using the portcfgfillword command. Prior to the introduction of 8 Gb Fibre Channel products, IDLEs were used for link initialization, as well as fill words after link initialization. To help reduce electrical noise in copper-based equipment, the use of ARB (FF) instead of IDLEs is now a standard. Because this aspect of the standard was published after some vendors had already begun development of 8 Gbps interfaces, not all equipment can support ARB (FF). IDLEs are still used with 1, 2, and 4 Gbps interfaces. To accommodate the new specifications and different vendor implementations, Brocade developed a user-selectable method to set the fill words to either IDLEs or ARB (FF).

 

<==========>

B5100_066_084:root> portcfgfillword 0 3 0

B5100_066_084:root> portcfgfillword 2 3 0

 

root> portcfgshow

Ports of Slot 0         0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15

------------------------+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----

Speed                  AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN   AN

Fill Word(On Active)    3    0    3    0    0    0    0    0    0    0    0    0    0    0    0    2

Fill Word(Current)      3    0    3    0    0    0    0    0    0    0    0    0    0    0    0    2

<==========>

 

9. Configure Zones for FCoE Initiators on VDX Switches

Example zone on a Brocade VDX switch:

 

<==========>

# show zoning enabled-configuration

zoning enabled-configuration cfg-name NOS_SSR

zoning enabled-configuration enabled-zone lsan_hb067166_pure

 member-entry 10:00:8c:7c:ff:1f:7b:00

 member-entry 10:00:8c:7c:ff:1f:7b:01

 member-entry 52:4a:93:7d:f3:5f:61:00

 member-entry 52:4a:93:7d:f3:5f:61:01

<==========>

 

Test Cases

 

1.1 FABRIC INITIALIZATION BASE FUNCTIONALITY
Confirm basic Fibre Channel functionality of the storage array.
  1.1.1  Storage Device Physical and Logical Login with Speed Negotiation
  1.1.2  Zoning and LUN Mapping
  1.1.3  Storage Device Fabric IO Integrity
  1.1.4  Storage Device Portcfgfillword Compatibility
  1.1.5  Storage Device Multipath Configuration Path Integrity

1.2 FABRIC ADVANCED FUNCTIONALITY
Examine the storage behavior related to more advanced fabric features such as QoS, Bottleneck Detection, and advanced frame recovery.
  1.2.1  Storage Device Bottleneck Detection w/Congested Host
  1.2.2  Storage Device Bottleneck Detection w/Congested Fabric
  1.2.3  Storage Device QoS Integrity
  1.2.4  Storage Device FC Protocol Jammer Test Suite

1.3 STRESS & ERROR RECOVERY WITH DEVICE MULTI-PATH
Confirm proper HA/failover behavior of storage in a multipath environment.
  1.3.1  Storage Device Fabric IO Integrity - Congested Fabric
  1.3.2  Storage Device Nameserver Integrity - Device Recovery with Port Toggle
  1.3.3  Storage Device Nameserver Integrity - Device Recovery with Device Relocation
  1.3.4  Storage Device Nameserver Stress - Device Recovery with Device Port Toggle
  1.3.5  Storage Device Recovery - ISL Port Toggle
  1.3.6  Storage Device Recovery - ISL Port Toggle (entire switch)
  1.3.7  Storage Device Recovery - Director Blade Maintenance
  1.3.8  Storage Device Recovery - Switch Offline
  1.3.9  Storage Device Recovery - Switch Firmware Download

1.4 STORAGE DEVICE FIBRE CHANNEL ROUTING (FCR) INTERNETWORKING TESTS
Confirm proper storage functioning within routed fabrics.
  1.4.1  Storage Device InterNetworking Validation w/FC Host
  1.4.2  Storage Device InterNetworking Validation w/FCoE Test
  1.4.3  Storage Device Edge Recovery after FCR Disruptions
  1.4.4  Storage Device BackBone Recovery after FCR Disruptions

2.1 IO WORKLOADS
All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Run specific IO patterns and verify performance across specific dimensions of IO workload.
  2.1.1  (Single host) x (1 initiator port) -> 1 target port
         (Single host) x (1 initiator port) -> 4 target ports
  2.1.2  (Single host) x (2 initiator ports) -> 4 target ports
  2.1.3  (4 hosts) x (2 initiator ports per host) -> 4 target ports
  2.1.4  (2-host ESX cluster with 2 initiator ports per host) x (8 VMs on cluster) -> 4 target ports
  2.1.5  Application-specific workloads

 

1.1 Fabric Initialization Base Functionality

 

1.1.1 Storage Device Physical and Logical Login with Speed Negotiation

Test Objective

Verify device login to switch and nameserver with all supported speed settings.

Procedure

Set switch ports to 2/4/8/Auto_Negotiate speed settings.

portcfgspeed <port> [2/4/8/0]

Result

Storage logs into the fabric and the link comes up at 2Gb/4Gb/8Gb. Additional IO was run to verify. PASS
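
For reference, the speed settings in the procedure can be applied and verified from the switch CLI; port 4 below is an example (speed 0 restores auto-negotiation, and switchshow confirms the negotiated speed of the storage port):

<==========>

> portcfgspeed 4 8

> portcfgspeed 4 0

> switchshow

<==========>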

 

1.1.2 Zoning and LUN Mapping

Test Objective

Verify host to LUN access exists with valid zoning.

Procedure

  1.        Create FC zone on the fabric with the initiator and target WWNs.
  2.        Create Host Groups and LUNs on the array with access to initiator WWN.

Result

For each host, created a zone containing four storage ports and two host ports. Verified that LUNs are presented to the host and verified with IO. PASS

 

1.1.3 Storage Device Fabric IO Integrity

Test Objective

Validate single-path host-to-LUN IO with write/read/verify testing. Include short device cable pulls/port toggles to validate device recovery.

Procedure

  1.        Setup read/write I/O to LUN using Medusa/vdbench
  2.        Perform link disruptions by port-toggles, cable pulls.
  3.        Verify I/O recovers after short downtime.

Result

IO integrity is valid and port recovery is successful. PASS
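
The link disruptions in step 2 can be driven from the switch CLI as well as by physical cable pulls. A minimal sketch, with port 10 as an example storage-facing port:

<==========>

> portdisable 10

> portenable 10

<==========>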

 

1.1.4 Storage Device Portcfgfillword Compatibility

Test Objective

Validate all portcfgfillword settings with IO and determine the optimal setting.

Procedure

  1.        Set switch ports connecting to array target ports to different settings and verify target port operation.

portCfgFillWord <PortNumber> <Mode>

 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)

 1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word

 2/-idle-arbff  - IDLE in Link Init, ARBFF as fill word (SW)

 3/-aa-then-ia  - If ARBFF/ARBFF fails, then do IDLE/ARBFF

  2.        Monitor the er_bad_os (Invalid Ordered Set) counter with portstatsshow

 

Result

Tested portcfgfillword modes 0, 1, 2, and 3; verified with IO performance and portstatsshow. Mode 0 results in 'Bad Ordered Set' counters incrementing. The recommended setting is '3' on 8Gb Condor2 platforms. PASS
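
For reference, the fill word setting and counter check in the procedure look like this on the CLI (port 4 is an example); in the portstatsshow output, the er_bad_os counter should not increment while IO is running:

<==========>

> portcfgfillword 4 3

> portstatsshow 4

<==========>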

 

1.1.5 Storage Device Multipath Configuration Path Integrity

Test Objective

Verify multi-path configures successfully, with each adapter and storage port residing on a different switch. For all device paths, consecutively isolate individual paths and validate IO integrity and path recovery.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform sequential port toggles across initiator and target switch ports to isolate paths.

Result

Path validation, performance, and recovery are verified on RHEL, SLES, VMWare, Windows 2008, and Windows 2012. Additional configuration steps were taken on the Linux and VMWare systems. PASS

 

1.2 Fabric Advanced Functionality

 

1.2.1 Storage Device Bottleneck Detection w/Congested Host

Test Objective

Enable Bottleneck Detection in fabric. Create congestion on host adapter port. Verify Storage Device and switch behavior.

Procedure

  1.        Enable bottleneck detection on all switches.
  2.        Start I/O from single host initiator to multiple targets.
  3.        Monitor switch logs for Congestion and Latency bottleneck warnings.
  4.        Use bottleneckmon --show to monitor bottlenecked ports.

 

Result

Enabled monitoring with bottleneckmon --enable; created host port congestion with a high-throughput workload. Confirmed that bottleneck detection is reported. PASS

 

1.2.2 Storage Device Bottleneck Detection w/Congested Fabric

Test Objective

Enable Bottleneck Detection in fabric. Create congestion on switch ISL port. Verify Storage Device and switch behavior.

Procedure

  1.        Enable bottleneck detection on all switches. Fabric Vision license required.
  2.        Isolate single ISL in the fabric.
  3.        Start I/O from multiple host initiators to multiple targets.
  4.        Monitor switch logs for Congestion and Latency bottleneck warnings.
  5.        Use bottleneckmon --show to monitor bottlenecked ports.

 

Result

Enabled monitoring with bottleneckmon --enable; simulated ISL port congestion by isolating traffic to a single ISL and running a high-throughput workload. Confirmed that bottleneck detection is reported. PASS

 

1.2.3 Storage Device QOS Integrity

Test Objective

Enable QOS for devices under test. Verify device behavior and validate traffic characteristics.

Procedure

  1.        Setup initiator-target pairs with Low/Medium/High QoS zones in the fabric.
  2.        Start I/O across all pairs and verify I/O statistics.

Result

Create QoS zones with Brocade HBAs; verify traffic runs in high, medium, and low queues. PASS
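
Brocade assigns QoS priority through zone-name prefixes (QOSH_ for high, QOSL_ for low; unprefixed zones default to medium). A minimal sketch reusing WWNs from the earlier zoning example; the configuration name is illustrative:

<==========>

> zonecreate "QOSH_hb067168_pure", "10:00:8c:7c:ff:24:a0:00; 52:4a:93:7d:f3:5f:61:00"

> zonecreate "QOSL_hb067168_pure", "10:00:8c:7c:ff:24:a0:01; 52:4a:93:7d:f3:5f:61:01"

> cfgadd "ssr_cfg", "QOSH_hb067168_pure; QOSL_hb067168_pure"

> cfgenable "ssr_cfg"

<==========>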

 

1.2.4 Storage Device FC Protocol Jammer Test Suite

Test Objective

Perform FC Jammer tests covering areas such as CRC corruption, packet corruption, missing frames, host error recovery, and target error recovery.

Procedure

  1.        Insert Jammer device in the I/O path on the storage end.
  2.        Execute the following Jammer scenarios:

Delete one frame

Delete R_RDY

Replace CRC of data frame

Replace EOF of data frame

Replace "good status" with "check condition"

Replace IDLE with LR

Truncate frame

Create S_ID/D_ID error of data frame

  3.        Verify Jammer operations and recovery with Analyzer.

Result

Insert Finisar Jammer/Analyzer between storage port and switch. Introduce packet anomalies and verify proper recovery. PASS

 

1.3 Stress and Error Recovery with Device Multi-Path

 

1.3.1 Storage Device Fabric IO Integrity - Congested Fabric

Test Objective

From all initiators, start a mixture of READ, READ/WRITE, and WRITE traffic continuously to all their targets for a 60-hour period. Verify that no host application failover or unexpected change in I/O throughput occurs.

Procedure

Setup multiple host initiators with array target ports and run Read, Read-Write Mix and Write I/O at different block sizes for a long run.

Result

Long IO ran successfully without issues. PASS

 

1.3.2 Storage Device Nameserver Integrity - Device Recovery with Port Toggle

Test Objective

Sequentially, manually toggle every adapter/device port. Verify that host I/O fails over to an alternate path and that the toggled path recovers.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 4 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequential port toggles across initiator and target switch ports.

Result

Failover between 8 logical paths (2 host x 4 storage) tested successfully. PASS

 

1.3.3 Storage Device Nameserver Integrity Device Recovery with Device Relocation

Test Objective

Sequentially performed for each Storage Device port.

Disconnect and reconnect port to different switch in same fabric. Verify host I/O will failover to alternate path and toggled path will recover. Repeat disconnect/reconnect to validate behavior in all ASIC types.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Move storage target ports to different switch port in the fabric.

Result

Physical move of storage port shows successful recovery on Condor2/Condor3. PASS

 

1.3.4 Storage Device Nameserver Stress Device Recovery with Device Port Toggle

Test Objective

For extended time run. Sequentially toggle each initiator and target port in the fabric. Verify that host I/O fails over to an alternate path and that the toggled path recovers.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 4 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequential port toggles across initiator and target ports on the host and array.

Result

48-hr run; failover and recovery with port disables successful. PASS

 

1.3.5 Storage Device Recovery ISL Port Toggle

Test Objective

For extended time run. Sequentially toggle each ISL path on all switches. Host I/O may pause, but should recover. Verify host I/O throughout the test.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 4 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequential ISL toggles across the fabric.

Result

ISL disables shows path failover and recovery while running IO. PASS

 

1.3.6 Storage Device Recovery ISL Port Toggle (entire switch)

Test Objective

For extended time run. Sequentially toggle ALL ISL paths on a switch, isolating the switch from the fabric. Verify that host I/O fails over to an alternate path and that the toggled path recovers.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 4 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequentially disabling all ISLs on a switch in the fabric.

 

Result

48-hr run; ISL disables shows path failover path recovery while running IO. PASS

 

1.3.7 Storage Device Recovery Director Blade Maintenance

Test Objective

For extended time run. Verify device connectivity to DCX blades.

Sequentially toggle each DCX blade. Verify host I/O will failover to alternate path and toggled path will recover. Include blade disable/enable, blade power on/off, and manual blade removal/insertion.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequential disable/enable, power on/off of the DCX blades in the fabric.

Result

IO failover and recovery with DCX reboot, power cycle is successful. PASS

 

1.3.8 Storage Device Recovery Switch Offline

Test Objective

Toggle each switch in sequential order. Host I/O will failover to redundant paths and recover upon switch being enabled.

Include switch enable/disable, power on/off, and reboot testing.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Perform multiple iterations of sequential disable/enable, power on/off and reboot of all the switches in the fabric.

Result

Enable/disable, reboot, and power cycle successful. PASS

 

1.3.9 Storage Device Recovery Switch Firmware Download

Test Objective

Sequentially perform the firmware maintenance procedure on all device-connected switches under test. Verify that host I/O continues (with minimal disruption) through firmwaredownload and that device pathing remains consistent.

Procedure

  1.        Setup host with at least 2 initiator ports zoned with 2 target ports on array.
  2.        Setup multipath on host
  3.        Start I/O
  4.        Sequentially perform firmware upgrades on all switches in the fabric.

Result

Firmware download with running IO successful. PASS
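
For reference, the upgrade on each switch follows the standard FOS procedure; run interactively, firmwaredownload prompts for the server, file path, and protocol. On directors, hashow can be used to confirm that both control processors are back in sync before moving to the next switch.

<==========>

> firmwareshow

> firmwaredownload

> firmwareshow

<==========>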

 

1.4 Storage Device Fibre Channel Routing (FCR) InterNetworking Tests

 

1.4.1 Storage Device InterNetworking Validation w/FC host

Test Objective

Configure two FC fabrics with FCR. Verify that edge devices are imported into adjacent nameservers and hosts have access to their routed targets after FC routers are configured.

Procedure

  1.        Setup FCR in an Edge-Backbone-Edge configuration.
  2.        Setup LSAN zoning.
  3.        Verify name server and FCR fabric state. fcrproxydevshow; fabricshow
  4.        Verify host access to targets.

Result

Configured routed fabrics and lsan_ zones. Verified with IO. PASS

 

1.4.2 Storage Device InterNetworking Validation w/FCoE Test

Test Objective

Configure a FC fabric with FCR while connected to an FCoE fabric. Verify that edge devices are imported into adjacent nameservers and hosts have access to their routed targets after FC routers are configured.

Procedure

  1.        Add FCoE VCS fabric to FCR setup.
  2.        Setup LSAN zoning.
  3.        Verify name server and FCR fabric state. fcrproxydevshow; fabricshow
  4.        Verify host access to targets.

Result

Created configuration and zoning. Edge devices are imported, LUNs are presented; verified with IO. PASS

 

1.4.3 Storage Device Edge Recovery after FCR Disruptions

Test Objective

Configure FCR for Edge-Backbone-Edge configuration. With IO running, validate device access and pathing. Perform reboots, switch disables, and port-Toggles on Backbone connections to disrupt device pathing and IO. Verify path and IO recovery once switches and ports recover.

Procedure

  1.        Setup FCR in an Edge-Backbone-Edge configuration.
  2.        Setup LSAN zoning.
  3.        Start I/O
  4.        Perform sequential reboots, switch disables and ISL port toggles on the switches in the backbone fabric.

Result

Verified path recovery with FCR disruptions while running IO. PASS

 

1.4.4 Storage Device BackBone Recovery after FCR Disruptions

Test Objective

Configure FCR for Edge-Backbone configuration. With IO running, validate device access and pathing. Perform reboots, switch disables, and port-Toggles on Backbone connections to disrupt device pathing and IO. Verify path and IO recovery once switches and ports recover.

Procedure

  1.        Connect array target ports to backbone fabric in an Edge-Backbone configuration.
  2.        Setup LSAN zoning.
  3.        Start I/O
  4.        Perform sequential reboots, switch disables and ISL port toggles on the switches in the backbone fabric.

 

Result

Verified path recovery with FCR disruptions while running IO. PASS

 

2.0 Storage and Fabric Performance - IO Workload Tests

 

2.0.1 (Single host) x (1 initiator port) -> 1 target port

Test Objective

Run IO on a single path and verify performance characteristics are as expected

Procedure

  1.        Configure a single path from a single host initiator port to a single storage port
  2.        Run IO in a loop at block transfer sizes of 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

2.0.2 (Single host) x (1 initiator port) -> multiple target ports

Test Objective

Run multipath IO from a single initiator port to multiple target ports and verify performance characteristics are as expected

Procedure

1. Configure paths from a single host initiator port to 4 target ports.

2. Run IO in a loop at block transfer sizes of 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

2.0.3 (Single host) x (2 initiator ports) -> multiple target ports

Test Objective

Run multipath IO from two initiator ports on one host to multiple target ports and verify performance characteristics are as expected.

Procedure

1. Configure paths from two host initiator ports to 4 target ports.

2. Run IO in a loop at block transfer sizes of 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

2.0.4 (Multi host) x (2 initiator ports) -> multiple target ports

Test Objective

Run multipath IO from multiple initiator ports on multiple hosts to multiple target ports and verify performance characteristics are as expected.

Procedure

1. Configure paths from two initiator ports per host on 4 hosts to 4 target ports.

2. Run IO in a loop at block transfer sizes of 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k, 256k, 512k, and 1m. Include a nested loop of 100% read, 100% write, and 50% read/write.

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

2.0.5 VMWare Cluster IO Tests

Test Objective

Run multipath IO from a VMWare cluster with multiple initiator ports to multiple target ports.

Procedure

Configure a 2-host VMWare cluster with multipath on 2 initiator ports per host, 4 target ports, and 8 VMs. Use VMWare IOAnalyzer to create the worker VMs and drive the workload. Run IO at large and small block transfer sizes.

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

2.0.6 Application Specific IO Tests

Test Objective

Use a few real-world, application-specific workloads to confirm behavior. Examples could include the VMWare IOAnalyzer trace replay feature, Iometer application-specific workload emulation, or a database workload emulator such as Oracle Orion.

Procedure

Configure paths from 2 initiators per host to 4 target ports. Run the following workloads:

  1.        File Server simulation with Medusa
  2.        OLTP simulation with Orion
  3.        Microsoft Exchange Server simulation with Medusa and IOAnalyzer
  4.        SQL Server simulation with IOAnalyzer and Iometer
  5.        Video On Demand simulation with IOAnalyzer (ESX only)
  6.        Workstation simulation with IOAnalyzer (ESX only)

 

Result

All workload runs are monitored at the host, storage, and fabric and verified to complete without any I/O errors or faults. Performance behavior is as expected. PASS

 

The image below shows the output from the Pure Storage SSD Array monitoring tool with 512-byte sequential reads from a single multipath host running 720,000 IOPS. While this is not a realistic workload, it does demonstrate that the network is easily capable of supporting this high number of transactions.

 

image4.jpg

   Pure Storage SSD Array Performance Monitor Display

 

The diagram below shows the performance of the array with a multi-host configuration running large block (512KB) reads. The performance reached 5.25 GB/s.

 

image5.jpg  Pure Storage SSD Array Multihost Configuration with Large Block Reads

 

The display below shows the latency from a simulated Oracle OLTP workload with varying queue depths on the Pure Storage SSD Array. The array achieved sub-millisecond latency; the lower the latency, the better the performance.

 

image6.jpg

   Pure Storage SSD Array Latency Running Oracle OLTP-type Workload

 

The display below shows the performance for a simulated email application using VMWare IOAnalyzer.

 

image7.jpg

   Pure Storage SSD Array Simulated Email Server Workload Using VMWare IOAnalyzer