Design & Build

Data Center Solution, Storage-Deployment Guide: Brocade VCS Fabric with Hitachi HNAS 4060 Array

Published 08-13-2014
 

 

Preface

 

Overview

This document provides guidance for deploying a Brocade VCS Fabric using Brocade VDX Switch products with the Hitachi Data Systems (HDS) HNAS 4060 IP storage array. The deployment shown has been validated and tested, and this document describes the validation testing.

 


The deployment uses Brocade VCS Fabric to eliminate Spanning Tree Protocol (STP), providing an easy-to-manage virtual chassis with full utilization of all least-cost links between switches. Like Brocade Fibre Channel fabrics, which are proven for mission-critical application storage requiring high bandwidth, low latency, and simplicity of deployment and operation, Brocade VCS Fabric is ideal for IP file storage using NAS as well as IP block storage using either iSCSI or FCoE.

 

Audience

Engineers responsible for the design, deployment, and operation of IP networks who want to successfully add HDS HNAS storage to a Brocade VCS Fabric network.

 

Objectives

This guide provides a deployment of Brocade VCS Fabric using the Brocade VDX Switch family (chassis and Top-of-Rack (ToR)) connected to servers with NICs supporting 10 GE interfaces. This guide assumes an existing VCS Fabric installation exists. The procedures cover only the specific settings and configuration changes needed to ensure the best performance, reliability, and management of a VCS Fabric when using Hitachi HNAS storage arrays.

 

Related Documents

The following publications are useful when deploying Brocade VCS Fabrics and Hitachi HNAS storage arrays.

 

References

 

Document History

Date                  Version        Description

2014-08-13         1.0                Initial Release

 

Key Contributors

The content in this guide was provided in part by the following key contributors.

  • Test Architects: Dustin Maiers, Patrick Stander
  • Test Engineer: Robert Batesole
  • Technical Editor: Brook Reams

 

About Brocade

Brocade networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is realized through the Brocade One® strategy, which is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

 

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

 

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings.

 

To learn more, visit www.brocade.com.

 

About Hitachi Data Systems

Innovate with Information™. For the world's most information-driven industries, Hitachi Data Systems is a game changer. Every day top global corporations, including over 80% of Fortune Global 100 companies, use our integrated solutions to transform mountains of data into useful insights that drive real innovation. As their needs grow, our solutions scale to meet them. As new challenges arise, our solutions adapt to overcome them. It's a partnership that yields ongoing results.

 

Technical Architecture

Network attached storage (NAS) is commonly used in data centers to store files, HTML, and web page content. Two protocols, Network File System (NFS) and Common Internet File System (CIFS), are supported by NAS storage arrays such as the HDS HNAS 4060 array.

 

The network connecting NAS servers to their clients uses TCP/IP over Ethernet. With data volumes larger than ever, 10 GE is becoming more common as a way to scale network bandwidth with the growth in applications and application storage. However, traditional Ethernet relying on Spanning Tree Protocol (STP) has limitations, including only a single path (physical or logical) allowed between switches, traffic outages when a connection changes (addition, removal, or failure), and relatively high latency.

 

For storage traffic, an optimal network moves traffic over all the least-cost paths available, not just one; automatically reconfigures the least-cost paths whenever a physical change is made (a switch or link is added or removed); and simplifies management by replacing "per port" configuration settings with "per chassis" settings wherever possible.

 

Brocade's VCS Fabric, built with Brocade VDX Switches, eliminates STP and its limitations, making it well suited to the rigorous requirements of NAS storage traffic.

 

Hitachi HNAS products are commonly deployed in data centers around the world providing a wide range of storage scalability (tens of TB to PB) with low latency so applications perform efficiently at scale.

 

Building Blocks

The Brocade Data Center Base Reference Architecture (see References below) includes building blocks for designing a VCS Fabric. A VCS Fabric Edge and Spine block and IP Core block are included in this deployment example as shown below.

 

VCS Fabric Spine Block

This block includes the VDX 8770 Switch with the option to use 40 GE or 10 GE ISL connections to the leaf blocks. Brocade ISL Trunks composed of multiple 10 GE links form automatically when cables are connected between switches. Unlike Link Aggregation, which assigns a flow to a specific physical link in a trunk, Brocade ISL Trunks distribute traffic across all links at the frame level, providing near 100% utilization of all physical links in the trunk with no "hot spots" requiring manual changes to the hashing.
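The difference between flow-hash placement and frame-level distribution can be illustrated with a toy Python sketch (this is not Brocade's actual hash algorithm; the flows, addresses, and link count are invented for illustration):

```python
import hashlib

LINKS = 4  # physical links in the trunk/LAG


def lag_link_for_flow(src_ip, dst_ip, src_port, dst_port):
    """Classic LAG behavior: hash the flow tuple once; every frame of
    that flow is then pinned to the same physical link."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port}".encode()
    return hashlib.sha256(key).digest()[0] % LINKS


# Two large NFS flows can hash to the same link, creating a "hot spot".
flows = [("10.0.0.1", "9.79.30.1", 40001, 2049),
         ("10.0.0.2", "9.79.30.1", 40002, 2049)]
links = [lag_link_for_flow(*f) for f in flows]

# Frame-level distribution (the ISL Trunk approach): successive frames
# are spread over all links regardless of flow, so utilization stays even.
frame_links = [frame % LINKS for frame in range(8)]

print("per-flow links:", links)
print("per-frame links:", frame_links)
```

With flow hashing, balance depends on luck of the hash; with frame-level spraying, every link carries an equal share by construction.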

 

The Spine block provides the Layer 3 / Layer 2 boundary of the network and provides VRRP-E for high availability of the Layer 3 gateway to the IP Core building block (not shown).

 

DataCenter_BlockVCSFabric_Spine-LeafSpine.JPG

 

   Brocade VCS Fabric Spine Block

 

References

 

VCS Fabric Leaf Blocks

There are two leaf blocks: the VDX 6740 Switch with 10 GE converged enhanced Ethernet (CEE) ports and optional 40 GE uplink ports, and the VDX 6710 with 1 GE ports and 10 GE uplink ports. Client hosts connect to one or more leaf switches using NIC Teaming on the host and vLAG on the VDX switches for high availability and resiliency. 10 GE hosts attach to VDX 6740 switches, and 1 GE hosts attach to VDX 6710 switches. Each leaf switch connects to the spine using a Brocade ISL Trunk of 10 GE links or, for the VDX 6740, multiple 40 GE ISL links.

 

DataCenter_BlockVCSFabric_Leaf10GEDevices40GEISL.JPG 

   Brocade VCS Fabric-10 GE Leaf Building Block

 

DataCenter_BlockVCSFabric_Leaf1GEDevices.JPG

   Brocade VCS Fabric-1 GE Leaf Building Block

 

References

 

Design Template

These building blocks are combined into a design template as shown below.

 

Template_VCSFabricLeafSpine-HDSHNAS.jpg 

 

   VCS Fabric Leaf/Spine Template for HDS HNAS

 

The NAS arrays attach to the Spine block using one or more VDX 8770 Switches. The Spine block uses Brocade ISL Trunks of 10 GE links or, with the VDX 6740 switch, multiple 40 GE ISLs. Client hosts attach to the VDX 6740 Leaf using 10 GE connections and to the VDX 6710 with 1 GE connections.

 

NAS traffic can be assigned to a VLAN to enhance security over the fabric. As NAS storage grows, additional storage can be added to the HNAS arrays and additional arrays added to the Spine switches.

 

References

 

Base Deployment: VCS Leaf/Spine Fabric with HDS HNAS Storage

 

Deployment Topology

The diagram below shows the deployment topology. It consists of a VCS Fabric using a Leaf Spine topology. The HDS HNAS storage servers are attached to the Spine switches (as depicted by the blue arrows). All client hosts attach to the Leaf switches. This topology ensures uniform latency for all hosts and full utilization of all equal cost paths between VDX switches in the VCS Fabric.

 

DeploymentTopology.jpg 

 

   Deployment Topology

 

Pre-requisites

  1. An existing Brocade VCS Fabric designed and deployed in accordance with the references in the Design Template section.
  2. Sufficient rack space, power and cooling for the HDS HNAS array(s).
  3. Correct firmware releases for Brocade switches and HDS HNAS arrays.
  4. Supported servers/hosts with 10 GE NICs and/or Ethernet converged network adaptors (CNA).

 

Bill of Materials

The following table shows the bill of materials used for this deployment. The references contain links to product data sheets. Although not shown, the VDX 6720 and VDX 6730 switches can also be used as leaf switches if desired.

 

Identifier

Vendor

Model / Release

Notes

HNAS Servers

Hitachi

HNAS 4060
SMU 11.2.3319.04

Network Attached Storage (NAS) array

VDX6740-1

VDX6740-2

Brocade

VDX 6740 Switch
NOS 4.1.2
NOS 4.1.2

The Brocade VDX 6740 and 6740T are Ethernet fabric Top-of-Rack (ToR) switches featuring 10 GbE ports with 40 GbE uplinks. The Brocade VDX 6740T-1G Switch offers dual-speed functionality: it can be deployed with 1000BASE-T for existing 1 GbE server connectivity and upgraded via software to 10GBASE-T for future bandwidth growth. Together with Brocade VCS Fabric technology, these switches deliver the high performance and low latency needed to support demanding virtualized data center environments.

VDX6710-1

VDX6710-2

Brocade

VDX 6710 Switch
NOS 4.1.2

The Brocade VDX 6710 Switch is a high-performance 1 Gigabit Ethernet (GbE) fixed configuration switch that provides a reliable, scalable, and flexible foundation for supporting the most demanding business applications. It offers a cost-effective solution for connecting 1 GbE servers to an Ethernet fabric using Brocade VCS Fabric technology.

VDX8770-1

VDX8770-2

Brocade

VDX 8770 Switch
NOS 4.1.2

The Brocade VDX 8770 Switch is a highly scalable, low-latency, 1/10/40/100 Gigabit Ethernet (GbE) modular switch. Designed to easily scale out Brocade VCS fabrics, the Brocade VDX 8770 Switch brings new levels of performance to VCS fabric deployments.

Hosts/Servers

Various

Various

Hosts and servers supporting Brocade Gen 5 Fibre Channel switches, with supported Fibre Channel host bus adaptors (HBA) and converged network adaptors (CNA)

Host Bus Adaptors

QLogic (Brocade)

Brocade 1860
2-port 16Gb FC HBA
Drvr: 3.2.4.0
Frmw: 3.2.4.0

Emulex

Emulex OCD14102
2-port 8Gb FC HBA
Drvr: 8.3.5.68.5p

QLogic (Brocade)

Brocade 1020
2-port CNA
Drvr: 3.2.4.0
Frmw: 3.2.4.0

Ethernet NIC and CNA adaptors.

 

 

 

VCS Fabric Configuration Data

Identifier

Device

Configuration

Notes

rbridge-id 1

VDX8770-1

IP: 9.79.1.1/16

Spine Switch

rbridge-id 2

VDX8770-2

IP: 9.79.1.2/16

Spine Switch

rbridge-id 11

VDX6740-1

IP: 9.79.1.11/16

Leaf Switch

rbridge-id 12

VDX6740-2

IP: 9.79.1.12/16

Leaf Switch

rbridge-id 13

VDX6710-1

IP: 9.79.1.13/16

Leaf Switch

rbridge-id 14

VDX6710-2

IP: 9.79.1.14/16

Leaf Switch

channel-group 31

VDX8770-1

1/1/5

HNAS vLAG Port Channel

channel-group 31

VDX8770-2

2/0/5

HNAS vLAG Port Channel

channel-group 32

VDX8770-1

1/1/6

HNAS vLAG Port Channel

channel-group 32

VDX8770-2

2/0/6

HNAS vLAG Port Channel

 

References

 

Task 1: Brocade VDX Switch Configuration-Logical Chassis Mode

 

Description

The existing VDX switches in the VCS Fabric are configured to support the HDS HNAS array. A VLAN (979) is used to secure the NAS traffic.

 

Assumptions

  1. The VDX switches are running in Logical Chassis mode.

 

Step 1: Configure VLAN for NAS Traffic

In the VCS Fabric, configure VLAN 979 and add a VE interface on all switches in the VCS Fabric.

 

< =========== >

interface Vlan 979

!

rbridge-id 1

 interface Ve 979

  ip address 9.79.1.1/16

  no shutdown

 !

!

rbridge-id 2

 interface Ve 979

  ip address 9.79.1.2/16

  no shutdown

 !

!

rbridge-id 11

 interface Ve 979

  ip address 9.79.1.11/16

  no shutdown

 !

!

rbridge-id 12

 interface Ve 979

  ip address 9.79.1.12/16

  no shutdown

 !

!

rbridge-id 13

 interface Ve 979

  ip address 9.79.1.13/16

  no shutdown

 !

!

rbridge-id 14

 interface Ve 979

  ip address 9.79.1.14/16

  no shutdown

 !

< =========== >
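 

After applying the configuration, the VLAN and VE interfaces can be verified from the fabric; for example (a sketch using standard NOS show commands — output will vary with your fabric):

 

< =========== >

show vlan brief

show ip interface brief

< =========== >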

 

Step 2: Configure vLAG for HNAS Array

Configure the VCS Fabric vLAG interfaces on all ports connecting to the Hitachi HNAS storage array.

 

< =========== >

interface TenGigabitEthernet 1/1/5

 no fabric isl enable

 no fabric trunk enable

 channel-group 31 mode active type standard

 lacp timeout long

 no shutdown

!

interface TenGigabitEthernet 2/0/5

 no fabric isl enable

 no fabric trunk enable

 channel-group 31 mode active type standard

 lacp timeout long

 no shutdown

!

interface TenGigabitEthernet 1/1/6

 no fabric isl enable

 no fabric trunk enable

 channel-group 32 mode active type standard

 lacp timeout long

 no shutdown

!

interface TenGigabitEthernet 2/0/6

 no fabric isl enable

 no fabric trunk enable

 channel-group 32 mode active type standard

 lacp timeout long

 no shutdown

!

interface Port-channel 31

 vlag ignore-split

 switchport

 switchport mode access

 switchport access vlan 979

 spanning-tree shutdown

 no shutdown

!

interface Port-channel 32

 vlag ignore-split

 switchport

 switchport mode access

 switchport access vlan 979

 spanning-tree shutdown

 no shutdown

!

< =========== >
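 

Once both port-channels are up, vLAG membership and state can be checked; for example (a sketch using standard NOS show commands):

 

< =========== >

show port-channel 31

show port-channel 32

< =========== >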

 

References

 

Task 2: Hitachi HNAS Configuration

 

Description

The Hitachi HNAS array is configured. This requires configuration of the physical interfaces, logical interfaces (ag interfaces), EVS configuration, file system configuration, and NFS and CIFS file share configuration.

 

Assumptions

  1. Hitachi HNAS array is installed in rack with power and management interface connected.
  2. Hitachi HNAS array is connected to Spine switches of VCS Fabric.

Step 1. Physical Interface Configuration

Add physical ports (tg interfaces) to logical aggregate interfaces (ag interfaces). When selecting ports to add to an ag interface, be aware that you cannot select an individual tg port on a single node: a given tg port is added on ALL nodes in the cluster to the same ag interface. This provides node redundancy.

 

========== 

NOTE:
By adding tg1 to ag1, you are adding the physical port tg1 on EVERY node in the cluster to ag1. You cannot add node1-tg1 to ag1 and then add node2-tg1 to ag2 because node2-tg1 was already added to ag1.

==========

 

HDSHNASConfiguration-LinkAggregation.jpg    HDS HNAS Configuration-Link Aggregation

 

Step 2. EVS Configuration

Create two EVS (enterprise virtual servers) to correspond with the ag interfaces previously created.

 

HDSHNASConfiguration-EVSforag1.jpg

   HDS HNAS Configuration-EVS for ag1

 

HDSHNASConfiguration-EVSforag2.jpg

   HDS HNAS Configuration-EVS for ag2

 

HDSHNASConfiguration-EVSManagement.jpg

   HDS HNAS Configuration-EVS Management

 

Step 3. File System Configuration

From back-end storage with previously configured storage pools, the file systems are created and assigned to an EVS. Repeat to create the desired number of file systems.

 

HDSHNASConfiguration-CreateFileSystemScreen 1.jpg

   HDS HNAS Configuration-Create File System, Screen 1

 

HDSHNASConfiguration-CreateFileSystemScreen 2.jpg

   HDS HNAS Configuration-Create File System, Screen 2

 

Step 4. CIFS Share Configuration

Assign a CIFS share to the EVS and file systems previously created. Repeat to create the desired number of CIFS shares.

 

HDSHNASConfiguation-AddVolumeShare.jpg    HDS HNAS Configuration-Add Volume Share

 

Step 5. NFS Share Configuration

Assign an NFS share to the EVS and file systems previously created. Repeat to create the desired number of NFS shares.

 

HDSHNASConfiguration-ExportNFSShares.jpg    HDS HNAS Configuration-Export NFS Shares

 

Task 3: Windows Host Configuration

 

Description

On Microsoft Windows Server 2012 and 2008 servers, NIC team interfaces provide high availability and load sharing. The HNAS shares then need to be mapped to Windows volumes.

 

Assumptions

  1. NIC Teaming has been installed on the Windows servers.
  2. VCS Fabric ports are configured for vLAG connections to the Windows server(s).
  3. HDS HNAS configuration is complete and CIFS shares created.

 

Step 1. Map Hitachi HNAS Network Shares for Windows 2012 Server

 

MapWindows2012FileShare#1.jpg 

  Map Windows 2012 File Share #1

 

MapWindows2012FileShare#2.jpg 

  Map Windows 2012 File Share #2

 

Step 2. Map Hitachi HNAS Network Shares for Windows 2008 Server

The following shows how to map the network drives for a Windows 2008 Server.

 

MapWindows2008FileShare#1.jpg 

   Map Windows 2008 File Share #1

 

MapWindows2008FileShare#2.jpg 

   Map Windows 2008 File Share #2
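
 

The screens above map the shares interactively; the same mapping can also be scripted from a command prompt. A sketch, where \\evs1\cifs_share1 is a placeholder for the EVS name and CIFS share created in Task 2:

 

< =========== >

net use Z: \\evs1\cifs_share1 /persistent:yes

< =========== >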

Task 4: Red Hat Linux Host Configuration

 

Description

The following configuration steps are used for Red Hat Linux hosts.

Assumptions

  1. NIC bonding has been configured on the Linux servers in the /etc/sysconfig/network-scripts directory.
  2. VCS Fabric ports are configured for vLAG connections to the Linux host(s).
  3. HDS HNAS configuration is complete and NFS shares created.

 

Step 1. Mount NFS Disk Shares

The following commands mount the Hitachi HNAS NFS shares on hosts running Red Hat 6.3/6.5. Each share needs its own mount point, created beforehand with mkdir.

 

< =========== >

mount -t nfs 9.79.30.1:/REDHAT_66.109 /mnt/hnas1

mount -t nfs 9.79.30.1:/REDHAT_79.141 /mnt/hnas2

< =========== >
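
 

To make the mounts persistent across reboots, equivalent entries can be placed in /etc/fstab. A sketch, assuming the mount points /mnt/hnas1 and /mnt/hnas2 have been created:

 

< =========== >

9.79.30.1:/REDHAT_66.109  /mnt/hnas1  nfs  defaults,_netdev  0 0

9.79.30.1:/REDHAT_79.141  /mnt/hnas2  nfs  defaults,_netdev  0 0

< =========== >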