Design & Build

Data Center Solution, Storage-Deployment Guide: Brocade VCS Fabric and EMC Isilon Scale-out NAS


Synopsis: Provides a deployment guide for configuring Brocade VCS Fabric of VDX Switches with EMC Isilon Scale-out NAS and VMware ESXi clients.

 

Contents

Preface

Overview

Storage administrators are constantly faced with requirements to expand their infrastructure to accommodate new data, retain old data longer, and meet the performance needs of new applications. Traditional scalable, high-capacity, high-performance storage systems were built on SANs: separate networks designed to accommodate storage-specific data flows. However, new developments in distributed applications and server virtualization are driving increasing adoption of Network Attached Storage (NAS) on Ethernet, bringing to Ethernet networks the same requirements traditionally found in SANs: scalability, capacity, predictable latency, and reliability. Brocade VCS Fabric technology delivers high-performance, reliable networks for NAS solutions that can scale without disruption, meeting the requirements of NAS storage infrastructure such as EMC Isilon Scale-out NAS.

 

A VCS Fabric is an ideal network for NAS, providing predictable performance and reliability with simplified change management. VCS Fabric technology is built on TRILL/FSPF and provides unique capabilities including distributed intelligence, Automatic Migration of Port Profiles (AMPP), virtual link aggregation groups (vLAG), and lossless Ethernet transport, removing previous limitations of Ethernet for storage traffic.

 

Purpose of This Document

This document shows the procedures used to deploy EMC Isilon with a VCS Fabric and VMware ESXi servers where Isilon NAS storage is the datastore. The document is based on EMC Isilon X200 with a variety of Brocade VDX switches: VDX 8770-4, VDX 6720-60, VDX 6720-24, and VDX 6710-54. A Brocade ICX 6610-48P is used for the management network.

 

The procedures cover configuration of a VCS Fabric of VDX switches and an EMC Isilon NAS cluster, and mounting NAS volumes as datastores for the VMware vSphere ESXi servers. Where appropriate, best practice recommendations are provided. The implementation is validated using client VMs running a mix of operating systems accessing Isilon NAS storage. AMPP integration between VMware vCenter and the VCS Fabric ensures that VMs (and VM kernels) are automatically placed in the VLANs corresponding to their Port Group configuration in the virtual distributed switch (vDS), including initial VLAN creation if the VLAN is not already configured in the VCS Fabric.

 

Audience

This content targets cloud compute, solution, storage, and network architects and engineers who are evaluating and deploying Isilon NAS solutions in their networks and want to understand how to deploy them with Brocade VCS Fabric technology.

 

Objectives

This guide covers deployment of Isilon storage with a Brocade VCS Fabric, including configuration of NAS datastores for VMware vSphere. Its value extends beyond the specific EMC Isilon and Brocade VDX products used: the example configuration can serve as a building block for large scale-out NAS deployments with VMware virtual machines or physical servers. The deployment does not include disaster recovery or data protection mechanisms, such as replication or backup procedures, beyond the basic redundancies built into the VCS Fabric, the VMware ESXi cluster, and the Isilon storage cluster.

 

Related Documents

 

References

  1. Data Center Infrastructure: Base Reference Architecture
  2. VCS Fabric Blocks
  3. Data Center Template, Server Virtualization
  4. Data Center Template, VCS Fabric Leaf-Spine
  5. Brocade Network OS Administrator’s Guide, v3.0.0
  6. Brocade Network OS Command Reference, v3.0.0
  7. Brocade VCS Fabric Technical Architecture

Note: The following require login to EMC PowerLink site

 

  1. Isilon OneFS Administration Guide, v7.0
     https://support.emc.com/docu44506_OneFS-7.0-Administration-Guide.pdf?language=en_US
  2. Isilon OneFS Command Reference, v7.0
     https://support.emc.com/docu44507_OneFS-7.0-Command-Reference.pdf?language=en_US
  3. Best Practice Guide, EMC Isilon Scale-out Storage with VMware vSphere 5
     https://support.emc.com/docu39424_Best-Practice-Guide:-EMC-Isilon-Scale-Out-Storage-with-VMware-vSphere-5.pdf?language=en_US

 

About Brocade

Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.

Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.

To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)

 

Key Contributors

The content in this guide was provided by the following key contributors.

Lead Architect: Marcus Thordal, Strategic Solutions Lab

 

Document History

Date                  Version        Description

2013-09-26     1.0              Initial Release

 

Technical Architecture

The tested environment uses a variety of VDX switch models to demonstrate that any combination of VDX switches work together in a VCS Fabric, so switch selection can be based solely on the required port density and speed (1/10/40 Gbps). The Isilon storage subsystem uses aggregated interfaces (LAG) across switches for redundancy and increased bandwidth. Within the VCS Fabric the LAG can span multiple switches (vLAG), providing redundancy and flexibility: should one of the links fail, the storage remains available through the other switch.

 

Topology

Below is a diagram of the network topology, showing a spine-leaf architecture with the Isilon cluster nodes attached to the spine switches and the ESXi servers attached to leaf switches at the Top of Rack (ToR). This provides uniform, redundant access from all servers to all storage and simplifies scale-out when adding more servers and NAS nodes. Low latency, high bandwidth, high availability, and simple management are maintained as physical resources are added.

 

DeploymentTopology.jpg 

  Deployment Topology

 

Isilon intra-cluster communication is handled by a dedicated Isilon InfiniBand network. All internal Isilon cluster traffic uses these paths, which must be assigned an internal IP address range during cluster setup.

 

Network Isolation

When connecting the VMware ESXi servers, the recommendation is to use a separate dedicated network interface for each of management, storage, vMotion, and VM application access. For high availability, the best practice is to use redundant interfaces for each of these networks. It is very common to use on-board 1 GbE NICs for management and 10 GbE interfaces for storage, vMotion, and VM application access. This best practice requires a minimum of 2 x 1 GbE and 6 x 10 GbE interfaces, which may not be possible in practice due to the limited number of physical interfaces available on the server. Some network adapters, such as the Brocade FA-1860, provide traffic separation by partitioning the physical adapter transparently to the ESXi server; the logical NICs appear as physical interfaces to the ESXi server. This guide shows a fully redundant deployment example and an example where the uplink is a single NIC with logical traffic isolation, along with the corresponding VCS Fabric and VMware vDS configuration.

 

IP Addresses

When deploying a NAS infrastructure, the logical network infrastructure and IP topology must be planned in advance. In the test bed we use a separate management network with all IP addresses in the default VLAN 1. Within the VCS Fabric, VLAN separation is used for storage (VLAN 50), VM application (VLAN 60), and vMotion (VLAN 70), as shown in Table 1, IP Addresses.

 

IPAddressTable.jpg

    IP Addresses

 

Configuring EMC Isilon

Deploying the EMC Isilon is one of the simpler and more direct processes available in a NAS appliance. After following the hardware installation guide to rack and connect the nodes and switches, the first step is to configure the top-of-rack (ToR) management switch, which provides access to all devices independently of the user-facing network connections. Next, we describe how to configure the Isilon cluster, join all nodes to it, and create bonded (vLAG) network connections. Once the storage network connections are in place, we prepare the VMware vCenter virtual network environment to take advantage of AMPP within the VDX Ethernet Fabric. Then we explain how to provision and mount the Isilon NAS storage for the VMware ESXi hosts.

 

Finally, in addition to the basic functional system, we outline several optional parameters for an optimized experience, based on EMC recommended best practices for Isilon in VMware environments, as well as a few additional options for the VMware clusters and VMs used to test and verify the deployment.

 

Pre-requisites

    All management interfaces for the VDX switches should have IP addresses and be accessible via SSH. All ESX hosts have the hypervisor OS installed and management IPs assigned. The vCenter Server Appliance is deployed on one of the ESX hosts and a virtual datacenter has been created.

Bill of Materials

The following products are used in this deployment.

 

Identifier           Vendor    Model          Notes
------------------   -------   ------------   --------------------------------------------
Spine Switch         Brocade   VDX 8770-4     Modular switch with 10Gb and 40Gb interfaces
Spine Switch         Brocade   VDX 8770-4     Modular switch with 10Gb and 40Gb interfaces
ToR                  Brocade   VDX 6720-60    60 ports of 10Gb
ToR                  Brocade   VDX 6720-60    60 ports of 10Gb
ToR                  Brocade   VDX 6720-24    48 ports of 1Gb and 6 ports of 10Gb
ToR                  Brocade   VDX 6720-24    48 ports of 1Gb and 6 ports of 10Gb
Management Network   Brocade   ICX 6610-48P
ESXi Server          IBM       X3630 M3       ESXi 5.1 (4 total)
Isilon Node          EMC       Isilon X200    4 total in cluster

 

Task 1: Management Network

 

Description

For completeness, we briefly describe the setup of the switch used for the management network in the test bed. All switches, servers, and storage cluster nodes have management network interfaces separate from the production (data flow) network. These connect to the Top of Rack management switch, a Brocade ICX 6610-48P. We apply basic switch authentication with SSH logins so that the ICX login process is similar to the VDX, for a consistent management experience.

 

Assumptions

    No directory authentication exists in this setup, so we will use internal accounts and passwords.

Step 1: Configure ICX Switch

    1. Connect to the serial console of the ICX switch
    2. Enter Enable mode and then Config mode

enable
conf t

    3. Configure switch addressing

ip address 192.168.90.90 255.255.255.0
no ip dhcp-client enable
ip default-gateway 192.168.90.1

    4. Configure authentication

username admin priv 0 create <password>
aaa authentication web-server default local
aaa authentication enable default local
aaa authentication login default local
aaa authentication login privilege-mode
console timeout 15
enable telnet authentication
telnet timeout 15

    5. Configure SSH access

crypto key generate
ip ssh key-auth no
ip ssh scp enable

    6. Optionally, disable telnet access

no telnet server
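
Optionally, verify remote access before moving on by opening an SSH session to the management address and checking the SSH server state on the ICX. This is a quick check and assumes the FastIron show command below is available on this ICX release:

show ip ssh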
 

Task 2: Configure Isilon Cluster

 

Description

The OneFS Admin Guide describes the Isilon cluster as follows:

 

“A cluster includes two networks: an internal network to exchange data between nodes and an external network to handle client connections. Nodes exchange data through the internal network with a proprietary, unicast protocol over InfiniBand. Each node includes redundant InfiniBand ports so you can add a second internal network in case the first one fails. Clients reach the cluster with 1 GigE or 10 GigE Ethernet. Since every node includes Ethernet ports, the cluster's bandwidth scales with performance and capacity as you add nodes.”

To build the cluster, the Isilon X200 requires setting up just one node with the addressing information; OneFS automatically expands the cluster as additional nodes are added with a few keystrokes. Therefore, most of the work involves setting up node1 and then telling subsequent nodes to join its cluster. Once you connect to the serial console, follow the onscreen prompts.

 

Assumptions

 

  1. The test environment uses four Isilon X200 nodes and limits the IP address ranges to 10 addresses; production environments would typically allow for more nodes for future expansion.
  2. Because the main purpose of this test is to validate the data path, advanced domain services such as directory authentication and DNS are not used.
Step 1: Setup Node 1

  1. Connect to the serial console on node1 with a null-modem connector
          a. 115200/8/N/1/Hardware
          b. Press Enter to start the setup wizard
  2. Create a new cluster
  3. Change the root password from the default:      Password!
  4. Change the admin password from the default:     Password!
  5. Enable SupportIQ
          a. Enter company name:      Brocade
          b. Enter contact name:      TestAdmin@brocade
  6. Enter a new name for the cluster:               EMCworld
  7. Use the default current encoding:               utf-8
  8. Configure the cluster internal IB interface (int-a):
          a. Configure netmask:       255.255.255.0
          b. Configure IP range:      172.16.1.101-172.16.1.110
  9. Configure the external management interface (ext-1):
          a. Configure netmask:       255.255.255.0
          b. Configure MTU:           1500
          c. Configure IP range:      192.168.90.101-192.168.90.105
  10. Enter the default gateway:      192.168.90.1
  11. Configure SmartConnect settings (optional)

Note: SmartConnect (the VIP used for failover) is configured after the cluster is online, during the bonded network interface configuration.

  12. Configure DNS settings (optional)
  13. Configure the cluster date and time
          a. Configure time zone:     Pacific Time Zone
  14. Configure the cluster join mode:      Manual

Note: The default option is Manual. Because this is an initial cluster setup, it is fine for additional nodes to initiate the join. After the system moves into production, it may be prudent to change the join mode to Secure.
Manual — joins can be initiated by either the node or the cluster.
Secure — joins can be initiated only by the cluster.

  15. Commit these changes and initialize node1

Step 2: Add Remaining Nodes to the Cluster

    1. Connect to the serial console on each additional node with a null-modem connector
          a. 115200/8/N/1/Hardware
          b. Press Enter to start the setup wizard
    2. Join an existing cluster: EMCworld

Step 3: Verify Node & Cluster Status

    1. SSH to each node via its ext-1 IP address, or connect via the serial console
    2. Run the following command to view cluster status:

isi status

--------------

EMCworld-1# isi status

Cluster Name: EMCworld

Cluster Health:   

Cluster Storage:  HDD                SSD

Size:            41T (43T Raw)      0 (0 Raw)

VHS Size:        2.0T

Used:            23G (< 1%)          0 (n/a)

Avail:            41T (> 99%)        0 (n/a)

                  Health Throughput (bps)    HDD Storage      SSD Storage

ID |IP Address |DASR|  In  Out Total| Used / Size      |Used / Size

---+---------------+----+-----+-----+-----+------------------+----------------

  1|192.168.90.101  | OK | 74K| 264K| 338K|  5.8G/  10T(< 1%)|    (No SSDs)

  2|192.168.90.102  | OK | 0|    0|    0| 5.7G/  10T(< 1%)|    (No SSDs)

  3|192.168.90.103  | OK | 171|    0|  171| 5.7G/  10T(< 1%)|    (No SSDs)

  4|192.168.90.104  | OK | 98K|    0|  98K| 5.7G/  10T(< 1%)|    (No SSDs)

------------------------+-----+-----+-----+------------------+----------------

Cluster Totals:        | 173K| 264K| 437K|  23G/ 41T(< 1%)|    (No SSDs)

    Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only

Critical Events:

Cluster Job Status:

No running jobs.

No paused or waiting jobs.

No failed jobs.

Recent job results:

Time            Job                        Event

--------------- -------------------------- ------------------------------

04/02 17:44:17 MultiScan Succeeded (LOW)

04/02 17:59:10 MultiScan              Succeeded (LOW)

04/02 18:01:50 MultiScan Succeeded (LOW)

--------------

    3. Log in to the Web GUI using the node1 management IP address

With the setup complete, we log in to the Isilon Administration Console by pointing a browser at the management IP address of the cluster.

 

Task 3: Setup Bonded Storage Cluster Interfaces

 

Description

The Isilon storage system uses bonded interfaces for client connections to increase performance and to maintain availability should one or more 10Gb connections fail. Each node uses a port-channel group (vLAG) configured across the two spine VDX 8770-4 switches (RB1 and RB2).

 

Node1, 1/2/41 & 2/2/42, port-channel 101

Node2, 1/2/43 & 2/2/44, port-channel 102

Node3, 1/2/45 & 2/2/46, port-channel 103

Node4, 1/2/47 & 2/2/48, port-channel 104

 

Assumptions

    The fabric should already be configured, with RBridge and VCS IDs assigned to the switches.

Note: When the VCS is deployed in distributed mode, the VCS is configured as a Logical Chassis from a single entry point using the VCS Virtual IP, and configuration changes are automatically saved across all switches in the fabric. The following examples show configuration for distributed mode (Logical Chassis).
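
Before making configuration changes, it can be helpful to confirm fabric membership and the assigned RBridge IDs from any switch in the fabric; this is a quick check, and the exact output format varies by Network OS release:

-----------

VDX8770_RB1# show vcs

-----------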

 

 

Step 1: Connect 10Gb interfaces to RB1 & configure ports for VLAN access

    1. SSH to the VDX switch or connect to the serial console
    2. Configure VLANs

-----------

VDX8770_RB1# conf t

VDX8770_RB1(config)# interface Vlan 50

VDX8770_RB1(config-Vlan-50)# description IsilonTest1_Storage

VDX8770_RB1(config)# interface Vlan 60

VDX8770_RB1(config-Vlan-60)# description IsilonTest1_VM_Application

VDX8770_RB1(config)# interface Vlan 70

VDX8770_RB1(config-Vlan-70)# description IsilonTest1_vMotion

-----------

Note: When the VCS is deployed in distributed mode, the VCS operates as a Logical Chassis and therefore VLANs only need to be configured once to be available across the entire VCS Fabric.

 

    3. Configure vLAG (LACP Port Channel) for Isilon Node1 connected to RB1 & RB2

-----------

VDX8770_RB1(config)# interface Port-channel 101

VDX8770_RB1(config-Port-channel-101)# description vLAG_Isilon_Node1

VDX8770_RB1(config-Port-channel-101)# switchport

VDX8770_RB1(config-Port-channel-101)# switchport mode access

VDX8770_RB1(config-Port-channel-101)# switchport access vlan 50

VDX8770_RB1(config-Port-channel-101)# no shutdown

-----------

 

    4. Add the physical ports on RB1 & RB2 (where Isilon node 1 is connected) to the vLAG

 

-----------

VDX8770_RB1(config)# int ten 1/2/41

VDX8770_RB1(conf-if-te-1/2/41)# channel-group 101 mode active type standard

VDX8770_RB1(conf-if-te-1/2/41)# no shutdown

VDX8770_RB1(conf-if-te-1/2/41)# int ten 2/2/42

VDX8770_RB1(conf-if-te-2/2/42)# channel-group 101 mode active type standard

VDX8770_RB1(conf-if-te-2/2/42)# no shutdown

VDX8770_RB1(conf-if-te-2/2/42)# end

-----------

 

    5. Repeat steps 3-4 to enable vLAGs for Isilon nodes 2-4

    6. Enable QoS Flow Control for both tx and rx on RB1 and RB2 (see Step 2)

 

Step 2: Enable Ethernet Pause/Flow Control Support

 

-----------

VDX8770_RB1# conf t

VDX8770_RB1(config)# interface Port-channel 101

VDX8770_RB1(config-Port-channel-101)# qos flowcontrol tx on rx on

-----------

    2. Validate the QoS configuration on vLAG Port-channel 101

-----------

VDX8770_RB1# show running-config interface Port-channel 101

interface Port-channel 101

vlag ignore-split

switchport

switchport mode access

switchport access vlan 50

qos flowcontrol tx on rx on

no shutdown

-----------

    3. Repeat steps 1-2 to enable Ethernet Pause for Isilon nodes 2-4
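
Optionally, confirm that both member links of each vLAG have joined the port-channel and are up; the exact fields shown vary by Network OS release:

-----------

VDX8770_RB1# show port-channel 101

-----------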

Step 3: Configure Isilon Network from the Web GUI

    1. From the Cluster tab, select Networking
    2. Click "Add Subnet" and follow the onscreen instructions
    3. Set the subnet Name, Description, and Netmask
       Name:          dvS-Datastore
       Description:   VMware Datastore
       Netmask:       255.255.255.0
       Gateway:       none
       SmartConnect:  192.168.50.111

Isilon-SettingSubnets.jpg

    4. Add an IP address pool for the cluster nodes

       Name:      Datastore
       IP range:  192.168.50.101-192.168.50.110

    5. Define the SmartConnect Settings

Note: SmartConnect has two modes: Basic and Advanced; Advanced requires an additional license from EMC. Unlicensed Basic mode balances client connections using a round-robin policy, selecting the next available node on a rotating basis. For more information on the Advanced policies, see the OneFS Admin Guide.

 

       Zone name:       vSphere
       Connect Policy:  Round Robin
       Service Subnet:  dvS-Datastore

Isilon-SettingSmartConnect.jpg

 

  6. Add available interfaces to the subnet, choose aggregated links from each node

  7. Use LACP for the Aggregation Mode (since we configured the vLAG as LACP)

 

Isilon-SettingIPPoolMembers.jpg

 

8. When the switch port channels are configured properly, the Isilon will show green indicators for all 10Gb interfaces in the cluster

 

Isilion-ShowingGreenIndicators.jpg
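
As an additional check from the cluster side, the OneFS CLI can list the subnets and IP pools just created; this is a hedged example assuming the isi networks subcommands documented in the OneFS 7.0 Command Reference:

isi networks list subnets

isi networks list pools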

 

This completes the network connectivity setup for the Isilon Scale-out NAS cluster.

 

Task 4: Setup VMware Network Connections

 

Description

The virtual distributed switch provides network access to storage via the physical uplinks on each ESXi server. The best practice is to use separate virtual switches for VM application access and vMotion, with dedicated uplinks for each distributed or standard vSwitch. In this document we only show the setup for the distributed vSwitch used to connect the ESXi servers to the NFS namespace provided by the Isilon, which they use as a datastore.

Note: In the currently available versions of ESXi, all traffic to a single NFS mount point always uses a single uplink interface, regardless of whether multiple interfaces are defined in a LAG. The LAG provides redundancy, not load balancing. You can achieve some level of load balancing by configuring Load Based Teaming (LBT), which kicks in when a single vmnic reaches 75% utilization. For more information on LBT see vmware.com.

 

After completing the VMware ESXi configuration we will configure AMPP for the VCS to automatically integrate with vCenter for VM placement in VLANs based on Port Group membership.

Each ESXi server uses a vLAG configured on the connected ToR switches in the respective racks (see Figure 1). In the following we go through the configuration for ESXi_231. Server ESXi_231 is connected on ports 3/0/37 and 4/0/37, defined as port-channel 231.

 

Prerequisites

    1. Two uplinks per server for the virtual switch used for storage traffic.
    2. For each ESX server, a VMkernel interface is defined for NFS traffic and an IP address is assigned according to the IP topology.
    3. vCenter Server (or the vCenter Appliance) is already deployed and the ESX servers are already added to and managed by vCenter.

Step 1: Configure VCS ports with connected uplink interfaces for ESXi storage path

 

    1. Configure a vLAG (Static Port Channel) for ESXi_231 connected to the ToR switches (RB3 & RB4)

----------

VDX8770_RB1(config)# interface Port-channel 231

VDX8770_RB1(config-Port-channel-231)# description vLAG_ESXi231_Storage

VDX8770_RB1(config-Port-channel-231)# port-profile-port

VDX8770_RB1(config-Port-channel-231)# no shutdown

----------

    2. Add the physical ports on the ToR switches (where ESXi_231 is connected) to the vLAG

----------

VDX8770_RB1(config)# int ten 3/0/37

VDX8770_RB1(conf-if-te-3/0/37)# channel-group 231 mode on type standard

VDX8770_RB1(conf-if-te-3/0/37)# no shutdown

VDX8770_RB1(config)# int ten 4/0/37

VDX8770_RB1(conf-if-te-4/0/37)# channel-group 231 mode on type standard

VDX8770_RB1(conf-if-te-4/0/37)# no shutdown

----------

    3. Repeat steps 1-2 for the interfaces on the remaining ESX nodes

 

Step 2: Create Distributed vSwitch in vCenter

    1. Login to vSphere Client and press Ctrl-Shift-N to open the Network inventory

    2. Click Add a vSphere Distributed Switch

 

VMwareVDistributedSwitch.jpg

   

3. Set the switch name: dvS-Storage

 

VMwareVDS-SetSwitchName.jpg

   

4. Add the host and both 10G physical interfaces

 

VMwareVDS-AddHosts.jpg

   

5. Click Finish to complete the creation of the distributed vSwitch

 

VMwareVDS-Complete.jpg

   

6. Edit Settings for dvS-Storage

 

    7. Under Advanced, enable CDP Operation = Both (this is necessary for the VCS integration to work)

 

VMwareVDS-AdvancedSettings.jpg

   

8. Click OK

 

    9. Edit Settings for dvPortGroup

 

    10. Change name to dvPG-50_Storage

 

Note: It is useful to include the VLAN ID in the port group name for easy identification.

 

    11. Set the VLAN Policy to 50

 

VMwareVDS-Advanced-SetVLANPolicy.jpg

 

    12. Verify NIC Teaming option is “Route based on IP hash” since we have connected to a vLAG

 

VMwareVDS-ConfirmNICTeaming.jpg

 

    13. Click OK to complete the Port group configuration.

 

Step 3: Configure Host Networking in vCenter

 

    1. Navigate to Hosts and Clusters in vSphere Client

    2. Select the first ESX node and open Networking in the Configuration tab

    3. Select the Distributed Switch dvS-Storage

    4. Click Manage virtual adapters and select Add

    5. Select New virtual adapter

    6. Select VMkernel type

    7. Select port group dvPG-50_Storage

 

VMwareVDS-HostConnectionSettings.jpg

 

    8. Enter the IP address and netmask for the ESX host

      IP address: 192.168.50.xx Netmask: 255.255.255.0

VMwareVDS-HostIPAddressSettings.jpg

 

    9. Review settings and click Finish

    10. Repeat configuration steps for all ESX hosts

 

Step 4: Register vCenter in VCS

 

    1. SSH to the VDX switch or connect to the serial console

-------------

VDX8770_RB1# conf t

VDX8770_RB1(config)# vcenter IsilonTest1 url https://192.168.90.100 username root password "Password!"

VDX8770_RB1(config)# vcenter IsilonTest1 activate

VDX8770_RB1(config)# vcenter IsilonTest1 interval 10

-------------

    2. Verify status of vCenter networks

-------------

VDX8770_RB1# show vnetwork vcenter status

vCenter          Start                Elapsed (sec)  Status

================= ====================================================

IsilonTest1 2013-04-09 20:26:07  11            Success

VDX8770_RB1# show vnetwork dvs vcenter IsilonTest1

dvSwitch            Host                      Uplink Name  Switch Interface

=====================================================================

dvS-Storage          ESXi_221                  vmnic4        -

                                                vmnic5        -

ESXi_231                    vmnic4        -

                                                vmnic5        -

Total Number of Entries: 4

VDX8770_RB1# show vnetwork dvpgs vcenter IsilonTest1

dvPortGroup                      dvSwitch            Vlan   

=================                ===============      =========

dvPG-50_Storage                  dvS-Storage          50-50,

dvPG-60_VMs                      dvS-VMs              60-60,

dvPG-70_vMotion                  dvS-vMotion          70-70,

dvS-Storage-DVUplinks-17          dvS-Storage          0-4094,

dvS-VMs-DVUplinks-20      dvS-VMs              0-4094,

dvS-vMotion-DVUplinks-23         dvS-vMotion          0-4094,

Total Number of Entries: 6

VDX8770_RB1# sh vnet vms

Virtual Machine      Associated MAC      IP Addr      Host                 

==================  ===============      ===========  ==============

vCenter Server      00:0c:29:56:8a:00    -            ESXi_221

vmware-io-analyzer  00:50:56:bb:60:24    -            ESXi_231

w2k8-VM1            00:50:56:99:00:01    -            ESXi_211

Total Number of Entries: 3

-------------

Step 5: Confirm Network Connections Between ESX Hosts and Isilon Storage

    1. SSH to each ESX host
    2. Run ping checks between the host, its counterparts, and the X200 nodes for dvPG-50_Storage

vmkping 192.168.50.101
vmkping 192.168.50.102
vmkping 192.168.50.103
vmkping 192.168.50.104
vmkping 192.168.50.111

    3. Run ping checks between the host and its counterparts for dvPG-70_vMotion

vmkping 192.168.70.231
vmkping 192.168.70.219
vmkping 192.168.70.227
vmkping 192.168.70.229
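
If jumbo frames are later enabled on the storage path (see the optimization tasks below), the same checks can be repeated with a large, non-fragmentable payload to verify the end-to-end MTU; the 8972-byte payload assumes a 9000-byte MTU on the VMkernel interface:

vmkping -d -s 8972 192.168.50.101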

Task 5: Setup Isilon NAS Shares

 

Description

VMware vSphere 5 currently supports only NFS mounts for NAS datastores, making NFS the only share type required. We also enable SMB sharing on the datastore so that a Windows management station can upload and manipulate files on the datastore, such as software installers and other utilities. The SMB share also provides another path to show customers the NAS system's capabilities.

 

Step 1: Configure Volume

    1. Log in to the Web GUI using the node1 management IP address
    2. Navigate to File System -> SmartPools -> Disk Pools
    3. Click Manually Add Disk Pool
       Pool Name:         X200_43TB_6GB-RAM
       Protection Level:  +2:1
       Add all node resources to the pool

Isilon-ConfigureVolumeAddDiskPool.jpg

 

    4. Click Submit

 

Step 2: Configure SMB Shares

    1. Navigate to File Sharing -> SMB -> Add Share
       Share name:        ifs
       Description:       Isilon OneFS
       Users and Groups:  <default>

Isilon-ConfigureSMBShares.jpg

 

    2. Click Submit.

 

Step 3: Configure NFS Shares

    1. Navigate to File Sharing -> NFS -> Add Export
       Description:     MountPoint for ESXi Servers
       Directories:     /ifs
       Enable mount access to subdirectories
       Access Control:  <defaults>

Isilon-ConfigureNFSShares.jpg

 

    2. Click Submit.

 

Task 6: VMware Storage Configuration

 

Description

 

NFS allows multiple connections from a single host, meaning an ESX host can mount the same NFS export multiple times as separate datastores to distribute sessions. For demo purposes, set up at least one datastore using the Isilon SmartConnect IP address for storage failover. Add multiple datastores using the same IP if desired.

 

Step 1: Add Isilon Datastores to ESX Hosts

    1. Log in to vSphere Client and press Ctrl-Shift-H to open Hosts and Clusters
    2. Select the first ESX node and open Storage in the Configuration tab
    3. Click Add Storage...
       Type: Network File System

VMware-AddIsilonDataStores.jpg 

    4. Enter the NFS access information using the Isilon SmartConnect IP

       Server:          192.168.50.111
       Folder:          /ifs
       Datastore Name:  IsilonVIP-50

Note: Change the datastore name to support additional mounts

 

VMware-SetNFSAccess.jpg

 

    5. Review your settings and click Finish

    6. Repeat these steps for remaining ESX hosts
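
The same NFS datastore can also be mounted from the ESXi command line, which is convenient when repeating this step across many hosts. This is a minimal sketch assuming the esxcli storage nfs namespace in ESXi 5.x, with the server, export, and datastore name taken from the values above:

esxcli storage nfs add -H 192.168.50.111 -s /ifs -v IsilonVIP-50

esxcli storage nfs list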

 

Additional Parameters for Optimized Experience

 

Task 1: Configuring Advanced Settings for Isilon Best Practices

Description

 

At this point, the system is ready to create virtual machines on the NFS datastore(s). EMC has additional recommended best practice options that may improve performance and manageability in larger demos or production environments.

 

Step 1: Enable advanced parameters for Isilon storage in a VMware environment

These recommendations come from the document “EMC Isilon Scale-out Storage with VMware vSphere 5” provided by EMC.

 

    1. Enable SDRS in the vSphere Web Client:
       a. Browse to the datastore cluster in the vSphere Web Client navigator
       b. Click the Manage tab and click Settings
       c. Under Services, select Storage DRS and click Edit
       d. Select Turn ON vSphere DRS and click OK
       e. Optionally, disable only the I/O-related functions of SDRS:
          - Under Storage DRS, click Edit
          - Deselect the Enable I/O metric for Storage DRS option and click OK
    2. Enable SIOC:
       a. Select a datastore in the vSphere Client inventory and click the Configuration tab
       b. Click Properties
       c. Enable Storage I/O Control
       d. Leave the Congestion Threshold at 30ms
       Note: This setting is specific to the datastore, not to the host.
    3. Download the VAAI-NAS plugin from EMC and install it with VMware Update Manager to offload certain cloning, snapshot, and vMotion operations from ESX to the Isilon cluster:
       - Full File Clone – moves cloning jobs to the storage backend, reducing ESX load
       - Extended Statistics – improves utilization accuracy of VMs
       - Reserve Space – enables thin provisioning for Eager/Lazy Zeroed virtual disks
    4. Enable vSphere API for Storage Awareness (VASA):
       a. SSH to any node in the cluster and log in as root
       b. Enable VASA by running the following commands:

isi services apache2 enable

isi services isi_vasa_d enable

          c. Download the vendor provider certificate to your desktop via http://<ip_addr>

          d. In vSphere, navigate to Home->Administration->Storage Providers and  click Add

          e. Fill out the following fields in the Add Vendor Provider window:

Name: name for this VASA provider, e.g. EMC Isilon Systems

URL: http://<ip_addr>:8081/vasaprovider

Login: root

Password: root password

          f. Enable Use Vendor Provider Certificate checkbox

          g. Browse to the Certificate location for the certificate on your desktop

          h. Click OK.

Note: To disable VASA later, run the following commands from SSH:

isi services apache2 disable

isi services isi_vasa_d disable

    5. Define custom attributes for VASA:

       10g, Clustername, Diskpool, iSCSI, NplusX, Replica

    6. Create VM storage profiles

    7. Assign multiple dynamic IPs to each Isilon node in a dynamic IP pool

    8. Mount /ifs datastore to each ESX host in a mesh topology

    9. Enable jumbo frames on the 10G storage links (see the example after this list)

    10. Configure MCT paths between switches for cluster nodes

    11. Enable X200 protection at N+2:1 using SmartPool policy

    12. Set SmartCache optimization to Random using SmartPool policy

    13. Use a single dedicated datastore to hold the hypervisor swap files (.vswp) for all ESX hosts.
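
As an illustration of item 9 above, jumbo frames are enabled end to end: on the VCS port-channels carrying storage traffic, on the vDS (its MTU is raised in the switch settings), and on the ESXi VMkernel interfaces; the Isilon external storage subnet MTU should be raised to match. This is a minimal sketch assuming the Network OS interface mtu command and the ESXi 5.x esxcli options shown; vmk1 is an example name for the VMkernel port created earlier:

-----------

VDX8770_RB1(config)# interface Port-channel 101

VDX8770_RB1(config-Port-channel-101)# mtu 9216

-----------

esxcli network ip interface set -i vmk1 -m 9000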

 

Task 2: Additional Options for VMware Clusters and VMs

 

Description

The following items were useful specifically for building the Isilon setup in a closed environment, but they may also apply in other environments, so we document them here.

 

Step 1: vSphere Optimizations

 

    1. Disable Shell Warnings for SSH/remote access in vSphere

 

Note: By default, ESX shows a security warning when SSH is enabled, and since most production activities do not require SSH, VMware recommends that administrators enable SSH only when they need it. For proof-of-concept and demo labs, or for full-time SSH access, it is useful to disable the SSH warning for a clean interface.

 

       a. Select the ESXi host from the Inventory
       b. Select the Configuration tab, then Advanced Settings from the Software menu
       c. Set UserVars > UserVars.SuppressShellWarning = 1
       d. You can also do this via the command line:

vim-cmd hostsvc/advopt/update UserVars.SuppressShellWarning long 1

    2. For IO-intensive VMs, use the PVSCSI adapter (Paravirtual) which increases throughput & reduces CPU overhead

    3. Align VMDK files at 8K boundaries for OneFS & create VM templates

Note: Since Windows Vista and Windows Server 2008, all Windows versions align automatically during OS installation. Previous versions and upgraded systems are not aligned.

Note: RedHat & CentOS Linux version 6 systems align automatically during OS installation. Previous versions and upgraded systems are not aligned.

      Format legacy Windows disks with 8K Blocks with diskpart
create partition primary align=8

http://support.microsoft.com/kb/923076

    4. Use an 8192-byte allocation unit (block size) when formatting virtual disks

       a. Windows: DISKPART> format fs=NTFS label=<"label"> unit=8192 quick
       b. Linux: mke2fs -b 8192 -L <"label"> /dev/<dev#>

    5. Advanced NFS Settings for vSphere are available from VMware in KB #1007909. Heed all cautions and recommendations from VMware and Isilon.

          http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007909#NFSHEap
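
For reference, these advanced NFS values can also be inspected and set from the ESXi command line; this is a hedged example using the esxcli advanced settings namespace in ESXi 5.x, and the value shown is purely illustrative (use the values recommended in the VMware KB and the Isilon best practice guide):

esxcli system settings advanced list -o /NFS/MaxVolumes

esxcli system settings advanced set -o /NFS/MaxVolumes -i 64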

 

Step 2: Windows VM Optimizations

    1. Create a Windows Server 2008 R2 Template
    2. Enable copy/paste through vSphere
       a. Edit Settings -> Options -> Advanced -> General
       b. Click Configuration Parameters
       c. Add isolation.tools.copy.disable = false
       d. Add isolation.tools.paste.disable = false
    3. Disable Password Expiration in local Group Policy
       a. Run gpedit
       b. Navigate to Computer Config/Windows Settings/Security Settings/Account Policies/Password Policy
       c. Set Maximum password age = 0
    4. Disable Require Ctrl-Alt-Del to login
       a. Run gpedit
       b. Navigate to Computer Config/Windows Settings/Security Settings/Local Policies/Security Options
       c. Enable Interactive logon: Do not require Ctrl+Alt+Del
    5. Run gpupdate to apply the new policies
    6. Extract Sysinternals to C:\Program Files\SysinternalsSuite and add it to the Path
       a. System Properties/Advanced/Environment Variables
    7. Install the PuTTY utility (installs and adds the path for C:\Program Files\PuTTY)
       a. Run putty-0.62_x64-installer.exe
       b. Import putty-sess.reg to preload PuTTY sessions for the demo
    8. Create a hosts file for internal name resolution in C:\Windows\System32\drivers\etc\hosts

192.168.60.240  VMIO1

192.168.60.241  VMIO2

192.168.60.242  RHEL242

192.168.60.243  Win2K8-VM1

192.168.60.244  Win2K8-VM2

192.168.60.245  WEBSRV1

192.168.60.246  VMS

192.168.50.101  STORE1

192.168.50.102  STORE2

192.168.50.103  STORE3

192.168.50.104  STORE4

192.168.50.111  ISILONVIP-50

    9. Configure the w32time server on W2K8-VM1
       a. Import w32time-server.reg to the registry
       b. Run sc triggerinfo w32time start/networkon stop/networkoff
       c. Run net start w32time

    10. Configure the w32time client on the other hosts
       a. Import w32time-client.reg to the registry
       b. Run w32tm /config /manualpeerlist:"192.168.60.243,0x01" /syncfromflags:manual /update
       c. Run sc triggerinfo w32time start/networkon stop/networkoff
       d. Run net start w32time

Step 3: Windows 8 Optimizations

    1. Enable the built-in Administrator account
       a. Open compmgmt.msc and navigate to Local Users
       b. Right-click Administrator and select Set Password
          Password: Password!
       c. Right-click Administrator and select Properties
          Uncheck Account Disabled