Synopsis: Provides a deployment guide for configuring Brocade VCS Fabric of VDX Switches with EMC Isilon Scale-out NAS and VMware ESXi clients.
Storage admins are constantly faced with requirements to expand their infrastructure to accommodate new data, retain old data longer, and meet the performance needs of new applications. Traditional scalable, high-capacity, high-performance storage systems were built on SANs: separate networks designed to accommodate storage-specific data flows. However, new developments in distributed applications and server virtualization are driving increased adoption of Network Attached Storage (NAS) on Ethernet, bringing to the Ethernet networks supporting storage the same requirements traditionally found in SANs: scalability, capacity, predictable latency, and reliability. Brocade VCS Fabric technology delivers high-performance, reliable networks for NAS solutions that can scale without disruption to meet the new requirements of NAS storage infrastructure such as EMC Isilon Scale-out NAS.
A VCS Fabric is ideal for NAS, providing predictable performance and reliability with simplified change management. VCS Fabric technology is built on TRILL and FSPF and provides unique capabilities including distributed intelligence, Automatic Migration of Port Profiles (AMPP), virtual link aggregation groups (vLAGs), and lossless Ethernet transport, removing previous limitations of Ethernet for storage traffic.
This document shows the procedures used to deploy EMC Isilon with a VCS Fabric and VMware ESXi servers, where Isilon NAS storage is the datastore. The document is based on the EMC Isilon X200 with a variety of Brocade VDX switches: VDX 8770-4, VDX 6720-60, VDX 6720-24, and VDX 6710-54. A Brocade ICX 6610-48P is used for the management network.
The procedures cover configuration of a VCS Fabric of VDX switches and an EMC Isilon NAS cluster, and mounting NAS volumes as datastores for the VMware vSphere ESXi servers. Where appropriate, best practice recommendations are provided. The implementation is validated using client VMs running a mix of operating systems accessing Isilon NAS storage. AMPP integration between VMware vCenter and the VCS Fabric ensures that VMs (and VM kernels) are automatically placed in the VLANs corresponding to their Port Group configuration in the distributed vSwitch (vDS), including initial VLAN creation if the VLAN is not already configured in the VCS Fabric.
This content targets cloud compute, solution, storage and network architects and engineers who are evaluating and deploying Isilon NAS solutions in their networks and want to understand how to deploy it with Brocade VCS Fabric technology.
This deployment guide covers deployment of Isilon storage with a Brocade VCS Fabric, including configuration of NAS datastores for VMware vSphere. The guide is valuable beyond the specific EMC Isilon and Brocade VDX products used. The example configuration can be used as a building block for large scale-out NAS deployments with VMware virtual machines or physical servers. This deployment does not include configuration of disaster recovery or data protection mechanisms, such as replication or backup procedures, outside the basic redundancies included within the VCS Fabric, VMware ESXi cluster, and Isilon storage cluster.
Brocade® (NASDAQ: BRCD) networking solutions help the world’s leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. (www.brocade.com)
The content in this guide was provided by the following key contributors.
Lead Architect: Marcus Thordal, Strategic Solutions Lab
2013-09-26 1.0 Initial Release
In the tested environment we use a variety of VDX switch models to demonstrate that any combination of VDX switches works together in a VCS Fabric, enabling switch selection based solely on the required port density and speed (1/10/40 Gbps). The Isilon storage subsystem uses aggregated interfaces (LAGs) across switches for redundancy and increased bandwidth. Within the VCS Fabric a LAG can span multiple switches (vLAG), providing redundancy and flexibility: should one of the links fail, the storage remains available through the other switch.
Below is a diagram of the network topology showing a Spine-Leaf architecture with the Isilon cluster nodes attached to the spine and the ESXi servers attached to leaf switches at the Top of Rack (ToR). This provides uniform, redundant access from all servers to all storage and simplifies scale-out when adding more servers and NAS nodes. Low latency, high bandwidth, high availability, and simple management are maintained as physical resources are added.
Isilon intra-cluster communication is handled by a dedicated InfiniBand network. All internal Isilon cluster traffic uses these paths, which must be assigned an internal IP address range during cluster setup.
When connecting the VMware ESXi servers, the recommendation is to use a separate dedicated network interface for each of management, storage, vMotion, and VM application access. For high availability, the best practice is to use redundant interfaces for each of these networks. It is common to use on-board 1 GbE NICs for management and 10 GbE interfaces for storage, vMotion, and VM application access. This best practice requires a minimum of 2 x 1 GbE and 6 x 10 GbE interfaces, which may not be possible in practice due to the limited number of physical interfaces available on the server. Some network adapters, such as the Brocade FA-1860, provide traffic separation by partitioning the physical adapter transparently to the ESXi server: logical NICs appear as physical interfaces to the ESXi host. This guide shows a fully redundant deployment example, as well as an example where the uplink is a single NIC with logical traffic isolation, together with the corresponding VCS Fabric and VMware vDS switch configuration.
When deploying a NAS infrastructure, the logical network infrastructure and IP topology must be planned in advance. In the test bed we use a separate management network with all IP addresses in the default VLAN 1. For the VCS network, VLAN separation is used for storage (VLAN 50), VM application (VLAN 60), and vMotion (VLAN 70), as shown in Table 1 (IP Addresses).
Deploying the EMC Isilon is one of the simpler and more direct processes available in a NAS appliance. After following the hardware installation guide to rack and connect the nodes and switches, the first step is to configure the top-of-rack (ToR) management switch, which provides access to all devices independently of the user-facing network connections. Next, we describe how to configure the Isilon cluster, join all nodes to it, and create bonded (vLAG) network connections. Once the storage network connections are in place, we prepare the VMware vCenter virtual network environment to take advantage of AMPP within the VDX Ethernet fabric. Finally, we explain how to provision and mount the Isilon NAS storage for the VMware ESXi hosts.
Finally, in addition to the basic functional system, we outline several optional parameters for an optimized experience, based on EMC recommended best practices for Isilon in VMware environments, as well as a few additional options for the VMware clusters and VMs used to test and verify the deployment.
The following products are used in this deployment.
Brocade VDX 8770-4 — Modular switch with 10Gb and 40Gb interfaces
Brocade VDX 6720-60 — 60 ports of 10Gb
Brocade VDX 6720-24 — 24 ports of 10Gb
Brocade VDX 6710-54 — 48 ports of 1Gb and 6 ports of 10Gb
Brocade ICX 6610-48P — 1Gb management network switch
VMware ESXi 5.1 servers — 4 total
EMC Isilon X200 nodes — 4 total in cluster
For completeness we briefly describe the setup of the switch used for the management network in the test bed. All switches, servers, and storage cluster nodes have management network interfaces separate from the production (data flow) network. These connect to the top-of-rack management switch, a Brocade ICX 6610-48P. We apply basic switch authentication with SSH logins so that the ICX login process is similar to the VDX, for a consistent management experience.
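As a minimal sketch of that setup (assuming a factory-default switch and FastIron syntax; the hostname and account name are illustrative, and the exact crypto key options vary by FastIron release), the local authentication and SSH configuration looks like this:
ICX6610# configure terminal
ICX6610(config)# hostname MGMT-ToR
MGMT-ToR(config)# username admin privilege 0 password <password>
MGMT-ToR(config)# aaa authentication login default local
MGMT-ToR(config)# crypto key generate rsa
MGMT-ToR(config)# end
MGMT-ToR# write memory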
The OneFS Admin Guide describes the Isilon cluster as follows:
“A cluster includes two networks: an internal network to exchange data between nodes and an external network to handle client connections. Nodes exchange data through the internal network with a proprietary, unicast protocol over InfiniBand. Each node includes redundant InfiniBand ports so you can add a second internal network in case the first one fails. Clients reach the cluster with 1 GigE or 10 GigE Ethernet. Since every node includes Ethernet ports, the cluster's bandwidth scales with performance and capacity as you add nodes.”
To build the cluster, the Isilon X200 requires setting up just one node with the addressing information; after a few keystrokes, OneFS automatically expands the cluster as additional nodes are added. Therefore, most of the work involves setting up node1 and then telling subsequent nodes to join its cluster. Once you connect to the serial console, follow the onscreen prompts.
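For example, from a Linux or macOS admin host the serial console can be opened with screen (the device name below is illustrative, and 115200 8N1 is the typical Isilon console setting; confirm the parameters in the node installation guide):
admin$ screen /dev/ttyUSB0 115200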
Note: SmartConnect (VIP for failover) is configured after the cluster is online, during the bonded network interface configuration.
1. Configure DNS settings (optional)
Note: The default option is Manual, and as this is an initial cluster setup, it’s fine for the additional node to initiate the join. After the system moves into production, it may be prudent to change the join mode to Secure.
Manual — joins can be initiated by either the node or the cluster.
Secure — joins can be initiated only by the cluster.
2. Verify the cluster status
EMCworld-1# isi status
Cluster Name: EMCworld
Cluster Storage: HDD SSD
Size: 41T (43T Raw) 0 (0 Raw)
VHS Size: 2.0T
Used: 23G (< 1%) 0 (n/a)
Avail: 41T (> 99%) 0 (n/a)
Health Throughput (bps) HDD Storage SSD Storage
ID |IP Address |DASR| In Out Total| Used / Size |Used / Size
1|192.168.90.101 | OK | 74K| 264K| 338K| 5.8G/ 10T(< 1%)| (No SSDs)
2|192.168.90.102 | OK | 0| 0| 0| 5.7G/ 10T(< 1%)| (No SSDs)
3|192.168.90.103 | OK | 171| 0| 171| 5.7G/ 10T(< 1%)| (No SSDs)
4|192.168.90.104 | OK | 98K| 0| 98K| 5.7G/ 10T(< 1%)| (No SSDs)
Cluster Totals: | 173K| 264K| 437K| 23G/ 41T(< 1%)| (No SSDs)
Health Fields: D = Down, A = Attention, S = Smartfailed, R = Read-Only
Cluster Job Status:
No running jobs.
No paused or waiting jobs.
No failed jobs.
Recent job results:
Time Job Event
--------------- -------------------------- ------------------------------
04/02 17:44:17 MultiScan Succeeded (LOW)
04/02 17:59:10 MultiScan Succeeded (LOW)
04/02 18:01:50 MultiScan Succeeded (LOW)
3. Log in to the Web GUI using the node1 management IP address
With the setup complete, we log in to the Isilon Administration Console by pointing a browser at the management IP address of the cluster.
The Isilon storage system uses bonded interfaces for client connections to increase performance and maintain availability should one or more 10Gb connections fail. Each node uses a port channel group configured on the two spine VDX 8770-4 switches (RB1 and RB2).
Node1, 1/2/41 & 2/2/42, port-channel 101
Node2, 1/2/43 & 2/2/44, port-channel 102
Node3, 1/2/45 & 2/2/46, port-channel 103
Node4, 1/2/47 & 2/2/48, port-channel 104
Note: When the VCS Fabric is deployed in distributed mode, it is configured as a Logical Chassis from a single entry point using the VCS Virtual IP, and configuration changes are automatically saved across all switches in the fabric. The following example shows the configuration for distributed mode (Logical Chassis).
1. SSH to the VDX switch or connect to the serial console
2. Create the VLANs for storage, VM application, and vMotion traffic
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Vlan 50
VDX8770_RB1(config-Vlan-50)# description IsilonTest1_Storage
VDX8770_RB1(config)# interface Vlan 60
VDX8770_RB1(config-Vlan-60)# description IsilonTest1_VM_Application
VDX8770_RB1(config)# interface Vlan 70
VDX8770_RB1(config-Vlan-70)# description IsilonTest1_vMotion
Note: When the VCS is deployed in distributed mode as a Logical Chassis, VLANs only need to be configured once to be available across the complete VCS Fabric.
3. Configure vLAG (LACP Port Channel) for Isilon Node1 connected to RB1 & RB2
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# description vLAG_Isilon_Node1
VDX8770_RB1(config-Port-channel-101)# switchport mode access
VDX8770_RB1(config-Port-channel-101)# switchport access vlan 50
VDX8770_RB1(config-Port-channel-101)# no shutdown
4. Add the physical ports on RB1 & RB2 (where Isilon node 1 is connected) to the vLAG
VDX8770_RB1(config)# int ten 1/2/41
VDX8770_RB1(conf-if-te-1/2/41)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-1/2/41)# no shutdown
VDX8770_RB1(conf-if-te-1/2/41)# int ten 2/2/42
VDX8770_RB1(conf-if-te-2/2/42)# channel-group 101 mode active type standard
VDX8770_RB1(conf-if-te-2/2/42)# no shutdown
5. Repeat steps 3-4 to enable vLAGs for Isilon nodes 2-4
6. Enable QoS flow control for both tx and rx on RB1 and RB2
VDX8770_RB1# conf t
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# qos flowcontrol tx on rx on
2. Validate vLAG Port-channel 101 interface QoS
VDX8770_RB1# show running-config interface Port-channel 101
interface Port-channel 101
switchport mode access
switchport access vlan 50
qos flowcontrol tx on rx on
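Beyond the running configuration, the state of the aggregation itself is worth confirming. The following NOS show commands (output omitted here; it varies by release) should list both member links, 1/2/41 and 2/2/42, as up and aggregated under the LACP port channel:
VDX8770_RB1# show port-channel 101
VDX8770_RB1# show interface Port-channel 101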
4. Add an IP address pool for the cluster nodes
5. Define the SmartConnect Settings
Note: SmartConnect has two modes: Basic and Advanced; Advanced requires an additional license from EMC. Unlicensed Basic mode balances client connections using a round-robin policy, selecting the next available node on a rotating basis. For more information on the Advanced policies, see the OneFS Admin Guide.
6. Add available interfaces to the subnet, choosing the aggregated links from each node
7. Use LACP for the Aggregation Mode (since we configured the vLAG as LACP)
8. When the switch port channels are configured properly, the Isilon will show green indicators for all 10Gb interfaces in the cluster
This completes the network connectivity setup for the Isilon Scale-out NAS cluster.
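With Basic mode round robin in place, the behavior can be spot-checked from any client by repeating a DNS lookup against the SmartConnect zone name (the zone name and returned addresses below are illustrative; substitute the zone defined in the SmartConnect settings). Successive queries should return different node IPs from the pool:
client$ dig +short nas.example.com
192.168.90.101
client$ dig +short nas.example.com
192.168.90.102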
The distributed vSwitch provides network access to storage via the physical uplinks on each ESXi server. Best practice is to use separate virtual switches for VM application access and vMotion, with dedicated uplinks for each distributed or standard vSwitch. In this document we only show the setup for the distributed vSwitch that connects the ESXi servers to the NFS namespace provided by the Isilon for use as a datastore.
Note: In currently available versions of ESXi, all traffic to a single NFS mount point always uses a single uplink interface, regardless of whether multiple interfaces are defined in a LAG. The LAG provides redundancy, not load balancing. Some level of load balancing can be achieved by configuring Load Based Teaming (LBT), which kicks in when a single vmnic reaches 75% utilization. For more information on LBT, see vmware.com.
After completing the VMware ESXi configuration, we configure AMPP for the VCS Fabric to integrate automatically with vCenter for VM placement in VLANs based on Port Group membership.
Each ESXi server uses a vLAG configured on the connected ToR switches in the respective racks (see Figure 1). In the following we go through the configuration for ESXi_231. Server ESXi_231 is connected on ports 3/0/37 and 4/0/37, defined as port-channel 231.
1. Configure vLAG (Static Port Channel) for ESXi_231 connected to RB3 & RB4
VDX8770_RB1(config)# interface Port-channel 231
VDX8770_RB1(config-Port-channel-231)# description vLAG_ESXi231_Storage
VDX8770_RB1(config-Port-channel-231)# port-profile-port
VDX8770_RB1(config-Port-channel-231)# no shutdown
2. Add the physical ports on RB3 & RB4 (where ESXi_231 is connected) to the vLAG
VDX8770_RB1(config)# int ten 3/0/37
VDX8770_RB1(conf-if-te-3/0/37)# channel-group 231 mode on type standard
VDX8770_RB1(conf-if-te-3/0/37)# no shutdown
VDX8770_RB1(config)# int ten 4/0/37
VDX8770_RB1(conf-if-te-4/0/37)# channel-group 231 mode on type standard
VDX8770_RB1(conf-if-te-4/0/37)# no shutdown
3. Repeat steps 1-2 for the interfaces on the remaining ESX nodes
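Because these port channels are static (mode on), there is no LACP negotiation to inspect, but membership and link state can still be confirmed from any node in the logical chassis (output omitted; it varies by NOS release):
VDX8770_RB1# show port-channel 231
VDX8770_RB1# show running-config interface Port-channel 231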
1. Log in to the vSphere Client and press Ctrl-Shift-N to open the Networking inventory
2. Click Add a vSphere Distributed Switch
3. Set the switch name: dvS-Storage
4. Add the host and both 10G physical interfaces
5. Click Finish to complete the creation of the distributed vSwitch
6. Edit Settings for dvS-Storage
7. Under Advanced, set CDP Operation to Both (this is necessary for the VCS integration to work)
8. Click OK
9. Edit Settings for dvPortGroup
10. Change name to dvPG-50_Storage
Note: It is useful to include the VLAN ID in the port group name for easy identification.
11. Set the VLAN Policy to 50
12. Verify NIC Teaming option is “Route based on IP hash” since we have connected to a vLAG
13. Click OK to complete the Port group configuration.
1. Navigate to Hosts and Clusters in vSphere Client
2. Select the first ESX node and open Networking in the Configuration tab
3. Select the Distributed Switch dvS-Storage
4. Click Manage virtual adapters and select Add
5. Select New virtual adapter
6. Select VMkernel type
7. Select port group dvPG-50_Storage
8. Enter the IP address and netmask for the ESX host
9. Review settings and click Finish
10. Repeat configuration steps for all ESX hosts
1. SSH to the VDX switch or connect to the serial console
VDX8770_RB1# conf t
VDX8770_RB1(config)# vcenter IsilonTest1 url https://192.168.90.100 username root password "Password!"
VDX8770_RB1(config)# vcenter IsilonTest1 activate
VDX8770_RB1(config)# vcenter IsilonTest1 interval 10
2. Verify status of vCenter networks
VDX8770_RB1# show vnetwork vcenter status
vCenter Start Elapsed (sec) Status
IsilonTest1 2013-04-09 20:26:07 11 Success
VDX8770_RB1# show vnetwork dvs vcenter IsilonTest1
dvSwitch Host Uplink Name Switch Interface
dvS-Storage ESXi_221 vmnic4 -
ESXi_231 vmnic4 -
Total Number of Entries: 4
VDX8770_RB1# show vnetwork dvpgs vcenter IsilonTest1
dvPortGroup dvSwitch Vlan
================= =============== =========
dvPG-50_Storage dvS-Storage 50-50,
dvPG-60_VMs dvS-VMs 60-60,
dvPG-70_vMotion dvS-vMotion 70-70,
dvS-Storage-DVUplinks-17 dvS-Storage 0-4094,
dvS-VMs-DVUplinks-20 dvS-VMs 0-4094,
dvS-vMotion-DVUplinks-23 dvS-vMotion 0-4094,
Total Number of Entries: 6
VDX8770_RB1# sh vnet vms
Virtual Machine Associated MAC IP Addr Host
================== =============== =========== ==============
vCenter Server 00:0c:29:56:8a:00 - ESXi_221
vmware-io-analyzer 00:50:56:bb:60:24 - ESXi_231
w2k8-VM1 00:50:56:99:00:01 - ESXi_211
Total Number of Entries: 3
VMware vSphere 5 currently allows only NFS mounting for NAS datastores, making NFS the only share type required. We also enable SMB sharing on the datastore so that a Windows management station can upload and manage files on the datastore, such as software installers and other utilities. The SMB share also provides another path to show customers the NAS system’s capabilities.
4. Click Submit
2. Click Submit.
2. Click Submit.
NFS allows multiple connections from a single host, meaning an ESX host can mount the same NFS export multiple times as separate datastores to distribute sessions. For demo purposes, set up at least one datastore using the Isilon SmartConnect IP address for storage failover. Add multiple datastores using the same IP if desired.
4. Enter NFS access information using the Isilon SmartConnect IP
Note: Change the datastore name to support additional mounts
5. Review your settings and click Finish
6. Repeat these steps for remaining ESX hosts
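The same mount can also be made from the ESXi shell with esxcli, which is convenient when repeating the operation across many hosts. A sketch, assuming a hypothetical SmartConnect IP of 192.168.50.10 and an export under /ifs (substitute the values from your Isilon configuration):
~ # esxcli storage nfs add --host 192.168.50.10 --share /ifs/datastore1 --volume-name Isilon-DS1
~ # esxcli storage nfs list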
At this point, the system is ready to create virtual machines on the NFS datastore(s). EMC has additional recommended best practice options that may improve performance and manageability in larger demos or production environments.
These recommendations come from the document “EMC Isilon Scale-out Storage with VMware vSphere 5” provided by EMC.
From an SSH session to the cluster, enable the required services:
isi services apache2 enable
isi services isi_vasa_d enable
c. Download the vendor provider certificate to your desktop via http://<ip_addr>
d. In vSphere, navigate to Home->Administration->Storage Providers and click Add
e. Fill out the following fields in the Add Vendor Provider window:
Name: name for this VASA provider, e.g. EMC Isilon Systems
Password: root password
f. Enable Use Vendor Provider Certificate checkbox
g. Browse to the Certificate location for the certificate on your desktop
h. Click OK.
Note: To disable VASA later, run the following commands from SSH:
isi services apache2 disable
isi services isi_vasa_d disable
5. Define custom attributes for VASA
6. Create VM storage profiles
7. Assign multiple dynamic IPs to each Isilon node in a dynamic IP pool
8. Mount /ifs datastore to each ESX host in a mesh topology
9. Enable jumbo frames on 10G storage links (see the sketch after this list)
10. Configure MCT paths between switches for cluster nodes
11. Enable X200 protection at N+2:1 using SmartPool policy
12. Set SmartCache optimization to Random using SmartPool policy
13. Use a single dedicated datastore to hold the hypervisor swap files (.vswp) for all ESX hosts.
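For item 9, jumbo frames must be enabled end to end: on the VDX interfaces, on the vDS and its VMkernel ports, and on the Isilon interfaces. A minimal sketch for the fabric side (9216 is a common NOS maximum; the port channel shown is Isilon node1's vLAG, and values should be verified against your environment):
VDX8770_RB1(config)# interface Port-channel 101
VDX8770_RB1(config-Port-channel-101)# mtu 9216
On the ESXi side, the vDS MTU is raised to 9000 under Edit Settings > Advanced, and each storage VMkernel interface follows suit (vmk1 is illustrative):
~ # esxcli network ip interface set -i vmk1 -m 9000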
The following items were useful specifically for building the Isilon setup in a closed environment, but they may apply in other environments as well, so we document them here.
1. Disable Shell Warnings for SSH/remote access in vSphere
Note: The default settings for ESX show a security warning when SSH is enabled, and since most production activities do not require SSH, VMware recommends that administrators enable SSH only when needed. For proof-of-concept and demo labs, or for full-time SSH access, it is useful to disable the SSH warning for a clean interface.
vim-cmd hostsvc/advopt/update UserVars.SuppressShellWarning long 1
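The equivalent setting can also be applied with esxcli (assuming ESXi 5.x), which is easier to script across hosts:
~ # esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1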
2. For IO-intensive VMs, use the PVSCSI (Paravirtual) adapter, which increases throughput and reduces CPU overhead
3. Align VMDK files at 8K boundaries for OneFS & create VM templates
Note: Since Windows Vista and Windows Server 2008, all Windows versions align automatically during OS installation. Previous versions and upgraded systems are not aligned.
Note: RedHat & CentOS Linux version 6 systems align automatically during OS installation. Previous versions and upgraded systems are not aligned.
4. Use an 8192-byte (8K) allocation unit (block size) when formatting virtual disks
a. Windows: DISKPART> format fs=NTFS label=<"label"> unit=8192 quick
b. Linux: mke2fs -b 8192 -L <"label"> /dev/<dev#>
5. Advanced NFS Settings for vSphere are available from VMware in KB #1007909. Heed all cautions and recommendations from VMware and Isilon.
Enable copy/paste between the VM console and the client by adding the following advanced settings to the test VMs’ configuration (.vmx):
isolation.tools.copy.disable = false
isolation.tools.paste.disable = false
9. Configure w32time server on W2K8-VM1
10. Configure w32time client on other hosts
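As a sketch of steps 9 and 10 (the upstream time source is illustrative, and the client list depends on your lab), the corresponding Windows commands are:
On W2K8-VM1 (the time server):
C:\> w32tm /config /manualpeerlist:"192.168.1.1" /syncfromflags:manual /reliable:yes /update
C:\> net stop w32time && net start w32time
On the remaining Windows hosts (clients):
C:\> w32tm /config /manualpeerlist:"<W2K8-VM1 IP>" /syncfromflags:manual /update
C:\> w32tm /resync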