06-03-2011 07:40 AM
I want to preface all this with the fact that we are new at this, so I hope everyone can bear with us.
We have 7x IBM servers (mostly x3650 M3), a DS3512 + EXP3512 for storage, and a single SAN24B-4.
On each host we manage VMs, eventually moving everything into a 2-node and a 5-node cluster.
As I understand we can only manage a single fabric with DCFM Pro, which is fine.
We're introducing a second SAN24B-4 for redundancy; however, I can't figure out how it will be recognized within DCFM.
Each host has either a dual-channel HBA or two single-channel HBAs.
Until now we have only used one port per host to the current switch, along with a single port from each controller on the DS3512.
We are not licensed for ISL trunking between the two fibre switches.
We have 16 ports licensed on each switch.
I read somewhere that the fabric takes time to recognize the new switch.
The new switch has not been configured with any zones at this point.
How do we achieve redundancy with the two switches?
The way it was presented to us, we would add the second fibre switch and cross-connect everything.
The diagram depicted the following:
From controller A and B on the DS3512, one fibre connection to each switch.
From each host, one fibre connection to each switch.
How does the host know which path to use to communicate with the storage?
Even though the hosts are ultimately connected to the same storage at the other end, how does that work if they are passing through seemingly independent switches? I understand that MPIO has policies to control communication, but I need more info.
We have Web Tools access to each switch.
We have DCFM Pro, which has a zone config incorporating the initial switch only.
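Since there are no ISLs between the two switches, the new SAN24B-4 will form its own independent fabric, and it needs its own zone configuration. As an illustration only, single-initiator zoning on the second switch could be set up from the Fabric OS CLI roughly like this (the zone/config names and WWPNs below are placeholders; substitute the actual WWPNs of your HBA ports and DS3512 controller ports, which you can read from Web Tools):

```shell
# On the NEW switch (fabric B) -- all WWPNs here are made-up placeholders
zonecreate "host1_ds3512", "10:00:00:00:c9:00:00:01; 20:24:00:a0:b8:00:00:02"
zonecreate "host2_ds3512", "10:00:00:00:c9:00:00:03; 20:24:00:a0:b8:00:00:02"

# Collect the zones into a configuration, save it, and activate it
cfgcreate "fabricB_cfg", "host1_ds3512; host2_ds3512"
cfgsave
cfgenable "fabricB_cfg"
```

The same zoning can also be done graphically through Web Tools or DCFM; one zone per host initiator paired with the storage target ports is the usual convention.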
06-04-2011 12:35 AM
Welcome to the FC world.
MPIO: In a dual-fabric configuration you need MPIO software. Some operating systems do not provide MPIO out of the box (Windows 2003, for example).
VMware has MPIO on board. You can configure Failover or Round Robin. Failover uses only a single path; only if that path goes down will the second link be used.
Round Robin sends one IO over the first path, the next IO over the second path, and so on. I am not sure whether VMware 4.x has an extended IO policy, meaning it would send a number of IOs over one path before switching to the next.
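To make the difference between the two policies concrete, here is a toy simulation (illustration only; real MPIO lives inside the OS or hypervisor storage stack, and the path names are made up):

```python
# Toy model of two MPIO path-selection policies across two fabrics.

def failover(paths, n_ios):
    """Send every IO down the first healthy path; switch only on outage."""
    used = []
    for _ in range(n_ios):
        active = next(p for p, healthy in paths.items() if healthy)
        used.append(active)
    return used

def round_robin(paths, n_ios):
    """Alternate IOs across all currently healthy paths."""
    healthy = [p for p, ok in paths.items() if ok]
    return [healthy[i % len(healthy)] for i in range(n_ios)]

paths = {"fabricA": True, "fabricB": True}
print(failover(paths, 4))     # all IOs go via fabricA
print(round_robin(paths, 4))  # IOs alternate between fabricA and fabricB

paths["fabricA"] = False      # simulate an outage of switch A
print(failover(paths, 2))     # failover now sends IOs via fabricB
```

The point of the sketch: with Failover, fabric B sits idle until fabric A breaks; with Round Robin, both fabrics carry traffic all the time.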
I would recommend to connect storage controller A to switch A and controller B to switch B.
But to be honest, I do not have any IBM storage arrays in production.
My reason for this cabling is that when you update the controller firmware, each controller has to reboot, and with cross-connected controllers you would have an interruption in both fabrics at the same time.
If you have more questions please let me know; otherwise please mark the thread as answered.