05-23-2018 01:10 PM
New to the forum and Brocade! So hello!
I'd like some advice on a proposed topology change we need to make to add resilience and ports.
We're concerned about connectivity between two sites. Currently there are a Fabric A and a Fabric B switch at the DC, connected via a single dark-fibre path through a DWDM MUX and Bluebell configuration into a single 5100, which has been split into two 20-port Virtual Fabric switches in the one chassis.
We've got a 2nd ISP, two new 5100s, and a 2nd DWDM MUX and Bluebell to add a completely isolated path from the DC to the office.
The proposal is to put 3 x trunked connections per fabric down each path (we have an upper limit of 6 spare MUX channels per Bluebell), so 6 connections per fabric across the two distinct paths, giving 24Gb per fabric per path and 96Gb in total across the two fabrics in a ring fabric configuration.
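As a quick back-of-the-envelope check of those figures (assuming 8Gb FC links, which matches the totals quoted):

```python
# Back-of-the-envelope check of the proposed bandwidth figures.
# Assumptions: 8Gb FC links, 3-link trunks, 2 fabrics, 2 diverse paths.
LINK_GBPS = 8
LINKS_PER_TRUNK = 3
FABRICS = 2
PATHS = 2

per_fabric_per_path = LINK_GBPS * LINKS_PER_TRUNK  # one trunk
per_path = per_fabric_per_path * FABRICS           # both fabrics on one path
total = per_path * PATHS                           # both paths combined

print(per_fabric_per_path)  # 24 Gb per fabric per path
print(per_path)             # 48 Gb per path
print(total)                # 96 Gb aggregate
```

The numbers line up with the proposal; as noted in the reply below, the effective throughput is still capped by the WAN link.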
We do have other switches connected to the DC 5100, which aren't in a ring fabric. Does that matter?
I've attached diagrams to (hopefully!) explain!
Thanks in advance for any support.
05-26-2018 12:51 PM - edited 05-29-2018 12:01 PM
I see that in the PoP A office you are removing the virtual fabric and adding switches; that way you have redundancy at the hardware level.
It looks like you are taking out 5100-3 and adding 5100-6 and 5100-7.
In your proposed topology I see that you have 6 links (3 x 8Gb per fabric) going down each of path A and path B. So on each path you should have 48Gb of bandwidth, and the total across both paths is 96Gb.
But please note that even though each path has 48Gbps, the effective throughput is limited by your actual WAN link bandwidth.
Regarding the other switches ISL'd to the 5100 but not on the ring: that shouldn't be a concern, since it has been that way in your current topology and there are no performance issues for devices connected to those switches when communicating with devices across the ring.
05-29-2018 12:57 AM - edited 05-29-2018 01:12 AM
One other question. We're using exchange-based routing, so with the proposed topology our tape traffic (which is split across the two fabrics) will run down the two diverse paths, depending on traffic etc.
The two paths differ in length by about 18km. Will that difference be an issue when the tape traffic arrives or should we consider sending all Fabric A traffic down one physical path and Fabric B down the other?
05-30-2018 01:35 PM
I couldn't find any document that specifically discusses backup over long distance using links of different lengths. As per the multipath diagram, the difference in length should be about 8km and not 18km (20km vs 28km).
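For a rough sense of scale, the skew between the two paths can be estimated from propagation delay in fibre, commonly approximated at about 5 microseconds per km. This is only a sketch using the 20km/28km figures from the diagram; the real delay depends on the fibre and DWDM equipment:

```python
# Rough estimate of the one-way latency skew between the two paths.
# Assumption: ~5 microseconds of propagation delay per km of fibre
# (a commonly quoted approximation; actual figures depend on the glass
# and the DWDM gear in the path).
US_PER_KM = 5.0

path_a_km = 20  # from the multipath diagram
path_b_km = 28

skew_us = (path_b_km - path_a_km) * US_PER_KM
print(skew_us)  # 40.0 -> roughly 40 microseconds of one-way skew
```

A skew on the order of tens of microseconds is small compared with typical exchange durations, which is consistent with the in-order-delivery notes below.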
As per Brocade SAN admin best practice guide
In-Order Delivery (IOD)
- In a stable fabric, frames are always delivered in order, even when the traffic between switches is shared among multiple paths (though it does not specifically mention paths of different lengths)
At the same time it also says
• Exchange-based routing:
Because different paths are chosen for different exchanges, this policy does not maintain the order of frames across exchanges
I also found the following in a Brocade community link
- A backup exchange can be very large and would stay on one single path. Seen this several times.
Following is my recommendation
- Once you set up new topology try running test backups and see if it works without any issues.
- If possible engage your backup vendor and see if they have any answer for this.
- If you want to make sure that the backup stream always follows a specific path, consider using Brocade Traffic Isolation zones
05-31-2018 08:16 AM
The LTO7 drive uses LTFS to manage the blocks on the tape with an indexed FS. This helps alleviate shoe-shine of the drive (tape movement back and forth to locate blocks and read/write as needed). It also has a generous front-end cache which is supposed to handle modest out-of-order delivery of frames. You may have an exchange that is out of order exchange-wise, but of course in order frame-wise. Hopefully the LTO7 FS will be able to handle it.
Does it work perfectly? No. Does it work in most cases? Yes. Tape backup creates huge, long block-chain writes. To optimize this in a SAN, one would select port-based routing, and assign static routes and backup static routes to ensure delivery in the event of a path failure. This is grinding, granular work, which Brocade doesn't really like to support, but it's available to you should you want to go as fast as the literature from BRCD and LTO say you can.
It does come down to trial and error, or trial and optimize. There are a lot of results for "LTO tape shoe-shining" on the web. Suggest you have a look at what some others have found. Making the data rate fit the tape write method will work wonders for throughput. But - a fair amount of work. The good folks at LTO are completely oblivious to shoe-shining, and they will tell you there is no such thing on an LTFS drive. That is - ahem, Bovine Scatology.
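The "make the data rate fit" point can be illustrated with a toy check: LTO drives speed-match down to some minimum streaming rate, and if the backup stream can't sustain that floor, the drive stops and repositions (shoe-shines). The rates below are illustrative assumptions, not vendor specs; check your drive's datasheet for the real speed-matching range:

```python
# Toy check: will a backup stream keep an LTO drive streaming?
# LTO drives speed-match down to a minimum rate; below that they must
# stop and reposition ("shoe-shine"). These figures are illustrative
# assumptions, not vendor specifications.
DRIVE_MAX_MBPS = 300  # assumed LTO7-class native rate
DRIVE_MIN_MBPS = 100  # assumed speed-matching floor

def will_shoeshine(stream_mbps):
    """True if the stream is too slow for the drive to keep streaming."""
    return stream_mbps < DRIVE_MIN_MBPS

print(will_shoeshine(80))   # True  -> expect shoe-shining
print(will_shoeshine(250))  # False -> drive can speed-match and stream
```

In practice this means profiling what your backup hosts can actually deliver per stream, then tuning stream counts or multiplexing so each drive stays inside its speed-matching range.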