Like many of you, I still get excited about cool new technologies. What really gets me going is when customers share that excitement. Recent news from VMworld put a spring in my step when I read that QLogic, in collaboration with Brocade, demonstrated their StorFusion VMID technology. So… what is this all about, and why would this excite my customers?
Innovation beyond eye-watering speed
Brocade continues to bake new goodies into successive generations of switching ASICs. Gen 5 (16Gbps) brought a set of capabilities under the heading of Fabric Vision. One of the many functions monitors SCSI release timing. In a complex shared storage environment, many physical servers access each storage port. If a “misbehaving” server/HBA is slow to release SCSI reservations, multiple applications can be impacted. Monitoring SCSI release timing enables storage admins to be alerted when a threshold is reached. In many cases this allows mitigating actions to be taken before application owners hit the phone or the send button on a nasty-gram.
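To make the idea concrete, here is a minimal sketch of threshold-based alerting on SCSI reservation hold times, done in software rather than silicon. All names, records, and the threshold value are illustrative assumptions, not Brocade's implementation or any Fabric Vision API:

```python
from collections import defaultdict

# Illustrative threshold: how long an initiator may hold a SCSI
# reservation before we consider it "slow to release" (value is made up).
RELEASE_THRESHOLD_MS = 50

def check_release_times(samples, threshold_ms=RELEASE_THRESHOLD_MS):
    """samples: list of (initiator_wwpn, hold_time_ms) tuples.

    Returns a dict mapping each initiator that exceeded the threshold
    to the list of offending hold times, so an admin can be alerted
    before the applications behind that host start to suffer.
    """
    offenders = defaultdict(list)
    for wwpn, hold_ms in samples:
        if hold_ms > threshold_ms:
            offenders[wwpn].append(hold_ms)
    return dict(offenders)

# Hypothetical measurements from four reservation/release cycles.
samples = [
    ("10:00:00:05:1e:aa:bb:01", 4.2),
    ("10:00:00:05:1e:aa:bb:02", 120.5),  # slow to release
    ("10:00:00:05:1e:aa:bb:02", 98.0),   # slow again -> repeat offender
    ("10:00:00:05:1e:aa:bb:03", 7.9),
]

print(check_release_times(samples))
# Only the second WWPN is flagged; the rest released promptly.
```

The real trick, of course, is that the switching ASICs extract those hold times from live traffic at line rate; the alerting logic itself is as simple as the sketch suggests.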
Think about that for a moment… out of thousands of data flows at 16Gbps, the switching chips are smart enough to pick up tiny anomalies in storage traffic. This still blows me away.
VMware environments have additional challenges. VM datastore volumes are often shared between many VMs. Brocade’s Gen 5 Flow Vision can point to a problematic physical host, but then it’s up to the server dudes to work out which VM is causing the problem. Brocade’s Gen 6 (32/128Gbps) ASICs have a new function baked in: VM Vision. VM Vision adds the capability to read VMID tags and bring our monitoring capabilities down to the individual VM level. This is what gets the server and storage admins excited. The time saved in identifying problem VMs and workloads (or fixing things before they become problems) promises to be very significant.
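The difference a VMID tag makes can be sketched in a few lines: once each flow record carries a VM identifier, the same monitoring data can be aggregated per VM instead of stopping at the physical host. The record fields and names below are hypothetical, not the VM Vision data model:

```python
from collections import defaultdict

# Hypothetical flow records. Without the "vmid" field, all three flows
# from esx-01 would be indistinguishable at the fabric level.
flows = [
    {"host": "esx-01", "vmid": "vm-web-1", "latency_ms": 2.1},
    {"host": "esx-01", "vmid": "vm-db-3",  "latency_ms": 48.7},
    {"host": "esx-01", "vmid": "vm-web-2", "latency_ms": 1.8},
    {"host": "esx-02", "vmid": "vm-db-3",  "latency_ms": 51.2},
]

def worst_vm(flows):
    """Average latency per VMID; return the VM with the highest average."""
    totals = defaultdict(lambda: [0.0, 0])
    for f in flows:
        entry = totals[f["vmid"]]
        entry[0] += f["latency_ms"]
        entry[1] += 1
    return max(totals, key=lambda vmid: totals[vmid][0] / totals[vmid][1])

print(worst_vm(flows))  # the problem VM, not just the host it runs on
```

Per-host monitoring would only say "esx-01 looks busy"; with the VMID in the record, the fabric can point straight at the noisy VM, even when it moves between hosts.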
Help is on the way
Whilst the switching fabric provides the core monitoring function, VM Vision requires complementary components – which is why reading about QLogic’s piece of the jigsaw put a spring in my step…
BTW, do I really need 16 and 32Gb FC for my VM storage?
I get asked this question almost every time I talk with customers who are building, scaling or upgrading storage infrastructure for their VM farms. Check out the following studies from VMware. The tests were run using a storage array with 8Gb FC interfaces. Moving to Gen 5 16Gb switching and HBAs resulted in almost double the performance for some workloads. A similar doubling was observed moving from Gen 5 to Gen 6 32Gb FC. CPU usage also decreased. This means you can run more workloads, faster, by upgrading your FC infrastructure.