02-03-2016 04:37 AM - edited 02-03-2016 05:06 AM
I've got quite an open-ended question - really just looking for opinions out there.
If you had a totally clean slate and were starting again from scratch, would you use QoS in your SAN fabrics? We don't, and we're not about to start using it on our existing SANs, but I have been wondering whether it's something we ought to consider for the future.
What are the Pros and Cons? Does it really protect live data from test data? Is there an overhead that can cause other problems?
Thanks in advance!
02-03-2016 06:21 AM
Ages ago I wrote an article about it: https://www.ibm.com/developerworks/community/blogs
But given that you're starting from scratch, it might make sense to use it. On the Gen 5 (16G) platform, which already has plenty of buffers built into the hardware, plus the portcfgeportcredits command, the reservation of those precious buffers doesn't play such a big role anymore. In addition, there are features in the latest FOS versions, like Slow Drain Device Quarantine, that rely on QoS (AFAIK), and others might come in future releases.
So yeah, if you think the admins in your company will work closely and cooperatively together to determine the best priority for each flow, QoS can help you.
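For reference, on Brocade FOS the usual way to assign a flow to a priority is through QoS zone name prefixes: QOSH_ for high and QOSL_ for low, with everything else defaulting to medium. A rough sketch (the port number, zone names, WWNs, and config name below are made up for illustration):

```
# Enable QoS on the ISL port carrying the traffic (port 2/15 is an example)
portcfgqos --enable 2/15

# High-priority flow: the QOSH_ prefix marks this zone's traffic as high priority
zonecreate "QOSH_prod_db", "10:00:00:05:1e:01:02:03;10:00:00:05:1e:04:05:06"

# Low-priority flow: the QOSL_ prefix marks this zone's traffic as low priority
zonecreate "QOSL_dev_test", "10:00:00:05:1e:07:08:09;10:00:00:05:1e:0a:0b:0c"

# Add both zones to the active configuration and enable it
cfgadd "prod_cfg", "QOSH_prod_db;QOSL_dev_test"
cfgenable "prod_cfg"
```

Any device pair not covered by a QOSH_/QOSL_ zone simply stays at the default medium priority, which is why the scheme degrades gracefully if you only prioritise a handful of flows.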
02-04-2016 02:04 AM
Thanks, that's really helpful. I guess the key thing is getting the priority of each server right, because the obvious divisions - like live vs dev, or highest service level vs lower service levels - could still leave too much in the top QoS group. Our philosophy up to now, though we realise it has some flaws, is that it's better for everyone to be in one big group (in disk terms as well) than in lots of little groups, as then there's more wiggle room for everyone.
Is there any overhead to using QoS? Any sort of extra processing on the switch that could be detrimental?
02-04-2016 03:02 PM