07-10-2018 12:30 AM
I am currently at a site where the avg read latency we get from an all flash array is often 1-2.x ms.
That is for small reads in the 4k-64k range (host filesystem striping).
One difference compared to other sites, where read latency is noticeably lower: the directors here are using virtual fabrics (VF).
So, my question: is it possible that VF imposes a latency penalty?
Thank you for sharing your thoughts
07-10-2018 12:31 PM
I've never noticed that VF-enabled switches have higher latency than those without VF, and I used to manage an environment with 50+ non-VF switches and 50+ VF-enabled switches.
The ASIC's own latency is less than a microsecond. If you have a multi-ASIC switch (i.e. a DCX director or a 2U fixed-port switch), a frame might traverse three ASICs from one port to another, so the total latency is tripled, but it is still only on the order of 2 microseconds. So the question is how to measure such low latency. We normally operate in milliseconds; nowadays we can get down to 0.1 millisecond, but a 2-microsecond latency would still be far too small to detect.
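A quick back-of-the-envelope sketch of those numbers (the per-ASIC latency and hop count below are assumptions taken from this post, not measurements from your fabric), showing how small the worst-case switch contribution is next to the observed read latency:

```python
# Rough sanity check: compare the worst-case switch-path latency
# (assumed: <1 us per ASIC hop, 3 hops across a multi-ASIC director)
# to the observed read latency (middle of the reported 1-2.x ms range).

US_PER_MS = 1000

asic_latency_us = 1.0              # assumed upper bound per ASIC hop
asic_hops = 3                      # worst case through a director
switch_path_us = asic_latency_us * asic_hops

observed_read_ms = 1.5             # assumed midpoint of 1-2.x ms
observed_read_us = observed_read_ms * US_PER_MS

share = switch_path_us / observed_read_us
print(f"switch path: {switch_path_us:.1f} us of {observed_read_us:.0f} us "
      f"observed ({share:.2%})")
```

Even with pessimistic assumptions, the switch path accounts for well under one percent of the latency you are seeing, so the bottleneck is almost certainly elsewhere.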
When you say read latency is lower at the other sites - what values are you actually comparing?