

VDX 6720s in an HPC (High Performance Computing) Environment

by Jeremy_Roach on 02-02-2011 12:11 PM

I would like to discuss my recent experience rolling out the VDX 6720 (60-port) in an HPC cluster. The VDX 6720s are currently being used as top-of-rack switches without running VCS. Why? First and foremost, for their high port density. When using the VDX in this type of scenario, there are several things to take into consideration, including:

  • Flow control is turned off by default on all ports. Leaving it off can cause significant performance problems for the compute nodes connected behind the switch.
  • At this point in time there is no port-range selection command. If you need flow control enabled on 50 of the 60 ports, you will have to configure them one port at a time. Writing a script to generate the commands makes this task easier. The same applies whenever multiple ports share the same configuration.
  • If you have 10 GbE Twinax connections from your compute nodes, make sure they are active Twinax cables.
  • At this point in time there is no support for setting a default gateway on the in-band management interface. You can drop down to the underlying Linux shell, log in as root, and set a default route at the OS level. The only problem is that the route is not saved, so if the switch is reloaded, the default route has to be re-added once the switch comes back online. This step is required in order to manage the VDX from other VLANs.
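Since there is no port-range command, one workaround for the per-port flow-control configuration is to generate the CLI commands with a small shell script and paste the output into the switch console. A minimal sketch follows; the interface naming (`TenGigabitEthernet 1/0/N`) and the `qos flowcontrol tx on rx on` syntax are assumptions here, so verify both against the command reference for your NOS release before pasting:

```shell
#!/bin/sh
# Sketch: emit per-port flow-control configuration for 50 of the 60 ports.
# Port numbering and command syntax are assumptions -- check your NOS docs.
for port in $(seq 1 50); do
  printf 'interface TenGigabitEthernet 1/0/%d\n' "$port"
  printf ' qos flowcontrol tx on rx on\n'
  printf ' exit\n'
done
```

Redirect the output to a file, review it, and paste it into configuration mode; the same loop pattern works for any other setting that has to be repeated across many ports.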
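The default-route workaround described above can be sketched as follows. The gateway address is a placeholder, and the exact command to escape from the NOS CLI to the Linux shell varies by release, so treat this as an outline rather than exact commands:

```shell
# After dropping to the underlying Linux shell and logging in as root
# (the escape mechanism differs between releases -- check your docs),
# add a default route at the OS level. 10.10.10.1 is a placeholder gateway:
ip route add default via 10.10.10.1

# NOTE: this route is not written to any saved configuration. If the
# switch reloads, the route is lost and must be re-added once the
# switch is back online.
```

Because the route does not persist, it is worth keeping the command in your runbook (or a post-reload checklist) until in-band management gateway support is available.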


Thanks for reading, and look for more technical posts soon.