
The Terabit Project at SuperComputing 2014

by Ed O'Connell on 11-20-2014 01:05 PM

SuperComputing 2014 (SC'14) is a unique event in that many of the exhibitors are National Research and Education Networks (NRENs), universities, national laboratories, research facilities, and public-sector research institutions presenting their projects and research in high-performance computing (HPC). HPC is really about enabling researchers to gather, share, and analyze copious amounts of data. In HPC research, "copious" means many petabytes: the data flows researchers share across great distances routinely reach tens, and sometimes hundreds, of terabytes. In essence, researchers in the HPC segment of the computer market (in medicine, meteorology, materials physics, and so on) were dealing with 'big data' long before the phrase became common.

 

One of the more interesting booths at SC'14 was that of the Terabit Project. At the ZIH TU Dresden booth, I was shown a network infrastructure and applications in production, delivering over 1 Terabit per second of network throughput over 1000 km (622 mi). The project focuses on demonstrating terabit network capacity for research applications that require very high data rates, so that data reaches researchers in a timely fashion without tying up the network for days or weeks. The team built the network using a combination of stringent network QoS, new Software-Defined Networking (SDN) approaches, and Network Functions Virtualization (NFV).
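To put those numbers in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic, ignoring protocol overhead, with a hypothetical 100 TB data set) of how long a large transfer takes at different line rates:

```python
# Back-of-the-envelope check (my own arithmetic, not from the project team):
# how long a hypothetical 100 TB data set takes to move at different rates.

def transfer_time_seconds(data_terabytes: float, rate_gbps: float) -> float:
    """Time to move `data_terabytes` of data over a `rate_gbps` link."""
    bits = data_terabytes * 8 * 1e12   # 1 TB = 8e12 bits (decimal units)
    return bits / (rate_gbps * 1e9)

for rate in (10, 100, 1000):           # 10G, 100G, and the 1 Tb/s trunk
    hours = transfer_time_seconds(100, rate) / 3600
    print(f"{rate:>5} Gb/s: {hours:6.2f} hours")

# 10 Gb/s  -> ~22.2 hours
# 100 Gb/s -> ~2.2 hours
# 1 Tb/s   -> ~13 minutes
```

A transfer that would occupy a 10G link for nearly a full day completes in about thirteen minutes at a terabit per second.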

 

How do you get to a Terabit?
The Terabit team designed the network infrastructure by trunking (link aggregation) ten 100G connections on modular routers into a single 1 Tb/s link. This aggregation of network capacity allowed traffic routing, QoS, and monitoring to be managed from a single GUI. Below is a picture I took of the monitoring screen (a Java-based application interacting with an sFlow collector) that provided a real-time view of all data flows on the Terabit network.

 

[Photo: sc2014 blog.jpg — the real-time flow-monitoring screen]
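For readers curious how a screen like the one above can show every flow in real time without inspecting every packet: sFlow samples packets at a known rate and scales the samples back up into a throughput estimate. Here is a simplified sketch of that estimation step (my own Python illustration; the team's actual application is Java-based, and the sampling rate and refresh interval below are assumptions):

```python
# Simplified sketch of the scale-up at the heart of sFlow-based monitoring:
# packet samples taken at a known sampling rate are multiplied back up to
# estimate per-flow throughput. Constants here are assumed, not published.

from collections import defaultdict

SAMPLING_RATE = 8192      # assumed: 1-in-8192 packet sampling on the routers
INTERVAL_SEC = 5          # assumed refresh interval of the monitoring screen

def estimate_throughput(samples):
    """samples: iterable of (flow_key, frame_length_bytes) from sFlow records.
    Returns estimated bits/s per flow over the interval."""
    byte_counts = defaultdict(int)
    for flow_key, frame_len in samples:
        byte_counts[flow_key] += frame_len
    return {
        flow: (count * SAMPLING_RATE * 8) / INTERVAL_SEC
        for flow, count in byte_counts.items()
    }

# Example: 1,200 sampled 1,500-byte frames from one flow in a 5 s window
# scale up to roughly (1200 * 8192 * 1500 * 8) / 5 ≈ 23.6 Gb/s.
samples = [(("10.0.0.1", "10.0.0.2"), 1500)] * 1200
for flow, bps in estimate_throughput(samples).items():
    print(flow, f"{bps / 1e9:.1f} Gb/s")
```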

 

To demonstrate research use cases at SC'14, the following applications ran over the Terabit network:


• Computing and rendering of a water-turbine flow model at TU Dresden, with transfer of the display data in very high resolution to a CAVE at HLRS in Stuttgart
• Distributed computing of climate models

 

Software Defined Networking (SDN) enables real-time bandwidth visibility and management
The Terabit team set out to use SDN to quickly create applications that orchestrate network traffic. The SDN bandwidth management application was developed at Technical University Dresden using the initial Terabit link between Dresden and Stuttgart. It lets users submit bandwidth requests via a web-based portal; the requested bandwidth is then provided to the user or application through automated configuration of the related QoS profiles. Users can also access the Terabit link and share the available bandwidth on a best-effort basis. The application uses the NETCONF interface (based on a YANG data model) on the Brocade network nodes to configure the needed QoS profiles, tracks all bandwidth reservations to avoid overbooking, and includes a traffic-monitoring (visualization) capability to showcase the impact of traffic management and to monitor link utilization (see photo above).
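As a rough illustration of the two responsibilities described above, here is a minimal Python sketch of reservation tracking with overbooking avoidance, plus a NETCONF push via the open-source ncclient library. The reservation logic is my own illustration, and the QoS payload is a hypothetical placeholder; the real application uses a Brocade YANG model whose structure isn't shown here:

```python
# Minimal sketch of (1) tracking reservations so the 1 Tb/s link is never
# overbooked, and (2) pushing a QoS profile to a router over NETCONF.
# The XML payload and credentials below are placeholders, not the real model.

from ncclient import manager  # pip install ncclient

LINK_CAPACITY_GBPS = 1000     # the Dresden-Stuttgart Terabit link

class ReservationTracker:
    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.reservations = {}    # user -> reserved Gb/s

    def request(self, user: str, gbps: float) -> bool:
        """Admit the request only if it fits in the remaining capacity."""
        if sum(self.reservations.values()) + gbps > self.capacity:
            return False          # would overbook the link: reject
        self.reservations[user] = self.reservations.get(user, 0) + gbps
        return True

# Hypothetical QoS profile; the namespace and element names are invented.
HYPOTHETICAL_QOS_CONFIG = """
<config>
  <qos-profile xmlns="urn:example:qos">
    <name>{user}</name>
    <guaranteed-rate-gbps>{gbps}</guaranteed-rate-gbps>
  </qos-profile>
</config>
"""

def apply_qos(host: str, user: str, gbps: float) -> None:
    """Push the QoS profile to a NETCONF-enabled node (placeholder creds)."""
    with manager.connect(host=host, port=830, username="admin",
                         password="secret", hostkey_verify=False) as m:
        m.edit_config(target="running",
                      config=HYPOTHETICAL_QOS_CONFIG.format(user=user, gbps=gbps))

tracker = ReservationTracker(LINK_CAPACITY_GBPS)
if tracker.request("climate-model", 400):
    apply_qos("router.example.net", "climate-model", 400)
```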

 

Brocade’s role in the Terabit Project
The Terabit team chose Brocade as a partner because Brocade networking supports open standards (e.g., OpenFlow 1.3) and APIs that enable effective design and control.
Brocade MLXe routers and VDX switches enabled the team to:

 

  • Design an infrastructure that could reliably deliver a terabit per second of network bandwidth over great distances
  • Enable the use of open-source tools to create applications for network bandwidth control, scheduling, and visibility of network traffic (a minimal sketch follows below).
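To give a flavor of what such open tooling can look like, here is a hedged sketch using the open-source Ryu controller to install an OpenFlow 1.3 meter that caps a traffic class at a given rate. This is a generic illustration of the technique, not the Terabit team's actual code:

```python
# Generic illustration (not the Terabit team's application): a Ryu controller
# app that installs an OpenFlow 1.3 meter capping a traffic class at 100 Gb/s.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MeterInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Drop traffic above 100 Gb/s; rate is in kb/s under OFPMF_KBPS.
        band = parser.OFPMeterBandDrop(rate=100_000_000, burst_size=1_000_000)
        dp.send_msg(parser.OFPMeterMod(datapath=dp,
                                       command=ofp.OFPMC_ADD,
                                       flags=ofp.OFPMF_KBPS,
                                       meter_id=1,
                                       bands=[band]))
        # Flow entries would then reference meter_id 1 (via an
        # OFPInstructionMeter instruction) to subject matched traffic to the cap.
```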

 

The Terabit Project is another example of how the underlying IP network is rapidly evolving as a result of expanding scale and changing needs. The Terabit team was able to build the infrastructure and applications in a relatively short time frame by focusing on an open SDN model for control and management. Brocade data center routing and switching enabled the Terabit team to deliver the scale and scope needed to empower scientists to do their research in a timely manner.

 

To find out more about the Terabit Project, including details on the design and applications, see the Light Reading story posted earlier this week.

 

Finally, New Orleans, affectionately known as the 'Big Easy,' was more like the 'Big Freezy' this week, with temperatures dropping into the low 40s amid high winds. I never made it to Central Grocery for a muffuletta. Maybe next time.

Comments
on 11-21-2014 02:05 PM

I am curious if the I/O is moving data sets between sites. If so, is the storage object, NAS, or something else?

 

Were there any discussions about moving the applications to the data sets instead, to reduce bandwidth requirements (and cost) for these types of scientific data analysis projects?
