
Integrating SLX Telemetry Streaming with Splunk Enterprise

by asardell, 08-10-2017 (edited 08-10-2017)

Combined With Analytics, Data Streaming Forms an Invaluable Visibility Solution


In a previous blog, Deepak Patil discussed the theory and operations behind using data streaming to provide network visibility and analytics. That article also detailed how SLX products support data streaming.


Here, we’ll detail a concrete example of how you can build a complete solution (illustrated in this video) using SLX products and two collectors configured in Splunk Enterprise. One collector provides an interface profile and the other provides a system profile.


Many of our enterprise customers are focused on operational capabilities and best practices around reporting from their network elements, and using this demo is a great way to bootstrap that effort. The plugins to Splunk Enterprise can be found in this GitHub link so you can evaluate the effort involved, or implement it in your own lab.  


Background and Purpose of Streaming


Data streaming allows you to collect real-time data to support network visibility and analytics, and does so in a much more continuous and useful manner than traditional collection methods such as SNMP polling.


This data can then be analyzed to get a clear picture of the state of the system through parameters such as CPU and memory usage, security breaches, interface usage, and changes to logical topology. This analysis can in turn be used by automation tools for auto-remediation.


What is Being Demonstrated?


The intent of the telemetry/analytics demo discussed here is as follows:


  1. Integrate telemetry data streaming (made available via the SLX Insight Architecture) with Splunk Enterprise
  2. Configure Splunk Enterprise to receive and interpret the telemetry response
  3. Define the “receivers” of the data through collector profiles
  4. Visualize the collected data through reports and dashboards


Fulfilling the third goal involves:


  • Providing collector details such as IP addresses and port numbers
  • Defining profiles of the data to be collected and analyzed

Streaming to a Client Collector


As described here, Brocade provides two models of streaming: data can be streamed to a collector, or a gRPC client can request data, which is then pushed to the client at the desired interval. This demo uses the collector model (Figure 1), wherein the network element itself (an SLX 9540) acts as a client and streams data to the desired collector over TCP.



Figure 1: Collector Model Showing Two Collectors


The demo configures two collectors, providing information such as the IP address, TCP/UDP ports for connectivity, the streaming interval, and a telemetry profile that identifies which data to stream to each collector.
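As a rough sketch of what this configuration looks like on the SLX element (exact CLI syntax varies by SLX-OS release, so treat the profile name, collector name, address, and port below as placeholders):

```
telemetry profile system-utilization default_system_utilization
 interval 60
!
telemetry collector splunk-system
 ip 10.1.1.20 port 50555
 profile system-utilization default_system_utilization
 activate
```

The profile selects which attributes are streamed and how often; the collector stanza tells the switch where to deliver them.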


Splunk Collection, Analysis, and Reporting


After the Splunk collector receives the data, it performs necessary conversions on it (to JSON, for instance) so that Splunk can understand it. Then the collector delivers the converted payload into Splunk. At that point, a search application in Splunk can view the data in its raw form. 
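The conversion step can be sketched as follows. This is a simplified stand-in, not the plugin's actual code (which is in the linked GitHub repo): the real collector first decodes GPB frames off the TCP stream, and the HEC-style JSON envelope and field names here are assumptions for illustration.

```python
import json
import time

def to_splunk_event(record, sourcetype="slx:telemetry"):
    """Wrap a decoded telemetry record (a dict of metric name -> value)
    in a JSON envelope of the kind Splunk's HTTP Event Collector accepts.

    The real collector decodes GPB frames off the TCP stream first; the
    dict input here is a simplified stand-in for that decoded record."""
    return json.dumps({
        "time": time.time(),
        "sourcetype": sourcetype,
        "event": record,
    })

# Example: a decoded system-utilization sample (values are illustrative)
payload = to_splunk_event({"TotalSystemMemory": 16384, "TotalFreeMemory": 9216})
```

Once the payload is in this shape, posting it to Splunk's HTTP Event Collector endpoint makes it searchable in its raw form.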


Splunk users can then create specific reports and dashboards (Figure 2) to view the data in useful formats.


Figure 2: Defining Reports and Dashboards


The reports shown here can give, at a point in time:


  • CPU utilization
  • Cached and free memory
  • Total memory used


Dashboards are also available so you can continuously monitor:


  • Total memory utilization
  • Traffic flow on selected interfaces

Sample Reports for Memory and Traffic Flow


Reports with multiple attributes give you more information at a single glance. For instance, in the following report (Figure 3), cache memory and total free memory (at points in time) are plotted on the same graph.




Figure 3: Cached Memory and Free Memory Shown Over Time


To analyze activity on the data plane, a report that allows you to see traffic flow on different interfaces is extremely useful. In the following dashboard (Figure 4), you can choose different interfaces and time periods.




Figure 4: Traffic Flow on Selected Interfaces


Both received (InOctets) and sent (OutOctets) traffic counters are plotted.
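Since InOctets and OutOctets are cumulative counters, a dashboard that plots traffic flow typically converts consecutive samples into a rate. A minimal sketch of that calculation (the 64-bit counter width and the sample values are assumptions):

```python
def octets_to_mbps(prev_octets, curr_octets, interval_seconds, counter_bits=64):
    """Convert two cumulative octet counter samples into a rate in Mbps,
    accounting for a possible counter wrap between the two samples."""
    delta = curr_octets - prev_octets
    if delta < 0:  # counter wrapped past its maximum between samples
        delta += 2 ** counter_bits
    # octets -> bits, then per-second, then scale to megabits
    return (delta * 8) / (interval_seconds * 1_000_000)

# Two InOctets samples taken 30 seconds apart
rate = octets_to_mbps(1_000_000_000, 1_375_000_000, 30)
```

Plotting this derived rate for both counters, per interface, yields the kind of view shown in Figure 4.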

Next Steps: New Reports, Event Logs, Automation


We are continually enhancing this integration to show new reports and dashboards, including more detailed throughput information, errored and discarded packets, and event logging to help with auditing and analytics. Additionally, data streaming will continue to be enhanced in the future to stream new attributes to address various use cases.


Finally, Workflow Composer integration can be used for event-based automation tied to analytics of the streamed data (Figure 5).  






Figure 5: Workflow Composer Integration


As an event-based automation engine, Workflow Composer can accept events from the collection unit (SLX in this case) or analytics engine (Splunk in this case) and can run workflows for automation actions. These could include remediation actions to the SLX element.


A typical way to set up a remediation stream would be to use triggering events from Splunk. For example, you might first forward all logs to Splunk, then trigger events when specific patterns are matched.


You can do this by creating a saved search in Splunk and configuring it to send a webhook request to StackStorm when the search matches. After that, you would configure StackStorm to run a workflow when that event occurs. See here for a walkthrough example.
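To illustrate the dispatch side, the logic behind such a StackStorm rule might map an incoming alert to a workflow like this. Splunk's webhook alert action POSTs JSON that includes the saved search name; the search names, workflow names, and mapping below are all hypothetical.

```python
import json

def pick_remediation(webhook_body):
    """Map a Splunk alert webhook payload to a remediation workflow name.

    Splunk's webhook alert action sends JSON containing a "search_name"
    field; the workflow names returned here are hypothetical examples."""
    alert = json.loads(webhook_body)
    search_name = alert.get("search_name", "")
    if "high_cpu" in search_name:
        return "slx.throttle_sflow_sampling"   # hypothetical workflow
    if "interface_down" in search_name:
        return "slx.bounce_interface"          # hypothetical workflow
    return None  # no automated action; leave for an operator

action = pick_remediation('{"search_name": "slx_high_cpu_alert"}')
```

In a real deployment, this mapping lives in StackStorm rules and the returned workflow would issue remediation commands back to the SLX element.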


Workflow Composer can also be used to statically or dynamically configure the data streaming parameters on SLX. For example, on the occurrence of an event, Workflow Composer could tell the SLX element to start streaming specific data.

Call to Action


Follow the links above, or consult the following videos, for more insight into the importance of network visibility:



Also, the following blogs from our community provide concrete examples of application or network remediation using visibility and automation:


And as always, contact your Brocade Sales or SE representative for more information.