on 05-01-2017 11:19 AM - last edited on 06-23-2017 01:25 PM by jason_cmgr
My intent in putting this post together is not to bore you with mundane talk about how we can make networks great again. Rather, I’m going to quickly review the challenges I’ve personally faced operating networks over the past 20+ years and offer a revolutionary solution for networkers. I’ll show you how this works in a real-world scenario that solves the “needle in the haystack” challenge.
Three Operational Challenges
The Network is Slow: Operationally, simple problems like user access issues, links that have gone down, or switches or routers that have gone belly up can be quickly identified and remediated by network engineers targeting specific points in the infrastructure.
What happens with my favorite support call? When the complaint is that the network is slow, the real fun begins. Engineers sift through the latest batch of syslogs and check the NMS for error messages or alerts that may have been logged. Nothing found? Then it’s time to go element by element to see if anything along the path is behaving badly. Chances are that nothing is going to immediately jump out, leading to a painstaking search through the entire infrastructure to identify and resolve the cause of the slowdown.
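A first step toward automating this kind of triage is scripting the log sift itself. The sketch below is purely illustrative — the device names, the captured syslog lines, and the Cisco-style `%FACILITY-SEVERITY-MNEMONIC` message format are assumptions, not taken from any specific vendor’s output — but it shows the idea: scan syslog entries for the devices along the suspect path and surface only the high-severity messages.

```python
import re

# Hypothetical syslog capture from devices along the path under investigation.
SYSLOG = """\
May  1 11:02:13 core-rtr1 %LINK-3-UPDOWN: Interface GigabitEthernet0/1, changed state to down
May  1 11:02:14 core-rtr1 %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/1, changed state to down
May  1 11:05:40 edge-sw2 %SYS-5-CONFIG_I: Configured from console by admin
"""

def flag_errors(syslog_text, path_devices, max_severity=3):
    """Return (device, message) pairs at or below max_severity (lower = worse)."""
    hits = []
    for line in syslog_text.splitlines():
        # Match "<hostname> %FACILITY-SEVERITY-MNEMONIC: <text>".
        m = re.search(r"\s(\S+)\s%(\w+)-(\d)-(\w+): (.*)", line)
        if not m:
            continue
        device, facility, severity, mnemonic, msg = m.groups()
        if device in path_devices and int(severity) <= max_severity:
            hits.append((device, f"{facility}-{severity}-{mnemonic}: {msg}"))
    return hits

alerts = flag_errors(SYSLOG, {"core-rtr1", "edge-sw2"})
for device, msg in alerts:
    print(device, msg)
```

Run against the sample capture, only the severity-3 link-down event on `core-rtr1` is flagged; the informational (severity-5) entries are filtered out, which is exactly the noise reduction an engineer wants before walking the path element by element.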
Border Gateway Protocol (BGP) Peer Goes Down: Another one of my favorite challenges is when a long-established BGP peer goes down overnight. Depending on peering relationships and network failover, this could be a big deal. For instance, if a customer uses BGP peer A for their primary network traffic and has a contract with BGP peer B as a backup connection, failing over comes at a hefty usage cost. The longer they are on BGP peer B, the more money they are paying for this service. Service providers typically staff BGP experts 24x7, but most federal agencies do not and cannot afford to do so. This means costs are mounting as the issue goes either unnoticed or unresolved until appropriate resources arrive. Worst case scenario…you’re in Texas and the outage is in Minot, North Dakota. Come on, tell me some of you haven’t been in this situation before.
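Catching a downed peer before the billing meter runs is straightforward to automate. The sketch below is a minimal illustration, not a production monitor: it parses a captured `show bgp summary`-style table (the column layout varies by vendor; the neighbor addresses here are documentation examples) and flags any session whose State/PfxRcd field is not a prefix count — the conventional sign that a session is not Established.

```python
# Hypothetical "show bgp summary"-style output captured from a router.
BGP_SUMMARY = """\
Neighbor        V    AS    MsgRcvd MsgSent   Up/Down  State/PfxRcd
192.0.2.1       4 65001     120340  118221    8w2d          512000
198.51.100.7    4 65002       1043    1100   00:12:05        Active
"""

def down_peers(summary_text):
    """Return (neighbor, state) for sessions not in Established state."""
    peers = []
    for line in summary_text.splitlines():
        fields = line.split()
        if len(fields) < 7 or fields[0] == "Neighbor":  # skip header/short lines
            continue
        neighbor, state = fields[0], fields[-1]
        if not state.isdigit():  # Established sessions report a numeric prefix count
            peers.append((neighbor, state))
    return peers

for neighbor, state in down_peers(BGP_SUMMARY):
    print(f"ALERT: BGP peer {neighbor} is {state} -- traffic may be on the backup path")
```

Wired to a scheduler and a paging system, even this crude check turns the “unnoticed until morning” scenario into an overnight alert, which matters most to the agencies that can’t staff BGP experts around the clock.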
Command Line Interface (CLI) is Cumbersome: If you’ve been following the Federal Insights Tech Corner, then you may have read through the most recent Tech Corner post on how the CLI is dead. The piece explains why using CLI and point management tools are inefficient and operationally cumbersome. I’m not going to belabor the fact that this is correct, but want to present the current widespread use of CLI as another operational challenge agencies face.
by kevin.deterra 02-02-2017 12:57 PM - edited 02-08-2017 08:34 AM
Open source software and open standards continue to rapidly evolve data center technologies, much in the same way that Linux and Android have enhanced our lives over the last decade. Thanks to them, it’s possible to order a pizza from a top-ranked local shop on the way home from work or to find the closest gas station on the way to the airport in an unfamiliar city.
The agility these tools enable on a personal level can be brought to government and business through OpenStack and Software Defined Networking (SDN), making an impact on citizens and warfighters that goes far beyond the convenience of ordering a pizza. In government, what open source technology makes possible can help mitigate security concerns or maximize agency cost savings. Agility and customization are possible as a result of virtualization and open source, both of which are built on open standards.
This blog will cover a range of open source tools that can help make new possibilities a reality for government and will illustrate how they work together to provide a flexible, virtualized environment.
Technological advances in all areas across the federal government have changed the way agencies work and interact with citizens. For government agencies to keep pace with technological innovation, network modernization and a transition away from hardware-centric data centers must be a top priority.
Hardware-centric legacy data centers were not built to keep pace with the needs of modern IT, and they make provisioning new technology slow, expensive, and error-prone. This hinders innovation in the era of mobile, social, cloud, and big data, and may even lead employees to turn elsewhere for services when delays and other issues impede productivity.
California’s Department of Water Resources (DWR) is one example of an organization that was held back by its legacy networks and found a solution through a Software-Defined Data Center (SDDC). Challenges managing data center security policies and enabling efficient network provisioning negatively impacted DWR employees’ ability to quickly access the applications they needed to do their jobs. The challenges DWR faced are all too common in agencies across the federal government as well.