The Evolution of Network Management

When I started in the networking industry in the 90s, I worked for a large consulting firm that helped manage a number of networks. A frequent customer request was to know what was going on in their network before an end user could call in to report a problem. Since that time—and not surprisingly—the desire to proactively monitor computing environments has only increased.

What Is Network Management?

The International Organization for Standardization (ISO) identifies five essential areas of network management: Fault, Configuration, Accounting, Performance, and Security (FCAPS).

Fault and configuration management

In my experience, the easier FCAPS components to implement are fault and configuration. Both are typically done using an in-band solution that engages the devices that run your network.

Fault management is often initiated at the management system using ICMP pings and SNMP, and may also include information pushed from the network devices using Syslog and SNMP Traps. The goal is to detect faults as they occur and initiate a manual or automated correction.
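As a rough sketch of the fault-detection loop described above, the following Python models polling a set of devices and alerting when a device changes state. The device name and the reachability check are stand-ins here; a real system would use ICMP pings or SNMP polls rather than an injected function.

```python
# Minimal sketch of a fault-management poll cycle. `is_reachable` stands in
# for a real ICMP/SNMP check; `last_state` persists between polls so that
# alerts fire only on state transitions, not on every cycle.

def poll_devices(devices, is_reachable, last_state):
    """Return alert strings for devices whose up/down state changed."""
    alerts = []
    for device in devices:
        up = is_reachable(device)
        was_up = last_state.get(device, True)  # assume up on first poll
        if was_up and not up:
            alerts.append(f"FAULT: {device} is unreachable")
        elif not was_up and up:
            alerts.append(f"CLEAR: {device} is reachable again")
        last_state[device] = up
    return alerts
```

In practice the alert list would feed a ticketing system or an automated remediation script; the transition-only logic is what keeps a flapping link from flooding the operator with duplicate alarms.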

For configuration management, the objective is to track network device configuration. Often, configuration management will make periodic backups of the configuration of each network device, and many network managers use this information to ensure that no unauthorized changes are made. Another common task in the configuration area is to ensure that the firmware or software running on the network components is kept up to date. In some configuration management systems—typically element management software from the network equipment vendor—software updates may be performed centrally.
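The unauthorized-change check described above amounts to comparing the latest backup against a freshly fetched copy of the configuration. A minimal sketch using only the Python standard library, with made-up configuration text standing in for real device output:

```python
# Sketch of configuration-drift detection: diff a stored backup against the
# current device configuration. Any output lines mean something changed.
import difflib

def config_diff(backup: str, current: str) -> list:
    """Return unified-diff lines; an empty list means no drift."""
    return list(difflib.unified_diff(
        backup.splitlines(), current.splitlines(),
        fromfile="backup", tofile="current", lineterm=""))

# Illustrative configs -- someone changed the SNMP community string.
backup = "hostname core-sw1\nsnmp-server community public RO\n"
current = "hostname core-sw1\nsnmp-server community s3cret RO\n"
changes = config_diff(backup, current)
```

A real system would fetch `current` over SSH or an API on a schedule and raise an alert whenever `changes` is non-empty.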

Security management

Security is the next most commonly implemented network management function and can be viewed as two parts: one is to ensure the security of the network devices; the other is to ensure the security of the computing environment.

Securing network devices is relatively straightforward: you apply firmware patches on a regular basis and centrally control access to the devices using RADIUS or TACACS+. These two services provide authentication (controlled access to the network device), authorization (what a particular user can do on the device), and accounting (a record of the changes a particular user makes).
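As an illustration, here is a hedged sketch of what pointing a device at a TACACS+ server can look like in classic Cisco IOS-style syntax; the server address and shared key are placeholders, and exact commands vary by platform and software version.

```
! Sketch only -- classic IOS-style AAA against a TACACS+ server,
! falling back to the local user database if the server is unreachable.
aaa new-model
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ local
aaa accounting commands 15 default start-stop group tacacs+
tacacs-server host 192.0.2.10 key MySharedSecret
```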

Securing the overall computing environment, by contrast, is more complex. Today, numerous tools are available for monitoring the computing environment to detect malicious activity that seeks to disrupt network operations, steal information, or control systems, and increasingly, tools are being developed to prevent such activity from starting in the first place. These tools are deployed within the network and may collect information from the devices that manage the network. However, to provide a complete security analysis, they also require visibility into the traffic traversing the network.

Performance management

The final area is performance management. While easy to understand, it is difficult to do. Over the last 15 years, many products have been developed to help network managers identify when the network is struggling to perform at an acceptable level. Regardless of the varied root causes of a performance issue, the impact to the end users is real.

Early knowledge of performance issues within a computing environment allows network managers to be proactive, solving minor issues before they become major. The devices that run the network provide critical information on their health; this information is typically collected in-band directly from the network devices themselves via SNMP or a proprietary API. Unfortunately, this data is not enough to provide an accurate, complete picture of the performance of the network or its applications. While a large number of software vendors have developed solutions to monitor the overall performance of the network and/or the critical applications running in it, these tools almost always rely on an examination of the packets traversing the network to provide the best possible analysis.
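To make the SNMP-based health data concrete, here is a small sketch of how two samples of an interface byte counter (such as the standard ifInOctets object) become a utilization percentage. The sample numbers are invented; note that 32-bit SNMP counters wrap, so the delta must be taken modulo 2**32.

```python
# Interface utilization from two SNMP counter samples. Counter32 values
# wrap at 2**32, so the delta is computed modulo that value.

COUNTER32_MAX = 2**32

def utilization_pct(octets_t0, octets_t1, seconds, if_speed_bps):
    """Percent link utilization between two octet-counter samples."""
    delta_octets = (octets_t1 - octets_t0) % COUNTER32_MAX  # handles wrap
    bits_per_sec = delta_octets * 8 / seconds
    return 100.0 * bits_per_sec / if_speed_bps

# 75 MB transferred in 60 s on a 100 Mb/s link -> 10% utilization
pct = utilization_pct(0, 75_000_000, 60, 100_000_000)
```

This is exactly the kind of device-reported metric that, as noted above, is useful but insufficient on its own: it tells you a link is busy, not which application or conversation is responsible.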

The challenge for security and performance monitoring of the computing environment is how to give these tools visibility across all the areas of the network that need to be monitored. Traditionally, these tools have been deployed either inline in protect mode at the Internet connection or out of band in monitor-only mode, connected to a SPAN port on a switch or router. We will talk more about the best ways to gain visibility into the network packets in the “Sources of Traffic” article that will appear later in this series.

The Next Step in Network Management: Packet Delivery Platforms

Device-based management is no longer sufficient. The next generation of performance and security tools require visibility into the packets passing through the network.

Enter Packet Delivery Platforms.

A Packet Delivery Platform provides the means to aggregate all sources of network traffic (inline bypass, SPANs, TAPs, virtual); send the right data to the right tool through filtering; clean up the data streams by transforming the packet flows; and, lastly, send the data to all the tools or tool groups that need to see those packets.
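The aggregate / filter / transform / fan-out steps above can be sketched as a toy model. Here packets are plain Python dicts and the tool names and filter rules are purely illustrative; a real platform operates on wire data in hardware.

```python
# Toy model of a Packet Delivery Platform's core data path: drop duplicate
# packets (e.g. the same packet captured from two SPAN sessions), then fan
# each remaining packet out to every tool whose filter matches it.
import hashlib

def deliver(packets, tool_filters):
    """Map each tool name to its deduplicated list of matching packets."""
    seen = set()
    out = {tool: [] for tool in tool_filters}
    for pkt in packets:
        # Fingerprint the packet contents to detect duplicates.
        digest = hashlib.sha256(repr(sorted(pkt.items())).encode()).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        for tool, matches in tool_filters.items():
            if matches(pkt):
                out[tool].append(pkt)
    return out
```

For example, an IDS might receive every packet while a VoIP monitor receives only UDP traffic; each tool sees exactly one copy of each packet, which is the point of the platform.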

Gone are the days of running out of SPAN ports. Gone are the days of having to upgrade tools to faster network interfaces because of a network upgrade. Gone are the days of overloading an analysis tool because it had to receive unfiltered, duplicate packets.

Instead, we welcome the world of Packet Delivery Platforms.

The Ultimate Network

Earlier this year I learned that one of my favorite technology industry conferences would be held in Austin, TX again. I immediately contacted my company’s events team to request to be a part of it, since I live just north of Austin. It turns out Gigamon had already signed up to be a Gold Contributor to SCinet.

What I really like about the SC conference is that it is more than just a lot of experts giving talks and more than just vendors trying to sell products on the show floor; it is a collaboration among peers who are seeking to push the limits of computing.

SCinet

A great example of how different this conference is can be found in the SCinet project. Each year, a one-of-a-kind network is created for the conference by volunteers. Planning begins over a year in advance and requires around 150 volunteers to bring it to fruition. While planning takes place for over a year, the actual network implementation spans about three weeks. The first week is staging: the equipment is inventoried, unboxed, racked, cabled, and powered on. The goal is to get the network up and functioning at a basic level. Then everyone goes home for a week. The week prior to the conference, everyone comes back and continues the work of configuring the equipment and software they are assigned. As the beginning of the conference draws near, the days get longer and the work more intense.

The goal of SCinet is to support the SC conference and showcase the best products, technology, and research in high performance computing, networking, storage, and analysis. The best engineers from universities, industry, and government research labs partner with vendors, creating a unique environment that tests the boundaries of technology and engineering. SCinet provides wireless connectivity to the conference attendees and high-speed links to exhibitor booths on the show floor. Internet connectivity is provided by multiple high-speed networks like LEARN, ESnet, Internet2, and CenturyLink—aggregating more bandwidth than any other conference. SCinet offers an opportunity for vendors to demonstrate their products in a computing environment where everyone is trying to demonstrate just how high-performance their products are. Vendors from all parts of the computing industry donate or loan their products toward building this one-of-a-kind network.

Why Am I Here?

I am here because Gigamon is one of those vendors providing its products to SCinet. As part of Gigamon’s involvement, they needed a technical resource to help get our product installed and configured to work with the rest of the network. I happily volunteered. I thought I would just be working with our products, but in reality I am an official member of the SCinet security team, whose responsibility is to ensure that SCinet is not used for nefarious purposes. Think of SCinet as a university-like network; there are very few limits on what can be done within it. Since this is a network designed to show off high performance computing, security could be viewed as a potential roadblock to that goal, so in the security plan for SCinet, great care was taken to ensure that the security monitoring was as unobtrusive as possible. Gigamon is providing TAPs for the network (four 100 Gig and thirty-six 10 Gig) and two of our GigaVUE chassis (GigaVUE-HC2 and GigaVUE-HD8) that are clustered together. Once we aggregate all these TAPs together, we will deduplicate the data and send it to security devices from Reservoir Labs running Bro and to a firewall sandwich featuring Dell’s SonicWALL firewall. In addition to providing wire data to these two products, I will also be generating 1:1 NetFlow data and sending it to Gigamon’s Fabric Manager running FabricVUE Traffic Analyzer, as well as to InMon and Splunk.
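The 1:1 (unsampled) NetFlow generation mentioned above boils down to aggregating every packet into a flow record keyed by the usual 5-tuple. A minimal sketch, with illustrative field names rather than any vendor’s actual schema:

```python
# Sketch of unsampled flow-record generation: every packet updates the
# packet and byte counters of the flow record for its 5-tuple.
from collections import defaultdict

def build_flows(packets):
    """Aggregate packets into {5-tuple: {"packets": n, "bytes": b}}."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["length"]
    return dict(flows)
```

Exporting these records instead of raw packets is what lets tools like traffic analyzers and Splunk reason about conversations without having to store every byte on the wire.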

So in addition to working with some pretty cool Gigamon products, I also have the opportunity to work with some outstanding individuals from academia, commercial, and government research labs utilizing some of the best in class security and performance tools.  What more could a systems engineer ask for?

SC, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage, and analysis lead to advances in scientific discovery, research, education, and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations, and opportunities for hands-on learning. For more information on SC15, please visit: http://sc15.supercomputing.org/

The Network Just Works

At least that is what most people think. I was talking this afternoon with a college student who is doing some work with my father at his electrical service business. He is working toward an EE degree, and to him the network just works. He is not alone in that viewpoint.

When I started in the networking industry, the network didn’t just work. It worked most of the time, but problems arose often, and repairing the network was not always an easy task. Network troubleshooting and repair often stretched into multiple hours, with engineers from the network group, the server group, and the tools group all viewing their pieces of the puzzle to ensure that the problem was not with their stuff.

Luckily, times have changed, and the components of the network are now far more robust than they were 25 years ago. Not only are the network devices, servers, and applications better designed and tested, they can also be monitored with much greater ease. Many advances in quality assurance have given hardware and software components a much better chance of doing their job without disrupting the computing environment.

Monitoring the health of the computing environment has taken monumental leaps forward as well. Back in the day, monitoring was done in a very manual fashion, with people toting around Network General Sniffers (anyone remember the Dolch?) ready to hook up to the network to see what was going on in case of a catastrophic failure. What started out as a pure focus on fault management, using something like HP OpenView, Syslog, or SNMP traps, has evolved into full-blown computing environment health assessment, security and compliance monitoring, and performance tuning.

Where there were once only a dozen tools on the market for fault, configuration, accounting, performance, and security management, that number has now tripled or even quadrupled. I believe this shows the critical importance that companies and individuals place on the computing environment that facilitates their access to the information necessary to do their jobs, pay their bills, interact with family and friends, shop, and maintain their social lives. The network is no longer a nice thing to have; it is a required tool for doing business, much like the telephone, typewriter, and notepad were years ago.

Which brings me to the point of this article: the network has evolved into a computing environment that is an integral part of most companies, and there is now a plethora of network monitoring tools to help those responsible keep a very important company asset in tip-top shape. As our networks continue to get faster and process more and more data, we need to ensure that all the data required by these monitoring tools gets to its intended destination without overloading the tools.

This is where Network Packet Brokers come into play, and why this new market exists. Network switches and routers provide some of the information these tools require, but not all. That is because the primary purpose of networking devices is to move production traffic from clients to servers; providing a copy of that data to a network tool is of secondary importance and thus has a lower priority.

Network Packet Brokers provide a means to obtain the required data from various points throughout the network (via network TAPs, which are preferred, or SPAN ports) and transport a copy of the collected data to the tools that need it without dropping a single bit.

Network Packet Brokers are being deployed in large networks with increasing frequency today, because the network is a critical part of operating your business. Its health needs to be monitored to prevent a disruption in service, and the security of your computing environment needs to be ensured, both to keep your confidential data confidential and to give your clients confidence that the data they have entrusted to you is secure.

What do you think, are network packet brokers a technology whose time has come? Or is it a waste of precious IT resources?