The Solid Foundation of Packet Delivery Platforms

One of the most important steps in designing and deploying a Packet Delivery Platform is determining where and how to collect packets from across your computing environment. No matter how great your security or performance tools are, if you don't provide them with the right packets, they won't be able to effectively do their job of monitoring and protecting your infrastructure. A packet delivery platform is designed from the ground up to collect packets of interest from your entire computing environment; filter and transform them; and then deliver them to the tools that will process them.

This first step in the design process builds a solid foundation for the packet delivery platform.  In this article we will explore the process of identifying the traffic of interest and figuring out where in your network you can collect those packets.

Which Packets Do I Need to Collect?

It really depends on the tools you are using to monitor and secure your computing environment; these applications dictate what traffic you need to collect. If you have tools that analyze packets to report on application or network performance, or to troubleshoot problems, you will likely want to collect data on the uplinks from the access layer to the distribution layer, or perhaps from the distribution layer to the core. The goal is to see the packets flowing between end users and the servers they commonly access, whether in your corporate datacenter or on the Internet, and those packets typically traverse these uplinks.

For security tools, the critical parts of your computing environment are the links to the Internet, the data center core to aggregation layer links within your datacenter, the virtual switches in your private cloud and the packets flowing to or from your servers in the public cloud.  There are many different types of security tools available in the marketplace, and each has specific data collection preferences. Some tools focus on data loss prevention (DLP), intrusion and malware detection and protection; others on application, Web, database, and file security.  For example, if you are concerned about malware and intrusion protection or perhaps DLP, you should collect data from your Internet connections and data center links containing packets destined for the file and mail servers. If you are concerned about malicious activity within your datacenter (DLP, IDS, data exfiltration), you will want to monitor east-west (server to server) traffic by collecting packets from the links that carry server-to-server traffic as well as the virtual switches within your private cloud.  As the saying goes, you can’t protect against threats that you can’t see.

Though it is beyond the scope of this article to discuss the specific data needs of each type of security tool, I advise learning the data needs of your tools to ensure that you are collecting the packets that matter. To maximize their effectiveness and protect your network from performance problems or security threats, it is best to collect and deliver every packet a tool needs to see, and to avoid sending irrelevant packets, which increase the bandwidth required on the tool's physical network interface and waste processor cycles filtering out extraneous data.
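
To make the filtering idea concrete, here is a minimal sketch in Python using the Scapy library. The interface name and BPF filter expression are illustrative assumptions, not a prescription; the point is that pre-filtering spares the tool both bandwidth and CPU:

    # A minimal sketch: capture only the traffic a tool cares about,
    # using a BPF filter so irrelevant packets never reach the analyzer.
    # Requires Scapy; the interface and filter are examples only.
    from scapy.all import sniff

    # Hypothetical filter: mail (SMTP) and file-sharing (SMB) traffic only.
    BPF_FILTER = "tcp port 25 or tcp port 445"

    def handle(pkt):
        # Stand-in for handing the packet to a security or performance tool.
        print(pkt.summary())

    # Sniff on an example interface; delivering pre-filtered traffic saves
    # the tool interface bandwidth and the cycles spent discarding noise.
    sniff(iface="eth0", filter=BPF_FILTER, prn=handle, store=False)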

Collecting Packets: The Physical Layer

Customers often ask, “Where should I deploy my TAPs?” Typically, only a select few within your organization will have the answer. In fact, it is relatively rare for those who own the tools to have deep knowledge of the network architecture they are tasked with monitoring. The security team (and any other team whose tools need to see packets) knows what type of data those tools require, and the network team knows where the packets flow within the network. Close coordination between these two groups is key to effectively deploying a packet delivery platform.

Once you have defined the data collection requirements of the tools, you need to select where in your computing environment this data can be obtained. You will need a network drawing that includes information on both the physical connections between networking equipment and the logical path of the packets. Typically such a drawing doesn't exist; there is simply too much information to fit into a single diagram. The logical drawing of the computing environment shows servers, datacenters, end-user VLANs, and Internet access points, as well as how they connect to each other through firewall and routing functions. This logical view must then be laid on top of the physical network drawing so that it clearly shows the path packets take from point A to point B.

The physical network drawing should include the physical medium of each connection (copper, single-mode/multimode fiber) and the speed of each link (10M/100M/1G/10G/40G/100G, etc.). This information will be used to select the right TAPs to deploy within your network.

In addition to traditional locations (e.g., Local Area Network, Data Center, DMZ), packets of interest can be found in a virtual server environment or a public cloud service such as Amazon Web Services (AWS).

To provide the most benefit, any tool that works by analyzing packets needs to see all relevant packets from the portion of the computing environment it monitors. To be fiscally responsible, you will want to identify the smallest number of physical network cables that achieve this goal and install a Test Access Point (TAP) on those links. A TAP is normally a relatively simple piece of hardware installed on the cable connecting two network devices; it generates a copy of all traffic that passes between them.


It is best to use TAPs whenever possible, but if you cannot afford to TAP every link connected to a given distribution switch, a copy of the traffic may be obtained by configuring a SPAN/mirror port on a switch or router within a logical data path where traffic of interest is flowing. The golden rule of collecting packets is: TAP where you can, SPAN if you must. SPANs and TAPs are not created equal, so please check out the following white papers from Gigamon for detailed information on the pros and cons of each; a small configuration sketch follows the links below.

* https://www.gigamon.com/sites/default/files/resources/whitepaper/wp-span-port-or-tap-3051.pdf

* https://www.gigamon.com/sites/default/files/resources/infographic/in-taps-vs-spans-infographic-1033.pdf
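
If you do fall back to a SPAN, the configuration itself is small. Below is a hedged sketch that pushes a Cisco-style SPAN session from Python using the Netmiko library; the device address, credentials, interface names, and session number are placeholders, and the exact commands vary by platform:

    # A minimal sketch of configuring a SPAN (mirror) session with Netmiko.
    # Device details and interfaces are hypothetical examples; consult your
    # switch documentation for the exact syntax on your platform.
    from netmiko import ConnectHandler

    device = {
        "device_type": "cisco_ios",
        "host": "192.0.2.10",        # example management address
        "username": "admin",
        "password": "example-only",
    }

    # Mirror traffic from an uplink to the port where the tool is attached.
    span_commands = [
        "monitor session 1 source interface GigabitEthernet1/0/1 both",
        "monitor session 1 destination interface GigabitEthernet1/0/48",
    ]

    with ConnectHandler(**device) as conn:
        output = conn.send_config_set(span_commands)
        print(output)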

Quantity and Quality

Packet collection is probably the most important part of designing your packet delivery platform. If you collect the wrong packets, collect too few packets by neglecting a portion of your computing environment, or lose packets during a denial-of-service attack or other performance-impacting situation in your network, tool performance will suffer. Most security tools depend on seeing the entire conversation to locate and isolate threats; if even one packet of an application session is missing, the whole transaction is discarded. Packets matter. Build your visibility network on a solid foundation. Plan out your ideal packet collection schema, implement what you are able, and document the weak points in your data collection plan.

Your goal of creating a platform that enhances monitoring of your computing environment requires proper design, implementation, and maintenance. I have seen many deployments of fantastic tools that are connected to a single SPAN port off a core router.  In my opinion this approach does a disservice to the tool vendor who created a great product and to the customer who expected a great product to meet the business need for which they purchased it.

If you are reading this article, you are probably unhappy with the status quo and are looking for a more reliable, predictable, and scalable approach to monitor and secure your data in motion.  Packet Delivery Platforms give you the ability to achieve this goal.


The Evolution of Network Management

When I started in the networking industry in the 90s, I worked for a large consulting firm that helped manage a number of networks. A frequent customer request was to know what was going on in their network before an end user could call in to report a problem. Since that time, not surprisingly, the desire to proactively monitor computing environments has only increased.

What Is Network Management?

The International Organization for Standardization (ISO) identifies five essential areas of network management: Fault, Configuration, Accounting, Performance, and Security (FCAPS).

Fault and configuration management

In my experience, the easier FCAPS components to implement are fault and configuration. Both are typically done using an in-band solution that engages the devices that run your network.

Fault management is often initiated at the management system using ICMP pings and SNMP, and may also include information pushed from the network devices using Syslog and SNMP Traps. The goal is to detect faults as they occur and initiate a manual or automated correction.
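
As a simple illustration of the polling side, here is a minimal Python sketch that pings a list of devices and flags the ones that stop responding. The addresses are placeholders, and a real NMS would add SNMP polling plus trap and Syslog collection as described above:

    # A minimal fault-detection sketch: ping each managed device and report
    # any that do not answer. Real fault management adds SNMP polling,
    # SNMP trap and Syslog receivers, and automated remediation.
    import subprocess

    # Hypothetical device list for illustration.
    DEVICES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

    def is_reachable(host: str) -> bool:
        # One ICMP echo with a short timeout (Linux-style ping flags).
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            capture_output=True,
        )
        return result.returncode == 0

    for device in DEVICES:
        if not is_reachable(device):
            # Stand-in for raising an alarm in the management system.
            print(f"FAULT: {device} is not responding")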

For configuration management, the objective is to track network device configuration. Often, configuration management will make periodic backups of the configuration of each network device, and many network managers use this information to ensure that no unauthorized changes are made. Another common task in the configuration area is to ensure that the firmware or software running on the network components is kept up to date. In some configuration management systems—typically element management software from the network equipment vendor—software updates may be performed centrally.
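
To make the change-detection idea concrete, here is a hedged sketch of a configuration backup check in Python using Netmiko. The device details are placeholders, and production systems keep versioned archives of full configurations rather than a single hash:

    # A minimal sketch of configuration change detection: pull the running
    # config, hash it, and compare against the last known-good hash.
    # Device details are hypothetical; versioned archives are preferable.
    import hashlib
    from pathlib import Path

    from netmiko import ConnectHandler

    device = {
        "device_type": "cisco_ios",
        "host": "192.0.2.10",
        "username": "admin",
        "password": "example-only",
    }

    baseline = Path("router1.sha256")

    with ConnectHandler(**device) as conn:
        config = conn.send_command("show running-config")

    digest = hashlib.sha256(config.encode()).hexdigest()

    if baseline.exists() and baseline.read_text() != digest:
        # Stand-in for alerting on an unauthorized change.
        print("WARNING: configuration changed since last backup")
    baseline.write_text(digest)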

Security management

Security is the next most commonly implemented network management function and can be viewed as two parts: one is to ensure the security of the network devices; the other is to ensure the security of the computing environment.

Securing network devices is relatively straightforward. You apply firmware patches on a regular basis and centrally control access to the devices using RADIUS or TACACS+.  These two services provide authentication (controlled access to the network device); authorization (defines what a particular user can do on the network device); and accounting (keeps track of the changes a particular user makes to the network device).

Securing the overall computing environment, by contrast, is more complex. Today, there are numerous tools available for monitoring the computing environment to detect malicious activity that seeks to disrupt network operations, steal information, or control systems. Increasingly, tools are also being developed to prevent such malicious activity from starting in the first place. These tools are deployed within the network and may collect information from the devices that manage the network. However, to provide a complete security analysis, they also require visibility into the traffic traversing the network.

Performance management

The final area is performance management. While easy to understand, it is difficult to do. Over the last 15 years, many products have been developed to help network managers identify when the network is struggling to perform at an acceptable level. Regardless of the varied root causes of a performance issue, the impact to the end users is real.

Early knowledge of performance issues within a computing environment can allow network managers to be pro-active in solving minor issues before they become major. The devices that run the network provide critical information on their health; this information is typically collected in-band directly from the network devices themselves via SNMP or a proprietary API. Unfortunately, this data is not enough to provide an accurate, complete picture of the performance of the network or its applications. While there are a large number of software vendors who have developed solutions to monitor the overall performance of the network and/or the critical applications running in the network, these tools almost always rely on an examination of the packets traversing the network to provide the best possible analysis.
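
As an example of the in-band device polling mentioned above, here is a minimal sketch that reads an interface's inbound octet counter twice over SNMP and estimates utilization. It uses the classic pysnmp 4.x synchronous API; the target address, community string, interface index, and sample window are illustrative assumptions:

    # A minimal performance-polling sketch: sample IF-MIB ifInOctets twice
    # and estimate inbound bits per second. Address, community string, and
    # interface index are hypothetical examples (pysnmp 4.x sync API).
    import time
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    TARGET, COMMUNITY, IF_INDEX = "192.0.2.10", "public", 1

    def if_in_octets() -> int:
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(COMMUNITY, mpModel=1),          # SNMPv2c
            UdpTransportTarget((TARGET, 161), timeout=2),
            ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", IF_INDEX)),
        ))
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status))
        return int(var_binds[0][1])

    first = if_in_octets()
    time.sleep(10)
    second = if_in_octets()

    # Octets to bits over the 10-second sample window (ignores counter wrap).
    print(f"~{(second - first) * 8 / 10:.0f} bits/sec inbound")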

The challenge for security and performance monitoring of the computing environment is how to provide these tools visibility across all the areas of the network that need to be monitored. Traditionally, these tools have been deployed either inline in protect mode at the Internet connection or out of band in monitor-only mode, in which case they are connected to a SPAN port on a switch or router.  We will talk more about the best ways to gain visibility into the network packets in the “Sources of Traffic” article that will appear later in this series.

The Next Step in Network Management: Packet Delivery Platforms

Device-based management is no longer sufficient. The next generation of performance and security tools requires visibility into the packets passing through the network.

Enter Packet Delivery Platforms.

A Packet Delivery Platform provides the means to aggregate all sources of network traffic (inline bypass, SPANs, TAPs, virtual); send the right data to the right tool through filtering; clean up the data streams by transforming the packet flows; and, lastly, send the data to all the tools or tool groups that need to see those packets.
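
To make those four stages concrete, here is a purely conceptual Python sketch of the pipeline a packet delivery platform implements in hardware at line rate. The sources, filter rule, and tool stand-ins are hypothetical examples, not any vendor's actual implementation:

    # A conceptual sketch of the packet-broker pipeline:
    # aggregate -> filter -> transform (deduplicate) -> deliver.
    # Names and rules are illustrative only.
    import hashlib
    from itertools import chain

    def aggregate(*sources):
        # Merge packets from TAPs, SPANs, and virtual sources into one stream.
        return chain(*sources)

    def matches(pkt) -> bool:
        # Hypothetical filter rule: only web traffic for this tool group.
        return pkt.get("dst_port") in (80, 443)

    def deduplicate(packets):
        # Drop exact duplicates (e.g., the same packet seen at two TAPs).
        seen = set()
        for pkt in packets:
            key = hashlib.sha256(repr(sorted(pkt.items())).encode()).digest()
            if key not in seen:
                seen.add(key)
                yield pkt

    def deliver(packets, tools):
        # Fan the cleaned stream out to every tool that needs it.
        for pkt in packets:
            for tool in tools:
                tool(pkt)

    tap = [{"src": "10.0.0.5", "dst_port": 443}, {"src": "10.0.0.5", "dst_port": 443}]
    span = [{"src": "10.0.0.9", "dst_port": 25}]

    stream = deduplicate(p for p in aggregate(tap, span) if matches(p))
    deliver(stream, tools=[print])  # stand-ins for IDS, APM, etc.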

Gone are the days of running out of SPAN ports. Gone are the days of having to upgrade tools to faster network interfaces because of a network upgrade. Gone are the days of overloading an analysis tool because it had to receive unfiltered, duplicate packets.

Instead, we welcome the world of Packet Delivery Platforms.


The Ultimate Network

Earlier this year I learned that one of my favorite technology industry conferences would be held in Austin, TX again. I immediately contacted my company's events team to request to be a part of it, since I live just north of Austin. It turns out Gigamon had already signed up to be a Gold Contributor to SCinet.

What I really like about the SC conference is that it is more than just a lot of experts giving talks and more than just vendors trying to sell products on the show floor; it is a collaboration among peers who are seeking to push the limits of computing.

SCinet

A great example of how different this conference is can be found in the SCinet project. Each year a one-of-a-kind network is created for the conference by volunteers. Planning begins over a year in advance and requires around 150 volunteers to bring it to fruition. While planning takes place for over a year, the actual implementation of the network spans about three weeks. The first week is staging: the equipment is inventoried, unboxed, racked, cabled, and powered on. The goal is to get the network up and functioning at a basic level. Then everyone goes home for a week. The week prior to the conference, everyone comes back and continues the work of configuring the equipment and software they are assigned. As the beginning of the conference draws near, the days get longer and the work more intense.

The goal of SCinet is to support the SC conference and showcase the best products, technology, and research in high-performance computing, networking, storage, and analysis. The best engineers from universities, industry, and government research labs partner with vendors, creating a unique environment that tests the boundaries of technology and engineering. SCinet provides wireless connectivity to the conference attendees and high-speed links to exhibitor booths on the show floor. Internet connectivity is provided by multiple high-speed networks, such as LEARN, ESnet, Internet2, and CenturyLink, aggregating more bandwidth than any other conference. SCinet offers an opportunity for vendors to demonstrate their products in a computing environment where everyone is trying to show just how high-performance their products are. Vendors from all parts of the computing industry donate or loan their products towards building this one-of-a-kind network.

Why Am I Here?

I am here because Gigamon is one of the vendors providing its products to SCinet. As part of Gigamon's involvement, they needed a technical resource to help get our product installed and configured to work with the rest of the network. I happily volunteered. I thought I would just be working with our products, but in reality I am an official member of the SCinet security team, whose responsibility is to ensure that SCinet is not used for nefarious purposes. Think of SCinet as a university-like network; there are very few limits on what can be done within it. Since this is a network designed to show off high-performance computing, security could be viewed as a potential roadblock to that goal. In the security plan for SCinet, great care was taken to ensure that the security monitoring was as unobtrusive as possible. Gigamon is providing TAPs for the network, four 100 Gig and thirty-six 10 Gig, along with two of our GigaVUE chassis (GigaVUE-HC2 and GigaVUE-HD8) that are clustered together. Once we aggregate all these TAPs together, we will deduplicate the data and send it to security devices from Reservoir Labs running Bro and to a firewall sandwich featuring Dell's SonicWALL firewall. In addition to providing wire data to these two products, I will also be generating 1-to-1 NetFlow data and sending it to Gigamon's Fabric Manager running FabricVUE Traffic Analyzer, as well as to InMon and Splunk.
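
Deduplication is worth a quick illustration: a broker hashes the invariant parts of each packet and suppresses copies seen within a short window, since the same packet may be captured by TAPs at multiple points. Here is a simplified Python sketch of the idea; the window length and hashed fields are illustrative assumptions, not Gigamon's actual implementation:

    # A simplified sketch of time-window packet deduplication.
    # Window size and hashed fields are illustrative only.
    import hashlib
    import time

    WINDOW_SECONDS = 0.05   # suppress copies seen within 50 ms
    recent: dict[bytes, float] = {}

    def is_duplicate(invariant_bytes: bytes) -> bool:
        # Hash fields that do not change between observation points
        # (e.g., IP IDs, ports, payload), excluding TTL and MAC addresses.
        key = hashlib.sha256(invariant_bytes).digest()
        now = time.monotonic()
        # Evict entries that have aged out of the window.
        for k, t in list(recent.items()):
            if now - t > WINDOW_SECONDS:
                del recent[k]
        if key in recent:
            return True
        recent[key] = now
        return False

    # Example: the second identical observation is suppressed.
    print(is_duplicate(b"example-packet"))  # False (first sighting)
    print(is_duplicate(b"example-packet"))  # True  (duplicate in window)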

So in addition to working with some pretty cool Gigamon products, I also have the opportunity to work with some outstanding individuals from academia, commercial, and government research labs utilizing some of the best in class security and performance tools.  What more could a systems engineer ask for?

SC, sponsored by the IEEE Computer Society and ACM (Association for Computing Machinery), offers a complete technical education program and exhibition to showcase the many ways high-performance computing, networking, storage, and analysis lead to advances in scientific discovery, research, education, and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world-class exhibit area, demonstrations, and opportunities for hands-on learning. For more information on SC15, please visit: http://sc15.supercomputing.org/

Flying Blind is Fun!

Well, maybe for some, but for the vast majority of people (pilots included), flying blind is not desirable. I am a private pilot, and there is nothing quite like flying an airplane on a beautiful clear day when you have greater than 10 miles visibility and clear skies. You can see forever; most importantly, you can see other airplanes, the landing field, and all the beautiful sights as you fly along to your intended destination.

Poor Visibility

When the weather isn't ideal, which happens often, you need special tools to help you fly safely and reach your destination. Since I wanted to be able to fly even under less-than-ideal conditions, I signed up to learn how to fly under Instrument Flight Rules (IFR), meaning that I would be able to fly without any visual references outside the airplane. Under IFR, you rely solely on the instruments in your airplane and communication with Air Traffic Control (ATC) to navigate. Most of the time, I would take off under IFR conditions, then break out of the clouds a few thousand feet above ground level and be able to see clearly once again. When deciding whether or not to fly on a particularly dreary day, I would check the conditions along my route and, most importantly, at the destination. The more information I could gather prior to flight, the better the decision I was able to make on whether the flight could be safely made. I always preferred to fly under visual flight rules, but when a flight had to be made in order to make an appointment or a family event on time, it was nice to know that my airplane was certified for IFR flight: it was equipped with working instruments and equipment that would provide visibility about where I was going when my eyes could not make contact with reference points outside the aircraft. I also had to be proficient in my ability to fly with the aid of only my instruments, which meant recent experience.

The Analogy

I really loved learning to fly, especially the instrument part. When I completed my instruction and achieved my instrument rating, I was a 10x better pilot. Even when I was flying VFR, my IFR skills were in use; I was more confident in my flying abilities and more comfortable talking with ATC.

In my professional career, I see quite a few similarities between network administration and aviation. In network design, we begin by flying under visual flight rules: we physically configure the network components, run ping tests and traceroutes to ensure that data is flowing where it should, and ask users to test their connections to critical resources on the network, like their email server or various sites on the Internet. We plan out our deployment, have our network design reviewed by our peers, carry out the implementation and test plan, and everything passes and we accomplish our goal, just like when I got my private pilot's license flying under VFR. I was a good pilot, but I didn't have great visibility, only visual reference points.

Like obtaining an IFR rating, we can take our network to the next level: we can add all the instruments and radios necessary to gain better visibility into our network during the times when the fog is thick (such as during a DDoS attack or a broadcast storm). And this increased visibility will make us better network admins during the clear times as well. We will be able to fine-tune our networks and adjust our heading to achieve a perfect course toward our destination of 100% uptime, rock-solid security, and outstanding performance. The Visibility Network (which is composed of Network Packet Broker products) is a conduit to collection points around our networks, collecting vital information about the network landscape and funneling that data to the tools that can analyze it and provide visibility into the health, security, and performance of one of the most critical components of our workplace environment. I wouldn't fly without visibility, because you just never know what type of weather is on the other side of the mountain, or the other side of the firewall.