The Solid Foundation of Packet Delivery Platforms

One of the most important steps in designing and deploying a Packet Delivery Platform is determining where and how to collect packets from across your computing environment. No matter how great your security or performance tools are, if you don’t provide them with the right packets, they won’t be able to do their job of monitoring and protecting your infrastructure effectively. A packet delivery platform is designed from the ground up to collect packets of interest from your entire computing environment, filter and transform them, and then deliver them to the tools that will process them.

This first step in the design process builds a solid foundation for the packet delivery platform.  In this article we will explore the process of identifying the traffic of interest and figuring out where in your network you can collect those packets.

Which Packets Do I Need to Collect?

It really depends on the tools you are using to monitor and secure your computing environment. These applications dictate what traffic you need to collect. If you have tools that analyze packets to report on application or network performance, or to assist with troubleshooting, you will likely want to collect data on the uplinks from the access layer to the distribution layer, or perhaps from the distribution layer to the core. The goal is to see the packets flowing between end users and the servers they commonly access, whether in your corporate datacenter or on the Internet, and those packets typically traverse these uplinks.

For security tools, the critical parts of your computing environment are the links to the Internet, the core-to-aggregation links within your datacenter, the virtual switches in your private cloud, and the packets flowing to or from your servers in the public cloud. There are many different types of security tools available in the marketplace, and each has specific data collection preferences. Some tools focus on data loss prevention (DLP), intrusion and malware detection and protection; others on application, Web, database, and file security. For example, if you are concerned about malware and intrusion protection, or perhaps DLP, you should collect data from your Internet connections and from the data center links carrying packets destined for your file and mail servers. If you are concerned about malicious activity within your datacenter (DLP, IDS, data exfiltration), you will want to monitor east-west (server-to-server) traffic by collecting packets from the links that carry it, as well as from the virtual switches within your private cloud. As the saying goes, you can’t protect against threats that you can’t see.

Though it is beyond the scope of this article to discuss the specific data needs of each type of security tool, I advise learning the data needs of your tools to ensure that you are collecting the packets that matter. To maximize their effectiveness and protect your network from performance problems or security threats, it is best to collect and deliver every packet a tool needs to see—and avoid sending irrelevant packets, which will increase the bandwidth required on the physical network interface and add extra processor cycles to filter out extraneous data.
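As a simple illustration of delivering only the packets a tool needs, here is a minimal sketch that uses a capture filter so an analysis function sees only traffic involving a hypothetical mail/file server. The server address, ports, and interface name are assumptions for the example; in a real deployment the filtering would typically happen in the packet delivery platform itself rather than on the tool host.

```python
# Sketch: pass a tool only the packets it cares about, assuming a hypothetical
# server at 10.0.0.25 and a monitor interface named "mon0".
from scapy.all import sniff

def hand_off_to_tool(pkt):
    # In practice this would be the analysis tool's ingest; here we just print a summary.
    print(pkt.summary())

# A BPF capture filter keeps irrelevant packets off the tool entirely,
# saving interface bandwidth and processor cycles.
sniff(iface="mon0",
      filter="host 10.0.0.25 and (tcp port 25 or tcp port 445)",
      prn=hand_off_to_tool,
      store=False)
```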

Collecting Packets: The Physical Layer

Customers often ask, “Where should I deploy my TAPs?” Typically, only a select few within your organization will have the answer. In fact, it’s relatively rare that those who own the tools have a deep knowledge of the network architecture they are tasked with monitoring. The security team (and perhaps other teams whose tools need to see packets) knows what type of data their tools require, and the network team knows where the packets are flowing within their network. Close coordination between these two groups is key to effectively deploying a packet delivery platform.

Once you have defined the data collection requirements of the tools, you need to select where in your computing environment this data can be obtained. You will need a network drawing that includes information on both the physical connections between networking equipment and the logical path of the packets. Typically such a drawing doesn’t exist; there is simply too much information to fit it all into a single diagram. You will need a logical drawing of the computing environment that shows servers, datacenters, end-user VLANs, and Internet access points, as well as how they connect to each other through firewall and routing functions. This logical view of the computing environment must then be laid on top of the physical network drawing so that it clearly shows the path packets take from point A to point B.

The physical network drawing should include the physical medium for each connection (copper, single-mode/multimode fiber) and the speed of each link (10M/100M/1G/10G/40G/100G, etc.). This information will be used to select the right TAPs to deploy within your network.

In addition to traditional locations (e.g., Local Area Network, Data Center, DMZ), packets of interest can be found in a virtual server environment or a public cloud service such as Amazon Web Services (AWS).

To provide the most benefit, any tool that works by analyzing packets must be able to see all the relevant packets from the portion of the computing environment it monitors. In order to be fiscally responsible, you will want to identify the smallest number of physical network cables that achieve this goal and install a Test Access Point (TAP) on these links. A TAP is normally a relatively simple piece of hardware installed on the cable connecting two network devices; it generates a copy of all the traffic that passes between them.

 

It is best to use TAPs 99% of the time, but if you cannot afford to TAP every link connected to a given distribution switch, a copy of the traffic may be obtained by configuring a SPAN/mirror port on a switch or router within a logical data path where traffic of interest is flowing. The golden rule of collecting packets is: TAP where you can, SPAN if you must. SPANs and TAPs are not created equal, so please check out the following white papers from Gigamon for detailed information on the pros and cons of TAPs and SPANs.

* https://www.gigamon.com/sites/default/files/resources/whitepaper/wp-span-port-or-tap-3051.pdf

* https://www.gigamon.com/sites/default/files/resources/infographic/in-taps-vs-spans-infographic-1033.pdf

Quantity and Quality

Packet collection is probably the most important part of designing your packet delivery platform.  If you collect the wrong packets, collect too few packets by neglecting a portion of your computing environment, or lose packets during a denial of service attack or other performance-impacting situation in your network, tool performance will suffer. Most security tools depend on seeing the entire conversation to locate and isolate threats and if even one packet of an application session is missing, the whole transaction is discarded.  Packets matter. Build your visibility network on a solid foundation. Plan out your ideal packet collection schema, implement what you are able, and document the weak points in your data collection plan.

Your goal of creating a platform that enhances monitoring of your computing environment requires proper design, implementation, and maintenance. I have seen many deployments of fantastic tools that are connected to a single SPAN port off a core router.  In my opinion this approach does a disservice to the tool vendor who created a great product and to the customer who expected a great product to meet the business need for which they purchased it.

If you are reading this article, you are probably unhappy with the status quo and are looking for a more reliable, predictable, and scalable approach to monitor and secure your data in motion.  Packet Delivery Platforms give you the ability to achieve this goal.

 

 


The Evolution of Network Management

When I started in the networking industry in the 90s, I worked for a large consulting firm that helped manage a number of networks. A frequent customer request was to know what was going on in their network before an end user could call in to report a problem. Since that time—and not surprisingly—the desire to pro-actively monitor computing environments has only increased.

What Is Network Management?

The International Organization for Standardization (ISO) identifies five essential areas of network management: Fault, Configuration, Accounting, Performance, and Security (FCAPS).

Fault and configuration management

In my experience, the easier FCAPS components to implement are fault and configuration. Both are typically done using an in-band solution that engages the devices that run your network.

Fault management is often initiated at the management system using ICMP pings and SNMP, and may also include information pushed from the network devices using Syslog and SNMP Traps. The goal is to detect faults as they occur and initiate a manual or automated correction.
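As a rough sketch of the polling half of fault management, the following assumes a hypothetical list of device addresses and simply flags a device when it stops answering ICMP pings; a production system would also listen for Syslog messages and SNMP traps pushed from the devices.

```python
# Sketch: poll devices with ICMP ping and flag the ones that stop responding.
# The device list and polling interval are assumptions for the example.
import subprocess
import time

DEVICES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
POLL_INTERVAL_SECONDS = 60

def is_reachable(host: str) -> bool:
    # One echo request with a 2-second timeout (Linux ping syntax).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

while True:
    for device in DEVICES:
        if not is_reachable(device):
            # Real fault management would open a ticket or trigger automation here.
            print(f"FAULT: {device} is not responding to ping")
    time.sleep(POLL_INTERVAL_SECONDS)
```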

For configuration management, the objective is to track network device configuration. Often, configuration management will make periodic backups of the configuration of each network device, and many network managers use this information to ensure that no unauthorized changes are made. Another common task in the configuration area is to ensure that the firmware or software running on the network components is kept up to date. In some configuration management systems—typically element management software from the network equipment vendor—software updates may be performed centrally.
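A minimal sketch of the change-detection side of configuration management is shown below. It assumes the device configurations have already been retrieved into text files (for example by a nightly backup job) and simply compares each new backup against an approved baseline copy; the directory layout is an assumption for the example.

```python
# Sketch: detect unauthorized configuration changes by diffing the latest
# backup against an approved baseline copy. File paths are assumptions.
import difflib
from pathlib import Path

BASELINE_DIR = Path("configs/baseline")
LATEST_DIR = Path("configs/latest")

for baseline_file in BASELINE_DIR.glob("*.cfg"):
    latest_file = LATEST_DIR / baseline_file.name
    baseline = baseline_file.read_text().splitlines()
    latest = latest_file.read_text().splitlines()

    diff = list(difflib.unified_diff(baseline, latest,
                                     fromfile=str(baseline_file),
                                     tofile=str(latest_file),
                                     lineterm=""))
    if diff:
        # Any difference is a candidate unauthorized change to investigate.
        print(f"Configuration drift detected on {baseline_file.stem}:")
        print("\n".join(diff))
```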

Security management

Security is the next most commonly implemented network management function and can be viewed as two parts: one is to ensure the security of the network devices; the other is to ensure the security of the computing environment.

Securing network devices is relatively straightforward. You apply firmware patches on a regular basis and centrally control access to the devices using RADIUS or TACACS+.  These two services provide authentication (controlled access to the network device); authorization (defines what a particular user can do on the network device); and accounting (keeps track of the changes a particular user makes to the network device).

Securing the overall computing environment, by contrast, is more complex. Today, there are numerous tools available for monitoring the computing environment to detect malicious activity that seeks to disrupt network operations, steal information, or control systems. Increasingly, more and more tools are being developed to prevent such malicious activity from starting in the first place. These tools are deployed within the network and may collect information from the devices that manage the network. However, to provide a complete security analysis, they also require visibility into the traffic traversing the network.

Performance management

The final area is performance management. While easy to understand, it is difficult to do. Over the last 15 years, many products have been developed to help network managers identify when the network is struggling to perform at an acceptable level. Regardless of the varied root causes of a performance issue, the impact to the end users is real.

Early knowledge of performance issues within a computing environment can allow network managers to be pro-active in solving minor issues before they become major. The devices that run the network provide critical information on their health; this information is typically collected in-band directly from the network devices themselves via SNMP or a proprietary API. Unfortunately, this data is not enough to provide an accurate, complete picture of the performance of the network or its applications. While there are a large number of software vendors who have developed solutions to monitor the overall performance of the network and/or the critical applications running in the network, these tools almost always rely on an examination of the packets traversing the network to provide the best possible analysis.
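To make the device-side data concrete, the sketch below shows the arithmetic commonly used to turn two samples of an interface's SNMP octet counter into a utilization percentage; the counter values, interface speed, and sampling interval are illustrative assumptions.

```python
# Sketch: compute interface utilization from two samples of an SNMP octet counter.
def utilization_percent(octets_t0: int, octets_t1: int,
                        interval_seconds: float, if_speed_bps: int,
                        counter_bits: int = 64) -> float:
    # Handle a counter wrap between the two samples.
    delta = octets_t1 - octets_t0
    if delta < 0:
        delta += 2 ** counter_bits
    bits_transferred = delta * 8
    return 100.0 * bits_transferred / (interval_seconds * if_speed_bps)

# Example: 750 MB transferred in 60 seconds on a 1 Gb/s interface is ~10% utilization.
print(utilization_percent(0, 750_000_000, 60, 1_000_000_000))
```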

The challenge for security and performance monitoring of the computing environment is how to provide these tools visibility across all the areas of the network that need to be monitored. Traditionally, these tools have been deployed either inline in protect mode at the Internet connection or out of band in monitor-only mode, in which case they are connected to a SPAN port on a switch or router.  We will talk more about the best ways to gain visibility into the network packets in the “Sources of Traffic” article that will appear later in this series.

The Next Step in Network Management: Packet Delivery Platforms

Device-based management is no longer sufficient. The next generation of performance and security tools require visibility into the packets passing through the network.

Enter Packet Delivery Platforms.

A Packet Delivery Platform provides the means to aggregate all sources of network traffic (inline bypass, SPANs, TAPs, virtual); send the right data to the right tool through filtering; clean up the data streams by transforming the packet flows; and, lastly, send the data to all the tools or tool groups that need to see those packets.
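Conceptually, the platform is an aggregate-filter-transform-deliver pipeline. The sketch below models that flow in a few lines; the packet representation, the filter rules, and the deduplication approach are simplified assumptions for illustration, not a description of any particular product.

```python
# Sketch: the aggregate -> filter -> transform (dedup) -> deliver flow of a
# packet delivery platform, using simplified in-memory packets.
import hashlib

def aggregate(*sources):
    # Merge packets arriving from TAPs, SPANs, and virtual sources.
    for source in sources:
        yield from source

def matches(pkt, rule):
    # A rule is just a set of packet fields that must match.
    return all(pkt.get(field) == value for field, value in rule.items())

def deliver(sources, tool_rules):
    seen = set()  # hashes of packets already forwarded (deduplication)
    for pkt in aggregate(*sources):
        digest = hashlib.sha256(repr(sorted(pkt.items())).encode()).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        for tool, rule in tool_rules.items():
            if matches(pkt, rule):
                print(f"-> {tool}: {pkt}")

tap1 = [{"proto": "tcp", "dport": 443, "dst": "10.0.0.5"}]
tap2 = [{"proto": "tcp", "dport": 443, "dst": "10.0.0.5"},  # duplicate of tap1's packet
        {"proto": "udp", "dport": 53, "dst": "10.0.0.9"}]
deliver([tap1, tap2], {"ids": {"proto": "tcp"}, "dns-monitor": {"dport": 53}})
```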

Gone are the days of running out of SPAN ports. Gone are the days of having to upgrade tools to faster network interfaces because of a network upgrade. Gone are the days of overloading an analysis tool because it had to receive unfiltered, duplicate packets.

Instead, we welcome the world of Packet Delivery Platforms.


The High Performance Security Delivery Platform

This past week has been a busy one. As I wrote in my previous blog, I was part of the SCinet security team at SC15 in Austin, TX. On Monday we finished 3 weeks of work to bring the network up, and while my SCinet security coworkers used the monitoring tools to keep the network safe during the conference, I worked the Gigamon booth on the show floor and visited with current and future Gigamon customers who were either exhibitors or attendees at SC15.

What is a Visibility Fabric?

During my time in the Gigamon SC15 booth, I was asked quite often what our product does. Some think we are part of the network infrastructure, and many times we are when our inline solution is deployed, but we are not a firewall, server load balancer, switch, or router. Others think that we do data analytics, but we are not a malware detection, intrusion detection or prevention, SIEM, Data Loss Prevention, Application Performance Management, or Network Performance Management tool. After I tell them that we don’t do any of these things, they ask, “Well, what do you do?”

We provide a high-speed, reliable, configurable pipeline to all the areas of your computing infrastructure where you desire to see the traffic flowing across your network. Think of it like being in a large room with multiple obstructions preventing you from seeing all areas of that room. If I stand in one part of the room I can clearly see what is going on around me, but I have limited visibility into the entire room due to obstructions or people standing in the way. I can observe and attest to what is going on in the area where I am currently located; I can even describe the safety and capacity of the area. But if I wish to see into other areas, I will need to set up some sort of video camera and a local monitor to provide me with additional visibility.

Where Gigamon Fits in Your Network

A visibility network is pretty similar to the example I provided above. The security or network performance tool is me standing in the room: it can only see the traffic where it is plugged into the network. In order for that tool to see any other part of the network (room), it needs remote data-gathering instruments (TAPs or SPANs). If you don’t set up remote data-gathering devices, you won’t have visibility into that part of your computing environment.

Most security, application, and performance management tools today thrive on access to the actual traffic that is flowing through your network. The more packets you get to these tools, the better the analysis they will provide. Gigamon is the means to get your tools the visibility they need.

Visibility is Key to Security

Being able to see the traffic traveling across your network is key to detecting malicious activity. Maybe you don’t need to see all parts of your network, so just set up monitoring for those areas where the interesting traffic travels. Most security and network managers would agree that the links to the Internet should be monitored, and the second most common area is the data center. A good way to determine where to set up your TAPs or SPANs is to ask yourself, first, where does my traffic flow, and second, where are my most valuable assets located? That is where you should be watching.

You can’t protect against what you can’t see. I may have some outstanding security tools, but unless they get a copy of network traffic from the key areas of my network, they won’t be able to see if a malicious actor is trying to steal the valuable information.

The other major threat to visibility is encryption.  While encryption is a very useful technology that protects our valuable data from prying eyes, anything that is used for good can also be used for ill.  Hackers are increasingly using encryption methods to hide their activities within your network.  Having the ability to decrypt encrypted traffic in your network is a valuable tool to consider adding to your security delivery platform.

Security Doesn’t Have to Affect Performance

The number one reason why many network and computing professionals resist adding security measures to their network is that they believe it will adversely affect their ability to get their job done, or that it will degrade the performance of the computing network and thus impact their end users.

During a time of employment at a major computer manufacturer and software development company, I recall many times when a fellow employee would decide to check out the new version of our network management tool; since it was available as a free download to all employees, it would be downloaded and installed. Then, during the initial configuration, the tool would discover the topology of the network, interrogating every router and switch for SNMP data and for the names and addresses of all routers and switches that device knew of. This single action would bring the network down.

Luckily, great strides have been made to prevent a monitoring tool from interfering with the network it is supposed to be helping. In the SCinet network at SC15, Gigamon provided a critical component of the security monitoring of the network. The Gigamon visibility fabric acts as a shield to keep the security tools from impacting the performance of the network. Through the use of four 100 Gig TAPs, thirty-two 10 Gig TAPs, and one SPAN port, Gigamon obtained a copy of the traffic flowing across all links from the show floor (also known as the commodity network) to the network core, as well as the links to the Internet providers. Based on the path of the movement of data through the SCinet network infrastructure, the data flowing across these links provided the Reservoir Labs RSCOPE Bro servers and the Dell Firewall Sandwich with all the data necessary to monitor the security of the network.

The security tools received data that was deduplicated (ensuring only one copy of a given packet made it to the tool), and on selected links we forwarded only IPv4 and IPv6 packets and dropped everything else. We replicated the stream of data and sent an identical copy to the Reservoir Labs Bro servers and the Dell Firewall Sandwich. Since there was such a large amount of traffic to be analyzed, we load balanced the single stream of data across 24 interfaces on the six Reservoir Labs Bro servers and 12 interfaces on the six Dell SonicWALL SuperMassive firewalls.
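The load-balancing step matters because each tool instance must see every packet of a given session. A common way to achieve that is to hash the flow's 5-tuple and use the hash to pick a tool interface, so all packets of one conversation land on the same interface. The sketch below illustrates the idea with assumed interface names and addresses; it is not a description of the actual SCinet configuration.

```python
# Sketch: flow-aware load balancing. Hashing the 5-tuple keeps all packets of
# a session on the same tool interface, which session-based tools require.
import hashlib

TOOL_INTERFACES = [f"bro-{n}" for n in range(24)]  # assumed 24 tool ports

def pick_interface(src_ip, dst_ip, proto, src_port, dst_port):
    # Sort the endpoints so both directions of a conversation hash the same way.
    endpoints = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{endpoints}|{proto}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return TOOL_INTERFACES[digest % len(TOOL_INTERFACES)]

# Both directions of the same TCP session map to the same interface.
print(pick_interface("192.0.2.10", "198.51.100.7", "tcp", 51514, 443))
print(pick_interface("198.51.100.7", "192.0.2.10", "tcp", 443, 51514))
```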

Each year SCinet, one of the most powerful networks in the world, is designed to be very fast and to support high performance computing for one week out of the year. Ensuring that SCinet is used for its intended purposes and is not compromised or used for illegitimate purposes is an essential task. The SCinet security team did an amazing job pulling together the best products and the best engineers to accomplish this goal without sacrificing performance. I am looking forward to doing it again next year.

The Ultimate Network

Earlier this year I learned that one of my favorite technology industry conferences would be held in Austin, TX again. I immediately contacted my company’s events team to request to be a part of it, since I live just north of Austin. It turns out Gigamon had already signed up to be a Gold Contributor to SCinet.

What I really like about the SC conference is that it is more than just a lot of experts giving talks and more than just vendors trying to sell products on the show floor; it is a collaboration among peers who are seeking to push the limits of computing.

SCinet

A great example of how different this conference is can be found by looking at the SCinet project. Each year a one-of-a-kind network is created for the conference by volunteers. Planning begins over a year in advance and requires around 150 volunteers to bring it to fruition. While planning takes place for over a year, the actual implementation spans about 3 weeks. The first week is staging; during this week the equipment is inventoried, unboxed, racked, cabled, and powered on. The goal is to get the network up and functioning at a basic level. Then everyone goes home for a week. The week prior to the conference everyone comes back and continues the work of configuring the equipment and software they are assigned. As the beginning of the conference draws near, the days get longer and the work more intense.

The goal of SCinet is to support the SC conference and showcase the best products, technology, and research in high performance computing, networking, storage, and analysis. The best engineers from universities, industry, and government research labs partner with vendors, creating a unique environment that tests the boundaries of technology and engineering. SCinet provides wireless connectivity to the conference attendees and high-speed links to exhibitor booths on the show floor. Internet connectivity is provided by multiple high-speed networks like LEARN, ESnet, Internet2, and CenturyLink—aggregating more bandwidth than any other conference. SCinet offers an opportunity for vendors to demonstrate their products in a computing environment where everyone is trying to demonstrate just how high performance their products are. Vendors from all parts of the computing industry donate or loan their products towards building this one-of-a-kind network.

Why Am I Here?

I am here because Gigamon is one of those vendors providing its products to SCinet. As part of Gigamon’s involvement, they needed a technical resource to help get our product installed and configured to work with the rest of the network. I happily volunteered. I thought I would just be working with our products, but in reality I am an official member of the SCinet security team, whose responsibility is to ensure that SCinet is not used for nefarious purposes. Think of SCinet as a university-like network; there are very few limits on what can be done within SCinet. Since this is a network designed to show off high performance computing, security could be viewed as a potential roadblock to that goal. In the security plan for SCinet, great care was taken to ensure that the security monitoring was as unobtrusive as possible. Gigamon is providing TAPs for the network, four 100 Gig and thirty-six 10 Gig TAPs, and two of our GigaVUE chassis (GigaVUE-HC2 and GigaVUE-HD8) that are clustered together. Once we aggregate all these TAPs together, we will deduplicate the data and send it to security devices from Reservoir Labs running Bro and to a firewall sandwich featuring Dell’s SonicWALL firewall. In addition to providing wire data to these two products, I will also be generating 1-to-1 NetFlow data and sending it to Gigamon’s Fabric Manager running FabricVUE Traffic Analyzer, as well as to InMon and Splunk.

So in addition to working with some pretty cool Gigamon products, I also have the opportunity to work with some outstanding individuals from academia, commercial, and government research labs utilizing some of the best in class security and performance tools.  What more could a systems engineer ask for?

SC, sponsored by IEEE Computer Society and ACM (Association for Computing Machinery) offers a complete technical education program and exhibition to showcase the many ways high performance computing, networking, storage and analysis lead to advances in scientific discovery, research, education and commerce. This premier international conference includes a globally attended technical program, workshops, tutorials, a world class exhibit area, demonstrations and opportunities for hands-on learning. For more information on SC15, please visit: http://sc15.supercomputing.org/

Flying Blind is Fun!

Well, maybe for some, but for the vast majority of people (pilots included), flying blind is not desirable. I am a private pilot, and there is nothing quite like flying an airplane on a beautiful clear day when you have greater than 10 miles visibility and clear skies. You can see forever and, most importantly, you can see other airplanes, the landing field, and all the beautiful sights as you fly along to your intended destination.

Poor Visibility

When the weather isn’t ideal, which happens often, you need special tools to help you fly safely and reach your destination. Since I wanted to be able to fly even under less than ideal conditions, I signed up to learn how to fly under Instrument Flight Rules (IFR), meaning that I would be able to fly without being able to see any visual references outside the airplane. IFR means that you rely solely on the instruments in your airplane and communication with Air Traffic Control (ATC) to navigate. Most of the time, I would take off under IFR conditions, then break out of the clouds a few thousand feet above ground level and I could see clearly once again. When making a decision whether or not to fly on a particularly dreary day, I would check the conditions along my route and, most importantly, at the destination. The more information I could gather prior to flight, the better the decision I was able to make on whether or not the flight could be safely made. I always preferred to fly under visual flight rules, but when a flight had to be made in order to make an appointment or a family event on time, it was nice to know that my airplane was certified for IFR flight, meaning it was equipped with working instruments and equipment that would provide visibility about where I was going when my eyes could not make contact with reference points outside the aircraft. I also had to be proficient in my ability to fly with the aid of only my instruments, which meant recent experience.

The Analogy

I really loved learning to fly, especially the instrument part. When I completed my instruction and achieved my instrument rating, I was a 10x better pilot. Even when I was flying VFR, my IFR skills were used; I was more confident in my flying abilities and more comfortable talking with ATC. In my professional career, I see quite a few similarities between network administration and aviation. In network design, we begin by flying under visual flight rules: we physically configure the network components, run ping tests and trace routes to ensure that data is flowing where it should, and ask users to test their connections to critical resources on the network like their email server or various sites on the Internet. We plan out our deployment, have our network design reviewed by our peers, carry out the implementation and test plan, everything passes, and we accomplish our goal, just like when I got my private pilot’s license flying under VFR. I was a good pilot, but didn’t have great visibility, only visual reference points.

Like obtaining an IFR rating, we can take our network to the next level: we can add all the instruments and radios necessary to gain better visibility into our network during the times when the fog is thick (like during a DDoS attack or a broadcast storm). And this increased visibility will make us better network admins during the clear times also. We will be able to fine-tune our networks and adjust our heading to achieve a perfect course towards our destination of 100% uptime, rock-solid security, and outstanding performance. The Visibility Network (which is composed of Network Packet Broker products) is a conduit to collection points around our networks, collecting vital information about the network landscape and funneling that data to the tools that can analyze it and provide visibility into the health, security, and performance of one of the most critical components of our workplace environment. I wouldn’t fly without visibility, because you just never know what type of weather is on the other side of the mountain or the other side of the firewall.

The Network Just Works

At least, that is what most people think. I was talking with a college student this afternoon who is doing some work with my father at his electrical service business. He is getting an EE degree, and to him the network just works; he is not alone in that viewpoint.

When I started in the networking industry, the network didn’t just work, it worked most of the time, but problems arose often and the repair of the network was not always an easy task. Network troubleshooting and repair often stretched into multiple hours with engineers from the network group, the server group and the tools group all viewing their pieces of the puzzle to ensure that the problem was not with their stuff.

Luckily, times have changed and now the components of the network are far more robust than they were 25 years ago. Not only are the network devices, servers, and applications better designed and tested, they are also able to be monitored with much greater ease. Many advances made by quality assurance departments have helped hardware and software components have a much better chance of success in doing their job without disrupting the computing environment.

Monitoring the health of the computing environment has taken monumental leaps forward as well. Back in the day, monitoring was done in a very manual fashion with people toting around Network General sniffers (anyone remember the Dolch?) ready to hook up to the network to see what was going on in case of a catastrophic failure of the computing environment. What started out as purely a focus on fault management, using something like HP OpenView, Syslog, or SNMP traps, has evolved into full-blown computing environment health assessment, security and compliance monitoring, and performance tuning.

Where there were once only a dozen tools on the market to do fault, configuration, accounting, performance and security management, that number has now tripled or even quadrupled. I believe this shows the critical importance that companies and individuals place on the computing environment that facilitates their access to the information necessary to do their job, pay their bills, interact with family and friends, shop, and have their social life. The network is no longer a nice thing to have, it is a required tool for doing business much like the telephone, typewriter and notepads were years ago.

Which brings me to the point of this article: the network has evolved into a computing environment that is an integral part of most companies, and there is now a plethora of network monitoring tools that exist to help those responsible for keeping a very important company asset in tip-top shape. As our networks continue to get faster and process more and more data, we need to ensure that all the data required by these network monitoring tools gets to its intended destination and doesn’t overload the tool.

This is where Network Packet Brokers come into play, and this is why this new market exists. Network switches and routers provide some of the information that these tools require, but not all. That is because the primary purpose of networking devices is to move production traffic from clients to servers; providing a copy of that data to a network tool is of secondary importance and thus has a lower priority.

Network Packet Brokers provide a means to obtain the required data from various points throughout the network (via network TAPs, which are preferred, or SPAN ports) and to transport a copy of the collected data to the tools that need that information without dropping a single bit of data.

Network Packet Brokers are being deployed in large networks today with increasing frequency, because the network is a critical part of operating your business: its health needs to be monitored to prevent a disruption in service, and the security of your computing environment needs to be ensured to keep your confidential data confidential and to give your clients confidence that the data they have entrusted to you is secure.

What do you think, are network packet brokers a technology whose time has come? Or is it a waste of precious IT resources?