
Covering The “Behind The Perimeter” Blind-Spot

Well, I cannot tell you what the next big breach will be, but I CAN tell you that it will involve one critical system communicating with another system that it is not supposed to. Whether that is exfiltration over SSH (SCP) to a server in Belarus or mounting a hidden drive share using a default service account that was never changed, behind the perimeter represents a rather large blind spot for many security endeavors. In the video below, you will see a very quick and simple method for monitoring peer-to-peer communications using wire data with the ExtraHop Platform. This is a somewhat painful process with logs, because logging connection build-ups and tear-downs can tax the hardware being asked to do the logging, and if you are licensed by how much data you index, it can be expensive. Grabbing flow records directly off the wire positions your security practice to have this information readily available in real time with no impact on the infrastructure, as no system or switch is asked to run in debug mode. Transactions are taken directly off the wire by the ExtraHop Discover Appliance and written directly to the Elastic ExtraHop Explorer Appliance to provide a broad and holistic view of every transaction that occurs within your critical infrastructure.

This positions system owners to audit critical systems within minutes and account for transactions that are suspect. In the video below, you will see how we can audit flow records by whitelisting expected communications, gradually shrinking the surface area you have to patrol and leaving malicious communications nowhere to hide.
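To make the idea concrete, here is a minimal sketch of that whitelisting approach. It assumes flow records have been exported to a CSV file with hypothetical column names (client_ip, server_ip, server_port); it is an illustration of the technique, not the ExtraHop product itself.

```python
# Hypothetical sketch: auditing flow records against a whitelist of expected
# peer-to-peer communications. The column names are assumptions for
# illustration, not the ExtraHop schema.
import csv

# Expected communications: (client, server, port) tuples we have whitelisted.
WHITELIST = {
    ("10.1.1.20", "10.1.2.50", 1433),   # app server -> SQL Server
    ("10.1.1.20", "10.1.3.10", 443),    # app server -> internal API
}

def audit_flows(path):
    """Print any flow record that is not in the whitelist."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["client_ip"], row["server_ip"], int(row["server_port"]))
            if key not in WHITELIST:
                print("SUSPECT FLOW:", key, "bytes:", row.get("bytes", "?"))

if __name__ == "__main__":
    audit_flows("flows.csv")  # e.g. flow records exported for review
```

Every flow you confirm as legitimate goes into the whitelist, so over time the only records left on the report are the ones that genuinely need an explanation.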


I will follow this up with another post on how we can monitor Layer 7 metrics such as SQL Queries against critical databases.

Thanks for reading, and please watch the video!

John Smith

My Evolution to Wire Data

A little over a week ago, Gartner analysts Vivek Bhalla and Will Cappelli identified wire data as "the most critical source of availability and performance"; you can read the Gartner report here. One of the reasons I started this blog in 2013 was the profound value I had discovered when using wire data as the source of truth in my environment. Today, I'd like to expand on my 2013 article "when uptime is not enough, the case for wire data" and discuss why I favor wire data as the best source of information, one whose value extends even beyond availability and performance. I will discuss the value of evaluating transactions in flight and some of the challenges I have found with other sources of intelligence.

So why Wire Data?
Prior to 2013 and my first exposure to wire data, I was a logging fiend. While at one of my previous employers, during the yearly holiday lunch we all had to put a headshot photo on a wall and our peers got to fill in the caption; mine read "it's not our fault, and I have the logs to prove it." Not having the money for Splunk initially, we set up a NetScaler-load-balanced KIWI Syslog farm that wrote to a SQL database (we called it "SKUNK" for SQL-KIWI-Splunk-works). Later, at another employer, I was able to purchase Splunk and used it extensively to collect logs and gain operational awareness by leveraging the ad-hoc capabilities of the Splunk platform. Suffice it to say, I loved logs! While I am still an advocate of log data, I started to find that any solution that writes information to disk and then evaluates the intelligence AFTER it is written becomes beholden to that disk for performance as the database/PCAP/data set grows. We have seen this with some of the PCAP-driven NPM vendors, and logs are subject to the same limitations. I have colleagues with mid-to-high six-figure investments in indexing and clustering technologies just to accommodate the size of the data store where they keep their logs. In the case of Splunk, the company has done some innovating around hot, warm, and cold disk storage, which has eased some of the cost/performance burden, but eventually you end up with rack(s) of servers supporting your log infrastructure, and if the data is not properly indexed or map-reduced, performance will suffer.

In Flight vs. On Disk
ExtraHop's wire data analytics platform evaluates the transaction in transit, which creates some significant differences between machine data analytics (log data) and wire data analytics. ExtraHop is NOT dependent on any other system for information. When given a span or tap, data is "taken" directly off the wire. Where a syslog solution depends on the endpoint to forward its log data, ExtraHop observes the transaction in flight and evaluates it (URI, user-agent, processing time, database query or stored procedure, client IP, etc.) at the time it happens. This ensures that transactions are quickly and accurately evaluated regardless of how much data is being written to disk. If a system is I/O bound, that could very easily impact its ability to send logs to a syslog server. However, since the ExtraHop appliance basically provides packet-level CCTV, transactions are observed and evaluated in flight rather than written to disk first and evaluated afterward. If endpoints are I/O bound, the ExtraHop platform will not only continue to give you information about your degrading application performance, it will also tell you that your systems are I/O bound by reporting the TCP zero windows it observes. There are no agents or forwarders to install with ExtraHop, as it operates passively on the wire via a tap or span. As a security note, most hackers worth their salt will shut off logging if they compromise a system. With ExtraHop they will not be able to hide, as ExtraHop is not dependent on any syslog daemon or forwarder to get information. ExtraHop "TAKES" intelligence off the wire, whereas logging systems, while I still LOVE logs, depend on other systems to "GIVE" data.
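As a toy illustration of what "evaluating in flight" means in practice (and emphatically not ExtraHop's implementation), the sketch below watches a mirrored interface and pulls the URI and user-agent out of HTTP requests as the packets cross the wire, with nothing written to disk first. It assumes scapy is installed and the script has capture privileges on an interface that sees spanned or tapped traffic.

```python
# Toy illustration of in-flight evaluation: parse HTTP request fields off a
# mirrored interface instead of reading logs from disk afterward.
from scapy.all import sniff, IP, TCP, Raw

def inspect(pkt):
    """Pull the request line and User-Agent out of an HTTP request in flight."""
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    payload = bytes(pkt[Raw].load)
    if not payload.startswith((b"GET ", b"POST ", b"PUT ", b"DELETE ")):
        return
    request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
    headers = payload.split(b"\r\n")
    user_agent = next((h.split(b":", 1)[1].strip().decode(errors="replace")
                       for h in headers if h.lower().startswith(b"user-agent:")),
                      "unknown")
    print(f"{pkt[IP].src} -> {pkt[IP].dst}  {request_line}  UA={user_agent}")

if __name__ == "__main__":
    # Listen passively on the span/tap; nothing depends on the endpoint
    # forwarding anything, and nothing is written to disk before evaluation.
    sniff(filter="tcp port 80", prn=inspect, store=False)
```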

Beyond Performance
Equally important to the break-fix/performance value derived from wire data is the business intelligence/operational analytics portion. Few people realize the level of value that can be derived from parsing payloads. In the retail space, I have seen wire data solutions deliver real-time business analytics that had previously been a long and costly OLAP/data-warehousing job: parsing a few fields within the XML, delivering the data in real time to the analytics teams, and then forwarding the parsed and normalized data to a back-end database. With the Kerberos support introduced in 5.x, you can leverage the ExtraHop memcached session table to map users to IP addresses and then perform a lookup against it for every single transaction, letting you map every single user's transactions. If a user calls to complain about performance, you can see every transaction they have run; if you suspect a user of malfeasance, you can audit their every move on your network; and if you want to ensure that your CEO/CIO/CFO have a good experience, or that their credentials have not been phished, you can audit their transactions by user ID or client IP address. I tend to think of the ExtraHop appliance as a prospector's pan: it lets you sift through the gravel and sand that create so much noise on today's 10Gb, 40Gb, and soon 100Gb networks and pull out those operational gold nuggets of intel right off the wire. There's a gold mine on every wire; ExtraHop is there to help you find it.
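Here is a hypothetical sketch of that session-table idea: learn who authenticated from which address via Kerberos, then enrich every subsequent transaction from that address with the user. The function and field names are mine, invented for illustration; this is not the ExtraHop trigger API.

```python
# Hypothetical sketch of the session-table idea: learn user-to-IP mappings
# from Kerberos authentications, then tag every later transaction with that
# user. Names below are illustrative only.
SESSION_TABLE = {}  # client IP -> user name

def on_kerberos_auth(client_ip, user):
    """Record who authenticated from this address."""
    SESSION_TABLE[client_ip] = user

def on_transaction(client_ip, uri, process_time_ms):
    """Enrich a wire transaction with the user behind the client IP."""
    user = SESSION_TABLE.get(client_ip, "unknown")
    record = {"user": user, "client_ip": client_ip,
              "uri": uri, "process_time_ms": process_time_ms}
    print(record)  # or forward to your analytics back end
    return record

# Example: the CFO authenticates, then every transaction from that IP is attributed.
on_kerberos_auth("10.1.5.25", "CORP\\cfo")
on_transaction("10.1.5.25", "/payroll/report", 42)
```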

Conclusion
You will find numerous posts on this site showing integration between ExtraHop and Splunk; please understand I am NOT down on Splunk or any other logging technology. I am simply explaining the differences between machine data and wire data and why I use wire data as my first and best source of intelligence. The truth is, the ExtraHop platform is itself an exceptionally good forwarder of data (you can read about it in other articles on this site) and can be used in conjunction with your machine data, as it has the ability to "take" the intelligence directly off the wire and "give" it to your back-end solution, whether that is Splunk/ELK, Kafka, MongoDB, or ExtraHop's own big-data appliance (the ExtraHop EXA). For my money, there isn't a better forwarder than an ExtraHop appliance, which will send you data regardless of the state of your servers/systems while at the same time providing real-time visibility into transactions as they occur. I am very happy to see Gartner's analysts starting to acknowledge the value that can be derived from wire data and recognize the hard work of a number of amazing pioneers in Seattle.
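As a rough sketch of that "take it off the wire, give it to your back end" workflow, the snippet below forwards one parsed wire-data record to Splunk's HTTP Event Collector. The host, port, and token are placeholders, and any JSON-speaking back end (ELK, a Kafka REST proxy, MongoDB) could sit on the receiving end instead.

```python
# Minimal sketch of forwarding a parsed wire-data record to a back end,
# here Splunk's HTTP Event Collector. URL and token are placeholders.
import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                     # placeholder

def forward(record):
    """Send one wire-data record to the back end as a JSON event."""
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": "Splunk " + HEC_TOKEN},
        data=json.dumps({"event": record, "sourcetype": "wire_data"}),
        verify=False,  # placeholder certificate handling for the sketch only
    )
    resp.raise_for_status()

forward({"client_ip": "10.1.5.25", "uri": "/payroll/report", "process_time_ms": 42})
```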

What’s on your wire?

Thanks for reading. Again, if you want to read the Gartner report, click here to download a copy.

John M. Smith