
The Case for Wire Data: Security

During the second week of February I had the honor of delivering two speaking sessions at the RSA Conference in San Francisco: one on ad hoc threat intelligence, and a Birds of a Feather round-table session called “Beyond Logs: Wire Data Analytics”. While it was a great conference, I found that you get some strange looks at a security conference when you are walking around with a badge that says “John Smith”. In both sessions, a key narrative was the effectiveness of wire data analytics and its ability to give security teams the agility needed to combat today’s threats. In this post I would like to make the case for wire data analytics and demonstrate its effectiveness as another tool alongside your IDS/IPS and log consolidation.

Wire Data Analytics:
Most security professionals are already familiar with wire data. Having used intrusion detection and prevention software for nearly 20 years now, they find concepts such as port mirroring and SPAN aggregation second nature. INFOSEC professionals are some of the original wire data analytics practitioners. We differ from IDS/IPS platforms in that we are not looking specifically for signatures; rather, we rebuild layer 2-7 flows. There are several areas where we can help INFOSEC teams: providing visibility into the first two SANS Critical Security Controls, augmenting logs to increase visibility, and acting as a catalyst for ongoing orchestration efforts.

SANS Top 2 Critical Security Controls:
From Wikipedia we have the following list making up the SANS Top 20 Critical Security Controls (click the image if you, like me, are middle-aged and can’t see it).

In our conversations with practitioners, we commonly hear “if we could JUST get an inventory of what systems are on the network”. As virtualization and automation have matured over the years, the ability to mass-provision systems has made security teams’ jobs much harder: there can be as much as a 15% difference in the number of nodes on a single /24 CIDR block from one day to the next, hell, from one hour to the next. Getting a consistent inventory with current technologies generally depends on hosts responding to an SNMP sweep, a ping, WMI mining or an Nmap scan. As we have seen with IoT devices, many of them don’t have MIBs or WMI libraries, and in most (all) cases they don’t write logs. Most malicious actors prefer to do their work in the dark; if detected, they will try to use approved or common ports to remain unseen.

“All snakes who wish to remain in Ireland, please raise your right hand….” Saint Patrick

A compromised system responding to an SNMP walk, a ping or a WMI connection, or volunteering what it is doing, is about as likely as a snake raising its right hand.

How ExtraHop works with the top 2 SANS controls:
Most teams try to engineer down to this level of detail; technologies such as SNMP, NetFlow and logs do a pretty good job of covering 80-90 percent of hosts, but there are still blind spots. When you are a passive wire data analytics solution, you aren’t dependent on someone to “give” data to you; we “take” the data off the wire. This means that if someone shuts off logging or deletes /var/log/*, they cannot hide. A senior security architect once told me, “if it has an IP address it can be compromised”. To that we at ExtraHop would answer, “if it has an IP address, it can’t hide from us”. I cannot tell you what the next big breach or vulnerability will be, but what I CAN say with certainty (and trust me, I coined the phrase “certainty is the enemy of reason”, I am NEVER certain) is that it will involve one host talking to another host it isn’t supposed to. With wire data, if you have an IP address and you talk to another node that ALSO has an IP address, then, provided we have the proper surveillance in place… YOU’RE BUSTED!

ExtraHop creates an inventory as it “observes” packets and layer 7 transactions. This positions the security team to account for who is talking on their network regardless of the availability of agents, NetFlow, MIBs or WMI libraries. To add to this, ExtraHop applies a layer of intelligence on top. Below, you see a collection of hosts and locations as well as a transaction count. What we have done is import a customer’s CIDR block mapping csv, which then allows us to geocode both RFC1918 and external addresses so that you have a friendly name for each CIDR block. This is a process of reconciling which networks belong to which groups and/or functions. As you can see, we have a few IP addresses; the workflow here is to identify every IP address and classify its CIDR block until you can fully account for who lives where. This turns getting an accurate inventory from what can be a three-month-or-longer task into a few hours. Once you have reconciled which hosts belong to which functions, you have taken the first step in building your security controls foundation by securing the first control. The lack of this control is a significant reason why many security practices topple over. An accurate inventory is the foundation; to quote the defensivesecurity.org podcast, “ya gotta know what you have first”.
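To make that classification workflow concrete, below is a minimal sketch of the kind of CIDR-to-friendly-name lookup the imported csv drives. This is illustrative JavaScript only, not our shipping trigger code; the network names and blocks are invented for the example.

// Hypothetical CIDR-to-name classifier; the table stands in for the imported csv.
var NETWORKS = [
  { cidr: '172.16.10.0/24', name: 'Web Farm' },
  { cidr: '172.16.20.0/24', name: 'EHR Database' },
  { cidr: '10.50.0.0/16',   name: 'Guest Wireless' }
];

// Convert a dotted-quad IPv4 string to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.').reduce(function (acc, octet) {
    return (acc * 256) + parseInt(octet, 10);
  }, 0);
}

// Return the friendly name of the first CIDR block containing the address.
function classify(ip) {
  var addr = ipToInt(ip);
  for (var i = 0; i < NETWORKS.length; i++) {
    var parts = NETWORKS[i].cidr.split('/');
    var bits = parseInt(parts[1], 10);
    var mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
    if (((addr & mask) >>> 0) === ((ipToInt(parts[0]) & mask) >>> 0)) {
      return NETWORKS[i].name;
    }
  }
  return 'Unclassified'; // the workflow: chase these down until none remain
}

Every address that still comes back 'Unclassified' is the next one to reconcile, which is exactly the few-hours workflow described above.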


SANS Control 2: Inventory of Authorized and Unauthorized Software:
While wire data cannot directly address this control, I tend to interpret it (maybe incorrectly) as covering networked software. Something running on just one host could do significant damage to that one host, but most of us worry more about data exfiltration, which means the malicious software HAS to do something on the network. Here we look at both layer 4 and layer 7 to provide an inventory of what is actually being run on the systems you have finally gathered an inventory of.

In the graphic below, you see one of our classified CIDR blocks. We have used the designation “Destination” (server) to get an accurate inventory of the ports and protocols being served up by the systems on this CIDR block (or endpoint group, if you are a Cisco ACI person). Given that I have filtered for transactions served up by our “Web Farm”, the expected ports and protocols would be HTTP:8080, SSL:443, HTTP, etc. Sadly, what I am seeing below is someone SSHing into my system, and that is NOT what I expected. While getting to this view took me only two clicks, we can actually do better: we can trigger an alert letting the SOC or CSIRT know that there has been a violation. Later in this post, we will talk about how we could counter-punch this type of behavior using our API.
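As a sketch of what that alert hook could look like, the snippet below flags any server-side port on the Web Farm block that is not in the expected list. It is modeled on ExtraHop’s inspection-trigger style, but treat the event wiring, Flow property names and the debug() call as assumptions to verify against the trigger API reference; the Web Farm prefix reuses the invented table from the sketch above.

// Hypothetical sketch: flag unexpected services on the Web Farm block.
var EXPECTED_PORTS = { 80: true, 443: true, 8080: true };

var server = Flow.server.ipaddr.toString();
if (server.indexOf('172.16.10.') === 0 && !EXPECTED_PORTS[Flow.server.port]) {
  // In practice this would drive a dashboard metric or an alert to the SOC/CSIRT.
  debug('Unexpected service on Web Farm: port ' + Flow.server.port +
        ' served to ' + Flow.client.ipaddr);
}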

As for SANS’ 2nd control: if I look at a web server and see that it is an FTP client to a system in Belarus, I am generally left to conclude that the FTP is likely unauthorized. What ExtraHop gives you, in addition to an accurate inventory, is an accounting of what ports and protocols are in use by both the clients and servers on those segments. While this is not a literal solution to SANS’ 2nd control, it has significant value in that INFOSEC practitioners can see what is traversing their network and are positioned to respond with alerts or orchestrated remediation.

Layer 7 Monitoring:
In the video below, titled “Insider Hating”, you see our layer 7 auditing capability. In this scenario we have set up an application inspection trigger to look for any queries of our EHR database. The fictitious scenario is that we want to audit who is querying our EHR database to ensure that it is not improperly used and that no one steals PHI from it. When an attacker has stolen credentials, or you have an insider, you are facing an attack that uses approved credentials and approved ports/protocols. This is what keeps CIOs, CISOs and practitioners up at night. We can help, and we HAVE helped on numerous occasions. Here we are setting up an L7 inspection trigger to look for any ad hoc-like behavior. In doing so, we position not JUST the security team to engage in surveillance, but the system owners as well. This is an ABSOLUTE IMPERATIVE if we want to be able to stop insiders or folks with stolen credentials. We need to do away with the idea that security teams have a crystal ball. When someone runs a “Select * from EHR” from a laptop in the mail room, we can tell you that it came from the mail room and not the web server. We can also alert the DBA and get system owners to take some responsibility for their own security. This same query, to many security teams, will look like an approved set of creds using approved ports. The same information, viewed by the DBA or system owner, may cause them to fall out of their chair and run screaming to the security team’s office. The lack of vigilance by system owners is, in my opinion, the single greatest reason breaches are worse than ever before, in spite of the fact that we spend more money than ever.
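Here is a rough sketch of such an L7 inspection trigger. The DB_REQUEST event and DB.statement property follow ExtraHop’s published trigger model, but the exact names, the approved-client list and the ad hoc heuristic are assumptions for illustration.

// Hypothetical sketch: audit ad hoc queries against the EHR database.
var APPROVED_CLIENTS = { '172.16.10.21': true, '172.16.10.22': true }; // web tier

var stmt = (DB.statement || '').toUpperCase();

// A "SELECT *" from a host outside the approved web tier smells like snooping.
if (stmt.indexOf('SELECT *') === 0 &&
    !APPROVED_CLIENTS[Flow.client.ipaddr.toString()]) {
  // Alert the DBA and system owner, not just the SOC.
  debug('Ad hoc EHR query from ' + Flow.client.ipaddr + ': ' + stmt);
}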

 

Augmenting Logs:
I love logs. I love Splunk, LogRhythm and of course my old friend Kiwi!! But today’s threats and breaches happen so fast that using logs alone positions you to operate in a largely forensic fashion. In many cases, by the time the log is written and noticed by the SOC, the breach has already happened. Below you see a graphic from the Verizon DBIR stating that 93% of all compromises happen within minutes and 11% within seconds. Using just logs and batch processing to find these threats is great for rooting out patterns and malicious behavior but, as I stated previously, largely forensic. As a wire data analytics platform, we work and live in a world of microseconds; for us, seconds are hours and minutes are days. Current SIEM products, when not augmented with wire data analytics, simply don’t have the shutter speed to detect, notify, or orchestrate a timely response.

 

Example:
I saw an amazing Black Hat demo on how OpenDNS was using a Hadoop cluster to root out C2 controllers and FastFlux domains. The job involved a periodic batch run using Pig to extract domains with a TTL of 150. Through this process they were able to consistently root out “FastFluxy” domains and produce an updated block list.

We have had some success here collecting the data directly off the wire. I will explain how it works (we are using a DNS tunneling PCAP, but C2 and exfiltration will show similar behavior):

  • First we whitelist common CDNs and common domains such as Microsoft, Akamai, my internal intranet namespace, etc.
  • We collect root domains and start counting the number of subdomains we observe for each.
    • In the example below, we see pirate.sea, and we increment the count each time we observe a subdomain.
  • If a root domain has a count of over 50 subdomains within a 30-second period, we account for it (hence the dashboard below).

The idea behind this inspection trigger is that if the root domain is NOT a CDN, not my internal namespace and not a common domain like Google or Microsoft, WHY THE HELL DOES THE CLIENT HAVE 24K LOOKUPS? Using logs, this is done via a batch process; using wire data, we uncover the suspicious behavior within 30 seconds. Does that mean you don’t need logs, or that the ingenious work done by OpenDNS isn’t useful? Hell no, this simply augments the log-based approach to give you a more agile tool to engage directly with an issue as it is happening. I am certain that even the folks at OpenDNS would find value in getting an initial screening within 30 seconds. In my experience, with good whitelisting, the number of false positives is not overly high. Ultimately, if a single client makes 24,500 DNS lookups for a domain that you don’t normally do business with, it’s worth investigating. Using this method we routinely catch malware, adware, and unapproved 3rd-party apps that think they are clever by using DNS to phone home (yes, YOU, Dropbox).
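For the curious, here is a minimal, self-contained sketch of the counting logic in plain JavaScript (not literal trigger code; in a real trigger, DNS request events would drive onDnsLookup, and the whitelist entries are placeholders):

// Sketch of the subdomain-counting heuristic described above.
var WHITELIST = { 'akamai.net': true, 'microsoft.com': true, 'google.com': true };
var THRESHOLD = 50;    // subdomains per root domain...
var WINDOW_SECS = 30;  // ...per 30-second window

var counts = {};
var windowStart = 0;

function onDnsLookup(qname, nowSecs) {
  // Reset the counters at each 30-second boundary.
  if (nowSecs - windowStart >= WINDOW_SECS) {
    windowStart = nowSecs;
    counts = {};
  }
  var labels = qname.split('.');
  if (labels.length < 3) return;            // nothing below the root domain
  var root = labels.slice(-2).join('.');    // naive root-domain extraction
  if (WHITELIST[root]) return;              // CDNs, internal namespace, etc.
  counts[root] = (counts[root] || 0) + 1;
  if (counts[root] === THRESHOLD) {
    console.log('Suspicious: ' + root + ' hit ' + THRESHOLD +
                ' subdomain lookups in ' + WINDOW_SECS + 's');
  }
}

// A tunneling client: pirate.sea trips the threshold within one window.
for (var i = 0; i < 60; i++) onDnsLookup('c' + i + '.pirate.sea', 100);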


SIEM products are a linchpin for most security teams. For this reason, we support sending data to SIEM platforms such as LogRhythm and Splunk, but we also provide a hand-to-hand combat tool for those SecOps (DevOps) focused teams who want to engage threats directly. In the hand-to-hand world of today’s threats, no platform gives you a sharper knife or a bigger stick than wire data analytics with ExtraHop.

Automation and Orchestration (Digital counter-punching):
In September 2014, GCN asked “is automation security’s only hope?” With the emergence of the “human vector”, what we have learned over the last 18 months is that you can spend ten million dollars on security software, tools and training only to have Fred in payroll open a malicious attachment and undo all of it within a few seconds. As stated earlier in this post, 11% of compromises happen within seconds. All, I hope, is not lost, however: there have been significant improvements in orchestration and automation. At RSAC 2016, Phantom Cyber debuted its ability to counter-punch and won first prize in the Innovation Sandbox. You can go to my YouTube channel and see several instances of integration with OctoBlu, where we use OctoBlu to query threat intelligence and warn us of malicious traffic. But we can go a lot further than this. I don’t think we have to settle for post-mortem detection with logs and batched surveillance (which is still quite valuable for restricting subsequent breach attempts). Automation and orchestration will only be as effective as the visibility you can provide them.

Enter Wire Data:
Using wire data analytics (keep in mind that ours is a world of microseconds), we have the shutter speed to observe and act on today’s threats and thread our observed intelligence into orchestration and automation platforms such as Phantom Cyber and/or OctoBlu, and to do more than just warn. ExtraHop Open Data Stream can securely issue an HTTP POST whereby we send a JSON object with the parameters of who to block, positioning INFOSEC teams to potentially stop malicious behavior BEFORE the compromise. Phantom Cyber supports REST-based orchestration, as does Citrix OctoBlu; most newer firewalls have APIs that can be accessed, as does Cisco ACI. The important thing to remember is that these orchestration tools and next-generation hardware APIs need to partner with a platform that can not only observe the malicious behavior but thread the intel into these APIs, positioning security teams for tomorrow’s threats.
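As a sketch of what that POST might look like from a trigger: Open Data Stream’s Remote.HTTP interface is the documented mechanism, but the target name, endpoint and payload shape below are invented for illustration.

// Hypothetical sketch: push a block request to an orchestration platform
// (e.g. Phantom Cyber or OctoBlu) via an Open Data Stream HTTP target.
var offender = Flow.client.ipaddr.toString();

Remote.HTTP('phantom').post({
  path: '/api/v1/block',                        // invented endpoint
  headers: { 'Content-Type': 'application/json' },
  payload: JSON.stringify({
    action: 'block',
    src_ip: offender,
    reason: 'Unexpected SSH to Web Farm'
  })
});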

My dream integrations include:

  • Upon observing FastFluxy behavior, sending OpenDNS an API call that resolves the offending domain to 127.0.0.1 or a warning page
  • Putting a MAC address in an ACI “penalty box” (quarantine endpoint group) when we see it accessing a system it is not supposed to
  • Sending an API call to the Cisco ASA API to create an ACL blocking a host that just nmapped your DMZ

As orchestration and automation continue to take shape within your own practices, please consider what kind of visibility is available to them. How fast you can observe actionable intelligence has a direct effect on how effective your orchestration and automation endeavors are. Wire data analytics with ExtraHop has no peer when it comes to the ability to set the conditions that make a transaction actionable and to act on it. Orchestration and automation vendors will not find a better partner to make their products better than ExtraHop.

Conclusion:
The threat landscape is drastically changing, and the tools in the industry are rapidly trying to adapt. An orchestration tool is not effective without a good surveillance tool, and a wire data analytics platform like ExtraHop is made better when coupled with an orchestration tool that can effectively receive REST-based intel. The solution to tomorrow’s threats will not involve a single vendor, and the ability to integrate platforms using APIs will be key to implementing tomorrow’s solutions. The ExtraHop platform is the perfect visibility tool to add to your existing INFOSEC portfolio. Whether you are looking to map out a Cisco ACI implementation or want to thread wire data analytics into your Cisco Tetration investment, getting real-time analytics and visibility will make all of your security investments better. Wire data analytics will become a key part of any security team’s arsenal, and the days of closed platforms that cannot integrate with other platforms are coming to an end.

There is no security puzzle where ExtraHop’s Wire Data Analytics does not have a piece that fits.

If you’d like to see more, check out my YouTube channel:
https://www.youtube.com/playlist?list=PLPadIDS3iteYhQuFhWy2xZemdIFzMNtpr

Thanks for reading

John Smith

SACCP: Stream Analytics Critical Control Point

I left the enterprise approximately 30 months ago, after being a cubicle drone for 18 years. I now work for ExtraHop Networks, a software company that makes a wire data analytics platform providing organizations with operational intelligence about their applications and the data that traverses their wire, basically shining light on the somewhat opaque world of packet analysis.

In the last few years, I can honestly say that I find myself getting a bit frustrated with the number of breaches that have occurred, due in large part, in my opinion, to the lack of involvement by system owners in their own security. In my household alone, over the last 24 months, we are on our 5th credit card (in fact, I look at the expiration dates on most of my credit cards and chuckle on the inside, knowing I will never make it). I am also a former federal employee with a clearance, so I have the added frustration of knowing several Chinese hackers likely had access to my SF86 information (basically my personal and financial life story). In the last 15 years, we have added a range of regulatory frameworks and Security Operations Centers (SOCs), and I have watched INFOSEC budgets bulge while I had to justify my $300 purchase of Kiwi Syslog Server. I have concluded that maybe the time has come for the industry to try a new approach. The breaches seem to get bigger, and no matter what we put in place, insiders or hackers just move around it. At times I wonder if a framework I learned in my career prior to information technology may be just what the industry needs.

My first job out of college was with Maricopa County Environmental Health (I was the health inspector), where I was introduced to a concept called HACCP (Hazard Analysis Critical Control Point), and I think some of what I learned from it can be very relevant to analyzing today’s distributed and often problematic environments.


HACCP, pronounced “hassup”, is a methodology for ensuring food safety through a series of processes designed so that, in most cases, no one gets sick from eating your food. It involves evaluating the ingredients of each dish, determining which foods are potentially hazardous, and establishing what steps need to be taken to ensure that quality is maintained from food prep to serving.

While working as the health inspector, I was required to visit every permit holder twice a year and perform a typical inspection: taking temperatures, making sure they had hot water, that employees washed their hands and stayed home when they were sick, etc. But in most if not all of the restaurants I inspected, the process of checking temperatures, ensuring there is soap at the hand wash station and making sure there is hot water did not JUST happen during an inspection; I knew that in most cases it went on even when I was not on the premises. Sadly, in today’s enterprise, systems are generally only checked and/or monitored when an application team is being audited. An incumbent INFOSEC team cannot be responsible for the day-to-day security of a shared services or hosting team’s applications any more than I could be in every single restaurant every single day. The operator has to take responsibility; I am proposing the same framework for today’s enterprise. Shared services and hosting teams need to take responsibility for their own security and use INFOSEC as an auditing and escalation solution. I will attempt to show, in parallel, how ExtraHop’s stream analytics solution provides an easy way to accomplish this even in today’s skeleton-crew enterprise environments.

Let’s start with some parallels.

An example of a HACCP-based SOP would be:

  • The cooling of all pre-cooked foods will ensure that foods are cooled from 135 degrees to 70 degrees within two hours
  • The entire cooling process from 135 degrees to 41 degrees will not take more than 6 hours.

So, I am taking away the “H” and putting in an “S” for SACCP. I am proposing that we do the same for the applications and systems that we support, at the packet level. Just as dishes may have chicken, cheese and other potentially hazardous ingredients, applications may have SSO logins, access tokens, and PII being transferred between the DB and middle or front-end tiers. We need to understand each part of an infrastructure that represents risk to an application, what an approved baseline is, what mitigation steps to take and who is responsible for maintaining it. Let’s take a look at the 7 HACCP/SACCP principles.

Principle 1 – Conduct a Hazard Stream Analysis
The application of this principle involves listing the steps in the process and identifying where there may be significant risk. Stream analytics will focus on hazards that can be prevented, eliminated or controlled by the SACCP plan. A justification for including or excluding each hazard is recorded, and possible control measures are identified.

Principle 2 – Identify the Critical Control Points
A critical control point (CCP) is a point, transaction or process at which control (monitoring) can be applied to ensure compliance and, if needed, a timely response to a breach.

Principle 3 – Establish Critical Limits
A full understanding of the acceptable thresholds, ports and protocols of specific transactions will help with identifying when a CCP is outside acceptable use.

Principle 4 – Monitor Critical Control Point
Monitor compliance with CCPs using ExtraHop’s stream analytics Discover and Explorer appliances to ensure that communications stay within the expected and approved ports and protocols established for each CCP.

Principle 5 – Establish Corrective Action
Part of this is understanding not only what to do when a specific critical control point is behaving outside the approved limits, but also who owns the systems involved in each CCP. For example, if a middle-tier server covered by a critical control point is suddenly SCPing files out to a server in Russia, establish who is responsible for ensuring that this is reported and escalated as soon as possible, as well as what will be done in the event a system appears to be compromised.

Principle 6 – Record Keeping
Using the ExtraHop Explorer appliance, custom queries can be set up and saved to ensure proper compliance with the established limits. Integration with an external SIEM for communications outside the established limits can also be enabled, as can HTTP push and alerting.

Principle 7 – Establish Verification
Someone within the organization, either the INFOSEC team or team lead/manager must verify that the SACCP plan is being executed and that it is functioning as expected.

So what would a SACCP strategy look like?

Let’s do a painfully simple exercise using both the ExtraHop Discover Appliance and the ExtraHop Explorer Appliance to create a Stream Analytics Critical Control Point profile.

Scenario: We have a Network that we want to call “Prod”.

Principle 1: Analysis
Any system with an IP address starting with “172.2” is a member of the Prod network. There should ONLY be INGRESS sourced from the outside (the Internet) and peer-to-peer communications between Prod hosts. No system on the Prod network should establish a connection OUTSIDE Prod.

Principle 2: Identify CCPs
In this case, the only Critical Control Point (CCP) is the Prod network.

Principle 3: Limits
As stated, the limits are that Prod hosts can accept connections from the outside, BUT they should not establish any sessions outside the Prod network.

Principle 4: Monitoring

Using the ExtraHop Discover Appliance (EDA), we will create a trigger that identifies transactions based on the logical network names of their given address space and monitors both the ingress and egress of these networks.

In the figure below, we outline how we set a logical boundary to monitor communications. In this manner we can lay the groundwork for monitoring the environment by first identifying which traffic belongs to which network.

  • You can see on line 5 of the trigger below that we are establishing which IP blocks belong to the source (egress) networks.
  • On line 11, we identify the Prod network as a destination (ingress).

*Important: you DO NOT have to learn to write triggers, as we will write them for you. But we are an open platform and we provide an empty canvas to customers who want to paint their own masterpiece, so we are showing you how we do it.
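Since the screenshot does not reproduce well here, the sketch below captures the idea of the trigger (the line numbers in the bullets above refer to the screenshot, not to this sketch; the Flow properties and commitRecord() call follow ExtraHop’s published trigger model, but verify the exact names and record fields before borrowing any of it):

// Hypothetical sketch of the Prod boundary classification.
function networkOf(ip) {
  // Per Principle 1: anything starting with "172.2" is Prod.
  return ip.indexOf('172.2') === 0 ? 'Prod' : 'External';
}

var src = networkOf(Flow.client.ipaddr.toString());  // egress side
var dst = networkOf(Flow.server.ipaddr.toString());  // ingress side

// Commit a record to the Explorer appliance with the classified
// source/destination plus flow details (field names are illustrative).
commitRecord('prod_boundary', {
  source: src,
  destination: dst,
  client: Flow.client.ipaddr.toString(),
  server: Flow.server.ipaddr.toString(),
  serverPort: Flow.server.port
});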

 

Next we will leverage the ExtraHop Explorer Appliance (EXA) to demonstrate where the traffic is going. You will see on line 28 (although commented out) that we commit several metrics to the EXA, such as source, destination, protocol and bytes. This completes Principle 4 and allows us to monitor the Prod network. In the figure below, you will see that we are grouping by “Sources”. You will note that Prod has been successfully classified and has over one million transactions.

 

 

Principle 5: Establish Corrective Action
Well, in our hypothetical Prod network, we have noted some anomalies. As you can see below, when we filter on Prod as the source and group by the destinations, we see that 15 of our nearly 1.3 million transactions were external. In most situations this would go largely unnoticed by several tools; however, using SACCP and ExtraHop’s stream analytics platform, the hosting team or SOC is positioned to easily see that there is an issue and begin the process of escalating or remedying it with further investigation.

*Note: we can easily create an alert that warns teams when a transaction occurs outside the expected set of transactions. We also have a RESTful API that can be interrogated by existing equipment to surface anomalies.

 

 

Digging Deeper:
As we dig a little deeper by pivoting to the layer 7 communications (demonstrated in the video below), you will note that someone has uploaded a file to an external site at 14.146.24.124. Depending on what was in that file and on existing policies, the mitigation could involve a cardboard box and a visit from the security guard.

 

Principle 6: Establish Record Keeping
The ExtraHop Discover Appliance can send syslog messages to an incumbent SIEM system as well as perform a RESTful push. There is also a full alerting suite that can alert via email or SNMP trap. In most enterprises there is already an incumbent record-keeping system, and the ExtraHop platform has a variety of ways to integrate with it.
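As a sketch of what the syslog side of that record keeping could look like in the same trigger (Remote.Syslog is ExtraHop’s documented open data stream interface for syslog, but treat the method name and message format as assumptions):

// Hypothetical sketch: forward Prod policy violations to the incumbent SIEM.
var clientIsProd = Flow.client.ipaddr.toString().indexOf('172.2') === 0;
var serverIsProd = Flow.server.ipaddr.toString().indexOf('172.2') === 0;

if (clientIsProd && !serverIsProd) {
  Remote.Syslog.alert('SACCP violation: ' + Flow.client.ipaddr + ' -> ' +
                      Flow.server.ipaddr + ':' + Flow.server.port);
}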

 

Principle 7: Verification
Someone should provide oversight of the SACCP plan and ensure that it is being executed and having the desired results. This can be either INFOSEC team management or hosting team management, but someone should be responsible for ensuring that the shared services team(s) is (are) following the plan.

 

Conclusion:
The time has come for a new strategy. In several other industries where there is a regulatory framework for safety, compliance and responsibility, there exists a culture of operators taking responsibility for ensuring that they are compliant. The enterprise is over 30 years old, and just as the health inspector cannot be in every restaurant every day and a policeman cannot be on every street corner, the time has come for the IT industry to ask system owners to take some of the responsibility for their own security.

 

Thanks for reading and please check out the video below.

John

ADDENDUM!!!  (PUNKBUSTER OPTION!)

I wanted to take the time to show the next iteration of this. I call it precision punk busting… “err”… I mean, Packet Capture.

The ExtraHop Discover Appliance has a feature called Precision Packet Capture. Within the same narrative described above, I have edited my trigger to take a packet capture any time the policy is violated. If you recall, I wanted to ensure that my Prod network ONLY communicated within the Prod network. I added the following JavaScript to my trigger, and you will see that I have instructed the appliance to kick off a packet capture in the event the policy is violated.

Busta_PCAP_Trigger
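The screenshot aside, the addition amounts to a few lines. Here is a hypothetical sketch (ExtraHop documents a precision packet capture call for triggers, but treat the exact function name and arguments as assumptions to check against the trigger API reference):

// Hypothetical sketch: capture packets when a Prod host goes outside Prod.
var src = Flow.client.ipaddr.toString();
var dst = Flow.server.ipaddr.toString();

if (src.indexOf('172.2') === 0 && dst.indexOf('172.2') !== 0) {
  // Kick off a precision packet capture of this flow for forensics.
  Flow.captureStart('prod-policy-violation-' + src);
}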

As a result of the FTP traffic out to the internet, we notice that we have a PCAP waiting for us, indicating that a system has violated the Prod policy.

Busta_PCAP

 

We can also alert you that you have a PCAP waiting for you, either via syslog, SNMP or email. This PCAP can be used for forensics, as digital evidence against an insider, or as a way to verify just what the “F” just happened.

Having this information readily available, and alerting either a system owner or the SOC team that a policy was violated, is a much easier surveillance method than sorting through terabytes of logs or sifting through a huge PCAP file to find what you want. Here we ONLY write PCAPs for those instances that violate the policy.

Thanks for reading!

Happy punk busting!!!

Thanks

John

Advanced Persistent Surveillance: SSH Connectivity

Today I read about a Juniper announcement that unauthorized code in Juniper firewalls can allow an attacker to listen in on conversations, even decrypting communications by using your firewall as a man-in-the-middle. A second announcement, unrelated according to the company, concerned a pair of exploits: one that allows an attacker telnet or ssh access into the device, and another whereby a “knowledgeable” user could decrypt VPN traffic once the firewall had been compromised. While they say there is no way to tell if you have been a victim of this exploit, there are some ways to check for malicious activity on your devices, and you CAN do so without sifting through a terabyte of log files.

Most Juniper customers will shut telnet off in favor of ssh, so I will focus on how to use wire data analytics to monitor for potential malicious behavior over ssh.

First, I am not what you would call an INFOSEC expert. I worked in IT security for a few years handling event correlation and some perimeter stuff, but I firmly believe that anyone who is responsible for ANYTHING with an IP address should consider themselves a security practitioner, at least for the systems under their purview. I would consider myself more of a “packet jockey”: I am a Solutions Architect for a wire data analytics company, ExtraHop Networks, and I have spent the better part of the last two years sitting at the core of several large enterprises, looking at packets and studying behavior. Today I will go over why it is important to monitor any and all ssh connections, and I will discuss why logs aren’t enough.

Monitoring SSH:
While the article states “There is no way to detect that this vulnerability was exploited”, I would say that if you see a non-RFC1918 address SSHing into your firewall, something needs to be explained. Currently, most teams monitor ssh access by syslogging all access attempts to a remote syslog server where they can be observed, hopefully in time to notify someone that there has been unauthorized activity. The issue here is that once someone compromises the system, if they are worth their weight in salt, the first thing they will do is turn off logging. In addition, sifting through syslogs can be daunting and time-consuming, and at times does not deliver the kind of agility needed to respond to an imminent threat.

Enter ExtraHop Wire Data Analytics:


What I like about wire data analytics is that you are not dependent on a system to self-report that it has been compromised. Simply put, you cannot hide from ExtraHop; we will find you! Yes, you can breach the Juniper firewall (or any other ssh-enabled device) and shut logging off, but you cannot prevent us from seeing what you are doing.

*(I am assuming you can shut logging off. I know you can on ASAs, but I have never administered a Juniper firewall, so don’t quote me on that; most devices have to be told what and where to log.)

On the wire, there is nowhere to hide: if you are coming from an IP address and you are connecting to another IP address, you’re busted, whether you are running a “select * from …” on the production database server, SCPing the company leads to your home FTP server or compromising a firewall. ExtraHop offers several ways to monitor ingress and egress traffic; today I am going to discuss how we can monitor ssh via the Discover Appliance, as well as how to leverage our new big data platform, the Explorer Appliance.

Using the Discover Appliance to monitor SSH Traffic:

One of the first and easiest ways to check whether anyone has ssh’d into your firewall is to simply browse to the device in the UI, go to L7 protocols and look for SSH.


You can also build a quick dashboard showing who has ssh’d into the box and make it available for your SOC to view and alert on. The dashboard below shows the number of inbound SSH packets: you see the source IP of 172.16.243.1 as well as 23 inbound packets. We can also show you outbound packets.

This can all be done literally within 5 minutes, and you can have total visibility into any ssh session that connects to your Juniper firewall, or ANY ssh-enabled device, or ANY device over ANY port or protocol.

Can we have an alert? Yes, ExtraHop has a full alerting system that allows you to alert on any ssh connection to your gateway devices.

Monitoring SSH via the ExtraHop Explorer Appliance:

A few weeks ago, ExtraHop introduced its Explorer Appliance. This is an accompanying appliance that allows you to write flows and layer 7 metrics to a big data back end as part of a searchable index. In this example I will be speaking specifically about “flow” records. ExtraHop can surgically monitor any port for any device and write the records out to the Explorer appliance. Since flow records are very intense, we do not log them automatically; we recommend that you enable them on a per-host basis from the Admin console. Once added, any and all communications will be audited and searchable for that host.

To audit ssh connectivity of our Juniper firewall, we will go to the discovered device and select the parent node. From there, on the right-hand side, you will see an “ExtraHop ID:” (note the search textbox above it).


You will paste the device ID into the search box and click “Global Records Query”.


This will be the initial default filter; you will then add a 2nd filter, as seen below, by setting the receiver port to 22.


Now that you have the ExtraHop device ID and port 22 set as filters, you can see below that you are able to audit, both in real time and historically, any and all ssh sessions to your Juniper firewall, or any other device you wish to monitor, on any other port. You can save this query and come back to it periodically as a method of ongoing auditing of your firewall and ssh traffic.


What am I looking for here?
For me, I would be interested in any non-RFC1918 addresses, the number of bytes and the source host. If you notice that it is a laptop from the guest wireless network (or the NATed IP of the access point), then that may be something to be concerned about. As I stated earlier, while the announcement said that you cannot tell if the exploit has been used, consistent auditing using wire data gives attackers no place to hide if they do compromise your ssh-enabled appliance, and it is generally a good idea to monitor ssh access. In the real-time grid above, you can see the sender “oilrig.extrahop.com” is ssh’d into our Juniper firewall. It does not matter if the first thing they do is shut off logging, or if it is an insider who controls it: they can’t hide on the wire.

ExtraHop offers a full alerting suite that can whitelist specific jump boxes and hosts and provide visibility into just those hosts you do not expect to see ssh’d into any system you have, as well as the ability to monitor any other ingress or egress traffic that may look out of the ordinary (example: a SQL Server FTPing to a site in China, or someone accessing a hidden share that is not IPC$).
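As a sketch of what that whitelisting could look like in a trigger (hypothetical illustration only; the jump-box addresses are invented and the Flow property names should be checked against the trigger API reference):

// Hypothetical sketch: alert on ssh from anything that is not an approved jump box.
var JUMP_BOXES = { '172.16.243.10': true, '172.16.243.11': true };

if (Flow.server.port === 22 && !JUMP_BOXES[Flow.client.ipaddr.toString()]) {
  debug('Unexpected ssh: ' + Flow.client.ipaddr + ' -> ' + Flow.server.ipaddr);
}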

Conclusion:
At the end of the day, the next big breach will involve one host talking to another host it was not supposed to, whether that is my credit card number being SCP’d to Belarus or my SF86 form being downloaded by China. Advanced Persistent Surveillance of critical systems is the best way to prepare yourself and your system owners for tomorrow’s breach. While I am very thankful to the INFOSEC community for all that they do, for a lot of us, by the time a CVE is published, it is too late. The next generation of digital vigilance will involve hand-to-hand combat, and no one will give you a sharper knife or a bigger stick than wire data analytics with ExtraHop.

Thank you for Reading!

John