Category Archives: Layered Security

The Money is in the Mash-Up: RESTful mash-ups to help under-staffed INFOSEC teams

In this post, we will couple ExtraHop's wire data analytics, Anomali STAXX (a leading threat intelligence solution) and Slack (a cloud-based collaboration platform) to demonstrate how orchestration and automation can help today's under-(wo)manned security teams meet today's threats with the agility they need!

I was fortunate enough to be selected to speak at RSAC 2017, and it was surely a career highlight for me. As several analysts pointed out post-show, automation and orchestration seemed to be the flavor of the year. Over the last 36 months it has become glaringly obvious that we simply cannot keep bad actors and malicious software off of our networks; I have been preaching the folly of perimeter-only security since 2010. The speed with which systems are now compromised, and the emergence of the "human vector" through phishing, has all but assured us that the horde is behind the wall and needs to be directly engaged. Relying on logs and SIEM products will give you a forensic view of what is going on, but it will do little against today's threats, where a system can be compromised by the time the log is written.

While the idea of automation and orchestration is a great one, it has its issues, and this will not be the first time "self-defending networks" have been brought to market. Bruce Schneier makes a very good point on his "Schneier on Security" blog when he states the following:

"You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cyber security." He also makes one of the greatest quotes in INFOSEC history when he states, "Data does not equal information and information does not equal understanding."


Perhaps the battle here is to get to a place of certainty. I too was once an advocate of "log everything and sort it out later," but the process of sorting through the data became extremely tedious, and the amount of work it took to get to "certainty," I believe, gave bad actors time to operate while I wrote SQL queries, batch processes and parsing scripts for my context-starved data sets. Couple this with teams being digitally bludgeoned to death with alerts and warnings, and the "INFOSEC death sentence" starts to take root as people become desensitized to the alerts.

So where do we find certainty and how do we use it?
While the industry is still developing, there have been great strides in threat intelligence. ISACs around the world are working together to build shared intelligence around specific threats and making the information readily available via TAXII, STIX and CIF. There is even a confidence level associated with each record that we can use as a guide to determine whether a specific action is needed. The challenge with good threat intelligence is how we make it usable. Currently, most threat intel is leveraged in conjunction with a SIEM or logging product. While I certainly advocate for logs, they have some limitations:

  • Not everything logs properly (IoT Systems normally have NO logging at all)
  • You have a data gravity issue (you have to move the data into the cloud to be evaluated or you have to store petabytes of data to evaluate)
  • In some cases, only a small portion of the log is usable (but you pay to index the entire log with most platforms)
  • Their use is largely forensic with many of today’s threats

The case for Wire Data Analytics:
The key difference that I want to point out here is that, using wire data analytics with ExtraHop, you can perform quite a bit of analysis in flight. ExtraHop "takes" data off the wire and is not dependent on another system to "give" the data to it; the only prerequisite for ExtraHop is an IP address. Examples of how I have made a SIEM more effective using wire data include:

  • Reducing log volume fifty-fold by counting logins by IP on the wire, THEN sending a syslog message to the SIEM only for those IPs with more than 100 logins, vs. sending tens of thousands of logs per minute to the SIEM and checking on the back end
  • Checking an EGRESS transaction against threat intelligence, THEN sending the syslog only if there is a match
  • In an enterprise with tens of thousands of employees, rather than logging EVERY failed login, aggregating records into five-minute increments and sending only those sources with more than 5 login failures to the SIEM (see the sketch below)
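To make the aggregate-then-forward pattern concrete, here is a minimal sketch in plain Python (not ExtraHop trigger code) of that last bullet: count failed logins per source IP in five-minute buckets and emit a single syslog line only for offenders over the threshold. The SIEM host, port and message format are placeholders.

import socket
import time
from collections import Counter

SIEM_HOST, SIEM_PORT = "siem.example.com", 514   # placeholder syslog target
WINDOW_SECONDS = 300                             # five-minute aggregation bucket
FAILURE_THRESHOLD = 5                            # forward only IPs with more than 5 failures

failures = Counter()
window_start = time.monotonic()

def record_failed_login(client_ip):
    """Called once per observed login failure; counts instead of logging each one."""
    global window_start
    failures[client_ip] += 1
    if time.monotonic() - window_start >= WINDOW_SECONDS:
        flush_window()

def flush_window():
    """Send one syslog line per offending IP instead of thousands of raw failure logs."""
    global window_start
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for ip, count in failures.items():
        if count > FAILURE_THRESHOLD:
            msg = f"<134>auth-aggregate: {ip} had {count} login failures in 5 minutes"
            sock.sendto(msg.encode(), (SIEM_HOST, SIEM_PORT))
    sock.close()
    failures.clear()
    window_start = time.monotonic()

The SIEM keeps its existing alerting workflow; it just receives one pre-contextualized record per offender instead of the raw firehose.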

The point here is that you can deliver some context when you leverage wire data analytics with your SIEM workflows. Using a SIEM alone, you must achieve context by aggregating the logs and looking at them after they are written. Using ExtraHop with your SIEM, you achieve context (and, more importantly, get closer to Mr. Schneier's certainty) BEFORE sending the data to the SIEM. You keep all the workflows tied to the incumbent SIEM system; you are just getting better, and fewer, logs. Should I disable an account that has 50 login failures in the last five minutes (locked out or not)? HELL YES! While I don't think automation and orchestration are a panacea, there are SOME cases where the certainty level is high enough to orchestrate a response. Also, I believe automation and orchestration are not just for responding; they can be used to make your SOC more effective.

Now that I have, hopefully, established the merits of using wire data analytics, let's keep in mind that orchestration does NOT have to be a specific action or response. Orchestration can also be used to make your team more agile and, hopefully, more effective. Most security teams I come across have at least one or two, and in some cases three, open positions. The fact is, at a time when threats are becoming more complex, finding people with the skills needed to confront them is harder than ever. The situation has gotten so bad that the other day I typed "Human Capital Crisis" into Google and it auto-filled "in cybersecurity". The job is getting tougher and there are fewer of us doing it. What I am going to show you in this post will never replace a human being, but it might ease some of the heavy lifting that goes into achieving situational awareness.


PHISHING: “PHUCK YOU, YOU PHISHING PHUCKS!!!”
Anyone who has ever been phished, or worked in an organization that is experiencing a phishing/spear phishing campaign, has felt exactly as the section title says. Let's have a look at how we can help our security teams get better data by leveraging the APIs of three unique platforms to warn them when a known phishing site has been accessed.

For those of us who are working too hard to bring context to the deluge of data, my suggestion: get some REST!!! Below I am going to walk you through how I can monitor activity to known phishing sites by doing a mash-up of three technologies using the RESTful APIs of all three platforms.

Solution Roster:

  • ExtraHop Discovery/Explorer appliance
    ExtraHop provides wire data analytics and surveillance by working from a mirror of the traffic. Think of it as a CCTV for packets/transactions.
  • Anomali STAXX Virtual Machine
    Anomali STAXX provides me lists of current threat intelligence. Think of this as equipping the CCTV operator with a list of suspicious characters to look for.
  • Slack Collaboration Community
    Slack provides me a community at packetjockey.slack.com where my #virtualsoc team operates from anywhere in the world.
  • A python peer (Windows or Linux)
    This is the peer system that pulls the threat intelligence off of the STAXX system and uploads it to the ExtraHop appliance.

How it works:
As you can see in the drawing below, the Linux peer uses the REST API to get a list of known phishing sites, then executes a Python script to upload the data into the session table (memcache) on the ExtraHop appliance, equipping it with the threat intelligence it needs. The ExtraHop appliance uses an application inspection trigger that checks every outgoing URI to see if it is a known phishing site. If there is a match, an alert is sent to Slack and Email/SMS, in addition to being logged on the appliance's own dashboards and search appliance.

[Diagram: EH_SLACK_STAXX — the peer, STAXX, ExtraHop and Slack working together]
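To give a feel for the peer's side of that workflow, here is a hedged Python sketch. The STAXX export endpoint, the ExtraHop session-table endpoint and both authentication details are placeholders I have invented for illustration; only the Slack piece uses the standard incoming-webhook format (an HTTPS POST of a JSON payload with a "text" field). Treat it as an outline of the plumbing, not a drop-in integration.

import requests

STAXX_URL = "https://staxx.example.local/api/v1/intelligence"             # hypothetical export endpoint
EXTRAHOP_URL = "https://extrahop.example.local/api/v1/sessiontable/phish"  # hypothetical upload endpoint
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"          # your incoming webhook

def pull_phishing_urls():
    """Pull the current phish_url indicators from the STAXX box (endpoint is illustrative)."""
    resp = requests.post(STAXX_URL, json={"type": "phish_url", "status": "active"}, timeout=30)
    resp.raise_for_status()
    return [item["value"] for item in resp.json()]

def upload_to_extrahop(urls):
    """Load the indicator list into the Discover appliance's session table (illustrative)."""
    resp = requests.put(EXTRAHOP_URL, json={"entries": urls},
                        headers={"Authorization": "ExtraHop apikey=PLACEHOLDER"}, timeout=30)
    resp.raise_for_status()

def alert_slack(client_ip, server_ip, uri):
    """Slack incoming webhook: POST a JSON object with a 'text' field to the hook URL."""
    text = f":rotating_light: {client_ip} reached known phishing site {uri} ({server_ip})"
    requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)

if __name__ == "__main__":
    upload_to_extrahop(pull_phishing_urls())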

What the final product looks like:
From my Linux box (I don't dare go to these sites on my Windows or Mac laptop), I do a "wget" on one of the known phishing sites, and within milliseconds (yes, milliseconds; watch the video if you don't believe me) we get the client IP, server IP and the site that was visited. From here we can find out who owns that client machine and get them to change their password immediately, as well as issue an ACL for the server in case this is a spear phishing campaign targeting specific users. Also, before you ask: "Yes," we can import the list of known malicious email addresses and monitor key executive recipients in case one of them gets an email from a known malicious address. We can also check HTTP referrers against the phish_url threat intelligence.

In the screenshot below, you see my "wget" command and the result at 11:23:53, and you can see that the Slack warning came in at 11:26. If you watch the video you will see it actually takes milliseconds.

I believe that with Slack you can also color-code certain messages and program in that awesome "WTF" emoji (if one exists) for specific messages ExtraHop sends. Also, if you are not comfortable with specific information being sent to Slack, we can configure the appliance to send you a link to a LOCAL URI that ONLY you and your team can access.

Conclusion:
While there is a lot of buzz around orchestration and automation, I believe the pessimism around it is justified. Security teams have been promised a lot over the last few years, and what we have found, especially lately, is that a lot of tried-and-true solutions lack either the shutter speed or the context to be effective. Here we are doing some orchestration and automation, but we are doing so in order to give the HUMAN BEING better information. Our security director made a very good point to me the other day when he said the last thing a security team wants is more data. What we have hopefully shown in this post is that if you have open platforms like Anomali, Slack and ExtraHop, you can craft an automation and orchestration solution that actively helps security teams in a manner that still leverages the nuance and rationalization that only exist in a human being. While there will be solutions that automatically block certain traffic, issue ACLs, disable accounts, etc., we can also use automation to do some of the heavy lifting for today's out(wo)manned security teams.

To get where I think the cyber security space needs to be, it is going to take more than one product/tool/platform. If you have a solution that is closed and does not support any kind of RESTful API or open architecture, unless it fulfills a specific niche, get rid of it. If you are a vendor selling a closed solution, do so at your own peril, as I believe closed systems are destined to go the way of the dinosaur. By leveraging wire data with existing workflows, you can drastically reduce your TTWTF (time to WTF!??) and be better positioned to trade punches with tomorrow's threats.

Thanks so much for reading, please watch the video.

John M. Smith

HOLY SHA1-T!!! Google engineers successful SHA1 Collision attack! Will release source code to the public in late May

This is not exactly news, but in case you have been in a coma for the last two weeks: Google managed to engineer a successful SHA1 collision attack and intends to release the source code on or around May 24th. According to BusinessWire.com, 21% of websites are still using SHA1 certificates. Basically, over 1/5 of the sites on the internet are using a woefully weak hash algorithm, and if they are still doing so near the end of May, they will be doing so with the source code for how to exploit them out in the open.

A colleague of mine once told me, as I was lamenting my frustration at apathy in the enterprise, "Sheep hate the sheep dog more than the wolf." In this case, I see it more as a matter of the sheep dog being so fed up that he or she is basically warning the flock that they will not only be left behind, but that the wolves will be told how to get at them. You may or may not agree with this methodology, but regardless, those who do not heed the warning and fall in with the rest of the flock may find themselves part of the "thinning of the herd," as, most assuredly, the wolves will gather.

One of the challenges in many enterprises is understanding what your exposure is. There are tools that will let you scan systems, but the exposure is two-fold: you could spend hundreds of hours securing your own servers only to be breached via a B2B partner or an IoT device that has a weak certificate. While over 1/5 of the internet is still using SHA1, I am betting that internally it is much worse. If we have learned anything over the last 36 months, it's that the perimeter won't keep folks out, and while the wolves may gather in the DMZ, they will work just as easily in the dark when Fred in payroll opens that email attachment or clicks that picture. As enterprise folks, we own the responsibility for thinning our own flock and keeping our own strays in line.

You may be wondering how you can audit this, internally, externally and across B2B connections. It may not go over well if you called your B2B partners and told them you were going to start scanning their systems. A solution that allows you to engage in careful surveillance of all SSL transactions and determine the cipher suites and certificates used positions you to determine your entire exposure without scanning or crawling your network or anyone else's. Using ExtraHop's wire data analytics, you can observe your SSL transactions and start the process of getting out of technical debt by fixing the issues one system, network and B2B partner at a time.

As the article cited above notes, getting visibility into your exposure can be difficult:

“The results of our most recent analysis are not surprising” said Kevin Bocek, chief security strategist for Venafi. “Even though most organizations have worked hard to migrate away from SHA-1, they don’t have the visibility and automation necessary to complete the transition. We’ve seen this problem before when organizations had a difficult time making coordinated changes to keys and certificates in response to Heartbleed, and unfortunately I’m sure we are going to see it again.”

If you leverage our wire data analytics platform, you can easily audit your exposure by using one of our canned reports on cipher auditing, or by setting up a quick trigger to audit it yourself.

How it works:
ExtraHop sits back passively and observes network traffic via a mirror (tap, SPAN, etc.). Within this application inspection trigger I am doing the following:

  • Checking to see if the certificate signature contains "SHA1" anywhere (the sketch below shows the logic).
  • If it does, I write the record to our EXA Elastic appliance, where I can get a quick look at what my exposure to SHA1 is, from both clients and servers. I can also see who issued the weak certificate (be ready to call your IoT vendors and shrink-wrapped software vendors).
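The trigger itself is a handful of lines of ExtraHop JavaScript; the logic it implements, sketched here in Python for readability, is just a substring test on the certificate's signature algorithm plus a record write. The field names and the commit_record stand-in are mine, not the appliance's API.

import ipaddress

def commit_record(record):
    """Stand-in for the appliance's record-commit call; here we just print."""
    print(record)

def audit_certificate(client_ip, server_ip, signature_algorithm, issuer):
    """Flag any TLS handshake whose certificate signature involves SHA-1."""
    if "SHA1" not in signature_algorithm.upper().replace("-", ""):
        return None          # modern signature, nothing to record
    record = {
        "record_type": "SHA1 Audit",
        "client": client_ip,
        "server": server_ip,
        "issuer": issuer,    # who issued the weak certificate
        "internal_server": ipaddress.ip_address(server_ip).is_private,  # RFC1918-style split
    }
    commit_record(record)
    return record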


Once we have started writing the information to the ExtraHop Explorer Appliance (EXA), we can get an idea of what our exposure is in two clicks (after allowing sufficient time to build the data set).

Click 1: Select “SHA1 Audit” from the Record Type combo box.


Click 2: In the “Group By” combo box, select “Server IP Address”
Below you see a list of every server using SHA1. Some of them are internet-facing servers and some are internal systems. If we want, we can separate internal systems within the trigger using the .isRFC1918 check and only look at our internal systems.


Conclusion:
Over the next two months, it will be important to determine our exposure to SHA1, not only for the servers we have on the internet, but internally and within the cloud providers we use. Moore's law dictates that the hardware and computing power used to break cryptography will continue to get better and cheaper. SHA1 had a run of over 20 years (although it has been considered weak for the last few years); cryptographic algorithms becoming obsolete is part of the digital cycle of life. It took me a few minutes to write the trigger you see here, and we already have canned auditing tools in the ExtraHop bundle gallery.

Getting visibility this quickly, and getting an idea of our risk this easily, is why we "wire data".

Thanks for reading!

John

The Case for Wire Data: Security

During the second week of February I had the honor of delivering two speaking sessions at the RSA Conference in San Francisco. One was on ad hoc threat intelligence, and the second was a Birds of a Feather round-table session called "Beyond Logs: Wire Data Analytics". While it was a great conference, I found that you get some strange looks at a security conference when you are walking around with a badge that says "John Smith". In both sessions, a key narrative was the effectiveness of wire data analytics and its ability to give security teams the agility needed to combat today's threats. In this post I would like to make the case for wire data analytics and demonstrate its effectiveness as another tool alongside your IDS/IPS and log consolidation.

Wire Data Analytics:
Most security professionals are familiar with wire data already. Having used intrusion detection and prevention software for nearly 20 years now, concepts such as port mirroring and SPAN aggregation are already native to them; INFOSEC professionals are some of the original wire data analytics practitioners. We differ from IDS/IPS platforms in that we are not looking specifically at signatures; rather, we are rebuilding layer 2-7 flows. There are several areas where we can help INFOSEC teams: providing visibility into the first two SANS critical security controls, augmenting logs and increasing visibility, and providing a catalyst for ongoing orchestration efforts.

SANS Top 2 Security Critical Controls:
From Wikipedia we have the following list making up the SANS top 20 critical security controls (click the image if you, like me, are middle-aged and can't see it):

In our conversations with practitioners we commonly hear, "if we could JUST get an inventory of what systems are on the network." As virtualization and automation have matured over the years, the ability to mass-provision systems has made security teams' jobs much harder: there can be as much as a 15% difference in the number of nodes on a single /24 CIDR block from one day to the next, hell, from one hour to the next. Getting a consistent inventory with current technologies generally involves a response to an SNMP sweep, ping, WMI mining or NMAP scan. As we have seen with IoT devices, many of them don't have MIBs or WMI libraries, and in most (if not all) cases, no logs. Most malicious doers prefer to do their work in the dark; if detected, they will try to use approved or common ports to remain unseen.

“All snakes who wish to remain in Ireland, please raise your right hand….” Saint Patrick

A compromised system responding to an SNMP walk, ping or WMI connection, or volunteering what it is doing, is about as likely as a snake raising its right hand.

How ExtraHop works with the top 2 SANS controls:
Most teams try to engineer down to this level of detail; technologies such as SNMP, NetFlow and logs do a pretty good job of getting 80-90 percent of the hosts, but there are still blind spots. When you are a passive wire data analytics solution, you aren't dependent on someone to "give" data to you; we "take" the data off the wire. This means that if someone shuts off logging or deletes /var/log/*, they cannot hide. A senior security architect once told me, "if it has an IP address, it can be compromised." To that, we at ExtraHop would answer, "if it has an IP address, it can't hide from us." I cannot tell you what the next big breach or vulnerability will be, but what I CAN say with certainty (and trust me, I coined the phrase "certainty is the enemy of reason"; I am NEVER certain) is that it will involve one host talking to another host it isn't supposed to. With wire data, if you have an IP address and you talk to another node that ALSO has an IP address, provided we have the proper surveillance in place: YOU'RE BUSTED!

ExtraHop creates an inventory as it "observes" packets and layer 7 transactions. This positions the security team to account for who is talking on their network regardless of the availability of agents, NetFlow, MIBs or WMI libraries. On top of this, ExtraHop applies a layer of intelligence. Below, you see a collection of hosts and locations as well as a transaction count. What we have done is import a customer's CIDR-block mapping CSV, which then allows us to geocode both RFC1918 and external addresses so that you have a friendly name for each CIDR block. This is a process of reconciling which networks belong to which groups and/or functions. As you can see, we have a few unclassified IP addresses; the workflow here is to identify every IP address and classify its CIDR block until you can fully account for who lives where. This turns getting an accurate inventory from what can be a three-month-or-longer task into a few hours. Once you have reconciled which hosts belong to which functions, you have taken the first step in building your security controls foundation by securing the first control. The lack of this control is a significant reason why many security practices topple over. An accurate inventory is the foundation; to quote the defensivesecurity.org podcast, "ya gotta know what you have first."
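As a rough illustration of that reconciliation step, the sketch below loads a CIDR-to-name CSV and resolves any observed IP, internal or external, to its friendly block name. The two-column CSV layout is an assumption for illustration.

import csv
import ipaddress

def load_cidr_map(path):
    """CSV assumed to hold two columns per row, e.g. '172.16.8.0/24,PCI Web Farm'."""
    blocks = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if row:
                blocks.append((ipaddress.ip_network(row[0].strip()), row[1].strip()))
    return blocks

def classify(ip, blocks):
    """Return the friendly name of the most specific matching block, else flag it for review."""
    addr = ipaddress.ip_address(ip)
    matches = [(net, name) for net, name in blocks if addr in net]
    if not matches:
        return "Unclassified"            # next candidate in the reconciliation workflow
    return max(matches, key=lambda m: m[0].prefixlen)[1]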


SANS Control 2: Inventory of Authorized and Unauthorized Software:
While wire data cannot directly address this control, I tend to interpret it (maybe incorrectly) as covering networked software. While something running on just one host could do significant damage to that host, most of us worry more about data exfiltration, which means the malicious software HAS to do something on the network. Here we look at both layer 4 and layer 7 to provide an inventory of what is actually being run on the systems you have finally gathered an inventory of.

In the graphic below, you see one of our classified CIDR blocks. We have used the designation "Destination" (server) to get an accurate inventory of what ports and protocols are being served up by the systems on this CIDR block (or endpoint group, if you are a Cisco ACI person). Given that I have filtered for transactions served up by our "Web Farm," the expected ports and protocols would be HTTP:8080, SSL:443, HTTP, etc. Sadly, what I am seeing below is someone SSHing into my system, and that is NOT what I expected. While getting to this view took me only two clicks, we can actually do better: we can trigger an alert letting the SOC or CSIRT know that there has been a violation (a sketch of the check follows). Later in this post, we will talk about how we could actually counter-punch this type of behavior using our API.
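The violation check described above reduces to a whitelist of served ports per group. A minimal sketch, with the group names and port sets invented for illustration:

# Ports each server group is expected to serve; anything else is a violation.
EXPECTED_PORTS = {
    "Web Farm": {80, 443, 8080},
    "DB Tier": {1433, 3306},
}

def check_served_port(group, server_ip, server_port):
    """Alert when a server answers on a port its group should never serve (e.g. SSH on a web farm)."""
    if server_port not in EXPECTED_PORTS.get(group, set()):
        return f"VIOLATION: {server_ip} ({group}) served port {server_port}"
    return None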

As far as the SANS 2nd control goes: if I look at a web server and see that it is an FTP client to a system in Belarus, I am generally left to conclude that the FTP is unauthorized. What ExtraHop gives you, in addition to an accurate inventory, is an accounting of what ports and protocols are in use by both the clients and servers on those segments. While this is not a literal solution for the SANS 2nd control, it has significant value in that INFOSEC practitioners can see what is traversing their network and are positioned to respond with alerts or orchestrated remediation.

Layer 7 Monitoring:
In the video below, titled "Insider Hating," you see our layer 7 auditing capability. In this scenario we have set up an application inspection trigger to look for any queries of our EHR database. The fictitious scenario is that we want to audit who is querying our EHR database to ensure that it is not improperly used and that no one steals PHI from it. When an attacker has stolen credentials, or you have an insider, you now have an attack that is going to use approved credentials and approved ports/protocols. This is what keeps CIOs, CISOs and practitioners up at night. We can help, and we HAVE helped, on numerous occasions. Here we are setting up an L7 inspection trigger to look for any ad-hoc-like behavior. In doing so, we position not JUST the security team to engage in surveillance, but the system owners too. This is an ABSOLUTE IMPERATIVE if we want to be able to stop insiders or folks with stolen credentials. We need to do away with the idea that security teams have a crystal ball. When someone runs a "SELECT * FROM EHR" from a laptop in the mail room, we can tell you that it came from the mail room and not the web server. We can also alert the DBA and get system owners to take some responsibility for their own security. This same query, to many security teams, will look like an approved set of creds using approved ports; this same information, viewed by the DBA or system owner, may cause them to fall out of their chair and run screaming to the security team's office. The lack of vigilance by system owners is, in my opinion, the single greatest reason breaches are worse than ever before, in spite of the fact that we spend more money than ever.

[Video: Insider Hating]
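The audit logic behind that trigger, sketched in Python: watch statements against the EHR database, and flag anything that looks ad hoc or that comes from outside the application tier. The regex patterns and the app-tier subnet are assumptions for illustration.

import ipaddress
import re

APP_TIER = ipaddress.ip_network("10.20.30.0/24")      # assumed web/app-server subnet
AD_HOC_PATTERNS = [
    re.compile(r"select\s+\*\s+from", re.I),          # SELECT * FROM ... smells ad hoc
    re.compile(r"\bunion\s+select\b", re.I),
]

def audit_ehr_query(client_ip, statement):
    """Return the reasons (if any) this EHR query deserves a look from the DBA or system owner."""
    reasons = []
    if ipaddress.ip_address(client_ip) not in APP_TIER:
        reasons.append(f"query from unexpected host {client_ip}")   # e.g. the mail-room laptop
    if any(p.search(statement) for p in AD_HOC_PATTERNS):
        reasons.append("ad-hoc query shape")
    return reasons

The point of routing that output to the DBA as well as the SOC is exactly the system-owner vigilance argued for above.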

Augmenting Logs:
I love logs. I love Splunk, LogRhythm and of course my old friend Kiwi!! But today's threats and breaches happen so fast that using logs alone positions you to operate in a largely forensic fashion. In many cases, by the time the log is written and noticed by the SOC, the breach has already happened. Below you see a graphic from the Verizon DBIR stating that 93% of all compromises happen within minutes, 11% within seconds. Using just logs and batch processing to find these threats is great for rooting out patterns and malicious behavior but, as I stated previously, largely forensic. As a wire data analytics platform, we work and live in a world of microseconds; for us, seconds are hours and minutes are days. Current SIEM products, when not augmented with wire data analytics, simply don't have the shutter speed to detect and notify, or orchestrate a timely response.

[Graphic: Verizon DBIR time-to-compromise figures]

Example:
I saw an amazing Black Hat demo on how OpenDNS was using a Hadoop cluster to root out C2 controllers and fast-flux domains. The job involved a periodic batch job using Pig to extract domains with a TTL of 150. Through this process they were able to consistently root out "fast-fluxy" domains and build a new block list.

We have had some success here collecting the data directly off the wire. Here is how it works (we are using a DNS tunneling PCAP, but C2 and exfiltration will have similar behavior):

  • First we whitelist common CDNs and common domains such as Microsoft, Akamai, my internal intranet namespace, etc.
  • We collect root domains and count the subdomains we observe for each.
    • In the example below, we see pirate.sea, and we increment each time we observe a subdomain.
  • If a root domain accumulates a count of over 50 subdomains within a 30-second period, we account for it (hence the dashboard below).

The idea behind this inspection trigger is that if the root domain is NOT a CDN, NOT my internal namespace and NOT a common domain like Google or Microsoft, WHY THE HELL DOES THE CLIENT HAVE 24K LOOKUPS? Using logs, this is done via a batch process; using wire data, we uncover suspicious behavior in 30 seconds. Does that mean you don't need logs, or that the ingenious work done by OpenDNS isn't useful? Hell no. This simply augments the log-based approach to give you a more agile tool to engage directly with an issue as it is happening. I am certain that even the folks at OpenDNS would find value in getting an initial screening within 30 seconds. In my experience, with good whitelisting, the number of false positives is not overly high. Ultimately, if a single client makes 24,500 DNS lookups for a domain that you don't normally do business with, it's worth investigating. Using this method we routinely see malware, adware and unapproved third-party apps that think they are clever by using DNS to phone home (yes, YOU, Dropbox). The counting logic is sketched below.
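Sketched in Python, the counting logic from the list above looks roughly like this: whitelist common roots, count distinct subdomains per root domain in a 30-second window, and surface anything over the threshold. The whitelist entries are examples, and the root-domain extraction is deliberately crude.

import time
from collections import defaultdict

WHITELIST = {"microsoft.com", "akamai.net", "google.com", "example.com"}  # last entry: internal namespace
THRESHOLD = 50          # subdomains per root within one window
WINDOW = 30             # seconds

counts = defaultdict(set)
window_start = time.monotonic()

def root_of(hostname):
    """Crude two-label root extraction; production logic also handles multi-label suffixes."""
    parts = hostname.lower().rstrip(".").split(".")
    return ".".join(parts[-2:])

def observe_lookup(hostname):
    """Feed every observed DNS lookup in; returns suspect roots when the window closes."""
    global window_start
    root = root_of(hostname)
    if root not in WHITELIST:
        counts[root].add(hostname)      # e.g. xj3f9.pirate.sea counts toward pirate.sea
    if time.monotonic() - window_start < WINDOW:
        return []
    suspects = [r for r, subs in counts.items() if len(subs) > THRESHOLD]
    counts.clear()
    window_start = time.monotonic()
    return suspects                     # candidates for the dashboard or an alert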


SIEM products are a linchpin for most security teams. For this reason, we support sending data to SIEM platforms such as LogRhythm and Splunk, but we also provide a hand-to-hand combat tool for those SecOps (DevOps) focused teams who want to engage threats directly. In the hand-to-hand world of today's threats, no platform gives you a sharper knife or a bigger stick than wire data analytics with ExtraHop.

Automation and Orchestration (Digital counter-punching):
In a September 2014 article, GCN asked "is automation security's only hope?" With the emergence of the "human vector," what we have learned over the last 18 months is that you can spend ten million dollars on security software, tools and training only to have Fred in payroll open a malicious attachment and undo all of it within a few seconds. As stated earlier in this post, 11% of compromises happen within seconds. All, I hope, is not lost, however; there have been significant improvements in orchestration and automation. At RSAC 2016, Phantom Cyber debuted their ability to counter-punch and won first prize in the Innovation Sandbox. You can go to my YouTube channel and see several instances of integration with OctoBlu, where we use OctoBlu to query threat intelligence and warn us of malicious traffic. But we can go a lot further than this. I don't think we have to settle for post-mortem detection (which is still quite valuable for restricting subsequent breach attempts) with logs and batched surveillance. Automation and orchestration will only be as effective as the visibility you can provide them.

Enter Wire Data:
Using wire data analytics (keep in mind that ours is a world of microseconds), we have the shutter speed to observe and act on today's threats and thread our observed intelligence into orchestration and automation platforms such as Phantom Cyber and/or OctoBlu, and do more than just warn. ExtraHop Open Data Stream can securely issue an HTTP POST whereby we send a JSON object with the parameters of who to block, positioning INFOSEC teams to potentially stop malicious behavior BEFORE the compromise (a sketch of such a payload follows). Phantom Cyber supports REST-based orchestration, as does Citrix OctoBlu; most newer firewalls have APIs that can be accessed, as does Cisco ACI. The important thing to remember is that these orchestration tools and next-generation hardware APIs need to partner with a platform that can not only observe the malicious behavior but thread that intel into these APIs, positioning security teams for tomorrow's threats.
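To make the counter-punch concrete, here is a sketch of what such a JSON object and its receiver might look like. The field names and the firewall call are my own assumptions for illustration, not a published ExtraHop or vendor schema.

import json

# Illustrative body for the HTTP POST an Open Data Stream target might receive.
block_request = {
    "action": "block",
    "offender_ip": "203.0.113.45",
    "reason": "nmap scan of DMZ observed",
    "ttl_seconds": 3600,                 # let the quarantine expire unless renewed
}

def handle_block_request(body: bytes):
    """Orchestration-side handler: validate the request, then call the firewall API (stubbed)."""
    req = json.loads(body)
    if req.get("action") != "block":
        raise ValueError("unexpected action")
    apply_firewall_acl(req["offender_ip"], req["ttl_seconds"])

def apply_firewall_acl(ip, ttl):
    print(f"would push a deny ACL for {ip} (expires in {ttl}s)")   # stand-in for the vendor API call

if __name__ == "__main__":
    handle_block_request(json.dumps(block_request).encode())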

My dream integrations include:

  • Upon observing FastFluxy behavior, sending OpenDNS an API call that resolves the offending domain to 127.0.0.1 or a warning page
  • Putting a MAC address in an ACI "penalty box" (quarantine endpoint group) when we see it accessing a system it is not supposed to
  • Sending an API call to the Cisco ASA API to create an ACL blocking a host that just nmapped your DMZ

As orchestration and automation continue to take shape within your own practices, please consider what kind of visibility is available to them. How fast you can observe actionable intelligence has a direct effect on how effective your orchestration and automation endeavors are. Wire data analytics with ExtraHop has no peer when it comes to the ability to set the conditions that make a transaction actionable and act on them. Orchestration and automation vendors will not find a better partner to make their products better than ExtraHop.

Conclusion:
The threat landscape is drastically changing, and the tools in the industry are rapidly trying to adapt. An orchestration tool is not effective without a good surveillance tool, and a wire data analytics platform like ExtraHop is made better when coupled with an orchestration tool that can receive REST-based intel. The solution to tomorrow's threats will not involve a single vendor, and the ability to integrate platforms using APIs will become key to implementing tomorrow's solutions. The ExtraHop platform is the perfect visibility tool to add to your existing INFOSEC portfolio. Whether you are looking to map out a Cisco ACI implementation or want to thread wire data analytics into your Cisco Tetration investment, getting real-time analytics and visibility will make all of your security investments better. Wire data analytics will become a key part of any security team's arsenal, and the days of closed platforms that cannot integrate with other platforms are coming to an end.

There is no security puzzle where ExtraHop’s Wire Data Analytics does not have a piece that fits.

If you’d like to see more, check out my YouTube channel:
https://www.youtube.com/playlist?list=PLPadIDS3iteYhQuFhWy2xZemdIFzMNtpr

Thanks for reading

John Smith


Advanced Persistent Surveillance: Threat Intelligence and Wire Data equals Real-time Wire Intelligence

Please watch the Video!!

As the new discipline of threat intelligence takes shape, cyber security teams will need to take a hard look at their existing tool sets if they want to take advantage of dynamic, ever-changing threat intelligence feeds: information on which hosts are malicious, and whether any of their corporate nodes have communicated with any of the malicious hosts, DNS names or hashes collected from their CTI (Cyber Threat Intelligence) feeds. Currently, the most common way I see this accomplished is through logs. Innovative products like AlienVault and Splunk can cross-reference the myriad log files they collect with CTI feeds and check whether there has been any IP-based correspondence with known malicious actors called out by such feeds.

Today I want to talk about a different, and in my opinion better, way of integrating with Cyber Threat Intelligence using wire data and the ExtraHop platform, featuring the Discover and Explorer appliances respectively.

How does it work? Well let’s first start with our ingredients.

  1. A threat analytics feed (open source, subscription, Bro or CIF created text file)
  2. A peer Unix-based system to execute a python script (that I will provide)
  3. An ExtraHop Discover Appliance
  4. An ExtraHop Explorer Appliance

Definitions:

  • ExtraHop Discover Appliance:
    An appliance that passively (no agents) reads data at rates from 1Gbps to 40Gbps. It can also scale horizontally to handle large environments.
  • ExtraHop Explorer Appliance:
    ExtraHop's Elastic appliance that allows for grouping and string-searching intel gathered off the wire.
  • Session Table: ExtraHop's memcache, which allows for instant lookup of known malicious hosts.

The solution works by using the Unix peer to execute a Python script that collects the threat intelligence data and uploads the malicious hosts into the Discover Appliance's session table (up to 32K records; see the sketch after the list below). The Discover appliance then waits to observe a session with one of these known malicious sites. If it sees one, its actions include, but are not limited to, the following:

  • Updates a Threat Intelligence dashboard
  • Triggers an alert that warns the appropriate Incident Response team(s) about the connection to the malicious host
  • Writes a record to the ExtraHop Explorer Device
  • Triggers a Precision PCAP, capturing the entire session/transaction to a PCAP file to be leveraged as digital evidence in the event that "Chet" the security guard needs to issue someone a cardboard box! (Not sure if any of you are old enough to remember "Chet" from Weird Science.)
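For the collection step, a hedged sketch of the Unix peer's job: read a plain-text indicator list (one host per line, '#' for comments), dedupe, and cap the batch at the session table's 32K-record limit before handing it to the Discover appliance. The upload call is stubbed, since the endpoint details are not shown here.

SESSION_TABLE_LIMIT = 32_000    # the Discover appliance session-table cap noted above

def load_indicators(path):
    """One indicator per line; ignore blanks and '#' comments; dedupe while keeping order."""
    seen, hosts = set(), []
    with open(path) as f:
        for line in f:
            host = line.split("#", 1)[0].strip().lower()
            if host and host not in seen:
                seen.add(host)
                hosts.append(host)
    return hosts[:SESSION_TABLE_LIMIT]   # stay under the 32K-record ceiling

def push_to_discover(hosts):
    """Stub for the REST upload into the appliance's session table (memcache)."""
    print(f"uploading {len(hosts)} indicators")

if __name__ == "__main__":
    push_to_discover(load_indicators("threat_feed.txt"))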

[Diagram: ThreatIntel — how the feed, peer and appliances fit together]

Below you see the ExtraHop Threat Intelligence Monitoring Dashboard (last 30 minutes), showing the client/server and protocol, as well as the alert and a running count of violations (this is all 100% customizable):


On the Explorer Appliance, we see the custom record format for Malicious Host Access, and we can note the regularity of the offense.

And finally we have the Precision Packet Capture showing a PCAP file for forensics, digital evidence and, if needed, punk busting.

Conclusion:
The entire process outlined above took less than one minute to complete every single task (dashboard, alert, EXA record, PCAP). According to SecurityWeek, the average time to detect a breach has "improved" to 146 days in their 2015 report. Cyber Threat Intelligence has a chance to drastically reduce the amount of time it takes to detect a breach, but it needs a way to interact with existing data. ExtraHop positions your threat intelligence investment to interact directly with the network, in real time. Many incumbent security tools are not built to accommodate CTI feeds via API, or do not have an open architecture for leveraging threat intelligence, much less use memcache for quick lookups. The solution outlined above, using ExtraHop with a threat intelligence feed, positions INFOSEC teams to perform Advanced Persistent Surveillance without the cost of expensive log-indexing SIEM solutions. Since the data is analyzed in flight and in real time, you have a chance to greatly reduce your time to detection of a breach, and maybe even start the incident response process within a few minutes!

What you have read here is not a unicorn; this exists today. You just need to open your mind to leveraging the network as a data source (in my opinion, the richest one) that can work in conjunction with your log consolidation strategy and maximize your investment in Cyber Threat Intelligence.

Incidentally, the "Malicious Host" you see in my logs is actually wiredata.net. I did NOT want to browse any of the hosts on the blacklist, so I manually added my own host to the blacklist and then accessed it. Rest assured, wiredata.net is not on any blacklists that I am aware of!

Thanks for reading!

John M. Smith

SACCP: Stream Analytics Critical Control Point

I left the enterprise approximately 30 months ago, after being a cubicle drone for 18 years. I now work for ExtraHop Networks, a software company that makes a wire data analytics platform providing organizations with operational intelligence about their applications and the data that traverses their wire, shining light on the somewhat opaque world of packet analysis.

In the last few years, I can honestly say that I find myself getting frustrated with the number of breaches that have occurred due, in my opinion, in large part to the lack of involvement by system owners in their own security. In my household alone, over the last 24 months, we are on our 5th credit card (in fact, I look at the expiration dates on most of my credit cards and chuckle on the inside, knowing the cards will never make it that far). I am also a former federal employee with a clearance, so I have the added frustration of knowing several Chinese hackers likely had access to my SF86 information (basically my personal and financial life story). In the last 15 years, we have added a range of regulatory frameworks and Security Operations Centers (SOCs); I have watched INFOSEC budgets bulge while needing to justify my $300 purchase of Kiwi Syslog Server. I have concluded that maybe the time has come for the industry to try a new approach. The breaches seem to get bigger, and no matter what we put in place, insiders or hackers just move around it. At times I wonder if a framework I learned in my career prior to information technology may be just what the industry needs.

My first job out of college was with Maricopa County Environmental Health (I was the health inspector), where I was introduced to a concept called HACCP (Hazard Analysis Critical Control Point). I think some of what I learned from it can be very relevant to analyzing today's distributed and often problematic environments.


HACCP, pronounced "hassup," is a methodology for ensuring food safety through a series of processes designed to make sure, in most cases, that no one gets sick from eating your food. It involves evaluating the ingredients of each dish, determining which foods are potentially hazardous, and defining the steps needed to ensure that safety is maintained from food prep to serving.

While working as the health inspector, I was required to visit every permit holder twice a year and perform a typical inspection: taking temperatures, making sure they had hot water, checking that employees washed hands and stayed home when sick, etc. But in most, if not all, of the restaurants I inspected, the checking of temperatures, the soap at the hand-wash station and the hot water did not JUST happen during an inspection; in most cases it went on even when I was not on the premises.

Sadly, in today's enterprise, systems are generally only checked and/or monitored when an application team is being audited. An incumbent INFOSEC team cannot be responsible for the day-to-day security of a shared services or hosting team's applications any more than I could be in every single restaurant every single day. The operator has to take responsibility, and I am proposing the same framework for today's enterprise: shared services and hosting teams need to take responsibility for their own security and use INFOSEC as an auditing and escalation function. I will attempt to show how ExtraHop's stream analytics solution provides an easy way to accomplish this, even in today's skeleton-crew enterprise environments.

Let’s start with some parallels.

An example of a HACCP based SOP would be:

  • The cooling of all pre-cooked foods will ensure that foods are cooled from 135 degrees to 70 degrees within two hours
  • The entire cooling process from 135 degrees to 41 degrees will not take more than 6 hours.

So, I am taking away the "H" and putting in an "S" for SACCP: I am proposing that we do the same for the applications and systems we support, at the packet level. Just as dishes may contain chicken, cheese and other potentially hazardous ingredients, applications may have SSO logins, access tokens, and PII being transferred between the DB and middle or front-end tiers. We need to understand each part of an infrastructure that represents risk to an application, what an approved baseline is, what mitigation steps to take and who is responsible for maintaining it. Let's take a look at the 7 HACCP/SACCP principles.

Principle 1 – Conduct a Hazard Stream Analysis
The application of this principle involves listing the steps in the process and identifying where there may be significant risk. Stream analytics will focus on hazards that can be prevented, eliminated or controlled by the SACCP plan. A justification for including or excluding the hazard is reported and possible control measures are identified.

Principle 2 – Identify the Critical Control Points
A critical control point (CCP) is a point, transaction or process at which control (monitoring) can be applied to ensure compliance and, if needed, a timely response to a breach.

Principle 3 – Establish Critical Limits
A full understanding of the acceptable thresholds, ports and protocols of specific transactions will help with identifying when a CCP is outside acceptable use.

Principle 4 – Monitor Critical Control Point
Monitor compliance with CCPs using ExtraHop’s Stream analytics Discover and Explorer appliances to ensure that communications are within the expected and approved ports and protocols established in each CCP.

Principle 5 – Establish Corrective Action
Part of this is not only understanding what to do when a specific critical control point is behaving outside the approved limits, but also establishing who owns the systems involved in each CCP. For example, if a server in the middle tier of an application is suddenly SCP-ing files out to a server in Russia, establish who is responsible for ensuring that this is reported and escalated as soon as possible, as well as what will be done in the event a system appears to be compromised.

Principle 6 – Record Keeping
Using the ExtraHop Explorer appliance, custom queries can be set up and saved to ensure proper compliance with established limits. Integration with an external SIEM for communications outside the established limits can also be enabled, as well as HTTP push and alerting.

Principle 7 – Establish Verification
Someone within the organization, either the INFOSEC team or team lead/manager must verify that the SACCP plan is being executed and that it is functioning as expected.

So what would a SACCP strategy look like?

Let's do a painfully simple exercise using both the ExtraHop Discover Appliance and the ExtraHop Explorer Appliance to create a Stream Analytics Critical Control Point profile.

Scenario: We have a Network that we want to call “Prod”.

Principle 1: Analysis
Any system with an IP address starting with "172.2" is a member of the Prod network. There should ONLY be ingress sourcing from the outside (the internet) and peer-to-peer communications between Prod hosts; no system on the Prod network should establish a connection OUTSIDE Prod.

Principle 2: Identify CCPs
In this case, the only Critical Control Point (CCP) is the Prod network.

Principle 3: Limits
As stated, the limits are that Prod hosts can accept connections from the outside, BUT they should not establish any sessions outside the Prod network.

Principle 4: Monitoring

Using the ExtraHop Discover Appliance (EDA), we will create a trigger that identifies transactions based on the logical network names of their given address space and monitors both the ingress and egress of these networks.

In the figure below, we outline how we set a logical boundary to monitor communications. In this manner, we lay the groundwork for monitoring the environment by first identifying which traffic belongs to which network.

  • You see on line 5 in the trigger below we are establishing which IP blocks belong to the source (egress) networks.
  • You then see on line 11 we are identifying the prod network as a destination (ingress).

*Important: you DO NOT have to learn to write triggers, as we will write them for you. But we are an open platform, and we provide an empty canvas to customers who want to paint their own masterpiece, so we are showing you how we do it.

[Screenshot: network-classification trigger]

Next we will leverage the ExtraHop Explorer Appliance (EXA) to demonstrate where the traffic is going. You will see on line 28 (although commented out) that we are committing several metrics to the EXA, such as source, destination, protocol, bytes, etc. This completes Principle 4 and allows us to monitor the Prod network; a sketch of the logic appears below. In the figure that follows, you will see that we are grouping by "Sources"; you will note that Prod has been successfully classified and has over one million transactions.

[Screenshot: Explorer records grouped by source network]
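Since the trigger is shown only as a screenshot, here is the same boundary logic sketched in Python: tag each end of a flow as Prod or External using Principle 1's address test, and mark the record as a violation when Prod initiates egress. The record shape mirrors the metrics committed to the EXA.

def network_of(ip):
    """Principle 1's definition: anything starting with '172.2' is Prod, everything else External."""
    return "Prod" if ip.startswith("172.2") else "External"

def observe_flow(src_ip, dst_ip, protocol, bytes_moved):
    """Build the record that feeds the Explorer appliance, flagging Prod-initiated egress."""
    record = {
        "source": network_of(src_ip),
        "destination": network_of(dst_ip),
        "src_ip": src_ip,
        "dst_ip": dst_ip,
        "protocol": protocol,
        "bytes": bytes_moved,
    }
    # Principle 3's limit: Prod may accept connections from outside but must never initiate egress.
    record["violation"] = record["source"] == "Prod" and record["destination"] != "Prod"
    return record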

Principle 5: Establish Corrective Action
Well, in our hypothetical Prod network, we have noted some anomalies. As you can see below, when we filter on Prod as the source and group by destination, we see that 15 of our nearly 1.3 million transactions were external. In most situations this would go largely unnoticed by many tools; using SACCP and ExtraHop's stream analytics platform, the hosting team or SOC are positioned to easily see that there is an issue and begin the process of escalating it or remedying it with further investigation.

*Note: we can easily create an alert to warn teams when a transaction occurs outside the expected set. We also have a RESTful API that can be interrogated by existing equipment to see anomalies.

[Screenshot: Prod-sourced transactions grouped by destination]

Digging Deeper:
As we dig a little deeper by pivoting to the layer 7 communications (demonstrated in the video below), you will note that someone has uploaded a file to an external site at 14.146.24.124. Depending on what was in that file and existing policies, the mitigation could involve a cardboard box and a visit from the security guard.

[Video: Layer 7 drill-down]

Principle 6: Establish Record Keeping
The ExtraHop Discover Appliance can send syslog to an incumbent SIEM system, as well as a RESTful push. There is also a full alerting suite that can alert via email or SNMP trap. In most enterprises there is already an incumbent record-keeping system, and the ExtraHop platform has a variety of ways to integrate with it.


Principle 7: Verification
Someone should provide oversight of the SACCP plan and ensure that it is being executed and having the desired results. This can be INFOSEC management or hosting-team management, but someone must be responsible for ensuring that the shared services team(s) follow the plan.


Conclusion:
The time has come for a new strategy. In several other industries where there is a regulatory framework for safety, compliance and responsibility, there exists a culture of operators taking responsibility for ensuring their own compliance. The enterprise is over 30 years old, and just as the health inspector cannot be in every restaurant every day, and a policeman cannot be on every street corner, the time has come for the IT industry to ask system owners to take some of the responsibility for their own security.


Thanks for reading and please check out the video below.

John

ADDENDUM!!!  (PUNKBUSTER OPTION!)

I wanted to take the time to show the next iteration of this. I call it precision punk busting… "err"… I mean packet capture.

The ExtraHop Discover Appliance has a feature called Precision Packet Capture. Within the same narrative described above, I have edited my trigger to take a packet capture any time the policy is violated. If you recall, I wanted to ensure that my Prod network ONLY communicated within the Prod network. I added JavaScript to my trigger (shown below) instructing the appliance to kick off a packet capture in the event the policy is violated.

[Screenshot: Busta_PCAP_Trigger — the trigger code with the capture call]

As a result of the FTP traffic out to the internet, we notice that we have a PCAP waiting for us, indicating that a system has violated the Prod policy.

[Screenshot: Busta_PCAP — the resulting capture file]


We can also alert you that you have a PCAP waiting, via syslog, SNMP or email. This PCAP can be used for forensics, as digital evidence against an insider, or as a way to verify just what the "F" just happened.

Having this information readily available, and alerting a system owner or SOC team that a policy was violated, is a much easier surveillance method than sorting through terabytes of logs or sifting through a huge PCAP file to get what you want. Here we are ONLY writing PCAPs for those instances that violate the policy.

Thanks for reading!

Happy punk busting!!!

Thanks

John

Covering The “Behind The Perimeter” Blind-Spot

Well, I cannot tell you what the next big breach will be, but I CAN tell you that it will involve one critical system communicating with another system that it was/is not supposed to. Whether that is exfiltration via secure shell (SCP) to a server in Belarus, or mounting a hidden drive share using a default service account that was never changed, behind the perimeter represents a rather large blind spot for many security endeavors. In the video below, you see a very quick and simple method for monitoring peer-to-peer communications using wire data with the ExtraHop platform. This is a somewhat painful process with logs: logging connection build-ups and tear-downs can impact the hardware being asked to do the logging, and if you are licensed by how much data you index, it can be expensive. Grabbing flow records directly off the wire positions your security practice to have this information readily available in real time with no impact on the infrastructure, as no system/switch is asked to run in debug mode. Transactions are taken directly off the wire by the ExtraHop Discover Appliance and written directly to the Elastic ExtraHop Explorer Appliance to provide a large, holistic view of every transaction that occurs within your critical infrastructure.

This positions system owners to audit critical systems within minutes and account for transactions that are suspect. In the video below, you will see how we can audit flow records by whitelisting expected communications, slowly leaving malicious communications nowhere to hide as you reduce the surface area you are trying to patrol.


I will follow this up with another post on how we can monitor Layer 7 metrics such as SQL Queries against critical databases.

Thanks for reading, please watch the video!

John Smith

Advanced Persistent Surveillance: SSH Connectivity

Today I read about a Juniper announcement that unauthorized code in Juniper firewalls can allow an attacker to listen in on conversations, even decrypting communications by using your firewall as a man-in-the-middle. A second announcement (unrelated, according to the company) concerned a pair of exploits: one that allows an attacker telnet or ssh access into the device, and one whereby a "knowledgeable" user could decrypt VPN traffic once the firewall had been compromised. While they say there is no way to tell if you have been a victim of this exploit, there are some ways to check for malicious activity on your devices, and you CAN do so without sifting through a terabyte of log files.

Most Juniper customers will shut telnet off in favor of ssh, so I will focus on how to use wire data analytics to monitor for potentially malicious behavior over ssh.

First, I am not what you would call an INFOSEC expert. I worked in IT security for a few years handling event correlation and some perimeter work, but I firmly believe that anyone who is responsible for ANYTHING with an IP address should consider themselves a security practitioner, at least for the systems under their purview. I would consider myself more of a "packet jockey." I am a Solutions Architect for a wire data analytics company, ExtraHop Networks, and I have spent the better part of the last two years sitting at the core of several large enterprises, looking at packets and studying behavior. Today I will go over why it is important to monitor any and all ssh connections, and I will discuss why logs aren't enough.

Monitoring SSH:
While the article states, "There is no way to detect that this vulnerability was exploited," I would say that if you see a non-RFC1918 address SSH-ing into your firewall, something needs to be explained. Currently, most teams monitor ssh access by syslogging all access attempts to a remote syslog server where they can be observed, hopefully in time to notify someone that there has been unauthorized activity. The issue here is that once someone compromises the system, if they are worth their weight in salt, the first thing they do is turn off logging. In addition, sifting through syslogs can be daunting and time consuming, and at times does not deliver the agility needed to respond to an imminent threat.

Enter ExtraHop Wire Data Analytics:


What I like about wire data analytics is that you are not dependent on a system to self-report that it has been compromised. Simply put, you cannot hide from ExtraHop; we will find you! Yes, you can breach the Juniper firewall (or any other ssh-enabled device) and shut logging off, but you cannot prevent us from seeing what you are doing.

*(I am assuming you can shut logging off; I know you can on the ASAs, but I have never administered a Juniper firewall, so don't quote me on that. Most devices have to be told what and where to log.)

On the wire, there is nowhere to hide: if you are coming from an IP address and you are connecting to another IP address, you're busted, whether you are running a "select * from …" on the production database server, SCPing the company leads to your home FTP server or compromising a firewall. ExtraHop offers several ways to monitor ingress and egress traffic. Today I am going to discuss how we can monitor ssh via the Discover Appliance, as well as how to leverage our new big data platform, the Explorer Appliance.

Using the Discover Appliance to monitor SSH Traffic:

One of the first and easiest ways to check whether anyone has ssh'd into your firewall is to simply browse to the device in the UI, go to L7 protocols and look for SSH.

You can also build a quick dashboard showing who has ssh'd into the box and make it available for your SOC to view and alert on. The dashboard in my example showed the number of inbound SSH packets: 23 inbound packets from the source IP 172.16.243.1. We can show outbound packets as well.

This can all be done in literally five minutes, and you can have total visibility into any ssh session that connects to your Juniper firewall, or ANY ssh-enabled device, or ANY device over ANY port or protocol.

Can we have an alert? Yes, ExtraHop has a full alerting system that allows you to alert on any ssh connection to your gateway devices.
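
As a rough illustration, a trigger feeding such an alert could look something like the sketch below. This is a hypothetical example, not a shipped configuration: the metric names and the jump-box whitelist are invented, and details such as the SSH_OPEN event and the isRFC1918 property should be verified against the Trigger API documentation for your firmware version.

```javascript
// Hypothetical sketch -- assign to the SSH_OPEN event on the device group
// containing your firewalls. Metric names and the whitelist are invented.
var APPROVED_JUMPBOXES = ['172.16.243.50']; // example host allowed to ssh in

var client = Flow.client.ipaddr; // the host initiating the ssh session

// Count every inbound ssh session so a dashboard or alert has a metric to key on.
Device.metricAddCount('ssh_sessions_inbound', 1);
Device.metricAddDetailCount('ssh_sessions_by_client', client.toString(), 1);

// Anything external (non-RFC1918) or not on the jump-box whitelist deserves
// an explanation -- count it separately and alert on this metric.
if (!client.isRFC1918 || APPROVED_JUMPBOXES.indexOf(client.toString()) === -1) {
    Device.metricAddCount('ssh_sessions_unexpected', 1);
}
```

An alert can then be configured to fire whenever the unexpected-session metric increments.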

Monitoring SSH via the ExtraHop Explorer Appliance:

A few weeks ago, ExtraHop introduced the Explorer Appliance, an accompanying appliance that allows you to write flows and layer 7 metrics to a big data back end as part of a searchable index. In this example I will be speaking specifically about "Flow" records. ExtraHop can surgically monitor any port for any device and write the records out to the Explorer Appliance. Since flow records are very high-volume, we do not log them automatically; we recommend that you enable them on a per-host basis from the Admin console. Once added, any and all communications for that host will be audited and searchable.

To audit the ssh connectivity of our Juniper firewall, we go to the discovered device and select the parent node. From there, on the right-hand side, you will see an "ExtraHop ID:" (note the search textbox above it).


You then paste the device ID into the search box and click "Global Records Query".

This creates the initial default filter; you then add a second filter by setting the receiver port to 22.

Now that you have the ExtraHop device ID and port 22 set as filters, you are able to audit, both in real time and in the past, any and all ssh sessions to your Juniper firewall, or to any other device you wish to monitor on any other port. You can save this query and come back to it periodically as a method of ongoing auditing of your firewall and ssh traffic.

What am I looking for here?
For me, I would be interested in any non-RFC1918 addresses, the number of bytes and the source host. If you notice that the source is a laptop from the guest wireless network (or the NATed IP of the access point), that may be something to be concerned about. As I stated earlier, while the announcement said you cannot tell if the exploit has been used, consistent auditing using wire data gives attackers no place to hide if they do compromise your ssh-enabled appliance, and it is generally a good idea to monitor ssh access anyway. In the real-time grid of query results, you can see the sender "oilrig.extrahop.com" ssh'd into our Juniper firewall. It does not matter if the first thing they do is shut off logging, or if it is an insider who controls the logging: they can't hide on the wire.

ExtraHop offers a full alerting suite that can whitelist specific jump boxes and hosts, providing visibility into just those hosts you do NOT expect to see ssh'd into your systems, as well as the ability to monitor any other ingress or egress traffic that looks out of the ordinary (example: a SQL Server FTPing to a site in China, or someone accessing a hidden share that is not IPC$).

Conclusion:
At the end of the day, the next big breach will involve one host talking to another host that it was not supposed to, whether that is my credit card number being SCP'd to Belarus or my SF86 form being downloaded by China. Advanced persistent surveillance of critical systems is the best way to prepare yourself and your system owners for tomorrow's breach. While I am very thankful to the INFOSEC community for all that they do, for a lot of us, by the time a CVE is published, it is too late. The next generation of digital vigilance will involve hand-to-hand combat, and no one will give you a sharper knife or a bigger stick than wire data analytics with ExtraHop.

Thank you for Reading!

John

 

No End in Sight: Cyber Security and the Digital Maginot Line


Yesterday my spouse was informed by a laboratory company where she was having some blood work done that she needed to provide them a credit card number that they could put on file in case our insurance company could not or did not pay the bill for the lab costs. This was after showing our insurance card and providing proof that we are insured. Having lived with me the last 7 years, she asked the woman at the counter for a copy of the InfoSec strategy, asking them to "please include information on encryption ciphers and key lengths, as well as information on how authentication and authorization is managed by their system, and whether her credit card information would be encrypted at rest". Needless to say, they had no idea what she was talking about, much to the exasperation of the people waiting behind her in line as well as the front office staff. She ended up getting her tests done but was told she would not be welcomed back if she was going to continue to be unwilling to surrender her credit card number to their front office for them to, digitally, keep on file.

Between the two of us, we have replaced 4 or 5 cards in the last 3 years due to various breaches; I have had to replace two and, I believe, she has had to replace 3 of them. In my case, each incident cost me around $800 that I had to wait weeks to get back, and only after I went into the bank and filled out forms to attest that I did not make the charges. Each incident was about 4 hours of my time by the time all was said and done. Yes, there were lawsuits and lawyers were paid six-figure sums as a result, and I am sure they deserved it, but at the end of the day, I was without my $800-$1600 for an extended period of time and I had to run through a regulatory maze just to get back what I had lost. No… I never got any settlement money; I hope they spent it well. Fortunately for me, I am 46 years old now and have a great job. If this had happened to 26-year-old (still a screw-up) John, it would have been utterly devastating, as I likely would have been evicted from my apartment and had bill collectors calling me. I can't imagine the calamity this creates for some folks.

I am somewhat dumbfounded that any company at any level would seek to get people to surrender their information digitally given the egregious levels of retail breaches that have plagued the industry the last few years. Consumer advocacy is nearly non-existent, and while some retailers have been very forward in understanding the impact to their consumers, I simply do not see things getting better, EVER. The current method by which cyber security is practiced today is broken, and there seems to be no motivation to fix it. In spite of extremely costly settlements and damage to brands, the way we practice security today is deeply flawed, and it's not the security team's fault. Until system owners start taking some responsibility for their own security, these breaches will simply never end.

Bitching about the lack of responsibility of system owners isn't new to me; my first "documented" rant on it was back in early 2010. As a system owner, I, almost compulsively, logged everything that went on and wrote the metrics to a centralized console. In a way, it was a bit of a poor man's DevOps endeavor. In doing so, I was able to automate reporting so that when I came into work each morning, I would spend 15 minutes sipping my coffee and looking at all of the non-standard communications that went on the previous day (basically all internet traffic that did not use a web browser, plus all traffic outside the US). No, it wasn't a full IDS/IPS deployment, but on two separate occasions I was able to find malware before several seven-figure investments in malware detection software did. That is two instances in four years, or 2 out of roughly 1,000 mornings (approximately 4 years' worth of workdays minus vacations, holidays, etc.), where I noted actionable intelligence. That may not sound like a lot, but if you are one of the dozens of retailers who have had breaches in the last few years, I think it is plausible that the systems teams could have had an impact on the success of a breach had they been a little more involved in their own security. Don't underestimate the value of human observation.

Why isn't INFOSEC alone enough?
Short of a crystal ball, I am not sure how we expect INFOSEC teams to know what communication is acceptable and what is not. In the last few years, "sophisticated persistent advanced super-duper complex malware" has generally meant that someone compromised a set of credentials and ran amok for months on end stealing the digital crown jewels. Even if a policeman is parked outside my house, if they see someone walk up, open the door with a key and walk out with my safe, my 60-inch TV (actually, I don't have a 60-inch TV) and other valuables, how the hell are they supposed to know whether that person should be doing that? In most cases, this is the digital equivalent of what is happening in some of these breaches, except that, digitally, I am sitting on my couch while all of this is going on in front of me. If an attacker has gotten credentials or has compromised a system and is stealing, expecting the security team to see this before extensive damage is done is unrealistic. With some of the social engineering techniques that exist and some of the service accounts used with elevated privileges, you don't always have the 150 login failures to warn you. If I am actually paying attention, I can say, "Hey, what the hell are you doing? Put that TV down before I call the cops!" (Or, my step-daughter is a foodie and she has some cast iron skillets that could REALLY leave a lump on someone's head.)

The presence of an INFOSEC team does not absolve system owners of responsibility for their own security any more than the presence of a police department in my city means I don't have to lock my doors or pay attention to who comes and goes from my house.

Police: “911 operator what is your emergency?”

Me: “I’ve been burgled, someone came into my house and stole from me”

Police: “When did this happen? Are they still in your house?”

Me: “It happened six months ago but I don’t know if they are still in my house stealing from me or not”

Police: “Ugh!!”

If someone has made a copy of the keys to my house, it is not the police's fault if they don't catch them illegally entering my home. In the same manner that the police cannot be everywhere, all the time, your INFOSEC team cannot inspect every digital transaction all the time.

Thought Exercise:
If someone has compromised a set of credentials or, say, a server in your REST/SOAP tier, and they are running ad hoc queries against your back-end database, let's evaluate how that would look to the system owner vs. the INFOSEC practitioner.

To the INFOSEC practitioner: they see approved credentials over approved ports. Since they are not the DBA or the web systems owner, this likely does not trigger any response, because the INFOSEC resource is not privy to the day-to-day behavior or design.
The DBA: the DBA should notice that the types of queries have changed and fall out of their chair.
The web properties team: they should have a similar "WTF!?!?" moment as they note the change from what are normally stored procedures, or at least recognizable SQL statements, to custom ad hoc queries of critical data.

In this scenario, one which I covered on wiredata.net in May of 2014, it is obvious that the INFOSEC professional is not as well positioned to detect the breach, as he or she does not manage the system on a day-to-day basis; and while several processes involve INFOSEC during the architecture phase, the idea that your INFOSEC team is going to know everything about every application is neither practical nor reasonable. It is imperative that system owners take part in making sure their own systems are secure by engaging in a consistent level of intelligence gathering and surveillance. In my case, it was 15 minutes every morning. Ask yourself: do you know every non-standard communication that sourced from your server block? Will you find out within an hour, 8 hours, a single day? These things are easily accomplished with wire data or even log mongering, but to remain utterly clueless about who your systems are talking to outside of normal communications (DNS, A/D, DB, HTTP) with internal application partners is to perpetuate the existing paradigm of simply waiting for your company to get breached. While we give the INFOSEC team the black eye, they are the least likely group to be able to see an issue, in spite of the fact that they are probably going to be held accountable for it.
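
To make that contrast concrete, here is a toy sketch, in plain JavaScript with every name and query invented, of the comparison the DBA and the web properties team are implicitly making: a baseline of expected stored procedures versus what is actually being run.

```javascript
// Toy illustration only: the baseline and the observed queries are invented.
var expectedProcedures = ['usp_GetOrder', 'usp_InsertLead', 'usp_UpdateStatus'];

function isExpected(query) {
  // Normal REST-tier traffic is stored-procedure calls; anything else
  // (an ad hoc SELECT against critical data) is an anomaly.
  return expectedProcedures.some(function (proc) {
    return query.indexOf(proc) !== -1;
  });
}

var observed = [
  'EXEC usp_GetOrder @id=42',
  'SELECT * FROM Customers',   // the ad hoc query that should alarm the DBA
  'EXEC usp_UpdateStatus @id=42'
];

observed.filter(function (q) { return !isExpected(q); })
        .forEach(function (q) { console.log('UNEXPECTED: ' + q); });
```

The hard part is not the comparison; it is that only the system owner knows what belongs in the baseline.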

There are services from companies like FireEye and BeyondTrust that offer innovative threat analytics and a number of "non-charlatan" solutions to today's security threats. I've struggled to avoid calling cyber security an abject failure, but we are reaching the point where the Maginot Line was more successful than today's cyber security efforts. I am not a military expert and won't pretend to be one, but as I understand it, the Maginot Line, the French defense built after WWI to stop the next German invasion, was designed around the strategies of the previous war (breach) and was essentially perimeter-centric, and the enemy simply went around it (sound familiar?). So perimeter-centric was it that, upon being attacked from behind, the defenders were apparently unable to fight back because the turrets were never designed to turn all the way around; the thought of what to do once an enemy force got inside was apparently never considered. I find the parallels between today's cyber security efforts and the Maginot Line striking. I am not down on perimeter security, but a more agile solution is needed to augment perimeter measures; one might even argue that there really isn't a perimeter anymore. The monitoring of peer-to-peer communications by individual system owners is an imperative. While these teams are stretched thin already (don't EVEN get me started on morale, workload and the all-around BS that exists in today's enterprise IT), what is the cost of not doing it? In every high-profile breach we have noted in the last three years, these "sophisticated persistent threats" could have been prevented by a little diligence on the part of the system owners and better integration with the INFOSEC apparatus.

Could cyber insurance policies change things?
Actually, we are starting to see insurance providers force companies to purchase a separate rider for cyber breach insurance, and I can honestly say this may bring about some changes to the level of cyber responsibility shown by different companies. I live in Florida, where we are essentially the whipping boys of the homeowners insurance industry, and I actually received notification that if I did not put a hand rail on my back porch, my policy would be cancelled. (The great irony being that I fell ass over teakettle on that very back porch while moving in.) While annoyed, I had a hand rail installed posthaste, as I did not want my policy cancelled; at the time, we only had one choice for insurance in Florida, and it was the smart thing to do.

Now imagine I call that same insurance company with the following claim:
"Hello, yes, uh, I am being sued by the Girl Scouts of America because one of them came to my door to sell me cookies and she fell through my termite-eaten front porch, landed on the crushed beer bottles that are strewn about my property, cutting herself, and was then mauled by the five semi-feral pit bulls that I just let run around my property, feeding them occasionally."

Sadly, this IS Florida and that IS NOT an entirely unlikely phone call for an adjuster to get. Even more sad is the fact that this analogy likely UNDERSTATES the level of cyber-responsibility taken by several enterprises when it comes to protecting critical information and preventing a breach. If you are a cyber insurance provider and your customer cannot prove to you that they are monitoring peer-to-peer communications, I would think twice about writing the policy at all.

In the same manner that insurance agents drive around my house, expect auditors to start asking questions about how your enterprise audits peer-to-peer communications. If you cannot readily produce a list of ALL non-standard communications within a few minutes, you have a problem!! These breaches are now into the 7-8 digit dollar amounts, and companies that do not ensure proper diligence do so at their own peril.

Conclusion:
As an IT professional and someone who cares about IT security, I am somewhat baffled at the continued focus on yesterday's breach. I can tell you what tomorrow's breach will be: it will involve someone's production system, or servers with critical information on them, having a conversation with another system that it shouldn't. This could be a compromised web tier server running ad hoc queries; this could be a new FTP server that is suddenly stood up and sending credit card information to a system in Belarus; this could be a pissed-off employee emailing your leads to his gmail account. The point is, there ARE technologies and innovations out there that can help provide visibility into non-standard communications. While I would agree that today's attacks are more complex, in many cases they involve several steps to stage the actual breach itself. With the right platform, vigilant system owners can spot these pieces being put into place before the breach starts, or at least detect it within minutes, hours or days instead of months. Let's accept the fact that we are going to get breached and build a strategy on quelling it sooner.

As a consumer, I look at my credit card expiration date and think "Yeah, right!", basically betting it gets compromised before it expires. I see apathy prevailing, and companies that really don't understand what a pain in the ass it is when I have to, yet again, get another debit or credit card due to a breach. While they think it is just their breach, companies need to keep in mind that their breach may be the 3rd or 4th time their customer has had to go through this, and it is their brand that will suffer disproportionately as a result. Your consumers are already fed up, and companies need to assume that the margin of error was already eaten up by whichever vendor previously forced your customers through post-breach aftermath. I see system owners continuing to get stretched thin, kept out of the security process and not taking part in the INFOSEC initiatives at their companies, whether due to apathy or workload. And unfortunately, I see no end in sight….

Thanks for reading

John M. Smith


Advanced Persistent Surveillance: Re-thinking Lateral Communications with Wire Data Analytics

Several high-profile compromises involving breaches of trusted systems working over trusted ports have, once again, raised the issue of lateral communications between internal hosts. Breaches will continue as hackers evolve and learn to work around existing countermeasures that are, at times, overly based on algorithms and not based enough on surveillance.

So what is an infosec practitioner to do?

How practical is monitoring lateral communications?

Do we assign a person to look at every single build-up and tear-down?

Do we set all of our networking equipment to debug level 7 and pay to index petabytes of logs with a big data platform?

Do we assign a SecOps resource to watch every single conversation on our network?

Answer: Maybe…or maybe not.

Most of our critical systems (the cardholder data environment, CRM databases, EMRs and HIS) are made up of groups of systems: some are client-server, some are tiered with web services or MTS (Microsoft Transaction Server) acting as middleware, and some are legacy socket-driven solutions. All of them have a common set of expected communications that can be monitored.

What if we could separate the millions of packets of expected communication (hopefully the lion's share) from the communication that is unexpected?

What if we could do it at layer 7?

Using ExtraHop's Wire Data Analytics platform, INFOSEC teams and application owners are positioned to see non-standard lateral communications that would otherwise go unnoticed by incumbent IPS/anti-malware/anti-virus tools. The fact is, while we need the existing tool set, today's complicated breaches tend to hide in the shadows, communicating over approved ports and using trusted internal hosts. ExtraHop shines light on this behavior, leaving them exposed and positioning teams to "get their 'stomp' on" and stamp out these threats like cockroaches.

How we do it: 
Most INFOSEC practitioners have worked with wire data before through their IPS and IDS systems. ExtraHop's platform is similar in that we work off of a span, but instead of looking for specific signatures we observe and rebuild layer 4-7 flows at sustained speeds of up to 20 Gb per second. We also use a technology called triggers to act on specific conditions we want to monitor and alert on (such as anomalies in lateral communications). This is a contrast with most of our perimeter defenses, which scale into the megabit/single-gigabit range; we are able to work up into the tens of gigabits. The same innovation that allows us to collect operational intelligence and performance metrics directly off a span port can be used to provide layer 4-7 digital surveillance of critical systems within your or your customer's infrastructure.

While it is not practical to monitor every single transaction between your critical systems and other components of your enterprise applications, it is now possible to monitor the unexpected traffic. For instance, say we have a tiered application with a web front end, a RESTful web service tier and a back-end database tier. We can define the expected traffic between the tiers (which should be the lion's share), ignore it, and report on the traffic that is NOT expected.

Figure 1: We can write a trigger that ignores expected communications and reports on unexpected communications.

We can accomplish this by writing two triggers. The first is a basic layer 4 surveillance trigger in which we white-list selected hosts and protocols and report only on communications outside our expected conversations. The second is a layer 7 trigger that leverages ExtraHop's layer 7 database fluency with SQL Server communications; it white-lists the specific stored procedures run from our REST tier that make up our expected layer 7 communications. Anything outside of these will be accounted for.

The first trigger makes the following assumptions:
The Web/Public tier comes in from hosts 192.168.1.10/20  and the REST tier is at 192.168.1.30.  The approved ports are 8000, 8080 and 1433.

The first trigger monitors client communications from specific clients and over specific ports. Unlike NAC (Network Access Control), we are simply monitoring unexpected communications; we are not blocking anything. In many cases we white-list the Active Directory controllers (in a Windows environment) and would likely white-list the WSUS server as well. With a trigger along these lines (the original was a screenshot; a sketch follows below), you would be alerted to every RDP connection, any direct CIFS access or any exfiltration attempt that utilizes ports not previously approved. This simple trigger could have warned any number of breach victims of the staging that was going on prior to data being exfiltrated. The trigger took less than one day to write (don't be intimidated by the JavaScript; I knew none when I started, and we have people who can help you with it).
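
Since the original trigger was shown as a screenshot, here is a minimal sketch of the same layer 4 idea under the stated assumptions (reading "192.168.1.10/20" as hosts .10 and .20). The metric name is invented, and the FLOW_CLASSIFY event and Flow properties should be checked against the Trigger API documentation for your firmware version.

```javascript
// Sketch of the layer 4 surveillance trigger -- assign to FLOW_CLASSIFY on
// the monitored device group. IPs and ports come from the assumptions above.
var APPROVED_CLIENTS = ['192.168.1.10', '192.168.1.20']; // web/public tier
var APPROVED_SERVER  = '192.168.1.30';                   // REST tier
var APPROVED_PORTS   = [8000, 8080, 1433];

var client = Flow.client.ipaddr.toString();
var server = Flow.server.ipaddr.toString();
var port   = Flow.server.port;

var expected = APPROVED_CLIENTS.indexOf(client) !== -1 &&
               server === APPROVED_SERVER &&
               APPROVED_PORTS.indexOf(port) !== -1;

// Surveillance, not blocking: ignore the (hopefully) lion's share of expected
// traffic and record only what falls outside the whitelist.
if (!expected) {
    Network.metricAddDetailCount('unexpected_lateral_comms',
                                 client + ' -> ' + server + ':' + port, 1);
}
```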

Leveraging layer 7 fluency:
Layer 7 surveillance is also a critical part of preventing tomorrow's sophisticated breaches. A common occurrence in many high-profile breaches is the compromise of a trusted system that is allowed to traverse sensitive networks. In the example above, if the REST tier were to become compromised, it would be particularly dangerous due to its being a trusted host of the database environment (likely with an open ACL). With the second trigger (again sketched below), we can watch for expected layer 7 communications: which stored procedures we should see connecting to the database. Stopthehacker.com states that there is a 156-day gap between the time a computer is compromised and the time it is detected. That is nearly six months in which, in the event of a SOAP/REST tier breach, an attacker can run ad hoc queries against my back-end database. For this reason, properly identifying anomalies at layer 7 (what SQL statements are being run?) will be key to preventing and mitigating data loss, and just might keep you out of the news.
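
The shipped trigger is likewise not reproduced here; the sketch below shows the shape of the layer 7 white-list, assuming the trigger runs on the DB_REQUEST event and that the DB.procedure and DB.statement properties behave as documented for your firmware. The stored procedure names are invented for the example.

```javascript
// Sketch of the layer 7 trigger -- assign to DB_REQUEST on the database tier.
// The white-listed stored procedures are invented for illustration.
var APPROVED_PROCS = ['usp_GetCustomer', 'usp_GetOrders', 'usp_InsertLead'];

if (DB.procedure && APPROVED_PROCS.indexOf(DB.procedure) !== -1) {
    return; // expected middleware behavior; ignore
}

// Anything else -- an ad hoc query from a compromised REST host, for example --
// is counted along with the offending client and the statement that was run.
Device.metricAddCount('unexpected_db_queries', 1);
Device.metricAddDetailCount('unexpected_db_query_detail',
    Flow.client.ipaddr + ': ' + (DB.statement || 'unknown statement'), 1);
```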

So, using the triggers we have created, if I run an ad hoc query from the REST tier (for example, emulating a breached middleware server):

We are able to increment the counters in the ExtraHop dashboard:

Clicking on the "3" shows us the offending IP and the query that was run:

And here we see that we can, in fact, send the data to Splunk or any other SIEM product:
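
For completeness, forwarding the same detection to Splunk over an open data stream could look roughly like the fragment below, assuming a syslog ODS target named "splunk" has been configured in the admin UI; verify the Remote.Syslog syntax against your firmware's documentation.

```javascript
// Appended to the layer 7 trigger above: ship each rogue query to the SIEM.
// 'splunk' is a hypothetical open data stream target defined in the admin UI.
Remote.Syslog('splunk').info(
    'unexpected_db_query client=' + Flow.client.ipaddr +
    ' statement="' + DB.statement + '"'
);
```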

Digital Evidence:

You can also assign a precision packet capture to a trigger that will create a pcap file that you can use as digital evidence after the fact.
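
Continuing the same hypothetical trigger, kicking off that capture might look something like the fragment below. Flow.captureStart is part of the Trigger API, but treat the exact signature and any capture limits as assumptions to confirm for your firmware version.

```javascript
// At the point in the layer 7 trigger where the rogue query is detected:
// start a precision packet capture so a pcap exists as digital evidence.
// The capture name is invented; confirm captureStart's signature and options.
Flow.captureStart('rogue-query-' + Flow.client.ipaddr);
```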

Sample Scenario: Middleware Breach
To show how ExtraHop can detect a middleware breach (see Figure 2) using the two triggers above: the layer 7 surveillance catches the rogue queries being run while ignoring the common stored procedures, and ExtraHop's wire data analytics also catches the communications with the exfiltration/staging server, because they fall outside those we set in the trigger. From the first trigger, ExtraHop sees the REST tier communicating with an unknown host over an unknown port; from the second, it sees the ad hoc queries mixed in with the normal stored procedures.

Figure 2: SOAP/REST Tier Breached:
In Figure 2 you can see how ExtraHop wire data analytics triggers can be written to ignore expected traffic and report only on unexpected traffic.

Conclusion:

There has been a lot of talk about the need to monitor lateral communications, but little practical information on how to do it. Your digital surveillance strategy with wire data analytics will be a living process in which you periodically update and evaluate your connectivity profile. This crucial process is often overlooked in INFOSEC.

Using ExtraHop's platform to build out a surveillance strategy makes the once-daunting prospect of monitoring lateral communications a workable process that can provide peace of mind. We need to accept that we cannot keep all malware from getting inside; there are two types of systems, breached and about-to-get-breached. I think we need to look at our INFOSEC practice from the WHEN perspective, not the IF perspective.

As infiltration attempts become more sophisticated, the ability to spot the building blocks of a data breach will become increasingly valuable. ExtraHop can provide that extra set of eyes, doing the heavy lifting for you while reporting on communications beyond those you expect and reducing the burden of additional auditing. The reporting/surveillance framework also allows application owners and shared services teams to get involved in INFOSEC by reviewing the periodic reports and, hopefully in most cases, deleting them.

Wire data analytics with the ExtraHop platform can provide your organization with a daily, weekly and monthly digital police blotter, giving you the information you need to stay vigilant in the face of this new breed of threat.

Thank you for reading

 

John M. Smith