The Case for Wire Data: Save the SIEM

I recently attended Black Hat, and one of the key narratives I overheard while meeting with INFOSEC practitioners was the need for better, smarter data. Attendees voiced frustration that traditional tools deliver too much data, and that the time it takes to respond to an incident leaves attackers plenty of room to do their dirty work. Over the last year we have heard several criticisms of SIEM-based solutions and the limitations they have in dealing with today's more agile threat landscape. My induction into security was based in SIEM, and I even ran a blog dedicated to syslogs at http://Xen-trifuge.com where I detailed work I had done on my "Skunk" project. It stemmed from my desperate, but unheeded, plea to my former manager to purchase Splunk ("Can I have $20,000?") being answered with "No." Then I saw that Kiwi Syslog supported SQL Server connections and made a second plea, "Can I have $300?", which was accepted. SKUNK stood for SQL plus KIWI, to make it like Splunk: we used SQL, SSRS and Kiwi's ODBC connector with some parsing engines. As SIEM products go, Splunk was ABSOLUTELY my first love! I still know a few brilliant engineers who work there, and I have a great deal of respect for what that company has done to revolutionize security.

Over the last five years I had another epiphany: an SE named Matt Cauthorn introduced me to the concept of wire data analytics. So why am I talking about a SIEM on a wire data analytics blog? Well, first, I feel like a lot of the criticism of SIEMs isn't entirely fair. My experience is that a SIEM is only as good as the data you put into it and, even more importantly, the investment you make in back-end processes: how you parse, interpret and report on the data so that you can "set context." By "set context" I mean find actionable data, which is the subject of some intense criticism of SIEM products today.

In an article from Dark Reading citing a Ponemon Institute study:

Today, those same products “barely work at all,” says Exabeam CMO Rick Caccia. Older systems aren’t built to capture credential or identity-based threats, hackers impersonating people on corporate networks, or rogue employees trying to steal data. A recent report by the Ponemon Institute, commissioned by Cyphort, discovered 76% of SIEM users across 559 businesses view SIEM as a strategically important security tool. However, only 48% were satisfied with the actionable intelligence their SIEMs generate.


With that, I'd like to start writing about how wire data analytics with ExtraHop can set context in flight, reduce the cost of your SIEM investment, bring to bear an entirely new set of metrics, and provide security teams with better data instead of more data.

Setting Context in Flight:
ExtraHop’s wire data analytics capabilities enable you to set context in flight by interrogating the wire for specific events then applying logic to them in milliseconds so that your logs have considerably more value and a much higher intelligence yield.

Example: Auditing your PCI Environment.
You have a PCI environment that you want to set up network based auditing for. The rules are as follows:

  • Alert on ANY external non-RFC1918 access and report it as an Egress Violation
  • Alert on any client or server based traffic that has not been pre-defined.

Using the SIEM-only approach you must perform the following:

– Audit/log every single build-up and tear-down action, which could result in thousands and potentially millions of logs via syslog or NetFlow
– Index and parse these logs
– Build/run back-end batch processes to root out the few suspect transactions from the, potentially, millions of logs that you already have

Now let's consider what that would look like using an ExtraHop appliance.
– Create a rule that defines the appropriate communications
– Fill out the pre-defined application inspection trigger with the acceptable client/server protocols
– Tell the ExtraHop appliance to alert on any non-RFC1918 connection
– Send ONLY actionable intelligence to the SIEM, relieving both the CSIRT team and the back-end SIEM of the burden of indexing, parsing and sorting millions of logs (a rough sketch of this logic follows the trigger image below)


[Image: PCI audit application inspection trigger]
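For illustration, here is a minimal Python sketch of the decision logic such a trigger implements. This is not ExtraHop trigger code (real triggers run on the appliance against live flows), and the whitelist contents are made up for the example:

import ipaddress

# Hypothetical whitelist for the PCI segment: protocol/port pairs we expect.
EXPECTED = {("HTTP", 8080), ("SSL", 443), ("DB", 1433)}

def audit_flow(client_ip, server_ip, protocol, port):
    """Return an alert string for any flow outside the declared rules, else None."""
    # Rule 1: any non-RFC1918 endpoint is an egress violation.
    for ip in (client_ip, server_ip):
        if not ipaddress.ip_address(ip).is_private:
            return f"EGRESS VIOLATION: {client_ip} -> {server_ip}:{port} ({protocol})"
    # Rule 2: any traffic not pre-defined in the whitelist is unauthorized.
    if (protocol, port) not in EXPECTED:
        return f"UNAUTHORIZED TRAFFIC: {client_ip} -> {server_ip}:{port} ({protocol})"
    return None  # expected traffic: nothing is sent to the SIEM

# Only violations ever become syslog messages.
alert = audit_flow("10.1.2.5", "203.0.113.9", "SSH", 22)
if alert:
    print(alert)  # in production, this would be the syslog send to the SIEM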

After setting the criteria, aka "casting the web," we need only lie in wait for something to run across it. In the video below you will see examples of how we have integrated with Splunk Cloud to "set context in flight" by sending ONLY logs that have violated the criteria cast in the application inspection trigger above.

Now, instead of leaving the threat hunter to sort through thousands or millions of logs on the back end, we are sending data that is actionable because we set the rules prior to sending the syslog message. As you can see below in the Splunk Cloud instance, every transaction sent to the SIEM is actionable vs. the madness of sending thousands and thousands of logs every second. This makes the bill for indexing much cheaper, both from a licensing standpoint and a hardware-scaling standpoint. (Please watch the video on YouTube.)

New Concept: Intelligence Yield
In my time as the event correlation guru for my security team, one of the more frustrating things I ran into was that I consistently needed about 30% of what was in a log file but paid to store and/or index 100% of the data. When you use ExtraHop as a forwarder, you can pick and choose which parts of a log/payload you want to forward, and you can even customize the delimiter if you like. This means there is no leftover ASCII that needs to be stored/indexed. While this may not seem like a lot, at scale it gets expensive! Another way we provide better intelligence yield is, as you saw in the example above, by setting the conditions under which we send syslog data and ignoring transactions that you may already be logging via whatever daemon you are running (Apache for HTTP, etc.). Why log HTTP network connections when you are already doing it in /var/log/apache/?
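As a toy illustration of that field selection, here is a short Python sketch; the field names and delimiter are arbitrary placeholders, mimicking forwarding only the useful ~30% of a transaction:

# Forward only the fields we care about from an HTTP transaction,
# joined with a custom delimiter, instead of the full raw log line.
FIELDS = ("client_ip", "method", "uri", "status", "user_agent")
DELIM = "|"

def to_syslog(txn):
    """Build a compact syslog payload from selected fields only."""
    return DELIM.join(str(txn.get(f, "-")) for f in FIELDS)

txn = {
    "client_ip": "10.1.2.5", "method": "POST", "uri": "/login",
    "status": 401, "user_agent": "Mozilla/5.0", "bytes": 512, "server": "web01",
}
print(to_syslog(txn))  # 10.1.2.5|POST|/login|401|Mozilla/5.0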


Possible Licensing Cost Savings:
We actually had a scenario where a customer wanted to find out if there were excessive logins from a single client. The traffic was sending thousands of messages per minute to their SIEM. We looked at what was happening and did the following (a sketch of the aggregation follows the list):

  • Kept a running ticker of the number of logins per client IP
  • Sent actionable data to the SIEM by forwarding just those client IPs that had more than five logins in a ten-minute period, reducing the message count from thousands per minute to between five and seven.
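A minimal Python sketch of that running ticker, assuming a made-up on_login() hook that fires once per observed login:

import time
from collections import defaultdict

WINDOW = 600     # ten-minute window, in seconds
THRESHOLD = 5    # forward only clients with more than five logins per window

login_times = defaultdict(list)  # client IP -> timestamps of observed logins

def on_login(client_ip, now=None):
    """Record a login; return a SIEM-worthy message only past the threshold."""
    now = now if now is not None else time.time()
    hits = [t for t in login_times[client_ip] if now - t < WINDOW]
    hits.append(now)
    login_times[client_ip] = hits
    if len(hits) == THRESHOLD + 1:  # fire once, as the client crosses the line
        return f"EXCESSIVE LOGINS: {client_ip} ({len(hits)} in {WINDOW // 60} min)"
    return None  # below threshold: nothing goes to the SIEM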

In the fictitious scenario below, we use Splunk's list price to show the difference in savings when you use ExtraHop as a forwarder and give the SIEM a break on processing messages. While this is an overly simple example, keep in mind there may be parts of your logging regimen where ExtraHop can provide in-flight context, reduce the amount of work and licensing cost, and increase the quality of the data you are receiving in your SIEM.

Scenario: Reducing your licensing costs as well as your Mean Time To “WTF!?” (MTTWTF)
A customer has 500 clients, with each client node sending around 2,500 logs per minute (this is HARDLY out of the ordinary for a large enterprise). That works out to 1.25 million events being indexed every minute.

Let's say we use the SESSION_EXPIRE event instead: we send ONE event per client carrying the client:server pair and a count of the flows. That is 500 events instead of 1.25 million, an overall indexing impact of 0.04 percent (not four percent, "point zero four" percent). In terms of "intelligence yield" it has the same value; I would argue the yield is actually higher, because you have delivered a level of context (the count) in the syslog messages vs. leaving it to some algorithm or batch process on the back end to deliver context. Five events…"meh"…500, 5,000 events…"WTF!"

Our “MTTWTF” (Mean Time To “WHAT THE F***….”) is potentially MUCH faster.  

So let's take the overly simplistic view of a 50GB Splunk license (it will NOT be this easy for you, but I think most customers will get the value prop here).

From Splunk’s Website:

A 50GB Splunk license is $38K annually, or $95K perpetual WITH $19K in support. If we can proportionally reduce the load on the SIEM, you get the improved "MTTWTF" with a 2GB license, which costs $1,500 annually, or $4,500 perpetual with $1,500 in maintenance.
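The rough arithmetic behind the savings figures below, using only the list prices quoted above:

# Rough arithmetic behind the savings claims (list prices as quoted above).
annual_50gb, annual_2gb = 38_000, 1_500    # annual license
perp_50gb, perp_2gb = 95_000, 4_500        # perpetual license, before support

print(f"annual:    {annual_50gb / annual_2gb:.0f}x cheaper")   # ~25x
print(f"perpetual: {perp_50gb / perp_2gb:.0f}x cheaper")       # ~21x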


As I said earlier, the view here is simplistic, but there are WITHOUT QUESTION logging regimens within customer environments that we can make more efficient using ExtraHop's wire data analytics and the Session table to replace logging every transaction. Also, please credit Splunk for publishing their sticker price.

This is not a knock on Splunk!

In this model you get the following:

  • On perpetual, a roughly 21x (over 95%) reduction in initial costs
  • On annual, a roughly 25x (over 96%) reduction in initial costs
  • Better Intelligence Yield
  • No forwarders or debugging levels to enable on the clients themselves.

Keep in mind, the larger the license, the smaller the relative savings will be, as Splunk rewards customers for the larger GB licenses, but the point here is that there are significant savings to be made in addition to having all-around better logs.

What’s on YOUR wire!!?
When you leverage ExtraHop as a log forwarder, you get access to the best source of data on your network. Not only do you get access to it, but you get application inspection triggers that allow you to actively interact with it. Unlike logging-based solutions, with ExtraHop you are not dependent on someone to "opt in" to logging. You will NEVER have to go to another team and ask them to install forwarders or agents or send data to a remote system. If you have an IP address, you have already opted in, and if you have an IP address, there is NO opting out. If a system is rooted and /var/log is deleted…we will still log. If logging is shut off on a system, we, like closed-circuit TV, will continue surveillance and logging.

No agents, no logs….NO PROBLEM!
As previously stated, ExtraHop works from a mirror of the network, so if you have IoT devices that cannot log, we can log for them. If you have systems that are "certified" by a vendor and cannot be patched or have forwarders installed on them, not a problem, we can log for them. If you have a malicious Raspberry Pi plugged into the MDF and have ACL'd yourself off so you cannot be discovered…not a problem, we'll log everything you do!! (We also send a New Device alert when your MAC address shows up.) What the ExtraHop Discover appliance does is allow you to "log the un-loggable," if that makes sense. Adopting a passive surveillance strategy is a perfect complement to any SIEM regimen.

Conclusion:
As I stated near the beginning of the post, INFOSEC teams do NOT need more data, they need better data. I am not saying you no longer need a SIEM, but I am absolutely saying that we need to send better data to our SIEM. Using ExtraHop can greatly enhance the agility and certainty of any CSIRT team or SOC. Evaluating transactions BEFORE you send them to the SIEM provides the level of certainty needed to take that next step toward orchestration and automation. As threat hunting continues to evolve as a discipline, no one will provide you a more intelligent and scalable web to cast as we move from playing whack-a-mole to a role more consistent with a trap-door spider. Several INFOSEC workflows are currently tied to the SIEM, so let's not throw the baby out with the bathwater. The SIEM can still serve us well; we just need to take steps to send it better data. There is no better source of data than the network, and there is no solution more capable of letting you mine data directly off the network than ExtraHop.

Thanks for reading


John M. Smith
Security Systems Engineer
ExtraHop Networks.


Advanced Persistent Dysfunction: Organizational “Air Gaps”

One of the more frustrating things about WannaCry, Petya and NotPetya is that they would have been significantly less effective had organizations applied MS17-010. The fact that we still see SMBv1 is utterly staggering to me and my colleagues. Why is it that, in a world where we have automation, vulnerability scanning and patching solutions, and spend billions on security, our organizations are routinely compromised by what are, in many cases, patchable or at least significantly mitigable vulnerabilities? (Admittedly, Petya/NotPetya exploited LSASS.EXE as well, which is pretty brutal!)

There is another vector that I think is being exploited as digital threats become more organized (the 2017 Cisco ASR states that it is not uncommon for malicious hashes to be less than 24 hours old): organizational air gaps. While great for protecting integrity and accountability, organizational "air gaps" are, I believe, part of the issue, and there are numerous instances where security teams have warned system owners about vulnerabilities and were ignored; thus the sad attempt at an INFOSEC meme below.

How did we get here?
The meme above is meant to lend humor to what is a frustrating situation. I am certain that most system owners do not mock their CISO office, nor are they unconcerned about their security. The issue is that post-9/11 we started to form cyber security teams. While INFOSEC existed prior to 2001, its size and breadth were nowhere near the level they are today, where there are nearly as many INFOSEC roles as IT roles. The point is, we started the process of decoupling system owners from their own security. Cyber security teams started to form, and eventually (and probably justifiably) the "Office of the Chief Information Security Officer" (OCISO) was created, and, in my opinion, this is where the new risk of organizational air gaps was born.

While having the CISO report to the CEO does allow IT to be held accountable and prevents a CIO from brow-beating the security team into not reporting issues, it has created a difficult, albeit fixable, organizational challenge where the individuals responsible for addressing reported vulnerabilities have their agenda, budgeting and staffing levels set by an entirely different organization. The security apparatus is tasked with deriving a posture/strategy for the organization, and it can easily be received by the IT department as an unfunded mandate. I have worked both in security and within IT departments, and throughout my career, when I wasn't working in INFOSEC, the security of the systems under my purview was never a criterion in my performance evaluation. I was very security conscious, but it had more to do with not wanting to be in the news or embarrassed. The time has come to evaluate cause and effect: systems will always have vulnerabilities and vendors will always have patches. The real vulnerability we need to address might be within our own organizations.

How do we fix this?
Well, let me start with one of my patented insufferable analogies. The fact that my city has a police department doesn't mean that I don't lock my door or that I am not vigilant about my own property. Sadly, the workloads, staff shortages and overall culture of today's enterprise have system owners worrying about everything BUT security. When I first moved into my neighborhood it was VERY sketchy, but I LOVED the old bungalow-style houses, and my wife and I decided to fix one up. Over the next few years, more people moved into the neighborhood who did not want to tolerate flop houses and crack houses, and eventually things got considerably better. Crime went down, property values went up, and an area that had been very costly to the city was now less costly and paying higher property taxes.

So what changed?
As I stated previously, people in my neighborhood made a conscious decision not to allow fringe activity to continue. When we saw people behaving suspiciously, we called the police and generally got involved in the security of our own neighborhood. Contrast this with a neighborhood 20 blocks south of me where the relationship with law enforcement was strained: that neighborhood was less safe, cost the city more money and had considerably lower property values. I am not going to get into the reasons for the strained relationship with law enforcement (some legit, some not), but the point/parallel here is that the better your system owners' relationship is with the CISO's organization, the more functional and safer your CIDR block is going to be. You have to ask yourself what you have between you and security: a wall, a bridge, or a moat with alligators in it. As a federal employee, while I did not work for the OCISO, my group assigned a daily "pit boss" and we built a bridge between our team and our peers on the OCISO side.

Back in 2010, out of frustration with INFOSEC and the way it was functioning, I made the following statement on my EdgeSight under the Hood blog: "Unless you can buy your INFOSEC team a crystal ball or get them an enterprise license to Dionne Warwick's Psychic Friends Network, system owners are going to HAVE to start taking some responsibility for their own security." Security teams cannot be responsible for knowing suspect behavior on systems that they don't oversee on a day-to-day basis. When we factor in things like phishing or credential stealing, we basically have a bad actor using approved, albeit stolen, credentials coming in over approved ports. If someone had stolen a key to my house, walked up to the front door, opened it and started leaving with my property, even with a cop standing right there it would not look suspicious. When I chase them out of my house with a Mossberg, THAT looks suspicious. Sadly, once most systems are compromised, the last people to know are the actual system owners. At ExtraHop we pride ourselves on the visibility we provide both security teams AND system owners. As you evaluate solutions, think of how you can get system owners involved, include IT in the process of implementing them, and make them stakeholders.

Conclusion:
INFOSEC can be a lonely job; when I worked in IT security, generally the only friends I had in the organization were other security folks. A professional barrier with your IT colleagues is fine, but there doesn't need to be an air gap. In my old neighborhood, yes, the local police could end up needing to arrest me one day (luckily I have yet to ascend beyond the occasional "suspicious character" in the police blotter), but that professional barrier should not prevent me from working hand in hand with them as they work to protect me. The people who build, support and architect our digital products pay all of our salaries, including INFOSEC's. I think we need to ask ourselves whether there are any organizational air gaps between the CIO's and CISO's organizations, and what steps we can take to build bridges and ensure everyone is working together.

Thanks for reading!

John M. Smith
Solutions Architect
ExtraHop Networks
The next big breach will be……

Most of the circles I run in are at the point of rolling their eyes when they hear me say, "I can't tell you what the next big breach will be, other than that it will involve one host talking to another host it's not supposed to." One of the challenges I come across, even in the Federal space, is that due to staff shortages, the system sprawl facilitated by virtualization, and the ridiculous workloads some operations teams carry, the ability to distill your security posture into who is talking to who is next to impossible. The top two of the SANS 20 Critical Security Controls are an inventory of authorized and unauthorized devices and an inventory of the apps and software running on those devices. In our conversations with practitioners, these top two controls are consistently mentioned as being extremely difficult to wrangle. I believe some of this is due to the top-down nature of most security tools that perform tasks like SNMP/ping sweeps or WMI sweeps. An individual looking to work in the dark will, if they are worth their salt, effectively ACL themselves off and hide from discovery.

The fix for this is wire data analytics, which does not depend on discovering data by having open ports or having a system respond. With ExtraHop's wire data analytics platform, if you have an IP address and you engage in a transaction with another host that ALSO has an IP address, you are pretty much made. We will see the port/protocol, IP, client and server of the conversation, as well as numerous performance metrics. When this is paired with application inspection triggers, you are positioned to take back your enterprise and get control of those conversations that you don't expect or don't know about: the type of stuff that keeps your CISO up at night.

Enter ExtraHop Segment Auditing:
Using the ExtraHop platform to audit critical segments of your infrastructure serves a two-fold function. First, you are positioned to be alerted immediately when an unauthorized protocol or port has been accessed by a client, or when one of your servers in that segment has engaged in unauthorized traffic to, say, Belarus or China. Second, it allows architecture, security and system owners to reclaim their enterprise by getting a grip on what the exact communication landscape looks like. As previously stated, the combination of staff turnover, system sprawl and workload has left teams with little to no time to spend auditing communications. With the ExtraHop platform as a fulcrum, much of the heavy lifting is already done, drastically reducing the analytics burden.

How it works:
Within the ExtraHop platform you create a device group, assign the template trigger to the device group (example: PCI), and edit a few simple variables that allow you to declare your expected communications. If a transaction falls outside the whitelist of expected/permitted communications, the Discover appliance will take action in the form of alerts, Slack updates, dashboard updates and Explorer (our search appliance) records. The alerted team will have five minutes to investigate the incident before they receive another alert. The idea here is that you investigate and either whitelist or suppress transactions that are not allowed/expected. In doing so, you should have a full map of communications within an hour of deploying the trigger to an audited segment/environment.

Declare Expected Communications:
In the trigger we have one declared variable and three whitelists that can be used to reduce alert fatigue as well as root out unauthorized transactions. (A rough sketch of the check logic follows the definitions below.)

-segment:
Here we set the segment that we are auditing; this is what will show up in the dashboard.

-proto_white_list:
Here we set the protocols that are approved for the specific device group we are auditing.

-cidr_white_list:
This variable is used to set the CIDR blocks that you wish to ignore. I generally only use broadcast-type addressing, as there are risks with whitelisting an entire CIDR block.

-combo_white_list:
For this variable I am using 24-bit blocks combined with a protocol. An example of this whitelist could be the need to alert on CIFS traffic while removing false positives for access to the sysvol share on your Active Directory controllers. Let's say your AD environment lives on 10.1.3.0/24; then we would whitelist "10.1.3.0_CIFS", specifically allowing us to continue to monitor for CIFS while not being alerted on normal Active Directory policy downloads.
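Here is a minimal Python sketch of that check logic, with made-up whitelist contents; the actual trigger runs on the appliance against live flows, so this only illustrates the evaluation order:

import ipaddress

# Hypothetical trigger variables, mirroring the whitelists described above.
SEGMENT = "PCI"
PROTO_WHITE_LIST = {"HTTP", "SSL", "DNS"}
CIDR_WHITE_LIST = ["224.0.0.0/4", "255.255.255.255/32"]   # broadcast-type only
COMBO_WHITE_LIST = {"10.1.3.0_CIFS"}                      # AD sysvol exception

def is_violation(client_ip, server_ip, protocol):
    """Return True when a transaction falls outside the declared whitelists."""
    if protocol in PROTO_WHITE_LIST:
        return False
    if any(ipaddress.ip_address(server_ip) in ipaddress.ip_network(c)
           for c in CIDR_WHITE_LIST):
        return False
    # Combo key: the server's /24 plus the protocol, e.g. "10.1.3.0_CIFS".
    block = ipaddress.ip_network(f"{server_ip}/24", strict=False).network_address
    if f"{block}_{protocol}" in COMBO_WHITE_LIST:
        return False
    return True  # alert, update the dashboard, write the record to Explorer

print(is_violation("10.9.0.7", "10.1.3.10", "CIFS"))  # False: sysvol exception
print(is_violation("10.9.0.7", "10.2.8.20", "CIFS"))  # True: unauthorized CIFS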

Below is a sample of the actual trigger that you assign to each device group you would like to audit.

[Image: sample segment audit trigger]

This same trigger can also be edited to whitelist client-based activity (egress) as well as server-based activity (ingress).

Results:
The result is that you can methodically peel back the onion in the event you have a worm infecting your systems. Additionally, you can systematically begin the process of understanding who is talking to who within your critical infrastructure. Below you see a dashboard that shows unauthorized activity both as a server and as a client. You also have a ticker showing a count of the offending transactions, including the client, server, protocol and role, as well as a rate of protocol violations. Ideally, after a few hours of working out unexpected communications, you would expect this dashboard to be blank. Beyond the dashboards is where the real money is; let's talk about some of the potential workflows available when leveraging the ExtraHop ODS feature and our partners.

[Image: segment audit dashboard]

Possible Workflows:

Export results to Excel and ask system owners what the HELL is going on:
The ExtraHop platform includes a search appliance that allows you to export the results of the segmentation audit to a spreadsheet. This can be attached to an email to the system owners or CSIRT team to find out what is going on with those unauthorized transactions. In the search grid below, what you see is a mapping of all transactions that were not previously declared as safe.

(FYI, the “Other” protocol is typically tunnel based traffic such as ICMP or GRE)

[Image: Explorer search results showing undeclared transactions]


SIEM Integration:
The ODS feature of the ExtraHop platform can send protocol violations to your SIEM workflow. Most CSIRT responses are tied to some sort of SIEM, and ExtraHop can thread wire data surveillance into those workflows seamlessly.

Slack updates:
If you have a distributed SECOPS team, or you want the flexibility of creating a Slack channel and assigning a resource to watch it, the ability to leverage RESTful APIs to integrate with other tools can greatly enhance the agility and effectiveness of your security incident response teams. Below you see an example of sending a link to the alert, or the actual alert itself, into a Slack channel. In our example above, if you are a member of the PCI team or on the governance side of the house (or both, for that matter), you can easily collaborate here. In the scenario below, the INFOSEC resource can actually chat with the system owner to find out if this is, in fact, suspicious activity. The majority of crimes that result in arrest do so as a result of a citizen calling the police and the two working together to determine if a crime has been committed. Sadly, this dynamic doesn't exist in IT today; we are creating it below (alerts are sent within a few milliseconds).

[Image: protocol violation alert posted to a Slack channel]
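For the curious, posting such an alert takes very little code. Here is a minimal Python sketch using a Slack incoming webhook; the webhook URL is a placeholder you would create for your own channel:

import json
import urllib.request

# Placeholder incoming-webhook URL; create one for your own Slack channel.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def slack_alert(segment, client_ip, server_ip, protocol):
    """Post a protocol-violation message into the team's Slack channel."""
    text = (f"[{segment}] protocol violation: "
            f"{client_ip} -> {server_ip} over {protocol}")
    req = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

slack_alert("PCI", "10.9.0.7", "10.2.8.20", "CIFS")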


Tetration Nation:
One big announcement last week at Cisco Live was the ExtraHop integration with Cisco's Tetration product. Below you see an example of how the ExtraHop platform handles a ransomware outbreak. The workflow for protocol violations is the same: should the Discover appliance observe unauthorized communications, the traffic can be tagged and sent to the Cisco security policy management engine, where policies can be enforced.

Conclusion:
One of the battle cries for security in 2017 has been the need to simplify security. Top-down device discovery simply does not work and leaves room for bad actors as well as insider threats to work in the dark. A foundational security practice that includes passive device discovery provides the ground-up approach that lays the groundwork for building a much more stable security practice. Distilling communications down to who is talking to who, and whether it is authorized, has been impossible for far too long. Leveraging ExtraHop's segment auditing capabilities positions you to know, within milliseconds, when a system is operating outside its pre-defined parameters. When coupled with ExtraHop Addy, you can obtain full-circle visibility 24×7.

Thanks for reading

John M. Smith

The Case for Wire Data: GONE in 60….”err”…8 seconds! (Counter-punching DNS jackassery)

A few days ago SANS wrote an article about the importance of tracking DNS query length, stating emphatically that "Size Does Matter." It's an excellent article and certainly worth a read; you can find it here.

The article demonstrates how easy it is to exfiltrate a file using the DNS channel. They ran a script that encoded the /etc/passwd file into base32 chunks and exfiltrated the file to an external DNS server. Since a DNS label is limited to 63 characters, they used all 63 to pack encoded text into the subdomains, allowing them to push the data externally using the internal corporate DNS server as the mule in the process.

Just before showing us the Splunk histograms, they did something VERY unique: they showed you the following command:

# tcpdump -vvv -s 0 -i eth0 -l -n port 53 | egrep "A\? .*\.data\.rootshell\.be"

As you are aware, tcpdump is a wire analysis sniffing tool that shows you packets as they are being observed on the wire. If ONLY there were a way to take action against this behavior as it was being observed on the wire. What if I told you there was one? And you probably didn’t even know you could leverage it in your security practice!

Enter Wire Data:
In a way, the SANS article provided a segue for me to demonstrate the power of wire data. The ExtraHop platform provides full, in-flight stream analysis and has the capability to interact with external APIs that may be able to actually STOP DNS exfiltration instead of telling you that it has already happened.

To run a similar test, I used base64 to exfiltrate a much larger file called blockedIPs.txt with a script named dnsjackassery.sh. I then used an online stopwatch to time the run from when I executed the script to when it finished: the entire text file was exfiltrated in 8.01 seconds. Basically, by the time anything is rendered in Splunk, the data has already been exfiltrated. That doesn't mean the Splunk scenario isn't valuable, but leveraging wire data we can do A LOT more than just tell your SOC that they have been breached.

Below you see that the script was able to exfiltrate the entire file in 8.01 seconds.

[Image: stopwatch showing 8.01 seconds]

Looking at the time it took to exfiltrate the entire blockedIPs.txt file, 8.01 seconds isn't really a lot to work with, as your SOC does not have a crystal ball. BUT, in the world of wire data analytics, where you deal in microseconds, seconds are hours! Below is a diagram of how we have set up the ExtraHop appliance to alert us when DNS exfiltration takes place. Since I don't have an active OpenDNS account, I am using the Slack API to demonstrate how the ExtraHop platform can integrate with intelligent services. For this test we set the following criteria using ExtraHop's application inspection triggers (a sketch of the filter follows the list):

  • DNS query length of 63 characters or greater
  • Not part of a B2B partner's or internal namespace
  • Not a common CDN like Akamai or amazonaws.com
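A minimal Python sketch of that filter, with made-up internal-namespace and CDN whitelists:

# Hypothetical detection filter mirroring the criteria above.
INTERNAL_SUFFIXES = (".corp.example.com", ".example.com")          # your namespace
PARTNER_OR_CDN = (".akamai.net", ".amazonaws.com", ".partner.com") # whitelisted

def suspicious_query(qname):
    """Flag DNS names whose longest label hits the 63-character ceiling."""
    if qname.endswith(INTERNAL_SUFFIXES + PARTNER_OR_CDN):
        return False
    return max(len(label) for label in qname.split(".")) >= 63

# Encoded-chunk lookups like this are exactly what an exfil script generates.
q = "A" * 63 + ".data.rootshell.be"
print(suspicious_query(q))  # True -> fire the Slack/OpenDNS counter-punch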

There will always be SOME whitelisting that needs to occur to avoid digital collateral damage. If the site is a .ru, .cn or, in this case, a .be, and I am a hospital in Minneapolis, then I doubt I have an abundance of business with those namespaces, and I feel pretty comfortable bit-bucketing them via OpenDNS or another next-generation DNS product.

[Image: ExtraHop-to-Slack DNS exfiltration alerting workflow]

So upon executing the dnsjackassery.sh script we begin to see results populating our Slack channel within a few milliseconds. This could just as easily be an API call to your DNS service to “black hole” the suspect domain.

Transaction speed: in the two images below, you can note the performance of the Slack transactions. You can see an average round-trip time of 41ms with an average processing time of 58ms. I would expect an API call to OpenDNS to be similarly fast, which basically means you have plenty of time to stop a file from being fully transferred via DNS exfiltration. The point here is that, unlike many SIEM-based solutions, you are well positioned to counter-punch using ExtraHop and wire data analytics.

[Image: Slack performance metrics, taken from the ExtraHop appliance]

The article also made a point of suggesting that you monitor subdomain counts (they are one of the few to do that; tip of the hat to ya!). Using the ExtraHop platform, we also keep track of the number of subdomains by client/domain combination. If you look below, you see the number of lookups for a specific domain from a single IP. Unless it is a NATed address with a few hundred users behind it, it is pretty safe to say that a large number of the metrics below are NOT human-generated but some sort of program doing the lookups. Even the fastest internet users cannot perform more than a few DNS queries in a 30-second period.

Also noted in the SANS article was the need to pay attention to query lengths. Here we have written a trigger to give you the average query length by root domain. This metric, like the subdomain count, can be leveraged with an API to actually orchestrate a response via OpenDNS or another product.
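Here is a rough Python sketch of both counters; the reporting thresholds are arbitrary placeholders you would tune for your environment:

from collections import defaultdict

subdomains = defaultdict(set)   # (client IP, root domain) -> distinct names seen
qlen_sum = defaultdict(int)     # root domain -> total query length
qlen_cnt = defaultdict(int)     # root domain -> number of queries

def on_dns_query(client_ip, qname):
    """Feed every observed DNS query into the per-client/per-domain counters."""
    root = ".".join(qname.rsplit(".", 2)[-2:])   # crude root-domain extraction
    subdomains[(client_ip, root)].add(qname)
    qlen_sum[root] += len(qname)
    qlen_cnt[root] += 1

def report(max_names=10, max_avg_len=52):
    """Print the combos that look automated rather than human."""
    for (client, root), names in subdomains.items():
        avg = qlen_sum[root] / qlen_cnt[root]
        if len(names) > max_names or avg > max_avg_len:
            print(f"{client} -> {root}: {len(names)} subdomains, avg len {avg:.0f}")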

Conclusion:
There is a big push to add automation and orchestration to most security practices. This is more difficult than it sounds. In the case of DNS exfiltration, many SIEM-based products, while still quite valuable, lack the shutter speed to successfully observe and react. However, when you leverage the power of ExtraHop's wire data analytics and the Open Data Stream (ODS) technology to orchestrate responses, the job becomes almost trivial. As I stated, in our world, seconds are hours and minutes are days. The number of orchestration products hitting the cyber security market is going to make show floors look like a middle-school science fair, with practitioners feeling like they are looking at a myriad of baking-soda volcanoes! Orchestration is only as good as the visibility integrated into it; good visibility starts with wire data, and good visibility starts with the ExtraHop platform.

*PS: anyone running OpenDNS and familiar with the API, I'd LOVE to try counter-punching using the techniques described here!!

The Money is in the Mash-Up: RESTFUL mash-ups to help under-staffed INFOSEC teams

In this post, we will couple ExtraHop's wire data analytics, Anomali STAXX (a leading threat intelligence solution) and Slack (a cloud-based collaboration platform) to demonstrate how we can use orchestration and automation in a manner that helps today's under-(wo)manned security teams meet today's threats with the level of agility needed!

I was fortunate enough to be selected to speak at RSAC 2017, and it was surely a career highlight for me. As several analysts pointed out post-show, automation and orchestration seemed to be the flavor of the year. Over the last 36 months it has become glaringly obvious that we simply cannot keep bad actors and malicious software off our networks. I have been preaching the folly of perimeter-only security since 2010. The speed with which systems are now compromised, and the emergence of the "human vector" through phishing, has all but assured us that the horde is behind the wall and needs to be directly engaged. Logs and SIEM products will give you a forensic view of what is going on, but will do little against today's threats, where a system could be compromised by the time the log is written.

While the idea of automation and orchestration is a great one, there are issues with it, and this will not be the first time "self-defending networks" have been brought to market. Bruce Schneier makes a very good point on his "Schneier on Security" blog when he states the following:

"You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cyber security." He also makes one of the greatest quotes in INFOSEC history when he states, "Data does not equal information, and information does not equal understanding."


Perhaps the battle here is to get to a place of certainty. I too was once an advocate of "log everything and sort it out later," but the process of sorting through the data became extremely tedious, and the amount of work it took to get to "certainty," I believe, gave bad actors time to operate while I wrote SQL queries, batch processes and parsing scripts for my context-starved data sets. Couple this with the fact that teams are digitally bludgeoned to death with alerts and warnings, and the "INFOSEC death sentence" starts to take root as people get desensitized to the alerts.

So where do we find certainty and how do we use it?
While the industry is still developing, there have been great strides in threat intelligence. ISACs around the world are working together to build shared intelligence around specific threats and making the information readily available via TAXII, STIX and CIF. There is even a confidence level associated with each record that we can use as a guide to determine whether a specific action is needed. The challenge with good threat intelligence is how we make it usable. Currently, most threat intel is leveraged in conjunction with a SIEM or logging product. While I certainly advocate for logs, there are some limitations with them:

  • Not everything logs properly (IoT Systems normally have NO logging at all)
  • You have a data gravity issue (you have to move the data into the cloud to be evaluated or you have to store petabytes of data to evaluate)
  • In some cases, only a small portion of the log is usable (but you pay to index the entire log with most platforms)
  • Their use is largely forensic with many of today’s threats

The case for Wire Data Analytics:
The key difference that I want to point out here is that using Wire Data Analytics with ExtraHop you can perform quite a bit of analysis in flight. ExtraHop “takes” data off the wire and is not dependent on another system to “give” the data to it. The only prerequisite for ExtraHop is an IP address. Examples of how I have made a SIEM more effective using wire data include:

  • Reducing logging by a factor of 50 by counting logins by IP and THEN sending a syslog message to the SIEM for those IPs with more than 100 logins, vs. sending tens of thousands of logs per minute to the SIEM and checking on the back end
  • Checking an EGRESS transaction against threat intelligence, THEN sending the syslog if there is a match
  • In an enterprise with tens of thousands of employees, rather than logging EVERY failed login, aggregating records into five-minute increments and sending only those with more than five login failures to the SIEM

The point here is that you can deliver context when you leverage wire data analytics with your SIEM workflows. Using a SIEM only, you must achieve context by aggregating the logs and looking at them after they are written. Using ExtraHop with your SIEM, you achieve context (and, more importantly, get closer to Mr. Schneier's certainty) BEFORE sending the data to the SIEM. You keep all the workflows that are tied to the incumbent SIEM; you are just getting better, and fewer, logs. Should I disable an account that has 50 login failures in the last five minutes (locked out or not)…HELL YES! While I don't think automation and orchestration are a panacea, there are SOME cases where the certainty level is high enough to orchestrate a response. Also, automation and orchestration are not just for responding; they can be used to make your SOC more effective.

Now that I have, hopefully, established the merits of using wire data analytics, let's keep in mind that orchestration does NOT have to be a specific action or response. Orchestration can also be used to make your team more agile and, hopefully, more effective. Most security teams I come across have at least one, two and, in some cases, three open positions. The fact is, at a time when threats are becoming more complex, finding people with the needed skills to confront them is harder than ever. The situation has gotten so bad that the other day I typed "Human Capital Crisis" into Google and it auto-filled "in cybersecurity." The job is getting tougher and there are fewer of us doing it. What I am going to show you in this post will never replace a human being, but it might ease some of the heavy lifting that goes into achieving situational awareness.


PHISHING: “PHUCK YOU, YOU PHISHING PHUCKS!!!”
Anyone who has ever been phished, or worked in an organization that is experiencing a phishing/spear-phishing campaign, has felt exactly as the section title says. Let's have a look at how we can help our security teams get better data by leveraging the APIs of three unique platforms to warn them when a known phishing site has been accessed.

For those of us who are working too hard to bring context to the deluge of data, my suggestion…get some REST!!! Below I am going to walk you through how I can monitor activity to known phishing sites by doing a mash-up of three technologies using the RESTful API of all three platforms.

Solution Roster:

  • ExtraHop Discovery/Explorer appliance
    ExtraHop provides wire data analytics and surveillance by working from a mirror of the traffic. Think of it as a CCTV for packets/transactions.
  • Anomali STAXX Virtual Machine
    Anomali STAXX provides me lists of current threat intelligence. Think of this as equipping the CCTV operator with a list of suspicious characters to look for.
  • Slack Collaboration Community
    Slack provides me a community at packetjockey.slack.com where my #virtualsoc team operates from anywhere in the world.
  • A Python peer (Windows or Linux)
    This is the peer system that pulls the threat intelligence off the STAXX system and uploads it to the ExtraHop appliance.

How it works:
As you can see in the drawing below, the Linux peer uses the REST API to get a list of known phishing sites, then executes a Python script to upload the data into the memcache on the ExtraHop appliance, equipping it with the threat intelligence it needs. The ExtraHop appliance uses an application inspection trigger that checks every outgoing URI to see if it is a known phishing site. If there is a match, an alert is sent to Slack and Email/SMS, in addition to being logged on the internal dashboards and search appliance. A rough sketch of the peer's job follows the diagram.


[Image: ExtraHop, Anomali STAXX and Slack integration diagram]
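Here is a rough Python sketch of the Linux peer's job. Every URL, path and header below is a placeholder; consult the Anomali STAXX and ExtraHop REST API documentation for the real endpoints:

import json
import urllib.request

# Placeholder endpoints and key; substitute your own appliances' values.
STAXX_URL = "https://staxx.example.local/api/v1/intelligence?type=phish_url"
EXTRAHOP_URL = "https://extrahop.example.local/api/v1/session_table/phish_urls"
API_KEY = "0123456789abcdef"

def fetch_phish_urls():
    """Pull the current phish_url indicators from the STAXX box."""
    with urllib.request.urlopen(STAXX_URL) as resp:
        return [rec["value"] for rec in json.loads(resp.read())]

def push_to_extrahop(urls):
    """Upload the indicators so the trigger can match outgoing URIs against them."""
    req = urllib.request.Request(
        EXTRAHOP_URL,
        data=json.dumps(urls).encode(),
        headers={"Authorization": f"ExtraHop apikey={API_KEY}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

push_to_extrahop(fetch_phish_urls())  # run this from cron on the Python peer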


What the final product looks like:
From my Linux box (I don't dare go to these sites on my Windows or Mac laptop), I do a wget against one of the known phishing sites, and within milliseconds (yes, milliseconds; watch the video if you don't believe me) we get the client IP, server IP and the site that was visited. From here we can find out who owns that client machine and get them to change their password immediately, as well as issue an ACL for the server in case this is a spear-phishing campaign targeting specific users. Also, before you ask: "Yes," we can import the list of known malicious email addresses and monitor key executive recipients in case one of them gets an email from a known malicious address. We can also check HTTP referrers against the phish_url threat intelligence.

In the screenshot below, you see my wget command and the result at 11:23:53, and you can see that the Slack warning came in at 11:26. If you watch the video you will see it actually takes milliseconds.

I believe that by using Slack you can also color-code certain messages and program in that awesome "WTF" emoji (if one exists) for specific messages ExtraHop sends. Also, if you are not comfortable with specific information being sent to Slack, we can configure the appliance to send you a link to the LOCAL URI that ONLY you and your team can access.

Conclusion:
While there is a lot of buzz around orchestration and automation, I believe the pessimism around it is justified. Security teams have been promised a lot over the last few years, and what we have found, especially lately, is that a lot of tried-and-true solutions either lack the shutter speed or the context to be effective. Here we are doing some orchestration and automation, but we are doing so in order to give the HUMAN BEING better information. Our security director made a very good point to me the other day when he said the last thing a security team wants is more data. What we have hopefully shown in this post is that if you have open platforms like Anomali, Slack and ExtraHop, you can craft an automation and orchestration solution that actively helps security teams in a manner that still leverages the nuance and rationalization that only exist in a human being. While there will be solutions that automatically block certain traffic, issue ACLs, disable accounts, etc., we can also use automation to do some of the heavy lifting for today's out(wo)manned security teams. To get where I think the cyber security space needs to be, it is going to take more than one product/tool/platform. If you have a solution that is closed and does not support any kind of RESTful API or open architecture, unless it fulfills a specific niche, get rid of it. If you are a vendor selling a closed solution, do so at your own peril, as I believe closed systems are destined to go the way of the dinosaur. By leveraging wire data with existing workflows, you can drastically reduce your TTWTF (Time To WTF!??) and be better positioned to trade punches with tomorrow's threats.

Thanks so much for reading, please watch the video.

John M. Smith
HOLY SHA1-T!!! Google engineers successful SHA1 Collision attack! Will release source code to the public in late May

Not exactly new news here, but in case you have been in a coma the last two weeks: Google managed to engineer a successful SHA1 collision attack and intends to release the source code on or around the May 24th time frame. According to BusinessWire.com, 21% of websites are still using SHA1 certificates. Basically, over 1/5 of the sites on the internet are using a woefully weak hash algorithm, and if they are still doing so near the end of May, they will be doing so with the source code for how to exploit them out in the wild. A colleague of mine once told me, as I was lamenting my frustration at apathy in the enterprise, "Sheep hate the sheep dog more than the wolf." In this case, I see it as a matter of the sheep dog being so fed up that he or she is basically warning the flock that they will not only be left behind, but that the wolves will be told how to get at them. You may or may not agree with this methodology, but regardless, those who do not heed the warning may find themselves part of the thinning of the herd as, most assuredly, the wolves will gather.

One of the challenges in many enterprises is understanding what your exposure is. There are tools that will let you scan systems, but the problem is two-fold: you could spend hundreds of hours securing your own servers only to be breached by a B2B partner or an IoT device that has a weak cipher. While over 1/5 of the internet is still using SHA1, I am betting that internally it is much worse. If we have learned anything over the last 36 months, it's that the perimeter won't keep folks out, and while the wolves may gather in the DMZ, they will work just as easily in the dark when Fred in payroll opens that email attachment or clicks that picture. As enterprise folks, we own the responsibility for thinning our own flock and keeping our own strays in line.

You may be wondering how you can audit this internally, externally and across B2B partners. It may not go over well if you called your B2B partners and told them you were going to start scanning their systems. A solution that allows you to engage in careful surveillance of all SSL transactions and determine the cipher suites used positions you to determine your entire exposure without scanning or crawling your own network or anyone else's. Using ExtraHop's wire data analytics you can observe your SSL transactions and start the process of getting out of technical debt by fixing the issues one system, network and B2B partner at a time.

From the same cited article above, getting visibility into what your exposure is can be difficult:

“The results of our most recent analysis are not surprising” said Kevin Bocek, chief security strategist for Venafi. “Even though most organizations have worked hard to migrate away from SHA-1, they don’t have the visibility and automation necessary to complete the transition. We’ve seen this problem before when organizations had a difficult time making coordinated changes to keys and certificates in response to Heartbleed, and unfortunately I’m sure we are going to see it again.”

If you leverage our wire data analytics platform you can easily audit your exposure by using one of our canned reports on Cipher auditing or set up a quick trigger to audit it yourself.

How it works:
ExtraHop sits back passively and observes network traffic via a mirror (tap, SPAN, etc.). Within this application inspection trigger I am doing the following (an out-of-band spot check in Python follows the list):

  • Checking to see if the certificate signature has "SHA1" anywhere.
  • If it does, writing the record to our EXA Elastic appliance, where I can get a quick look at my exposure to SHA1 from both clients and servers. I can also see who issued the weak certificate (be ready to call your IoT vendors and shrink-wrapped software vendors).
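The trigger does this passively on the wire, but if you want to spot-check a single server out of band, a rough Python equivalent of the same test looks like this (it assumes the third-party cryptography package is installed):

import socket
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def sha1_signed(host, port=443):
    """Fetch a server's leaf certificate and report whether it is SHA1-signed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # we want the cert even if it is untrusted
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    return isinstance(cert.signature_hash_algorithm, hashes.SHA1)

print(sha1_signed("example.com"))  # False on a modern, SHA256-signed cert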

[Image: SHA1 audit application inspection trigger]

Once we have started to write the information to the ExtraHop Explorer Appliance (EXA), we can get an idea of what our exposure is in two clicks (allowing sufficient time to build the data set).

Click 1: Select “SHA1 Audit” from the Record Type combo box.

[Image: selecting the SHA1 Audit record type]

Click 2: In the “Group By” combo box, select “Server IP Address”
Below you see a list of every server using SHA1. Some of them are internet servers and some of them are internal systems. If we want, we can separate internal systems within the trigger using the .isRFC1918 function and only look at our internal systems.

[Image: SHA1 servers grouped by server IP address]

Conclusion:
Over the next two months, it will be important that we determine our exposure to SHA1, not only for the servers we have on the internet, but internally and within the cloud providers we use. Moore's law has dictated that the hardware and computing power to break cipher suites will continue to get better and cheaper. SHA1 had a run of over 20 years (although it has been considered weak for the last few years); cipher suites becoming obsolete is part of the digital cycle of life. It took me a few minutes to write the trigger you see here, and we already have canned auditing tools in the ExtraHop bundle gallery.

Getting visibility this quickly and getting an idea of our risk as easily as this is why we “wire data”.

Thanks for reading!

John

The Case for Wire Data: Security (Interacting with the wire)

Quick post today: I want to go over what I noticed over the weekend after reading up on Quantum Insert and the way it works to infect users with a MOTS (Man-on-the-Side) attack.

On Saturday, I watched a pretty interesting BroCon 2015 presentation from Yun Zheng Hu of Fox-IT.com.

In the presentation, Yun details how you can use Bro to detect Quantum Insert activity by looking on the wire at Layer 4 sequence numbers and at Layer 7 HTTP headers. While I am still working on the Layer 4 surveillance, what I saw in the HTTP headers and payloads was pretty interesting. Basically, the "shooter" has to send back a 302 redirect with a content-length of zero to avoid a malformed HTTP response. As a thought exercise I set up an application inspection trigger to look for this behavior. Using the PCAP they provided, we found the following:

First we set up the inspection trigger looking for a status code of 302 and a content-length of zero on the HTTP_RESPONSE event. You can run the same check against a capture file without an appliance, as sketched below.
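For anyone who wants to try that against the Fox-IT PCAP themselves, here is a rough Python sketch using scapy; TCP reassembly is ignored, so it only inspects responses that fit in a single segment, and the capture path is a placeholder:

from scapy.all import IP, TCP, rdpcap  # pip install scapy

def qi_candidates(pcap_path):
    """Yield (src, dst) pairs for 302 responses carrying Content-Length: 0."""
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and bytes(pkt[TCP].payload):
            raw = bytes(pkt[TCP].payload)
            status_line = raw.split(b"\r\n", 1)[0]
            if raw.startswith(b"HTTP/1.") and b" 302" in status_line:
                headers = raw.split(b"\r\n\r\n", 1)[0].lower()
                if b"content-length: 0" in headers:
                    yield pkt[IP].src, pkt[IP].dst

for src, dst in qi_candidates("quantum_insert.pcap"):  # placeholder path
    print(f"possible QI shooter: {src} -> {dst}")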

Results:
Looking at the PCAP from their website, where they inject a redirect from LinkedIn to Fox-IT.com, you can see the redirect in the results, and we have the ability to report on this type of behavior.

As a further thought exercise, I took my "hackrificial" VM, which I know has some malware/adware on it, and did some browsing. What I noticed was that at least two-thirds of the suspect sites returned a 302 redirect code coupled with a content-length of zero. Here are a few examples:

tags.bluekai.com: (Using POSH VirusTotal script)

Pixel.quantserve.com:

Tap.rubiconproject.com:


r2—sn-a8au-5uae.gvt1.com:


uipglob.semasio.net:
No Webutation data, but we did note that it had a malicious file observed in December of 2016.

Now, a 302 with a content-length of zero does not necessarily mean malware, adware or anything like that, but I think it is worth looking into if you have an ExtraHop Discover appliance. More importantly, what I am trying to point out is how ExtraHop allows you to interact directly with the wire to look for specific scenarios. From here you have the following workflows available:

  • Automate a threat intelligence feed that checks these domains and alerts/orchestrates a response to them.
  • Track them in our Session table, keep a count, and report them in one-minute blocks (instead of on each observation) to give you a better idea of your exposure
  • Send them to Splunk or your INFOSEC CSIRT team

Other Wire Data Intelligence scenarios:

  • Banking login failure URI
    • How often does it get hit (thus how often do users fail to log in)
    • Geolocation of the IPs that failed to log in (I have a small bank in North Carolina that has ten login failures from China?????)
    • Which usernames are consistently failing?
  • Password Reset/New Cookie Banking Login URI:
    • Who was the referrer (has this user been phished?)
    • Geolocation of the IP Address (is it appropriate)
    • Did the user just log in a few days/hours ago? Why do we see a new cookie after they just recently logged in?

The wire will present you with a deluge of data; what a product like ExtraHop does is allow you to set the conditions you want to observe and thread that intelligence into your security practice.

Thanks for reading!

John
The Case for Wire Data: Security

During the second week of February I had the honor of delivering two speaking sessions at the RSA Conference in San Francisco. One of them was on ad hoc threat intelligence, and the second was a Birds of a Feather round-table session called "Beyond Logs: Wire Data Analytics." While it was a great conference, I found that you get some strange looks at a security conference when you are walking around with a badge that says "John Smith." In both sessions, a key narrative was the effectiveness of wire data analytics and its ability to position security teams with the agility needed to combat today's threats. In this post I would like to make the case for wire data analytics and demonstrate its effectiveness as another tool alongside your IDS/IPS and log consolidation.

Wire Data Analytics:
Most security professionals are familiar with wire data already. Having used intrusion prevention and detection software for nearly 20 years, concepts such as port mirroring and SPAN aggregation are already native to them; INFOSEC professionals are some of the original wire data analytics professionals. We differ from IDS/IPS platforms in that we are not looking specifically at signatures; rather, we are rebuilding Layer 2-7 flows. There are several areas where we can help INFOSEC teams: providing visibility into the SANS first two critical security controls, augmenting logs and increasing visibility, and providing a catalyst for ongoing orchestration efforts.

SANS Top 2 Security Critical Controls:
From Wikipedia we have the following list making up the SANS top 20 Cyber Security Controls (Click image if you, like me, are middle aged and can’t see it)

In our conversations with practitioners we commonly hear, "If we could JUST get an inventory of what systems are on the network." As virtualization and automation have matured over the years, the ability to mass-provision systems has made security teams' jobs much harder, as there can be as much as a 15% difference in the number of nodes on a single 24-bit CIDR block from one day to the next, hell, from one hour to the next. Getting a consistent inventory with current technologies generally involves a system responding to an SNMP sweep, ping, WMI mining or NMAP scan. As we have seen with IoT devices, many of them don't have MIBs or WMI libraries, and in most (all) cases, no logs. Most malicious doers prefer to do their work in the dark; if detected, they will try to use approved or common ports to remain unseen.

“All snakes who wish to remain in Ireland, please raise your right hand….” Saint Patrick

The likelihood that a compromised system is going to respond to an SNMP walk, ping or WMI connection, or volunteer what it is doing, is about as likely as a snake raising its right hand.

How ExtraHop works with the top 2 SANS controls:
Most systems try to engineer down to this level of detail; technologies such as SNMP, NetFlow and logs do a pretty good job of getting 80-90 percent of the hosts, but there are still blind spots. When you are a passive wire data analytics solution, you aren’t dependent on someone to “give” data to you; we “take” the data off the wire. This means that if someone shuts off logging or deletes /var/log/*, they cannot hide. A senior security architect once told me, “if it has an IP address it can be compromised”. To that we at ExtraHop would answer, “if it has an IP address, it can’t hide from us”. I cannot tell you what the next big breach or vulnerability will be, but what I CAN say with certainty (and trust me, I coined the phrase “certainty is the enemy of reason”, I am NEVER certain) is that it will involve one host talking to another host it isn’t supposed to. With wire data, if you have an IP address and you talk to another node that ALSO has an IP address, then, provided we have the proper surveillance in place…. YOU’RE BUSTED!

ExtraHop creates an inventory as it “observes” packets and layer 7 transactions. This positions the security team to account for who is talking on their network regardless of the availability of agents, NetFlow, MIBs or WMI libraries. To add to this, ExtraHop applies a layer of intelligence around it. Below, you see a collection of hosts and locations as well as a transaction count. What we have done is import a customer’s CIDR block mapping CSV, which allows us to geocode both RFC1918 and external addresses so that you have a friendly name for each CIDR block. This is a process of reconciling which networks belong to which groups and/or functions. As you can see, we have a few IP addresses; the workflow here is to identify every IP address and classify its CIDR block until you can fully account for who lives where. This turns getting an accurate inventory from what can be a three-month (or longer) project into a few hours of work. Once you have reconciled which hosts belong to which functions, you have taken the first step in building your security controls foundation by securing the first control. The lack of this control is a significant reason why many security practices topple over. An accurate inventory is the foundation; to quote the defensivesecurity.org podcast, “ya gotta know what you have first”.
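
As a rough illustration of that CIDR-reconciliation workflow, here is a hypothetical Python sketch that maps observed IPs to friendly names from a CSV (the file format shown is assumed, not ExtraHop’s actual import format):

    import csv
    import ipaddress

    def load_cidr_map(path):
        """Load rows of 'cidr,friendly_name', e.g. '10.20.30.0/24,PCI Web Farm'."""
        cidr_map = []
        with open(path) as f:
            for row in csv.reader(f):
                if len(row) >= 2:
                    cidr_map.append((ipaddress.ip_network(row[0].strip()), row[1].strip()))
        return cidr_map

    def classify(ip, cidr_map):
        """Return the friendly name for an observed IP, or flag it for reconciliation."""
        addr = ipaddress.ip_address(ip)
        for network, name in cidr_map:
            if addr in network:
                return name
        return "UNCLASSIFIED - reconcile this host"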

Click Image:

SANS Control 2: Inventory of Authorized and Unauthorized Software:
While wire data cannot directly address this control, I tend to interpret it (maybe incorrectly) as covering networked software. While something running on just one host could do significant damage to that one host, most of us worry more about data exfiltration. This means that the malicious software HAS to do something on the network. Here we look at both layer 4 and layer 7 to provide an inventory of what is actually being run on the systems you have finally gathered an inventory for.

In the graphic below, you see one of our classified CIDR blocks. We have used the designation “Destination” (server) to get an accurate inventory of what ports and protocols are being served up by the systems on this CIDR block (or End Point Group, if you are a Cisco ACI person). Given that I have filtered for transactions served up by our “Web Farm”, the expected ports and protocols would be HTTP:8080, SSL:443, HTTP, etc. Sadly, what I am seeing below is someone SSHing into my system, and that is NOT what I expected. While getting to this view took me only two clicks, we can actually do better: we can trigger an alert letting the SOC or CSIRT know that there has been a violation. Later in this post, we will talk about how we could actually counter-punch this type of behavior using our API.

As for the SANS second control: if I look at a web server and see that it is acting as an FTP client to a system in Belarus, I am generally left to conclude that the FTP traffic is likely unauthorized. What ExtraHop gives you, in addition to an accurate inventory, is an accounting of what ports and protocols are in use by both the clients and servers on those segments. While this is not a literal solution for the SANS second control, it does have significant value in that INFOSEC practitioners can see what is traversing their network and are positioned to respond with alerts or orchestrate remediation.
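
Conceptually, the violation logic is simple. Here is a hedged Python sketch, with an invented policy table standing in for the classifications described above:

    # Invented policy: the ports/protocols each classified group is allowed to serve.
    EXPECTED_LISTENERS = {
        "Web Farm": {("HTTP", 8080), ("SSL", 443), ("HTTP", 80)},
    }

    def check_listener(group, protocol, port):
        """Flag any server-side protocol/port not pre-defined for the group,
        such as SSH suddenly being served from a web farm."""
        if (protocol, port) not in EXPECTED_LISTENERS.get(group, set()):
            print(f"VIOLATION: {group} is serving {protocol}:{port}; notify the SOC/CSIRT")

    check_listener("Web Farm", "SSH", 22)   # -> VIOLATION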

Layer 7 Monitoring:
In the video below, titled “Insider Hating”, you see our layer 7 auditing capability. In this scenario we have set up an application inspection trigger to look for any queries of our EHR database. The fictitious scenario here is that we want to audit who is querying our EHR database to ensure that it is not improperly used and that no one steals PHI from it. When an attacker has stolen credentials, or you have an insider, you now have an attack that is going to use approved credentials and approved ports/protocols. This is what keeps CIOs, CISOs and practitioners up at night. We can help, and we HAVE helped, on numerous occasions. Here we are setting up an L7 inspection trigger to look for any ad hoc-like behavior. In doing so, we position not JUST the security team to engage in surveillance, but the system owners as well. This is an ABSOLUTE IMPERATIVE if we want to be able to stop insiders or folks with stolen credentials; we need to do away with the idea that security teams have a crystal ball. When someone runs a “Select * from EHR” from a laptop in the mail room, we can tell you that it came from the mail room and not the web server. We can also alert the DBA and get system owners to take some responsibility for their own security. This same query, to many security teams, will look like an approved set of creds using approved ports. The same information, viewed by the DBA or system owner, may cause them to fall out of their chair and run screaming to the security team’s office. The lack of vigilance by system owners is, in my opinion, the single greatest reason breaches are worse than ever before, in spite of the fact that we spend more money than ever.
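
A simplified sketch of that audit logic in Python (the approved-client list, table names and alerting are illustrative; the real work happens in an ExtraHop inspection trigger):

    APPROVED_DB_CLIENTS = {"10.1.1.10", "10.1.1.11"}   # assumed app-server IPs
    AUDITED_TABLES = ("EHR",)                          # tables under surveillance

    def on_db_query(client_ip, user, query):
        """Audit wire-observed database queries that touch the EHR tables.
        Approved creds on approved ports can still be an insider; origin matters."""
        if any(table.lower() in query.lower() for table in AUDITED_TABLES):
            if client_ip not in APPROVED_DB_CLIENTS:
                # e.g. 'Select * from EHR' issued from a laptop in the mail room
                print(f"ALERT DBA: {user}@{client_ip} ran ad hoc query: {query[:80]}")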

Augmenting Logs:
I love logs; I love Splunk, LogRhythm and of course my old friend Kiwi!! But today’s threats and breaches happen so fast that using just logs positions you to operate in a largely forensic fashion. In many cases, by the time the log is written and noticed by the SOC, the breach has already happened. Below you see a graphic from the Verizon DBIR stating that 93% of all compromises happen within minutes, 11% within seconds. Using just logs and batch processing to find these threats is great for rooting out patterns and malicious behavior but, as I stated previously, largely forensic. As a wire data analytics platform we work/live in a world of microseconds, and thus for us, seconds are hours and minutes are days. Current SIEM products, when not augmented with wire data analytics, simply don’t have the shutter speed to detect, notify or orchestrate a timely response.

Example:
I saw an amazing Black Hat demo on how OpenDNS was using a Hadoop cluster to root out C2 controllers and FastFlux domains. The process involved a periodic batch job using Pig to extract domains with a TTL of 150. Through this process they were able to consistently root out “FastFluxy” domains and build a new block list.

We have had some success here collecting the data directly off the wire. Here is how it works (we are using a DNS tunneling PCAP, but C2 and exfiltration will show similar behavior):

  • First we whitelist common CDNs and common domains such as Microsoft, Akamai, my internal intranet namespace, etc.
  • We collect root domains and start counting the number of subdomains we observe for each.
    • In the example below, we see pirate.sea and we increment each time we observe a subdomain.
  • If a root domain has a count of over 50 subdomains within a 30-second period, we account for it (thus the dashboard below).

The idea behind this inspection trigger is that if the root domain is NOT a CDN, NOT my internal namespace and NOT a common domain like Google or Microsoft, WHY THE HELL DOES THE CLIENT HAVE 24K LOOKUPS? Using logs this is done via a batch process; using wire data, we uncover the suspicious behavior within 30 seconds. Does that mean you don’t need logs, or that the ingenious work done by OpenDNS isn’t useful? Hell no, this simply augments the log-based approach to give you a more agile tool to engage directly with an issue as it is happening. I am certain that even the folks at OpenDNS would find value in getting an initial screening within 30 seconds. In my experience, with good whitelisting, the number of false positives is not overly high. Ultimately, if a single client makes 24,500 DNS lookups for a domain that you don’t normally do business with, it’s worth investigating. Using this method we routinely see malware and adware, as well as third-party, unapproved apps that think they are clever by using DNS to phone home (yes, YOU, Dropbox).
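
Here is a minimal Python sketch of that counting logic (the whitelist, window and threshold mirror the description above; the root-domain extraction is deliberately naive):

    import time
    from collections import Counter

    WHITELIST = {"microsoft.com", "akamai.net", "google.com"}  # plus internal namespace
    WINDOW_SECONDS = 30
    THRESHOLD = 50            # subdomains per root domain per window

    subdomain_counts = Counter()
    window_start = time.time()

    def root_domain(qname):
        """Naive root extraction (last two labels); real code would use a public-suffix list."""
        return ".".join(qname.rstrip(".").split(".")[-2:])

    def on_dns_lookup(qname):
        """Increment the root domain's count for each observed subdomain lookup."""
        global window_start
        root = root_domain(qname)
        if root in WHITELIST:
            return
        subdomain_counts[root] += 1      # e.g. pirate.sea increments per subdomain seen
        if time.time() - window_start >= WINDOW_SECONDS:
            for domain, n in subdomain_counts.items():
                if n > THRESHOLD:
                    print(f"SUSPECT: {domain} had {n} subdomain lookups in {WINDOW_SECONDS}s")
            subdomain_counts.clear()
            window_start = time.time()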

Click Image:

SIEM products are a linchpin for most security teams. For this reason, we support sending data to SIEM platforms such as LogRhythm and Splunk, but we also provide a hand-to-hand combat tool for those SecOps (DevOps) focused teams who want to engage threats directly. In the hand-to-hand world of today’s threats, no platform gives you a sharper knife or a bigger stick than Wire Data Analytics with ExtraHop.

Automation and Orchestration (Digital counter-punching):
In a September 2014 article, GCN asked “is automation security’s only hope?” With the emergence of the “human vector”, what we have learned over the last 18 months is that you can spend ten million dollars on security software, tools and training only to have Fred in payroll open a malicious attachment and undo all of it within a few seconds. As stated earlier in this post, 11% of compromises happen within seconds. All, I hope, is not lost, however; there have been significant improvements in orchestration and automation. At RSAC 2016, Phantom Cyber debuted their ability to counter-punch and won first prize in the Innovation Sandbox. You can go to my YouTube channel and see several instances of integration with OctoBlu, where we use OctoBlu to query threat intelligence and warn us of malicious traffic. But we can go a lot further than this. I don’t think we have to settle for post-mortem detection (which is still quite valuable for restricting subsequent breach attempts) with logs and batched surveillance. Automation and orchestration will only be as effective as the visibility you can provide them.

Enter Wire Data:
Using wire data analytics (keep in mind that ours is a world of microseconds), we have the shutter speed to observe and act on today’s threats and thread our observed intelligence into orchestration and automation platforms such as Phantom Cyber and/or OctoBlu, and do more than just warn. ExtraHop Open Data Stream can securely issue an HTTP post whereby we send a JSON object with the parameters of who to block, positioning INFOSEC teams to potentially stop malicious behavior BEFORE the compromise. Phantom Cyber supports REST-based orchestration, as does Citrix OctoBlu; most newer firewalls have APIs that can be accessed, as does Cisco ACI. The important thing to remember is that these orchestration tools and next-generation hardware APIs need to partner with a platform that can not only observe the malicious behavior but thread the intel into those APIs, positioning security teams for tomorrow’s threats.
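
To illustrate the shape of such an integration, here is a hedged Python sketch of posting a “who to block” JSON object to a REST endpoint; the URL and payload schema are invented for illustration and are not a documented Phantom or OctoBlu API:

    import json
    import urllib.request

    ORCHESTRATOR_URL = "https://orchestrator.example.com/block"   # placeholder endpoint

    def send_block_request(offender_ip, reason):
        """POST a JSON object naming who to block, in the spirit of an Open Data
        Stream HTTP post; the payload fields here are assumptions."""
        payload = json.dumps({"ip": offender_ip, "action": "block", "reason": reason})
        request = urllib.request.Request(
            ORCHESTRATOR_URL,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status      # the orchestrator takes it from here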

My dream integrations include:

  • Upon observing FastFluxy behavior, sending OpenDNS an API call that resolves the offending domain to 127.0.0.1 or a warning page
  • Putting a mac address in an ACI “Penalty box” (quarantine endpoint group) when we see them accessing a system they are not supposed to
  • Sending an API call to the Cisco ASA API to create an ACL blocking a host that just nmapped your DMZ

As orchestration and automation continue to take shape within your own practices, please consider what kind of visibility is available to them. How fast you can observe actionable intelligence has a direct effect on how effective your orchestration and automation endeavors are. Wire data analytics with ExtraHop has no peer when it comes to the ability to set the conditions that make a transaction actionable and act on it. Orchestration and automation vendors will not find a better partner than ExtraHop for making their products better.

Conclusion:
The threat landscape is drastically changing and the tools in the industry are rapidly trying to adapt. An orchestration tool is not effective without a good surveillance tool, and a wire data analytics platform like ExtraHop is made better when coupled with an orchestration tool that can effectively receive REST-based intel. The solution to tomorrow’s threats will not involve a single vendor, and the ability to integrate platforms using APIs will be key to implementing tomorrow’s solutions. The ExtraHop platform is the perfect visibility tool to add to your existing INFOSEC portfolio. Whether you are looking to map out a Cisco ACI implementation or you want to thread wire data analytics into your Cisco Tetration investment, getting real-time analytics and visibility will make all of your security investments better. Wire data analytics will become a key part of any security team’s arsenal, and the days of closed platforms that cannot integrate with other platforms are coming to an end.

There is no security puzzle where ExtraHop’s Wire Data Analytics does not have a piece that fits.

If you’d like to see more, check out my YouTube channel:
https://www.youtube.com/playlist?list=PLPadIDS3iteYhQuFhWy2xZemdIFzMNtpr

Thanks for reading

John Smith

Advanced Persistent Surveillance: Threat Intelligence and Wire Data equals Real-time Wire Intelligence

Please watch the Video!!

As the new discipline of Threat Intelligence takes shape, cyber security teams will need to take a hard look at their existing tool sets if they want to take advantage of dynamic, ever-changing threat intelligence feeds: feeds that provide information on which hosts are malicious and whether any corporate nodes have communicated with any of the malicious hosts, DNS names or hashes collected from CTI (Cyber Threat Intelligence) feeds. Currently, the most common way I see this accomplished is through the use of logs. Innovative products like AlienVault and Splunk can cross-reference the myriad log files they collect with CTI feeds to see whether there has been any IP-based correspondence with known malicious actors called out by those feeds.

Today I want to talk about a different, and in my opinion better, way of integrating with Cyber Threat Intelligence: using Wire Data and the ExtraHop Platform, featuring the Discover and Explorer Appliances.

How does it work? Well let’s first start with our ingredients.

  1. A threat analytics feed (open source, subscription, Bro or CIF created text file)
  2. A peer Unix-based system to execute a python script (that I will provide)
  3. An ExtraHop Discover Appliance
  4. An ExtraHop Explorer Appliance

Definitions:

  • ExtraHop Discover Appliance:
    An appliance that passively (no agents) analyzes wire data at rates from 1Gbps to 40Gbps. It can also scale horizontally to handle large environments.
  • ExtraHop Explorer Appliance:
    ExtraHop’s Elastic appliance that allows for grouping and string-searching the intel gathered off the wire.
  • Session Table: ExtraHop’s memcache, which allows for instant lookup of known malicious hosts.

The solution works by using the Unix peer to execute a python script that collects the threat intelligence data. It then uploads the malicious hosts into the Discover Appliance’s Session Table (up to 32K records). The Discover Appliance then waits to observe a session that connects to one of these known malicious sites. If it sees a session with a known site from the TI feed, the resulting activities include, but are not limited to, the following:

  • Updates a Threat Intelligence dashboard
  • Triggers an alert that warns the appropriate Incident Response team(s) about the connection to the malicious host
  • Writes a record to the ExtraHop Explorer Device
  • Triggers a Precision PCAP capturing the entire session/transaction to a PCAP file to be leveraged as digital evidence in the event that “Chet” the security guard needs to issue someone a cardboard box! (not sure if any of you are old enough to remember “Chet” from weird science)
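
For flavor, here is a hypothetical Python sketch of the feed-to-session-table step; the feed URL, upload endpoint and payload are placeholders, not the actual script or the ExtraHop REST API:

    import json
    import urllib.request

    FEED_URL = "https://feeds.example.com/malicious_hosts.txt"   # placeholder TI feed
    UPLOAD_URL = "https://eda.example.com/session-table"         # placeholder endpoint
    MAX_RECORDS = 32000                                          # session table limit

    def fetch_feed():
        """Pull a newline-delimited blocklist from the threat intelligence feed."""
        with urllib.request.urlopen(FEED_URL) as response:
            hosts = [h.strip() for h in response.read().decode().splitlines() if h.strip()]
        return hosts[:MAX_RECORDS]

    def upload_hosts(hosts):
        """Push the malicious-host list to the Discover Appliance for fast lookups."""
        body = json.dumps({"hosts": hosts}).encode("utf-8")
        request = urllib.request.Request(UPLOAD_URL, data=body,
                                         headers={"Content-Type": "application/json"},
                                         method="POST")
        with urllib.request.urlopen(request) as response:
            return response.status

    upload_hosts(fetch_feed())   # run periodically from cron on the Unix peer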

Please Click Image:

ThreatIntel

Below you see the ExtraHop Threat Intelligence Monitoring Dashboard (last 30 minutes) showing the Client/Server and Protocol as well as the Alert and a running count of violations: (this is all 100% customizable)

Please Click Image:

On the Explorer Appliance, we see the custom data format for Malicious Host Access and we can note the regularity of the offense.
Please Click Image:

And finally we have the Precision Packet Capture showing a PCAP file for forensics, digital evidence and if needed, punk busting.
Please Click Image:

Conclusion:
The entire process that I have outlined above took less than one minute to complete every single task (dashboard, alert, EXA record, PCAP). According to Security Week’s 2015 report, the average time to detect a breach has “improved” to 146 days. Cyber Threat Intelligence has a chance to drastically reduce the amount of time it takes to detect a breach, but it needs a way to interact with existing data. ExtraHop positions your Threat Intelligence investment to interact directly with the network, in real time. Many incumbent security tools are not built to accommodate solutions like CTI feeds via API, or do not have an open architecture to leverage Threat Intelligence, much less a memcache to do quick lookups. The solution outlined above, using ExtraHop with a Threat Intelligence feed, positions INFOSEC teams to perform Advanced Persistent Surveillance without the cost of expensive log-indexing SIEM solutions. Since the data is analyzed in flight and in real time, you have a chance to greatly reduce your time to detection of a breach, maybe even start the Incident Response process within a few minutes!

What you have read here is not a unicorn; this exists today. You just need to open your mind to leveraging the network as a data source (in my opinion, the richest one) that can work in conjunction with your log consolidation strategy and maximize your investment in Cyber Threat Intelligence.

Incidentally, the “Malicious Host” you see in my logs is actually wiredata.net. I did NOT want to browse any of the hosts on the blacklist, so I manually added my own host to the blacklist and then accessed it. Rest assured, wiredata.net is not on any blacklist that I am aware of!

Thanks for reading!

John M. Smith

ExtraBlu: Checking Flash Content to see if it is from a malicious source using ExtraHop and OctoBlu

The Ransomware epidemic has spread across the internet like a plague over the last 18 months. In fact, Ransomware netted $200 million in Q1 of this year. One might think that with numbers like that they would get acquired or start an IPO in the next year!! I predict “Ransomware” makes it into Webster’s dictionary by 2017! (You read it here first.) As many (well… to the extent that I could call my readers “many”) of you have read in my earlier posts, I believe that the lack of surveillance, due to budget, staffing or just good ole fashioned apathy, is the primary reason for most security breaches. In the case of Ransomware, however, I believe that wire data offers the only way to truly combat it. Ransomware plays out in the blind spot behind the perimeter. In a day and age when your credit report can keep you from renewing your clearance or even getting a job, coupled with the COMPLETE AND UTTER LACK of advocacy for the consumer or accountability when the information is wrong, an email stating that you are delinquent on a bill is always taken VERY seriously, and to think that people will just not open it isn’t necessarily practical. That said, the phishing attempts will continue to evolve, and as we block one, they will program another. This is okay! When you use ExtraHop’s wire data analytics, you can pivot too. I know that threat intelligence is still developing, but with the availability of RESTful APIs and an open platform, you have a great shot at keeping your malware/Ransomware exposure to a minimum.

In this post, I want to demonstrate one of those methods. I am going to walk through how I set up ExtraHop to integrate with the OctoBlu platform from one of our partners (Citrix), to give you an example of how open platforms can integrate with one another and provide unparalleled visibility, as well as access to automated workflows that can be used to corral infected systems and decrease exposure. I have been studying attack vectors for Cryptowall, and in many cases a user is redirected to a .SWF URI that contains the malicious software or directs them to download/install it. I want to demonstrate how you can leverage ExtraHop and OctoBlu to audit access to these files and ensure that they are, in fact, not from malicious sources.

Materials: (and VERY special thanks to both VirusTotal and Malware-traffic-analysis.net who perform an utterly invaluable service to the community with their sites!!!)

  • 3 PCAPs from Malware-Traffic-Analysis.net showing the Angler Exploit Kit delivering Ransomware
  • A VirusTotal API key
  • An ExtraHop VM
  • An Ubuntu box running TCPReplay

Definitions:
There is some overlapping nomenclature with OctoBlu and ExtraHop so I want to take the time to define it.

  • ExtraHop Triggers: ExtraHop’s triggers are programmable (JavaScript) objects that allow you to interact directly with your network and use your network as a data source. They allow you to set conditions and initiate outcomes. In this case, we are initiating an OctoBlu trigger as well as a Precision PCAP to perform a packet capture.
  • OctoBlu Trigger: (I am still learning about OctoBlu, but…) an OctoBlu trigger is the item within an OctoBlu flow that initiates the actual workflow being performed.

On the ExtraHop System:
We set up a trigger that looks for externally accessed SWF files.

  • On line 14 we are looking for any URI that has .swf in it.
  • On line 15 we indicate that we are looking for non-RFC1918 addresses (no 10.x, 172.16-31.x or 192.168.x networks, just external) serving up the .swf file.
  • Then on lines 23 – 26 we access the OctoBlu trigger URI location to kick off the OctoBlu flow.
  • Line 24 calls out my specific OctoBlu URI (obscured to avoid pranksters who would send me 10,000 emails).
  • And finally, on lines 29 – 41 we initiate what we call a “Precision Packet Capture” that will also create a PCAP file I can download and evaluate as digital evidence.
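
The trigger itself lives in ExtraHop (in JavaScript), but the detection logic can be sketched in a few lines of Python; the OctoBlu trigger URL and payload fields here are placeholders:

    import ipaddress
    import json
    import urllib.request

    OCTOBLU_TRIGGER_URL = "https://triggers.example.com/my-flow"   # placeholder URI

    def on_http_transaction(uri, server_ip, client_ip):
        """Mirror the trigger's conditions: an external (non-RFC1918) server
        delivering a .swf file kicks off the OctoBlu flow."""
        if ".swf" not in uri.lower():
            return
        if ipaddress.ip_address(server_ip).is_private:   # skip 10/8, 172.16/12, 192.168/16
            return
        payload = json.dumps({"server": server_ip, "client": client_ip, "uri": uri})
        request = urllib.request.Request(OCTOBLU_TRIGGER_URL,
                                         data=payload.encode("utf-8"),
                                         headers={"Content-Type": "application/json"},
                                         method="POST")
        urllib.request.urlopen(request)   # OctoBlu handles the VirusTotal lookup + email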

On the OctoBlu system:
The OctoBlu flow has been set up to leverage four tools and one thing. You will see them defined below:

  • Trigger (Tool): The trigger is what initiates the flow; it has a specific URI assigned to it (called out on line 24 above) and begins the workflow.
    • It receives the JSON payload from ExtraHop (line 26) and sends the server IP delivering the SWF file to the HTTP GET tool, which then queries VirusTotal.com’s API, passing my API key and the server IP.
    • It also passes the client and server IPs, as well as a URL, to the “Compose” tool.
  • HTTP Get (Tool): This is the actual query of the VirusTotal API that checks whether the IP is malicious or not.
  • Compose (Tool): Labeled “Consolidate JSON messages” below, this takes the JSON objects from both the initial trigger and the HTTP Get tool and creates a single set of metrics to be passed to the Template tool.
  • Template (Tool): Labeled “Prepare Message” below, this takes all of the JSON metrics created in the previous tool and sends them to the “Send Email” thing.
  • Send Email (Thing): This is the actual act of sending me an email warning that a user has accessed a .SWF file from a malicious source.

(Please Click the Image)

ExtraBlu-A

The Warning from OctoBlu:
Below is a copy of the email I received when I replayed the PCAP file from malware-traffic-analysis.net. As you can see, it includes the server, the client and a link to the server IP’s VirusTotal dossier.

The Digital Evidence:
As noted on lines 29 – 41 of the ExtraHop trigger, we have also kicked off a Precision Packet Capture. This makes the actual transaction readily available to download and examine in Wireshark, both to determine whether there is an actual issue and to leverage the PCAP itself as digital evidence. As you see below, you have a PCAP named “External SWF File Accessed”.

Conclusion:
So, when @Brianmadden remarked on Twitter that OctoBlu was now “True Blue Citrix”, he asked, “What can you do with it?” With the right integration platform, I believe there is quite a bit that can be done with OctoBlu, both with ExtraHop and with Citrix’s own portfolio of tools. What I love about the two platforms is their openness and their ability to increase the aperture of Security, Dev and Network Operations teams, giving them the kind of agility needed to fight in today’s “hand-to-hand combat” world where breaches and vectors pivot, stick and move on a monthly, weekly and daily basis.

Using ExtraHop I have been able to deliver the following integration solutions: (with more to come from an ExtraHop “Blu Bundle”).

  • Get warnings about users who are experiencing high latency
  • Get warnings about long logon times that fall outside an SLA
  • And now the post above, where I am warned when an end user accesses known malicious external Flash content.

Other scenarios could include using the NetScaler HTTP Callout feature to warn you when a user launches an ICAPROXY session from outside the US (a breach that actually happened), or when a known malicious actor accesses the company website hosted on a NetScaler VIP. You could also, potentially, use OctoBlu and MeshBlu to shut off NetBIOS on a system that we see encrypting file shares with Cryptowall.

My comment back to Brian, “the real question is, what CAN’T you do with it?”, was not meant to be snarky or pithy; it was born of enthusiasm for open architectures. Sadly, that session has been removed from the site, but I spoke about this a few years ago at Geek Speak during a session called “Return of the Generalist”. APIs are your friend; those who do not embrace them run the risk of becoming irrelevant in the new world and may fall prey to “Digital Darwinism”. Embrace Python, JavaScript, Go, etc., and watch your value to this industry increase along with your effectiveness. Be it INFOSEC, Citrix, SOA, SDN or database work, APIs will have a major role in tomorrow’s IT.

Thanks for reading!!!

Please watch the Video!!

John