
Advanced Persistent Surveillance: Threat Intelligence and Wire Data equals Real-time Wire Intelligence

Please watch the Video!!

As the new discipline of Threat Intelligence takes shape, Cyber Security teams will need to take a hard look at their existing tool sets if they want to take advantage of dynamic, ever-changing threat intelligence feeds. These feeds tell you which hosts are malicious and let you determine whether any of your corporate nodes have communicated with any of the malicious hosts, DNS names or hashes that you are collecting from your CTI (Cyber Threat Intelligence) feeds. Currently, the most common way that I see this accomplished is through the use of logs. Innovative products like AlienVault and Splunk can check the myriad of log files that they collect, cross-reference them with CTI feeds, and see whether there has been any IP-based correspondence with any known malicious actors called out by those feeds.

Today I want to talk about a different, and in my opinion better, way of integrating with Cyber Threat Intelligence using Wire Data and the ExtraHop platform, featuring the Discover and Explorer Appliances.

How does it work? Well let’s first start with our ingredients.

  1. A threat analytics feed (open source, subscription, Bro or CIF created text file)
  2. A peer Unix-based system to execute a python script (that I will provide)
  3. An ExtraHop Discover Appliance
  4. An ExtraHop Explorer Appliance

Definitions:

  • ExtraHop Discover Appliance:
    An appliance that can passively (no agents) read data at rates from 1 Gbps to 40 Gbps. It can also scale horizontally to handle large environments.
  • ExtraHop Explorer Appliance:
    ExtraHop’s Elastic appliance that allows for grouping and string searching INTEL gathered off the wire.
  • Session Table: ExtraHop’s memcache that allows for instant lookup of known malicious hosts.

The solution works by using the Unix peer to execute a python script that collects the threat intelligence data and uploads the malicious hosts into the Discover Appliance’s Session Table (up to 32K records). The Discover Appliance then waits to observe a session that connects to one of these known malicious sites. If it sees a session with a known-bad host from the TI feed, the resulting actions include, but are not limited to, the following (a hedged sketch of the trigger side appears after this list):

  • Updates a Threat Intelligence dashboard
  • Triggers an alert that warns the appropriate Incident Response team(s) about the connection to the malicious host
  • Writes a record to the ExtraHop Explorer Device
  • Triggers a Precision PCAP capturing the entire session/transaction to a PCAP file to be leveraged as digital evidence in the event that “Chet” the security guard needs to issue someone a cardboard box! (not sure if any of you are old enough to remember “Chet” from weird science)
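
For readers who want to see roughly what the lookup side looks like, here is a minimal trigger sketch. The Session table key format, metric names, record type and capture options are illustrative, and the trigger API calls used here (Session.lookup, metricAddCount, commitRecord, Flow.captureStart) are assumptions to verify against the trigger API reference for your firmware version; the Python feed loader itself is not shown.

    // Event: FLOW_CLASSIFY (or HTTP_REQUEST, depending on what the feed contains)
    // Assumes the Python loader has populated Session table keys of the
    // form "ti:<ip>" with a value describing the feed entry.
    var serverIp = Flow.server.ipaddr.toString();
    var hit = Session.lookup("ti:" + serverIp);
    if (hit === null) {
        return; // not on the threat list
    }

    // 1. Feed the Threat Intelligence dashboard (alerts are tied to this metric in the UI).
    metricAddCount("ti_violations", 1);
    metricAddDetailCount("ti_violations_by_client", Flow.client.ipaddr.toString(), 1);

    // 2. Write a record to the Explorer appliance (custom record type).
    commitRecord("ti_malicious_host_access", {
        clientIp: Flow.client.ipaddr.toString(),
        serverIp: serverIp,
        proto: Flow.l7proto,
        feedInfo: hit
    });

    // 3. Kick off a Precision PCAP of the offending flow as digital evidence.
    Flow.captureStart("TI-" + serverIp, { maxBytes: 512000 });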


Below you see the ExtraHop Threat Intelligence Monitoring Dashboard (last 30 minutes) showing the client, server and protocol as well as the alert and a running count of violations (this is all 100% customizable):


On the Explorer Appliance, we see the custom data format for Malicious Host Access, and we can note the regularity of the offense.

And finally, we have the Precision Packet Capture showing a PCAP file for forensics, digital evidence and, if needed, punk busting.

Conclusion:
The entire process that I have outlined above took less than one minute to complete every single task (Dashboard, Alert, EXA, PCAP). According to Security Week, the average time to detect a breach has “improved” to 146 days in its 2015 report. Cyber Threat Intelligence has a chance to drastically reduce the amount of time it takes to detect a breach, but it needs a way to interact with existing data. ExtraHop positions your Threat Intelligence investment to interact directly with the network, in real time. Many incumbent security tools are not built to consume CTI feeds via API and do not have an open architecture for leveraging Threat Intelligence, much less use memcache to do quick lookups. The solution outlined above, using ExtraHop with a Threat Intelligence feed, positions INFOSEC teams to perform Advanced Persistent Surveillance without the cost of expensive log-indexing SIEM solutions. Since the data is analyzed in flight and in real time, you have a chance to greatly reduce your time to detect a breach, maybe even start the Incident Response process within a few minutes!

What you have read here is not a unicorn; this exists today. You just need to open your mind to leveraging the network as a data source (in my opinion the richest one) that can work in conjunction with your log consolidation strategy and maximize your investment in Cyber Threat Intelligence.

Incidentally, the “Malicious Host” you see in my logs is actually wiredata.net. I did NOT want to browse any of the hosts on the blacklist, so I manually added my own host to the blacklist and then accessed it. Rest assured, WireData.net is not on any blacklists that I am aware of!

Thanks for reading!

John M. Smith

ExtraBlu: Checking Flash Content to see if it is from a malicious source using ExtraHop and OctoBlu

The Ransomware epidemic has spread across the internet like a plague in the last 18 months. In fact, Ransomware netted $200 million in Q1 of this year. One might think with numbers like that they would get acquired or start an IPO in the next year!! I predict “Ransomware” makes it into Webster’s dictionary by 2017! (You read it here first.) As many of you (well…to the extent that I can call my readers “many”) have read in my earlier posts, I believe that the lack of surveillance, due to budget, staffing or just good ole fashioned apathy, is the primary reason for most security breaches. In the case of Ransomware, however, I believe that wire data offers the only way to truly combat it. Ransomware plays out in the blind spot behind the perimeter. In a day and age when your credit report can keep you from renewing your clearance or even getting a job, coupled with the COMPLETE AND UTTER LACK of advocacy for the consumer or accountability when the information is wrong, an email stating that you are delinquent on a bill is always taken VERY seriously, and expecting people simply not to open it isn’t practical. The phishing attempts will continue to evolve, and as we block one, they will program another. This is okay! When you use ExtraHop’s wire data analytics, you can pivot too. I know that threat intelligence is still developing, but with the availability of RESTful APIs and an open platform, you have a great shot at keeping your malware/Ransomware exposure to a minimum.

In this post, I want to demonstrate one of those methods. Today I am going to walk through how I set up ExtraHop to integrate with the OctoBlu platform from one of our partners (Citrix), to give you an example of how two open platforms can integrate with one another and provide unparalleled visibility, as well as access to automated workflows that can be used to corral infected systems and decrease exposure. I have been studying attack vectors for Cryptowall, and in many cases a user is redirected to a .SWF URI that contains the malicious software or directs them to download/install it. I want to demonstrate how you can leverage ExtraHop and OctoBlu to audit access to these files and ensure that they are, in fact, not from malicious sources.

Materials: (and VERY special thanks to both VirusTotal and Malware-traffic-analysis.net who perform an utterly invaluable service to the community with their sites!!!)

  • 3 PCAPs from Malware-traffic-analysis.net showing the Angler Exploit Kit delivering Ransomware
  • A VirusTotal API key
  • An ExtraHop VM
  • An Ubuntu box running TCPReplay

Definitions:
There is some overlapping nomenclature between OctoBlu and ExtraHop, so I want to take the time to define it.

  • ExtraHop Triggers: ExtraHop’s triggers are programmable (javascript) objects that allow you to interact directly with your Network and use your Network as a data source. They allow you to set conditions and initiate outcomes. In this case, we are initiating an OctoBlu trigger as well as a precision PCAP to perform a packet capture.
  • OctoBlu Trigger: (I am still learning about OctoBlu but…) An OctoBlu trigger is the item within an OctoBlu flow that initiates the actual workflow that is being performed.

 

On the ExtraHop System:
We set up a trigger that looks for externally accessed SWF files (a hedged sketch of such a trigger appears after the list below).

  • On line 14 we are looking for any URI that has .swf in it.
  • On line 15 we indicate that we are looking for non-RFC1918 addresses (no 10.x, 172.16–31.x or 192.168.x networks, just external) serving up the .swf file.
  • Then on lines 23 – 26 we access the OctoBlu trigger URI location to kick off the OctoBlu Flow.
  • Line 24 calls out my specific OctoBlu URI. (to avoid pranksters who will send me 10,000 emails)
  • And finally, on lines 29 – 41 we are initiating what we call a “Precision Packet Capture” that will also create a PCAP file that I can download and evaluate as digital evidence.
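
Since the trigger itself was shown as a screenshot in the original post, here is a hedged approximation of that logic. The Open Data Stream target name ("octoblu"), the OctoBlu flow/trigger path, and the capture options are placeholders, and the API calls used (isRFC1918, Remote.HTTP().post, Flow.captureStart) are assumptions to verify against the ExtraHop trigger documentation for your firmware.

    // Event: HTTP_RESPONSE
    // Fire only for .swf content served from a non-RFC1918 (external) address.
    if (HTTP.uri === null || HTTP.uri.toLowerCase().indexOf(".swf") === -1) {
        return;
    }
    if (Flow.server.ipaddr.isRFC1918) {
        return; // internal server, ignore
    }

    // Hand the client/server details to the OctoBlu flow via its trigger URI.
    // "octoblu" is an HTTP Open Data Stream target configured in the admin UI;
    // the path below is the per-flow trigger URI from OctoBlu (placeholder values).
    Remote.HTTP("octoblu").post({
        path: "/flows/<my-flow-id>/triggers/<my-trigger-id>",
        payload: JSON.stringify({
            clientIp: Flow.client.ipaddr.toString(),
            serverIp: Flow.server.ipaddr.toString(),
            uri: HTTP.uri
        })
    });

    // Precision Packet Capture of the transaction for digital evidence.
    Flow.captureStart("External SWF File Accessed", { maxPackets: 500 });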

 

On the OctoBlu system:
The OctoBlu flow has been set up to leverage four tools and one thing. You will see them defined below as follows:

  • Trigger (Tool): The Trigger is what initiates the flow, it has a specific URI assigned to it (called out in Line 24 above) and begins the workflow.
    • It receives the JSON payload from ExtraHop (Line 26) and sends the Server IP delivering the SWF File to the HTTP GET tool which then queries VirusTotal.com’s API passing my API Key and the Server IP.
    • It also passes the Client and Server IP’s as well as a URL to the “Compose Tool”
  • HTTP Get (Tool): This is the actual query of the VirusTotal API that checks to see if the IP is malicious or not.
  • Compose (Tool): Labeled “Consolidate JSON messages” below, this takes the JSON objects from both the initial trigger and the HTTP Get tools and creates a single set of metrics to be passed to the Template Tool.
  • Template (Tool): Labeled “Prepare Message” below, this takes all of the JSON metrics created in the previous tool and sends them to the “Send Email” thing
  • Send Email (Thing): This is the actual act of sending me an email warning me that I have had a user access a .SWF file from a malicious source.


 

 

The Warning from OctoBlu:
Below is a copy of the email I received when I replayed the PCAP file from malware-traffic-analysis.net. As you can see, it includes the server, the client, and a link to the server IP’s VirusTotal dossier.

 

The Digital Evidence:
As noted on lines 29 – 41 in the ExtraHop trigger, we have also kicked off a Precision Packet Capture. This makes the actual transaction readily available to download and examine in Wireshark, to determine whether there is an actual issue, and to leverage the PCAP itself as digital evidence. As you see below, you have a PCAP named “External SWF File Accessed”.

Conclusion:
So, the question was asked by @Brianmadden on Twitter, as he remarked that OctoBlu was now “True Blue Citrix”: “What can you do with it?”. With the right integration platform I believe that there is quite a bit that can be done with OctoBlu, both with ExtraHop and with Citrix’s own portfolio of tools. What I love about the two platforms is their openness and their ability to increase the aperture of Security, Dev and Network Operations teams, allowing them to have the kind of agility needed to fight in today’s “hand-to-hand combat” world where breaches and vectors pivot, stick and move on a monthly, weekly and daily basis.

Using ExtraHop I have been able to deliver the following integration solutions: (with more to come from an ExtraHop “Blu Bundle”).

  • Get warnings about users who are experiencing high latency
  • Get warnings about long logon times that fall outside an SLA
  • Now the post above, where I am warned when an end user accesses known malicious external flash content.

Other scenarios could include using the Netscaler HTTP Callout feature to warn you when a user launches an ICAPROXY session from outside the US (a breach that actually happened), or when a known malicious actor accesses the company website hosted on a Netscaler VIP. You could also, potentially, use OctoBlu and MeshBlu to shut off Netbios on a system that we see encrypting file shares with Cryptowall.

My comment back to Brian, “the real question is, what CAN’T you do with it”, was not meant to be snarky or pithy; it was born of enthusiasm for open architectures. Sadly, the session has been removed from the site, but I spoke about this a few years ago at Geek Speak during a session called “Return of the Generalist”. APIs are your friend; those who do not embrace them run the risk of becoming irrelevant in the new world and may fall prey to “Digital Darwinism”. Embrace Python, JavaScript, Go, etc., and watch your value to this industry increase along with your effectiveness. Be it INFOSEC, Citrix, SOA, SDN or Database, APIs will have a major role in tomorrow’s IT.

Thanks for reading!!!

Please watch the Video!!

 

John

 

Advanced Persistent Surveillance: Insider “Hating” with ExtraHop and Wire Data

My last few posts have been centered on how we can go about finding potential breaches in your environment using ExtraHop’s wire data analytics platform. Most of these have involved placing a logical boundary around a set of CIDR blocks and reporting on L4 transactions that fall outside defined boundaries. In the case of our Stream Analytics Critical Control Points post, we looked at connections from PROD to networks other than PROD and took a packet capture when we noted a violation. (Please see the SACCP post.)

In this post, I want to talk about how we can provide surveillance around L7 security, specifically by monitoring database traffic at Layer 7.

Scenario:

A disgruntled employee is about to start querying a sensitive database to steal important information. Since the user has approved credentials and will be accessing the database over approved ports and protocols, many standard security tools will not detect this insider’s behavior, as they are operating in the “Behind the Perimeter Blind Spot” that exists in most organizations. In this hypothetical scenario, we have a table called “Employees” and we want to audit any and all ad hoc selections against it. The way the CRM application is set up, the only transactions we should observe against this table would involve stored procedures. Any “Select, Insert, Update or Drop” methods should be alerted on immediately.

Database: CRM
Sensitive Table: Employees

We set up the audit trigger below, telling the appliance to start a PCAP capture in the event that we see any sort of database transaction that includes “from employees”.
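
The audit trigger (shown as an image in the original post) amounts to only a few lines of JavaScript along these lines. The table name, capture name and capture options are specific to this demo, and the DB.statement property and Flow.captureStart call are assumptions that should be confirmed against the trigger API for your firmware.

    // Event: DB_REQUEST
    // Audit any ad hoc statement touching the sensitive Employees table.
    if (DB.statement === null) {
        return;
    }
    var stmt = DB.statement.toLowerCase();
    if (stmt.indexOf("from employees") === -1) {
        return;
    }

    // Capture the offending transaction as digital evidence.
    Flow.captureStart("SENSITIVE TABLE - " + Flow.client.ipaddr.toString(),
                      { maxPackets: 200 });

    // Optionally surface it on a dashboard / alertable metric as well.
    metricAddCount("sensitive_table_adhoc_queries", 1);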


 

We assign this trigger to the Database server housing our CRM Application and we should then be alerted any time someone runs an ad hoc query against the database.

Now, when an insider makes an ad hoc query, we can alert, send a syslog or, as in the case of the trigger above, initiate a packet capture.  As you can see below, we see a number of SENSITIVE TABLE PCAP files pertaining to the client that ran the report as well as the server it was run against.


 

When we look at the PCAP, we can see the ad hoc query that was run, and we can use the PCAP file as digital evidence to begin the process of dealing with the individual violating the policy. As you can see, the user ran the query “select LastName from Employees”. This is demo data, but it could just as easily have been “Select * from Customers” or “CCards”.


 

Conclusion:

As I recall, the Anthem breach was actually discovered by a system owner who noticed someone running ad hoc queries against their customer database. By providing Layer 7 visibility into the actual transaction, insiders can be effectively “named and shamed” when auditing with ExtraHop’s Wire Data platform. In today’s world, credentials are a joke (trust me, I sit at the core and look at packets all day). When an insider is using approved credentials and coming in over approved ports and protocols, the aperture needs to be increased to provide visibility into the L7 transactions, to ensure that they are appropriate and that an insider is not running unauthorized queries against your sensitive data.

Layer 7 auditing with ExtraHop positions you to give insiders a cardboard box, not access!

Thanks for reading!

John

SACCP: Stream Analytics Critical Control Point

I left the enterprise approximately 30 months ago after being a cubicle drone for the previous 18 years. I now work for ExtraHop Networks, a software company that makes a wire data analytics platform providing organizations with operational intelligence about their applications and the data that traverses their wire, basically shining light on the somewhat opaque world of packet analysis.

In the last few years, I can honestly say that I find myself getting a bit frustrated with the number of breaches that have occurred due, in my opinion, in large part to the lack of involvement by system owners in their own security. For my household alone, in the last 24 months, we are on our 5th credit card (in fact, I look at the expiration dates on most of my credit cards and chuckle on the inside, knowing I will never make it). I am also a former Federal employee with a clearance, so I have the added frustration of knowing several Chinese hackers likely had access to my SF86 information (basically my personal and financial life story). In the last 15 years, we have added a range of regulatory frameworks and Security Operations Centers (SOCs), and I have watched INFOSEC budgets bulge while I needed to justify my $300 purchase of Kiwi Syslog Server. I have concluded that maybe the time has come for the industry to try a new approach. The breaches seem to get bigger, and no matter what we put in place, insiders or hackers just move around it. At times I wonder if a framework I learned in my career prior to Information Technology may be just what the industry needs.

My first job out of College was with Maricopa County Environmental Health (I was the health inspector) and I was introduced to a concept called HACCP (Hazard Analysis Critical Control Point) and I think some of what I learned from it can be very relevant in analyzing today’s distributed and often problematic environments.


HACCP, pronounced “hassup”, is a methodology of ensuring food safety by the development of a series of processes that ensure, in most cases, that no one gets sick from eating your food.  It involves evaluating the ingredients of each dish and determining which food is potentially hazardous and what steps need to be taken to ensure that quality is ensured/maintained from food prep to serving.

While working as the health inspector, I was required to visit every permit holder twice a year and perform a typical inspection that involved taking temperatures, making sure they had hot water, making sure employees washed their hands and stayed home when they were sick, etc. But in most, if not all, of the restaurants I inspected, the process of checking temperatures, ensuring there was soap at the hand-wash station and making sure there was hot water did not JUST happen during an inspection; I knew that in most cases it went on even when I was not on the premises. Sadly, in today’s enterprise, systems are generally only checked and/or monitored when an application team is being audited. An incumbent INFOSEC team cannot be responsible for the day-to-day security of a shared services or hosting team’s applications any more than I could be in every single restaurant every single day. The operator has to take responsibility; I am proposing the same framework for today’s enterprise. Shared services and hosting teams need to take responsibility for their own security and use INFOSEC as an auditing and escalation solution. I will attempt to show how ExtraHop’s Stream Analytics solution can provide an easy way to accomplish this, even in today’s skeleton-crew enterprise environments.

Let’s start with some parallels.

An example of a HACCP based SOP would be:

  • The cooling of all pre-cooked foods will ensure that foods are cooled from 135 degrees to 70 degrees within two hours
  • The entire cooling process from 135 degrees to 41 degrees will not take more than 6 hours.

So, I am taking away the “H” and putting in an “S” for SACCP: I am proposing that we do the same for the applications and systems that we support, at the packet level. Just as a dish may contain chicken, cheese and other potentially hazardous ingredients, applications may have SSO logins, access tokens and PII being transferred between the DB and middle or front-end tiers. We need to understand each part of an infrastructure that represents risk to an application, what an approved baseline is, what mitigation steps to take and who is responsible for maintaining it. Let’s take a look at the 7 HACCP/SACCP principles.

Principle 1 – Conduct a Hazard Stream Analysis
The application of this principle involves listing the steps in the process and identifying where there may be significant risk. Stream analytics will focus on hazards that can be prevented, eliminated or controlled by the SACCP plan. A justification for including or excluding the hazard is reported and possible control measures are identified.

Principle 2 – Identify the Critical Control Points
A critical control point (CCP) is a point, transaction or process at which control (monitoring) can be applied to ensure compliance and, if needed, a timely response to a breach.

Principle 3 – Establish Critical Limits
A full understanding of acceptable thresholds, ports and protocols for specific transactions will help with identifying when a CCP is outside acceptable use.

Principle 4 – Monitor Critical Control Point
Monitor compliance with CCPs using ExtraHop’s Stream analytics Discover and Explorer appliances to ensure that communications are within the expected and approved ports and protocols established in each CCP.

Principle 5 – Establish Corrective Action
Part of this is not only understanding what to do when a specific critical control point is behaving outside the approved limits but to also establish who owns the systems involved in each CCP.  For example, if a Critical Control Point for a server in the middle-tier of an application is suddenly SCP-ing files out to a server in Russia, establish who is responsible for ensuring that this is reported and escalated as soon as possible as well as establish what will be done in the event a system appears to be compromised.

Principle 6 – Record Keeping
Using the ExtraHop Explorer appliance, custom queries can be set up and saved to ensure that there is proper compliance with established limits. Also integration with an external SIEM for communications outside the established limits can be enabled as well as HTTP push and Alerting.

Principle 7 – Establish Verification
Someone within the organization, either the INFOSEC team or team lead/manager must verify that the SACCP plan is being executed and that it is functioning as expected.

So what would a SACCP strategy look like?

Let’s do a painfully simple exercise using both the ExtraHop Discover Appliance and the ExtraHop Explorer Appliance to create a Stream Analytics Critical Control Point profile.

Scenario: We have a Network that we want to call “Prod”.

Principle 1: Analysis
Any system with an IP Address starting with “172.2” is a member of the Prod network and there should ONLY be INGRESS sourcing from the outside (The Internet) and Peer-to-Peer communications between Prod Hosts. No system on the Prod network should establish a connection OUTSIDE Prod.

Principle 2: Identify CCPs
In this case, the only Critical Control Point (CCP) is the Prod network.

Principle 3: Limits
As stated, the limits are that Prod hosts can accept connections from the outside BUT they should not establish any sessions outside the Prod network.

Principle 4: Monitoring

Using the ExtraHop Discover Appliance (EDA) we will create a trigger that identifies transactions based on the logical network names of their given address space and monitor both the ingress and egress of these networks.

In the figure below, we outline how we set a logical boundary to monitor communications. In this manner we can lay the groundwork for monitoring the environment by first identifying which traffic belongs to which network.

  • You see on line 5 in the trigger below we are establishing which IP blocks belong to the source (egress) networks.
  • You then see on line 11 we are identifying the prod network as a destination (ingress).

*Important: you DO NOT have to learn to write triggers, as we will write them for you, but we are an open platform and we do provide an empty canvas to our customers should they want to paint their own masterpiece, so we are showing you how we do it. A rough sketch of this classification trigger follows.
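
As a point of reference, the logic in the screenshot boils down to something like the sketch below: tag each flow with a source and destination network name based on its address block, then commit the classified flow to the Explorer appliance. The CIDR test, record name and field names are illustrative, and the trigger API calls (isRFC1918, commitRecord, Flow.l7proto) should be verified against the trigger reference for your firmware.

    // Event: FLOW_CLASSIFY (or FLOW_TICK)
    // Label traffic by logical network so Prod ingress/egress can be audited.
    function networkName(ip) {
        // Placeholder prefix test; adjust to your own address plan.
        if (ip.toString().indexOf("172.2") === 0) {
            return "Prod";
        }
        return ip.isRFC1918 ? "Internal-Other" : "External";
    }

    var src = networkName(Flow.client.ipaddr);   // source / egress side
    var dst = networkName(Flow.server.ipaddr);   // destination / ingress side

    // Commit the classified flow to the Explorer appliance (EXA).
    // Additional metrics (bytes, ports, etc.) can be added to the record.
    commitRecord("saccp_flow", {
        source: src,
        destination: dst,
        clientIp: Flow.client.ipaddr.toString(),
        serverIp: Flow.server.ipaddr.toString(),
        proto: Flow.l7proto
    });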

 

Next we will leverage the ExtraHop Explorer Appliance (EXA) to demonstrate where the traffic is going. You will see on line 28 (although commented out) that we are committing several metrics to the EXA, such as source, destination, protocol, bytes, etc. This completes Principle 4 and allows us to monitor the Prod network. In the figure below, you will see that we are grouping by “Sources”. You will note that Prod has successfully been classified and has over one million transactions.

 

 

Principle 5: Establish Corrective Action
Well, in our hypothetical Prod network, we have noted that there are some anomalies. As you can see below, when we filter on Prod as the source and group by destination, we see that 15 of our nearly 1.3 million transactions were external. In most situations, this would go largely unnoticed by many tools; however, using SACCP and ExtraHop’s Stream Analytics platform, the hosting team or SOC is positioned to easily see that there is an issue and begin the process of escalating it or remedying it with further investigation.

*Note: we can easily create an alert that warns teams when a transaction occurs outside the expected set of transactions. We also have a RESTful API that can be interrogated by existing equipment to surface anomalies.

 

 

Digging Deeper:
As we dig a little deeper by pivoting to the Layer 7 communications (demonstrated in the video below), you will note that someone has uploaded a file to an external site at 14.146.24.124. Depending on what was in that file and on existing policies, the mitigation could involve a cardboard box and a visit from the security guard.

 

Principle 6: Establish Record Keeping
The ExtraHop Discover Appliance has the ability to send a syslog to an incumbent SIEM system as well as a RESTful push. There is also a full alerting suite that can alert via email or SNMP trap. In most enterprises, there is already an incumbent record-keeping system; the ExtraHop platform has a variety of ways to integrate with it.

 

Principle 7: Verification
Someone should provide oversight of the SACCP plan and ensure that it is being executed and that it is having the desired results. This can either be the INFOSEC team management or hosting team management but someone should be responsible for ensuring that the shared services team(s) is (are) following the plan.

 

Conclusion:
The time has come for a new strategy. In several other industries where there is a regulatory framework for safety, compliance and responsibility, there exists a culture of operators taking responsibility for ensuring that they are compliant. The enterprise is over 30 years old, and just as the health inspector cannot be in every restaurant every day and a policeman cannot be on every street corner, the time has come for the IT industry to ask that system owners take some of the responsibility for their own security.

 

Thanks for reading and please check out the video below.

John

ADDENDUM!!!  (PUNKBUSTER OPTION!)

I wanted to take the time to show the next iteration of this, I call it precision punk busting…”err”..I mean Packet Capture.

The ExtraHop Discover Appliance has a feature called Precision Packet Capture. Within the same narrative described above, I have edited my trigger to take a packet capture any time the policy is violated. If you recall, I wanted to ensure that my Prod network ONLY communicated within the Prod network. I added the following JavaScript to my trigger, instructing the appliance to kick off a packet capture in the event the policy is violated.
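
The addition was roughly the following few lines, appended to the classification trigger sketched earlier (reusing its src/dst variables). The capture name and options are placeholders, and Flow.captureStart should be checked against the trigger API for your version.

    // If a Prod host initiates a session to anything outside Prod,
    // grab a Precision Packet Capture of that flow.
    if (src === "Prod" && dst !== "Prod") {
        Flow.captureStart("Prod policy violation - " +
                          Flow.client.ipaddr.toString(),
                          { maxPackets: 1000 });
    }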


As a result of the FTP traffic out to the internet, we notice that we have a PCAP waiting for us, indicating that a system has violated the Prod policy.


 

We can also alert you that you have a PCAP waiting for you, either via syslog, SNMP or email. This PCAP can be used for forensics, as digital evidence against an insider, or as a way to verify just what the “F” just happened.

Having this information readily available and alerting either a system owner or a SOC team that a policy was violated is a much easier surveillance method than sorting through terabytes of logs or sifting through a huge PCAP file to get what you want. Here we are ONLY writing PCAPs for those instances that violate the policy.

Thanks for reading!

Happy punk busting!!!

Thanks

John

Covering The “Behind The Perimeter” Blind-Spot

Well, I cannot tell you what the next big breach will be, but I CAN tell you that it will involve one critical system communicating with another system that it was/is not supposed to. Whether that is exfiltration via secure shell (SCP) to a server in Belarus or mounting a hidden drive share using a default service account that was never changed, behind the perimeter represents a rather large blind spot for many security endeavors. In the video below, you will see a very quick and simple method for monitoring peer-to-peer communications using wire data with the ExtraHop platform. This is a somewhat painful process with logs, because logging connection build-ups and tear-downs can impact the hardware being asked to do the logging, and if you are licensed by how much data you index, it can be expensive. Grabbing flow records directly off the wire positions your security practice to have this information readily available in real time with no impact on the infrastructure, as no system/switch is asked to run in debug mode. Transactions are taken directly off the wire by the ExtraHop Discover Appliance and written directly to the elastic ExtraHop Explorer Appliance to provide a large and holistic view of every transaction that occurs within your critical infrastructure.

This positions system owners to audit critical systems (within minutes) and account for transactions that are suspect. In the video below, you will see how we can audit flow records by whitelisting expected communications, steadily reducing the surface area you are trying to patrol and leaving malicious communications nowhere to hide. A rough sketch of the whitelisting idea follows.
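
One way to express that whitelisting idea in a trigger, sketched under the assumption that the approved peer networks can be listed as simple address prefixes; the prefixes, record name and API calls here are illustrative and should be checked against the trigger reference.

    // Event: FLOW_CLASSIFY
    // Expected peer-to-peer communications for this host group (placeholder networks).
    var approvedPrefixes = ["10.10.", "10.20.", "192.168.50."];

    function isApproved(ip) {
        var s = ip.toString();
        for (var i = 0; i < approvedPrefixes.length; i++) {
            if (s.indexOf(approvedPrefixes[i]) === 0) { return true; }
        }
        return false;
    }

    // Only commit records for flows that fall outside the whitelist,
    // shrinking the surface area you have to patrol.
    if (!isApproved(Flow.client.ipaddr) || !isApproved(Flow.server.ipaddr)) {
        commitRecord("unexpected_flow", {
            clientIp: Flow.client.ipaddr.toString(),
            serverIp: Flow.server.ipaddr.toString(),
            serverPort: Flow.server.port,
            proto: Flow.l7proto
        });
    }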


I will follow this up with another post on how we can monitor Layer 7 metrics such as SQL Queries against critical databases.

Thanks for reading, please watch the video!

John Smith

My Evolution to Wire Data

A little over a week ago, Gartner analysts Vivek Bhalla and Will Cappelli identified wire data as “the most critical source of availability and performance”; you can read the Gartner report here. One of the reasons I started this blog in 2013 was the profound value I had discovered when using wire data as the source of truth in my environment. Today, I’d like to expand on a 2013 article, “when uptime is not enough, the case for wire data”, and discuss why I favor wire data as the best source of information, one whose value extends beyond availability and performance. I will discuss the value of evaluating transactions in flight and some of the challenges I have found with other sources of intelligence.

So why Wire Data?
Prior to 2013 and my first exposure to wire data, I was a logging fiend. While at one of my previous employers, during the yearly holiday lunch we all had to put a headshot photo on a wall and our peers got to fill in the caption. My peers filled in “it’s not our fault, and I have the logs to prove it” under my picture. Not having the money for Splunk initially, we had set up a Netscaler load-balanced KIWI Syslog farm that wrote to a SQL database (we called it “SKUNK” for SQL-KIWI-Splunk-works). Later, at another employer, I was able to purchase Splunk and used it extensively to collect logs and obtain operational awareness by leveraging the ad hoc capabilities of the Splunk platform. Suffice it to say, I loved logs! While I am still an advocate of log data, I started to find that anything that writes information to disk and then evaluates the intel AFTER it is written becomes beholden to the disk for performance as the database/PCAP/data set grows. We have seen this with some of the PCAP-driven NPM vendors, and logs are liable to the same limitations. I have colleagues with mid-to-high six-figure investments in indexing and clustering technologies just to accommodate the size of the data store where they keep their logs. In the case of Splunk, they have done some innovating around hot, warm and cold disk storage, which has assuaged some of the cost/performance burden, but eventually you end up with rack(s) of servers supporting your log infrastructure, and if the data is not properly indexed or map-reduced, performance will suffer.

In Flight vs. On Disk
ExtraHop’s Wire Data analytics platform evaluates the transaction in transit. This creates some significant differences between machine data analytics (log data) and wire data analytics. ExtraHop is NOT dependent on any other system for information. When given a span or tap, data is “taken” directly off the wire. Where a syslog solution depends on the end point to forward or syslog data off to it, ExtraHop observes the transaction in flight and evaluates it (URI, user-agent, process time, database query or stored procedure, client IP, etc.) at the time it happens. This ensures that transactions are quickly and accurately evaluated regardless of how much data is being written to disk. If a system is I/O bound, that could very easily impact its ability to send logs to a syslog server. However, since the ExtraHop appliance is basically providing packet-level CCTV, transactions are observed and evaluated rather than waiting to be written to disk and evaluated afterward. If endpoints are I/O bound, the ExtraHop platform will not only continue to give you information/intel about your degrading application performance, it will also tell you that your systems are I/O bound by reporting on the zero windows observed. There are no agents or forwarders to install with ExtraHop, as it operates passively on the wire via a tap or span. As a security note, most hackers worth their salt will shut off logging if they compromise a system. With ExtraHop they will not be able to hide, as ExtraHop is not dependent on any syslog daemon or forwarder to get information. ExtraHop “TAKES” intelligence off the wire, whereas logging systems, while I still LOVE logs, depend on other systems to “GIVE” data.

Beyond Performance
Equally important to the break-fix/performance value derived from wire data is the business intelligence/operational analytics portion. Few people realize the level of value that can be derived from parsing payloads. In the retail space, I have seen wire data solutions deliver real-time business analytics that had previously required a very long and costly OLAP/data warehousing job, now replaced by parsing a few fields within the XML and delivering the data in real time to the analytics teams, with the now parsed and normalized data forwarded on to a back-end database. With the Kerberos support introduced in 5.x, you can leverage the ExtraHop memcached Session table to map users to IP addresses and then perform a lookup against it for every single transaction, allowing you to map every single user’s transactions. If a user calls and complains about performance, you can see every transaction they have run; if you suspect a user of malfeasance, you can audit their every move on your network; and if you want to ensure that your CEO/CIO/CFO has a good experience, or that their credentials have not been phished, you can audit their transactions by user ID or client IP address. I tend to think of the ExtraHop appliance as a prospector’s pan: it allows you to sift through the gravel and sand that create a lot of noise in today’s 10Gb, 40Gb and soon 100Gb networks and pull out those operational gold nuggets of intel right off the wire. There’s a gold mine on every wire; ExtraHop is there to help you find it.
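
A rough illustration of that Kerberos-to-Session-table pattern is sketched below, split across two triggers. The event and property names (KERBEROS_RESPONSE, the client principal name property, Session.add options, commitRecord) are taken from my reading of the trigger API and should be treated as assumptions to verify rather than gospel.

    // Trigger 1 - Event: KERBEROS_RESPONSE
    // Remember which user last authenticated from each client IP.
    // "cname" (client principal name) and the expire option are assumed names.
    if (Kerberos.cname) {
        Session.add("user:" + Flow.client.ipaddr.toString(),
                    Kerberos.cname,
                    { expire: 3600 });
    }

    // Trigger 2 - Event: HTTP_REQUEST (or DB_REQUEST, CIFS_REQUEST, ...)
    // Stamp every transaction with the mapped user before committing it.
    var user = Session.lookup("user:" + Flow.client.ipaddr.toString());
    commitRecord("user_transaction", {
        user: user || "unknown",
        clientIp: Flow.client.ipaddr.toString(),
        serverIp: Flow.server.ipaddr.toString(),
        uri: HTTP.uri
    });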

Conclusion
You will find numerous posts on this site showing integration between ExtraHop and Splunk, so please understand I am NOT down on Splunk or any other logging technology; I am simply explaining the differences between using machine data and wire data and why I use wire data as my first and best source of intelligence. The truth is, the ExtraHop platform is itself an exceptionally good forwarder of data (you can read about it in other articles on this site) and can be used in conjunction with your machine data, as it has the ability to “take” the intelligence directly off the wire and “give” it to your back-end solution, whether that is Splunk/ELK, Kafka, MongoDB or ExtraHop’s own big data appliance (the ExtraHop EXA). For my money, there isn’t a better forwarder than an ExtraHop appliance, which will send you data regardless of the state of your servers/systems while at the same time providing real-time visibility into transactions at the time they occur. I am very happy to see Gartner’s analysts starting to acknowledge the value that can be derived from wire data and recognize the hard work of a number of amazing pioneers in Seattle.

What’s on your wire?

Thanks for reading, again if you want to read the Gartner report click here to download a copy.

John M. Smith

Advanced Persistent Surveillance: SSH Connectivity

Today I read about a Juniper announcement that unauthorized code in Juniper firewalls can allow an attacker to listen in on conversations, even decrypting communications by using your firewall as a man-in-the-middle. A second announcement, unrelated according to the company, concerned a pair of exploits: one that allows an attacker telnet or SSH access into the device, and one whereby a “knowledgeable” user could decrypt VPN traffic once the firewall had been compromised. While they say that there is no way to tell if you have been a victim of this exploit, there are some ways you can check for malicious activity on your devices, and you CAN do so without sifting through a terabyte of log files.

Most Juniper customers will shut telnet off in favor of SSH, so I will focus on how to use wire data analytics to monitor for potential malicious behavior over SSH.

First, I am not what you would call an INFOSEC expert. I worked in IT Security for a few years handling event correlation and some perimeter work, but I firmly believe that anyone who is responsible for ANYTHING with an IP address should consider themselves a security practitioner, at least for those systems under their purview. I would consider myself more of a “packet jockey”. I am a Solutions Architect for a wire data analytics company, ExtraHop Networks, and I have spent the better part of the last two years sitting at the core of several large enterprises, looking at packets and studying behavior. Today I will go over why it is important to monitor any and all SSH connections, and I will discuss why logs aren’t enough.

Monitoring SSH:
While the article states “There is no way to detect that this vulnerability was exploited,” I would say that if you see a non-RFC1918 address SSH-ing into your firewall, something needs to be explained. Currently, most teams monitor SSH access by syslogging all access attempts to a remote syslog server where they can be observed, hopefully in time to notify someone that there has been unauthorized activity. The issue here is that once someone compromises the system, if they are worth their salt, the first thing they do is turn off logging. In addition, the act of sifting through syslogs can be daunting and time consuming, and at times does not deliver the type of agility needed to respond to an imminent threat.

Enter ExtraHop Wire Data Analytics:


What I like about wire data analytics is that you are not dependent on a system to self-report that it has been compromised. Simply put, you cannot hide from ExtraHop; we will find you! Yes, you can breach the Juniper firewall (or any other SSH-enabled device) and shut logging off, but you cannot prevent us from seeing what you are doing.

*(I am assuming you can shut logging off; I know you can on ASAs, but I have never administered a Juniper firewall, so don’t quote me on that. Most devices have to be told what and where to log.)

On the wire, there is nowhere to hide: if you are coming from an IP address and you are connecting to another IP address, you’re busted, whether you are running a “select * from …” on the production database server, SCPing the company leads to your home FTP server, or compromising a firewall. ExtraHop offers several ways to monitor ingress and egress traffic; today I am going to discuss how we can monitor SSH via the Discover Appliance, as well as how to leverage our new big data platform, the Explorer Appliance.

Using the Discover Appliance to monitor SSH Traffic:

One of the first and easiest ways to check whether anyone has SSH’d into your firewall is to simply browse to the device in the UI, go to L7 protocols, and look for SSH.


You can also build a quick dashboard showing who has SSH’d into the box and make it available for your SOC to view and alert on. The dashboard below shows the number of inbound SSH packets. You see the source IP of 172.16.243.1 as well as 23 inbound packets. We can also show you outbound packets.

This can all be done literally within five minutes, and you can have total visibility into any SSH session that connects to your Juniper firewall, or ANY SSH-enabled device, or ANY device over ANY port or protocol.

Can we have an alert? Yes, ExtraHop has a full alerting system that allows you to alert on any SSH connection to your gateway devices. A sketch of how this could be expressed as a trigger follows.
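
If you prefer a trigger to the built-in views, a minimal sketch of the idea looks like this: a port check, a metric for the dashboard, and an Explorer record for the audit trail. The event choice and API calls (FLOW_CLASSIFY, metricAddDetailCount, isRFC1918, commitRecord) are assumptions to confirm against the trigger API reference.

    // Event: FLOW_CLASSIFY
    // Surface every SSH session to the watched device(s).
    if (Flow.server.port !== 22) {
        return;
    }

    // Running count for the SOC dashboard (alerts can be tied to this metric).
    metricAddDetailCount("ssh_sessions_by_client",
                         Flow.client.ipaddr.toString(), 1);

    // Flag anything that is not coming from an internal (RFC1918) address.
    if (!Flow.client.ipaddr.isRFC1918) {
        commitRecord("external_ssh_session", {
            clientIp: Flow.client.ipaddr.toString(),
            serverIp: Flow.server.ipaddr.toString()
        });
    }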

Monitoring SSH via the ExtraHop Explorer Appliance:

A few weeks ago, ExtraHop introduced the Explorer Appliance. This is an accompanying appliance that allows you to write flow and Layer 7 records to a big data back end as part of a searchable index. In this example I will be speaking specifically about flow records. ExtraHop can surgically monitor any port for any device and write the records out to the Explorer Appliance. Because flow records are very intense, we do not automatically log them; we recommend that you enable them on a per-host basis from the Admin console. Once added, any and all communications will be audited and searchable for that host.

To audit SSH connectivity of our Juniper firewall, we will go to the discovered device and select the parent node. From there, on the right-hand side you will see an “ExtraHop ID:” (note the search textbox above it).


Paste the device ID into the search box and click “Global Records Query”.


This will be the initial default filter; you will then add a second filter, as seen below, by setting the receiver port to 22.


Now that you have the ExtraHop device ID and port 22 set as a filter, you can see below that you are able to audit, both in real time and historically, any and all SSH sessions to your Juniper firewall, or to any other device that you wish to monitor on any other port. You can save this query and come back to it periodically as a method of ongoing auditing of your firewall and SSH traffic.


What am I looking for here?
For me, I would be interested in any non-RFC1918 addresses, the number of bytes, and the source host. If you notice that it is a laptop from the guest wireless network (or the NATed IP of the access point), then that may be something to be concerned about. As I stated earlier, while the announcement said that you cannot tell if the exploit has been used, I think consistent auditing using wire data gives attackers no place to hide if they do compromise your SSH-enabled appliance, and it is generally a good idea to monitor SSH access. In the real-time grid above, you can see the sender “oilrig.extrahop.com” SSH’d into our Juniper firewall. It does not matter if the first thing they do is shut off logging, or if it is an insider who controls the logging; they can’t hide on the wire.

ExtraHop offers a full alerting suite that can whitelist specific jump boxes and hosts and provide visibility into just those hosts that you do not expect to see SSH’d into any system you have, as well as the ability to monitor any other ingress or egress traffic that may look out of the ordinary (example: a SQL Server FTPing to a site in China, or someone accessing a hidden share that is not IPC$).

Conclusion:
At the end of the day, the next big breach will involve one host talking to another host that it was not supposed to, whether that is my credit card number being SCP’d to Belarus or my SF86 form being downloaded by China. Advanced Persistent Surveillance of critical systems is the best way to prepare yourself and your system owners for tomorrow’s breach. While I am very thankful to the INFOSEC community for all that they do, for a lot of us, by the time a CVE is published, it is too late. The next generation of digital vigilance will involve hand-to-hand combat, and no one will give you a sharper knife or a bigger stick than Wire Data Analytics with ExtraHop.

Thank you for Reading!

John

 

The push to make healthcare information more portable has been an ongoing battle for the last few decades. A variety of factors drive this push, including evolving federal legislation and the regulatory framework, the spike in mergers and acquisitions, and the general need for bigger and better analytics. Over US$500 million has already been allocated to help assist with the setup and deployment of existing Health Information Exchanges nationwide. Over the last four years, we have noted a considerable upward trend toward the consolidation, merger and acquisition of hospitals. FierceHealthcare.com cites acquisitions as one strategy to meet the demands of new health care rules. With the mergers of clinics and hospitals comes the rather daunting and labor-intensive task of consolidating disparate E.H.R. systems. With dozens of different vendors using a variety of back-end databases and systems, such consolidation efforts generally involve complex data warehousing projects that are expensive and time consuming, not to mention painful for the providers and, on occasion, the patients. Currently, the process of providing a federated view into healthcare informatics is very vendor specific, and consolidating the myriad of vendor data can be challenging for an individual location. With the need to do this across multiple interface vendors and multiple hospitals, the challenge becomes exponentially greater.

Enter ExtraHop:
ExtraHop creates a logical layer between the various HL7 vendors, thereby “agnostifying” (judging by spellcheck, that isn’t even a word) the HL7 data. The data can then be put into a relational database in parallel to your existing HIS system(s). With ExtraHop, this data can now be leveraged by “Big Data” solutions that utilize a relational database back end. By providing a parallel path to big data, ExtraHop allows you to continue with your existing EMR/HIS investment while writing the metrics to a big data back end at the same time.

How does it work?
ExtraHop is a completely agentless, passive monitoring solution that spans the network traffic and observes the HL7 as it traverses the wire. In doing so, it makes the HL7 data agnostic, as the specific segments are parsed into a more database-friendly format and stored appropriately. ExtraHop is unique in that it allows you to exclude specific patient data segments to ensure that no sensitive data is saved, while populating a second data store that can be safely used for B2B partners, state and federal agencies, and third parties.


Use Cases: (All Real-Time):
My first job after college was as an Environmental Health Specialist for Maricopa County. One of my duties was to investigate foodborne illnesses. I wish I could have had access to real-time HL7 data to allow quick visibility into issues such as a (hypothetical) rise in Hepatitis A diagnoses or demand for gamma-globulin shots after a state fair or food festival. Currently, public health surveillance systems struggle to get access to real-time data due to the costs and complexities of consolidating information. ExtraHop’s wire data analytics engine gives healthcare providers the ability to securely, inexpensively and quickly provide this information in real time. This gives public health officials ready access to information without the hospital or clinic being forced to share sensitive, business-critical or competitive information. It also positions large healthcare organizations to leverage the same agility regardless of the myriad of back-end HIS/EMR systems that may exist throughout their own enterprises. Most importantly, hospitals and clinics have the ability to share aggregate data that does not contain personally identifiable information on patients. Long term, this potentially positions Quality Assurance (QA) teams to oversee specific Diagnosis (DG1), Demographic and Pharmaceutical (RXE|RXR|RXC) combinations that can indicate risk and potentially provide better oversight of patient care.

Other Potential Real-Time Use Cases:

  • Tracking prescriptions by physician
  • Geospatially track DG1 (Diagnosis) and OBX/OBR (Observations) records (PAN FLU Surveillance)
  • Federate patient data for large hospital organizations
  • Correlate external events with hospital visits (Flooding impact on water quality by checking GI admission data)
  • Catch potential fraud and duplicate billing incidents
  • Track insurance information to allow for indigent care planning
  • Audit Meaningful Use Compliance

Mergers and Acquisitions:
We have all observed a rise in mergers and acquisitions in healthcare over the last few years. While companies may decide to merge for financial reasons, often little consideration is given to the effort needed to integrate existing, disparate information systems. In addition, large hospital conglomerates simultaneously have integration challenges and the need to provide a federated view into their healthcare data, often through a Healthcare Information Exchange. Even if every hospital has the exact same HIS/EMR system, providing a centralized view can be very challenging. ExtraHop offers the ability to federate all HL7 transactions into a parallel data store that can be leveraged to facilitate auditing, HIE participation and streamlined workflows. There is always a market for better data; ExtraHop’s wire data analytics provides the ability to centralize information and position IT to be a profit center with data as its currency.

Important HL7 data can be intercepted off the wire and populated to a centralized, parallel data store in real-time using a number of methods.

  • Syslog transactions to a centralized back end like Splunk
  • Push the data securely over HTTPs using a RESTFUL push
  • Write the data directly to a big data (MongoDB) back end

This allows healthcare companies and agencies to securely send data over the internet via HTTPS, or directly to a back-end syslog or MongoDB server. This makes the ExtraHop solution very multi-tenant and multi-topology friendly.
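
To make the mechanics a little more concrete, here is a heavily hedged sketch of the trigger side: parse the HL7 message, drop the patient-identifying segments, and push what remains to one of the targets above. The HL7 event name, segment properties and Open Data Stream call are assumptions based on my reading of the trigger API and should be verified for your firmware; the syslog target is assumed to be configured as an Open Data Stream.

    // Event: HL7_RESPONSE (event and property names assumed - check the API reference)
    var safeSegments = [];
    var segments = HL7.segments || [];
    for (var i = 0; i < segments.length; i++) {
        // Exclude patient-identifying segments such as PID/NK1 before forwarding.
        if (segments[i].name === "PID" || segments[i].name === "NK1") {
            continue;
        }
        safeSegments.push(segments[i]);
    }

    // Forward the de-identified message to the parallel data store
    // (Remote.Syslog / Remote.HTTP targets are configured as Open Data Streams).
    Remote.Syslog.info(JSON.stringify({
        msgType: HL7.msgType,
        segments: safeSegments
    }));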

Example: Large Conglomerate setting up a parallel system by harvesting HL7 data directly off the wire.

 

Health Information Exchanges:


Three of the critical roadblocks HIEs face, according to GovHealthIT, are data sharing, complexity costs and competition. ExtraHop’s wire data analytics platform addresses each one of these roadblocks and can offer a secure, standardized and practical solution for healthcare information sharing.

Data Sharing:
Much of our healthcare data exists in silos, and data exchange between EHRs and HIEs is uneven across states and regions. By collecting and parsing the data directly off the wire, the clinic or hospital is no longer tethered to its incumbent EHR/HIS system for sharing data. This allows data to be written to an external data store at the same time it is written to the EHR/HIS system.

Complexity Costs:
The article cites complexity as the “most cutting criticism of HIEs”, pointing out that a wide variety of state and regional HIEs operate at different scales, which puts the owner of the data in a difficult position. Healthcare providers also struggle to implement “scores of software systems”. Despite all of this, data remains in silos and largely unavailable. ExtraHop’s wire data analytics platform resolves this issue by taking the data directly off the wire. The data can then be parsed and normalized in a manner that suits any downstream HIEs receiving it. Again, by providing a logical layer between the messages and the EHRs, we can seamlessly integrate HL7 data without labor-intensive data warehousing projects.

Competition:
The article states that hospitals that routinely compete for patients are being asked to share data with their competitors, something that would be considered outright blasphemy in any industry. ExtraHop can allow the HL7 owner to dictate which fields and segments are shared with the HIE and potentially assuage any concerns about business-specific data being made available to competitors. With ExtraHop’s HL7 protocol analysis, the data owner can choose what data is and is not sent to the HIEs.

I would also cite a quote from the following paper from the University of Chicago:

“As a rapidly increasing number of health care providers adopt electronic health records (EHRs), the benefits that can be realized from these systems is substantially greater when patient data is not trapped within individual institutions. The timely sharing of electronic health information can improve health care quality, efficiency, and safety by ensuring that healthcare providers have access to comprehensive clinical information”

Using wire data to set up a parallel environment can position healthcare providers to leverage an extremely robust solution (40 Gb/s) that allows you to write data to a parallel system with absolutely zero impact on your incumbent EMR/HIS systems. This opens up the possibility of sharing information across multiple state and local partners and B2B partners, and can allow you to make secure data available to third parties with assurances that no patient information is written. By taking the data directly off the wire, incumbent systems are not subject to the data warehousing processes and procedures that could impact performance, and since the data is written completely in parallel to these systems, it can be made available in real time.


With the ability to geo-code where the messages are coming from, PHINs can track ambulatory care in real time. Below is a fictitious example of what that might look like in ExtraHop. We basically log wire data transactions, whether it is an XML field, a database query or an HL7 diagnosis (DG1) segment.


 

Conclusion:
The ability to share information has benefits beyond just the consolidation of data across multiple healthcare providers. Leveraging wire data analytics with ExtraHop positions your company or agency to engage in big data solutions, write to a parallel data store, or even securely share non-patient data with third parties with no impact on your existing systems. By providing an abstraction layer between the back-end data stores and the HL7 messages themselves, we are able to securely, efficiently and quickly make healthcare information considerably more portable. This has benefits for public health institutions such as my former employer, the Centers for Disease Control, state and local health departments, and centralized healthcare providers overseeing hundreds of hospitals and thousands of patients.

This opens up healthcare informatics to new tools and new ways to perform analytics and provides the same level of integration across healthcare providers regardless of how complex their back end EMRs and HIS systems are. By leveraging wire data we have a chance to take healthcare into the next phase of informatics without impacting the providers, practitioners and most importantly, the patients.

Thanks for reading

John M. Smith


ExtraHop Central Manager: A TRM, Consultant or Channel Partner’s best friend

I was asking a few friends who work as resellers where they make their money: professional services/consulting, or reselling? One reseller told me that from a profit standpoint, services carry the highest margin, with ever-increasing margin pressure on software licenses and hardware margins being considerably lower. I was told (and I experienced this as a customer of numerous Citrix Platinum partners) that the channel is able to offer its expertise along with the hardware and software. I would purchase my licenses and a block of professional services from my partner, and they would spend time helping me plan the new environment. The channel consults with my team on security, best practices, architecture and even training, giving them a chance to market their best product: their talent.

One of the limiting factors in professional services is the size of your bench. I know a fantastic resource (a rock star) working for a Citrix Platinum partner in Nashville, and I am certain he does a great job of generating revenue for his company, but there are only so many resources like him in the greater Nashville area, much less the United States. He can only be in so many places during the few hours a week that he has. As they say, it’s impossible to be in two places at once….. or is it?

ExtraHop Central Manager:
ExtraHop Networks has an appliance called the ExtraHop Central Manager (ECM), a virtual appliance that can be imported into your hypervisor and used as a portal into a customer’s physical or virtual ExtraHop appliance. In this design, absolutely no wire/packet data is exchanged between the customer’s downstream appliance and the ECM portal; however, you are able to write rules on the ECM that are relevant to your customers and apply them to the downstream ExtraHop appliances.
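The sketch below illustrates that management pattern, not ExtraHop’s documented API: one centrally authored rule definition is pushed to a list of downstream appliances over HTTPS. The endpoint path, payload shape, metric name and auth header are hypothetical placeholders for whatever the real management interface exposes.

```python
# Illustrative only: push one centrally authored rule to several downstream
# appliances. Endpoint, payload, metric name and auth header are hypothetical
# placeholders, not ExtraHop's documented API.
import requests

DOWNSTREAM_APPLIANCES = {
    "aws-customer": "https://eda.aws-customer.example.com",
    "hospital-a":   "https://eda.hospital-a.example.com",
}

RULE = {
    "name": "Citrix logon latency alert",
    "metric": "ica.launch_time_ms",        # hypothetical metric name
    "threshold_ms": 5000,
    "notify": ["partner-noc@example.com", "customer-ops@example.com"],
}

def push_rule(api_keys: dict) -> None:
    """Apply the same rule definition to every managed appliance."""
    for name, base_url in DOWNSTREAM_APPLIANCES.items():
        resp = requests.post(
            f"{base_url}/api/v1/rules",                        # placeholder endpoint
            json=RULE,
            headers={"Authorization": f"apikey {api_keys[name]}"},
            timeout=10,
        )
        resp.raise_for_status()
        print(f"Applied rule to {name}")
```

The point of the pattern is that only rule definitions and summary metrics cross the management boundary; the packet data itself never leaves the customer’s appliance.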

Scenario:
You are a Citrix reseller and you have sold licenses to two customers who publish VDI to deliver applications to end users. One customer (the AWS Customer) hosts their environment in AWS, and the other is a large hospital (Customer A). Both have around 500 Citrix Platinum seats, and neither can afford, or can find, a full-time NetScaler/XenDesktop expert. The AWS customer has no operations staff at all; the hospital has an operations team, but they are still learning their way through IT and spend more time routing tickets than fixing problems. So, in sum, we have the following:

  • AWS Customer: AWS-hosted environment
    • 500 Platinum Seats
    • No Operations Staff
    • No proficient Citrix expertise
  • Customer A: Hospital
    • 500 Platinum Seats
    • Has an Operations Team but is limited in scope
    • No proficient Citrix expertise

What can you do for your AWS Customer?
At one of my jobs years ago, my manager told me “if I could just put you in a bottle and have the ops guys drink it…” At ExtraHop we can do better than a bottle; we can do a bundle. Bundles are collections of triggers, custom dashboards and alerts built around critical metrics, and they give early responders and entry-level operations teams the ability to leverage the years of experience of the person who wrote them. It truly is a way to “level up” your incumbent resources, and bundles can be customized for specific environments and customers. In the case of your AWS customer, I would load the Citrix bundle and also parse metrics from the back-end database, allowing you to see not just Citrix performance but the back-end database and web calls behind it. You can then configure custom-branded reporting for the customer, or configure alerts that summon both partner resources and the customer to start the triage process, making “proactive” something more than a buzzword (a minimal alerting sketch follows below). This also presents an opportunity to deliver premium services, since you can get directly engaged with the customer to provide consulting.
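Here is a rough sketch of that alerting idea in plain Python rather than any product’s trigger language: when a monitored launch-time measurement breaches a threshold, both the partner’s NOC and the customer’s ops contact are notified. The addresses, SMTP relay, threshold and metric are placeholders, not anything pulled from a real deployment.

```python
# Illustrative only: notify partner and customer when a monitored metric
# breaches a threshold. Addresses, SMTP relay and values are example assumptions.
import smtplib
from email.message import EmailMessage

LAUNCH_TIME_THRESHOLD_MS = 5000
RECIPIENTS = ["partner-noc@example.com", "customer-ops@example.com"]

def alert_if_slow(app_name: str, launch_time_ms: int) -> None:
    """Send one alert to both partner and customer if a Citrix launch was slow."""
    if launch_time_ms <= LAUNCH_TIME_THRESHOLD_MS:
        return
    msg = EmailMessage()
    msg["Subject"] = f"[ALERT] Slow launch for {app_name}: {launch_time_ms} ms"
    msg["From"] = "monitoring@partner.example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content("Launch time exceeded threshold; starting joint triage.")
    with smtplib.SMTP("smtp.partner.example.com") as smtp:   # placeholder relay
        smtp.send_message(msg)

# alert_if_slow("EMR Desktop", 8200)   # would notify both parties via the relay above
```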

What can you do for your large hospital?
For your hospital, I would load the Citrix bundle as well as deploy the HL7 module. Leveraging these two technologies allows you to monitor and alert on Citrix-related issues while providing real-time visibility into the HL7 interfaces. Additionally, all of the moving parts that go into the customer’s HIS and EMR systems can be monitored, so you can see slow stored procedures, SOAP calls and IBM MQ calls that are part of the EMR suite, or provide early detection of issues with API calls made out to integration partners. Now let’s say this same customer was breached last year by malware and would like you to record all egress packets to ensure that nothing is stolen from them again. You can create a logical ATO (Authority To Operate) boundary earmarking which hosts, ports and protocols their critical infrastructure should be talking to, and provide a periodic summary of any communications outside of that boundary (a minimal sketch of such a check follows the list below). Examples include:

  • The SA account querying patient data ad hoc
  • A computer from the mail room making a database connection to your EMR database.
  • Your Web/SOAP/REST tier making ad hoc queries instead of canned stored procedures consistent with its applications.
  • An FTP server starting up all of a sudden, or your medical imaging server making an FTP connection to an IP address in Russia.
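Below is a minimal sketch of the boundary idea, assuming you already have flow records (source, destination, port, protocol) from the wire or from logs. The allowed-communications table, host names and record shape are assumptions for the example, not anything product-specific.

```python
# Illustrative only: flag any observed flow that falls outside the ATO boundary.
# The allowed set and the flow records are placeholders for real wire data.
ATO_BOUNDARY = {
    ("emr-db01", 1433, "tcp"),          # EMR database
    ("interface-engine", 6661, "tcp"),  # HL7 interface engine listener
    ("ad01", 389, "tcp"),               # directory services
}

def out_of_boundary(flows: list[dict]) -> list[dict]:
    """Return flows whose destination host/port/protocol is not pre-approved."""
    return [
        f for f in flows
        if (f["dst_host"], f["dst_port"], f["proto"]) not in ATO_BOUNDARY
    ]

observed = [
    {"src_host": "imaging01", "dst_host": "emr-db01", "dst_port": 1433, "proto": "tcp"},
    {"src_host": "imaging01", "dst_host": "203.0.113.50", "dst_port": 21, "proto": "tcp"},
]
for flow in out_of_boundary(observed):
    print("Outside ATO boundary:", flow)
```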

You will be able to monitor this information for the customer and modify it remotely from the ECM, again with NO wire data being transferred between the ECM and your customer’s network.

In both of these scenarios you are positioned to sell a services agreement to your customers, allowing you to leverage professional services (where you control the margin) while giving the customer some choice in support options. I have personally paid the six-figure cost for a vendor-specific TRM only to have them route support tickets for me. The customer gets to leverage a top-flight architecture/operations team without hiring and training one themselves, and they don’t have to make the commitment of an FTE (if they can even find one) or a TRM. They also get to leverage the unique talent bench that only exists in the channel.

 

Channel

Conclusion:
Leveraging ExtraHop’s wire data analytics platform gives customers, consultants, MSPs and channel partners the best of both worlds. The customer gets unparalleled visibility into their environment without installing any agents or impacting their critical systems, and they get to tap into the unique knowledge that exists in the channel without needing a person on site. The channel gets the ability to leverage its bench more efficiently, making sure its customers’ systems are running smoothly and providing billable escalation when they are not. Custom-built bundles can be leveraged so that each customer’s environment has relevant and effective monitoring, and the consultants don’t have to be on premises because ExtraHop’s ECM appliance puts them on the wire.

Thanks for reading

John M. Smith



No End in Sight: Cyber Security and the Digital Maginot Line

whackamalware1

Yesterday my spouse was informed by a laboratory company where she was having some blood work done that she needed to provide a credit card number they could keep on file in case our insurance company could not, or did not, pay the bill for the lab costs. This after showing our insurance card and providing proof that we are insured. Having lived with me for the last 7 years, she asked the woman at the counter for a copy of their INFOSEC strategy, requesting that they “please include information on encryption ciphers and key lengths, as well as information on how authentication and authorization is managed by their system and whether her credit card information would be encrypted at rest.” Needless to say, they had no idea what she was talking about, much to the exasperation of the people waiting behind her in line as well as the front office staff. She ended up getting her tests done, but was told she would not be welcomed back if she continued to be unwilling to surrender her credit card number for the front office to keep digitally on file.

Between the two of us, we have replaced 4 or 5 cards in the last 3 years due to various breaches; I have had to replace two and, I believe, she has had to replace 3. In my case, each incident cost me around $800 that I had to wait weeks to get back, and only after I went into the bank and filled out forms attesting that I did not make the charges. Each incident consumed about 4 hours of my time by the time all was said and done. Yes, there were lawsuits, and lawyers were paid six-figure sums as a result, and I am sure they deserved it, but at the end of the day I was without my $800-$1600 for an extended period of time and had to run through a regulatory maze just to get back what I had lost. No, I never got any settlement money; I hope they spent it well. Fortunately for me, I am 46 years old now and have a great job. If this had happened to 26-year-old (still a screw-up) John, it would have been utterly devastating, as I likely would have been evicted from my apartment and had bill collectors calling me. I can’t imagine the calamity this creates for some folks.

I am somewhat dumbfounded that any company, at any level, would ask people to surrender their information digitally given the egregious number of retail breaches that have plagued the industry the last few years. Forget that consumer advocacy is nearly non-existent; while some retailers have been very forthcoming about the impact to their consumers, I simply do not see things getting better, EVER. The current method by which cyber security is practiced today is broken and there seems to be no motivation to fix it. In spite of extremely costly settlements and damage to brands, the way we practice security today is deeply flawed, and it is not the security team’s fault. Until system owners start taking some responsibility for their own security, these breaches will simply never end.

Bitching about the lack of responsibility of system owners isn’t new to me; my first “documented” rant on it was back in early 2010. As a system owner, I almost compulsively logged everything that went on and wrote the metrics to a centralized console; in a way, it was a bit of a poor man’s DevOps endeavor. In doing so, I was able to automate reporting so that when I came into work each morning, I would spend 15 minutes sipping my coffee and looking at all of the non-standard communications from the previous day (basically all internet traffic that did not use a web browser and all traffic outside the US). No, it wasn’t full IDS/IPS production, but on two separate occasions I found malware before several seven-figure investments in malware detection software did. That is two instances in four years, or roughly 2 in 1,000 mornings (approximately 4 years’ worth of work minus vacations, holidays, etc.), where I noted actionable intelligence. That may not sound like a lot, but if you are one of the dozens of retailers who have had breaches in the last few years, I think it is plausible that the systems teams could have had an impact on the success of a breach had they been a little more involved in their own security. Don’t underestimate the value of human observation.
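To give a sense of how small that morning review can be, here is a rough Python sketch of the kind of report I mean: it reads a day’s worth of connection records (assumed here to be a CSV with source, destination, port and destination country columns) and prints everything that is neither ordinary browser traffic nor US-bound. The file name, column names and “standard” port list are assumptions for the example.

```python
# Illustrative only: summarize yesterday's non-standard egress traffic from a
# CSV of connection records. File name, columns and port list are examples.
import csv

STANDARD_PORTS = {80, 443}          # ordinary browser traffic

def daily_review(path: str = "egress_connections.csv") -> None:
    """Print connections that used non-standard ports or left the US."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # expects src_ip, dst_ip, dst_port, dst_country
            port = int(row["dst_port"])
            if port not in STANDARD_PORTS or row["dst_country"] != "US":
                print(f'{row["src_ip"]} -> {row["dst_ip"]}:{port} ({row["dst_country"]})')

# daily_review()   # point this at yesterday's export and read it with your coffee
```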

Why is INFOSEC alone not enough?
Short of a crystal ball, I am not sure how we expect INFOSEC teams to know which communications are acceptable and which are not. In the last few years, “sophisticated persistent advanced super-duper complex malware” has generally meant that someone compromised a set of credentials and ran amok for months on end stealing the digital crown jewels. Even if a policeman is parked outside my house, if they see someone walk up, open the door with a key and walk out with my safe, my 60-inch TV (actually, I don’t have a 60-inch TV) and other valuables, how the hell are they supposed to know whether that person should or should not be doing it? In most cases, this is the digital equivalent of what is happening in some of these breaches, except that, digitally, I am sitting on my couch while all of it goes on in front of me. If an attacker has obtained credentials or compromised a system and is stealing, expecting the security team to see this before extensive damage is done is unrealistic. With some of the social engineering techniques that exist, and some of the service accounts used with elevated privileges, you don’t always have the 150 login failures to warn you. If I am actually paying attention, though, I can say, “Hey, what the hell are you doing? Put that TV down before I call the cops!” (Or, my step-daughter is a foodie and she has some cast iron skillets that could REALLY leave a lump on someone’s head.)

The presence of an INFOSEC team does not absolve system owners of their own security any more than the presence of a police department in my city means I don’t have to lock my doors or pay attention to who comes and goes from my house.

Police: “911 operator what is your emergency?”

Me: “I’ve been burgled, someone came into my house and stole from me”

Police: “When did this happen? Are they still in your house?”

Me: “It happened six months ago but I don’t know if they are still in my house stealing from me or not”

Police: “Ugh!!”

If someone has made a copy of the keys to my house, it is not the police’s fault if they don’t catch them illegally entering my home. In the same manner that the police cannot be everywhere all the time, your INFOSEC team cannot inspect every digital transaction all the time.

Thought Exercise:
If someone has compromised a set of credentials or, say, a server in your REST/SOAP tier, and they are running ad hoc queries against your back-end database, let’s evaluate how that would look to the system owner versus the INFOSEC practitioner.

  • The INFOSEC practitioner: They see approved credentials over approved ports. Since they are not the DBA or the web systems owner, this likely does not trigger any response, because the INFOSEC resource is not privy to the day-to-day behavior or design of the application.
  • The DBA: The DBA should notice that the types of queries have changed and fall out of their chair.
  • The web properties team: They should have a similar “WTF!?!?” moment as they note the change from the normal stored procedures, or at least recognizable SQL statements, to custom ad hoc queries against critical data.

In this scenario, one I covered on wiredata.net in May of 2014, it is obvious that the INFOSEC professional is not well positioned to detect the breach: he or she does not manage the system on a day-to-day basis, and while INFOSEC is involved in several processes during the architecture phase, the idea that your INFOSEC team is going to know everything about every application is neither practical nor reasonable. It is imperative that system owners take part in securing their own systems by engaging in a consistent level of intelligence gathering and surveillance; in my case, it was 15 minutes every morning. Ask yourself: do you know every non-standard communication that sourced from your server block? Will you find out within an hour, 8 hours, a single day? These are things that are easily accomplished with wire data or even log mongering (a small example follows below), but to remain utterly clueless about who your systems are talking to outside of normal communications (DNS, A/D, DB, HTTP) to internal application partners is to perpetuate the existing paradigm of simply waiting for your company to get breached. And while we give the INFOSEC team the black eye, they are the least likely group to see the issue, despite the fact that they will probably be held accountable for it.
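Continuing the thought exercise, here is a minimal sketch of the kind of check a DBA or web properties team could run: compare the SQL observed coming from the web tier against the small set of stored procedure calls the application is supposed to issue, and flag anything ad hoc. The procedure names and sample queries are assumptions for the example.

```python
# Illustrative only: flag ad hoc SQL from the web tier that does not match the
# application's known stored-procedure calls. Names and queries are examples.
EXPECTED_PROCEDURES = {"usp_GetPatientSummary", "usp_UpdateVisit", "usp_SearchOrders"}

def is_ad_hoc(query: str) -> bool:
    """True if the query is not a call to one of the expected stored procedures."""
    parts = query.strip().split()
    if not parts or parts[0].upper() != "EXEC":
        return True                                   # raw SELECT/UPDATE from the web tier
    proc_name = parts[1].split("(")[0] if len(parts) > 1 else ""
    return proc_name not in EXPECTED_PROCEDURES

observed_queries = [
    "EXEC usp_GetPatientSummary @id=42",
    "SELECT ssn, card_number FROM patients",          # the 'fall out of your chair' moment
]
for q in observed_queries:
    if is_ad_hoc(q):
        print("Ad hoc query from web tier:", q)
```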

There are services from companies like FireEye and BeyondTrust that offer innovative threat analytics and a number of “non-charlatan” solutions to today’s security threats. I have struggled to avoid calling cyber security an abject failure, but we are reaching the point where the Maginot Line was more successful than today’s cyber security efforts. I am not a military expert and won’t pretend to be one, but as I understand it, the Maginot Line, France’s answer to a future German invasion, was built on the strategies of the previous war (breach) and was essentially perimeter-centric; the enemy simply went around it (sound familiar?). So perimeter-centric was it that, upon being attacked from behind, the defenders were apparently unable to respond because the turrets were never designed to turn all the way around. The thought of what to do once an enemy force got inside was apparently never considered. I find the parallels between today’s cyber security efforts and the Maginot Line striking. I am not down on perimeter security, but a more agile solution is needed to augment perimeter measures; one might even argue that there really isn’t a perimeter anymore. The monitoring of peer-to-peer communications by individual system owners is an imperative. While these teams are stretched thin already (don’t EVEN get me started on the morale, workload and all-around BS that exists in today’s enterprise IT), what is the cost of not doing it? In every high-profile breach we have noted in the last three years, these “sophisticated persistent threats” could have been prevented by a little diligence on the part of the system owners and better integration with the INFOSEC apparatus.

Could cyber insurance policies change things?
We are starting to see insurance providers force companies to purchase a separate rider for cyber breach insurance, and I can honestly say this may bring about some changes in the level of cyber responsibility shown by different companies. I live in Florida, where we are essentially the whipping boys for the homeowners insurance industry, and I have actually received notification that if I did not put a hand rail on my back porch my policy would be cancelled. (The great irony being that I fell ass over teakettle on that very back porch while moving in.) While annoyed, I had a hand rail installed post haste, since I did not want my policy cancelled; at the time we had only one choice for insurance in Florida, and it was the smart thing to do anyway.

Now imagine I call that same insurance company with the following claim:
“Hello, yes, uh, I am being sued by the Girl Scouts of America because one of them came to my door to sell me cookies, fell through my termite-eaten front porch, landed on the crushed beer bottles strewn about my property and cut herself, and was then mauled by the five semi-feral pit bulls that I just let run around the property, feeding them occasionally.”

Sadly, this IS Florida and that IS NOT an entirely unlikely phone call for an adjuster to get. Even sadder is the fact that this analogy likely UNDERSTATES the level of cyber-responsibility taken by several enterprises when it comes to protecting critical information and preventing a breach. If you are a cyber insurance provider and your customer cannot prove to you that they are monitoring peer-to-peer communications, I would think twice about writing the policy at all.

In the same manner that insurance agents drive around my house, expect auditors to start asking questions about how your enterprise audits peer-to-peer communications. If you cannot readily produce a list of ALL non-standard communications within a few minutes, you have a problem!! These breaches now run into seven- and eight-figure dollar amounts, and companies that do not ensure proper diligence do so at their own peril.

Conclusion:
As an IT professional and someone who cares about IT security, I am somewhat baffled by the continued focus on yesterday’s breach. I can tell you what tomorrow’s breach will be: it will involve someone’s production system, or servers with critical information on them, having a conversation with another system that it shouldn’t. That could mean a compromised web tier server running ad hoc queries; it could be a new FTP server that is suddenly stood up and sending credit card information to a system in Belarus; it could be a pissed-off employee emailing your leads to his Gmail account. The point is, there ARE technologies and innovations out there that can provide visibility into non-standard communications. While I would agree that today’s attacks are more complex, in many cases they involve several steps to stage the actual breach itself. With the right platform, vigilant system owners can spot these pieces being put into place before the breach starts, or at least detect it within minutes, hours or days instead of months. Let’s accept the fact that we are going to get breached and build a strategy around quelling it sooner. As a consumer, I look at my credit card’s expiration date and think “Yeah, right!”, basically betting it gets compromised before it expires. I see apathy prevailing, and companies that really don’t understand what a pain in the ass it is when I have to, yet again, get another debit or credit card due to a breach. While they think it is just their breach, companies need to keep in mind that theirs may be the 3rd or 4th time their customer has had to go through this, and it is their brand that will suffer disproportionately as a result. Your consumers are already fed up, and companies need to assume that the margin of error was already eaten up by whichever vendor previously forced those customers through the post-breach aftermath. I see system owners continuing to get stretched thin, kept out of the security process and not taking part in the INFOSEC initiatives at their companies, whether due to apathy or workload. And unfortunately, I see no end in sight….

Thanks for reading

John M. Smith