Triage Practical Solution – Malware Event – Prefetch $MFT IDS

Wednesday, December 9, 2015 Posted by Corey Harrell
You are staring at your computer screen, thinking about how you are going to tell your ISO what you found; thinking about how this single IDS alert might have been overlooked, how it might have been lost among the sea of alerts from the various security products deployed in your company. Your ISO tasked you with triaging a malware event, and now you are ready to report back.


Triage Scenario


To fill in those readers who may not know what is going on, you started the meeting by providing background information about the event. The practical provided the following abbreviated scenario (for the full scenario refer to the post Triage Practical – Malware Event – Prefetch $MFT IDS):

The ISO continued “I directed junior security guy to look at the IDS alerts that came in over the weekend. He said something very suspicious occurred early Saturday morning on August 15, 2015.” Then the ISO looked directly at you “I need you to look into whatever this activity is and report back what you find.” “Also, make sure you document the process you use since we are going to use it as a playbook for these types of security incidents going forward.”

Below are some of the initial questions you need to answer and report back to the ISO.

        * Is this a confirmed malware security event or was the junior analyst mistaken?
        * What type of malware is involved?
        * What potential risk does the malware pose to your organization?
        * Based on the available information, what do you think occurred on the system to cause the malware event in the first place?


Information Available


Despite the wealth of information available to you within an enterprise, only a subset of data was provided for you to use while triaging this malware event. The following artifacts were made available:

        * IDS alerts for the time frame in question (you need to replay the provided pcap to generate the IDS alerts; the pcap is not provided for you to use during triage and was only made available to enable you to generate the IDS alerts in question)

        * Prefetch files from the system in question (inside the Prefetch.ad1 file)

        * File system metadata from the system in question (the Master File Table is provided for this practical)


Information Storage Location within an Enterprise


Each enterprise’s network is different and each one offers different information for triaging. As such, it is not possible to outline all the possible locations where this information could be located in every enterprise. However, it is possible to highlight common areas where this information can be found. If your environment does not reflect the locations I mention, you can either evaluate your network for a similar system containing similar information, or better prepare your environment by making sure this information starts being collected.

        * IDS alerts within an enterprise can be stored on the IDS/IPS sensors themselves or centrally located through a management console and/or logging system (i.e. SIEM)

        * Prefetch files within an enterprise can be located on the potentially infected system

        * File system metadata within an enterprise can be located on the potentially infected system


Collecting the Information from the Storage Locations


Knowing where information is available within an enterprise is only part of the equation. It is necessary to collect the information so it can be used for triaging. Similar to all the differences between enterprises’ networks, how information is collected varies from one organization to the next. Below are a few suggestions for how the information outlined above can be collected.

        * IDS alerts don’t have to be collected. They only need to be made available so they can be reviewed. Typically this is accomplished through a management console or security monitoring dashboard.

        * Prefetch files are stored on the potentially infected system. The collection of this artifact can be done by pulling the files off either remotely or locally. Remote options include enterprise forensic tools such as F-Response, EnCase Enterprise, or GRR Rapid Response, triage scripts such as the Tr3Secure collection script, or the admin share, since Prefetch files are not locked files. The same tools work locally.

        * File system metadata is very similar to Prefetch files in that the same collection methods work for it. The one exception is that the NTFS Master File Table ($MFT) can’t be pulled off using the admin share. (A minimal sketch of the admin-share collection approach follows this list.)
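To make the admin-share option concrete, below is a minimal sketch in Python that copies Prefetch files over the C$ share. The host name and staging folder are placeholders, and the account running it needs administrative rights on the target; treat it as an illustration rather than a hardened collection tool.

    import glob
    import os
    import shutil

    host = "WORKSTATION01"                       # placeholder target host
    source = r"\\%s\C$\Windows\Prefetch" % host
    dest = os.path.join(r"C:\triage", host, "Prefetch")

    os.makedirs(dest, exist_ok=True)

    # Prefetch files are not locked, so a straight copy over the C$ admin
    # share works when the account has admin rights on the target.
    for pf in glob.glob(os.path.join(source, "*.pf")):
        shutil.copy2(pf, dest)
        print("collected", os.path.basename(pf))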


Potential DFIR Tools to Use


The last part of the equation is what tools one should use to examine the information that is collected. The tools I’m outlining below are the ones I used to complete the practical.

        * Security Onion to generate the IDS alerts
        * Winprefetchview to parse and examine the prefetch files
        * MFT2CSV to parse and examine the $MFT file


Others’ Approaches to Triaging the Malware Event


Before I dive into how I triaged the malware event I wanted to share the approaches others used to tackle the same event. I find it helpful to see different perspectives and techniques applied to the same problem. I also want to thank those who took the time to do this so others could benefit from what they shared.

Matt Gregory shared his analysis on his blog My Random Thoughts on InfoSec. Matt did a great job outlining not only what he found but by explaining how he did it and what tools he used. I highly recommend taking the time to read through his analysis and the thought process he used to approach this malware event.

An anonymous person (at least anonymous to me since I couldn’t locate their name) posted their analysis on a newly created blog called Forensic Insights. Their post goes into detail on analyzing the packet capture including what was transmitted to the remote device.


Partial Malware Event Triage Workflow


The diagram below outlines the jIIr workflow for confirming malicious code events. The workflow is a modified version of the Securosis Malware Analysis Quant; I modified the Securosis process to make it easier to use for security event analysis.



Detection: the malicious code event is detected. Detection can be a result of technologies alerting on it or a person reporting it. The workflow starts in response to a potential event being detected and reported.

Triage: the detected malicious code event is triaged to determine if it is a false positive or a real security event.

Compromised: after the event is triaged, the first decision point is whether the machine could potentially be compromised. If the event is a false positive, or one showing the machine couldn’t be infected, then the workflow is exited and returns to monitoring the network. If the event is confirmed, or there is a strong indication it is real, then the workflow continues on to identifying the malware.

Malware Identified: the malware is identified in two ways. The first is identifying what the malware is, including its purpose and characteristics, using available information. The second is identifying and obtaining the malware sample from the actual system to further identify the malware and its characteristics.

Root Cause Analysis: a quick root cause analysis is performed to determine how the machine was compromised and to identify indicators to use for scoping the incident. This root cause analysis does not call for a deep dive analysis taking hours and/or days but one only taking minutes.

Quarantine: the machine is finally quarantined from the network it is connected to. This workflow assumes analysis is performed remotely, so disconnecting the machine from the network is done at a later point in the workflow. If the machine is disconnected immediately after detection, then analysis cannot be performed until someone either physically visits the machine or ships it to you. If an organization’s security monitoring and incident response capability is not mature enough to perform root cause analysis in minutes and to analyze live over the wire, then the Quarantine activity should occur once the decision is made about the machine being compromised.


Triage Analysis Solution


Triaging the malware event outlined in the scenario does not require using all of the supplied information. The triage process could have started with either the IDS alert the junior security analyst saw or the prefetch files from the system in question, to see what program executed early Saturday morning on August 15, 2015. For completeness, my analysis touches on each data source and the information it contains. As a result, I started with the IDS signature to ensure I included it in my analysis.


IDS alerts


The screenshot below shows the IDS signatures that were triggered by replaying the provided malware-event.pcap file in Security Onion. I highlighted the alert of interest.




The IDS alert by itself provides a wealth of information. The Emerging Threats (ET) signature that fired was "ET TROJAN HawkEye Keylogger FTP" and this occurred when the machine in question (192.168.200.128) made a connection to the IP address 107.180.21.230 on the FTP destination port 21. To determine if the alert is a false positive it’s necessary to explore the signature (if available) and the packet responsible for triggering it. The screenshot below shows the signature in question:



The signature looks for a system on the $HOME_NET initiating a connection to an external system on FTP port 21 (as reflected by flow:established,to_server), with the packet containing the string “STOR HAWKEye_”. The packet that triggered this signature meets all of these requirements: the system connected to an external IP address on port 21, and the picture below shows the data in the packet contained the string of interest.



Based on the network traffic and the packet data, the IDS alert is not a false positive. I performed Internet research to obtain more context about the malware event. A simple Google search on HawkEye Keylogger produces numerous hits, from YouTube videos showing how to use it, to forums posting cracked versions, to various articles discussing it. One article is TrendMicro’s paper titled Piercing the HawkEye: Nigerian Cybercriminals Use a Simple Keylogger to Prey on SMBs Worldwide, and just the pictures in the paper provide additional context (remember, during triage you won’t have time to read a 34-page paper). The keylogger is easily customizable since it has a builder, and it can deliver logs through SMTP or FTP. Additional functionality includes stealing clipboard data, taking screenshots, downloading and executing other files, and collecting system information.
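As a side note, the payload check behind the false positive decision is simple to reproduce. Below is a sketch using Python and scapy that mirrors what the signature keys on; it assumes the capture is the provided malware-event.pcap and, for simplicity, ignores the flow state and $HOME_NET pieces of the rule.

    from scapy.all import IP, Raw, TCP, rdpcap

    # Flag packets to FTP port 21 whose payload contains the string the
    # ET signature keys on (flow direction/state checks are omitted).
    for pkt in rdpcap("malware-event.pcap"):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].dport == 21:
            if pkt.haslayer(Raw) and b"STOR HAWKEye_" in bytes(pkt[Raw].load):
                print(pkt[IP].src, "->", pkt[IP].dst, bytes(pkt[Raw].load)[:60])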

Research on the destination IP address shows the AS is GODADDY and numerous domain names map back to the address.



Prefetch files


When I review programs executing on a system I tend to keep the high level indicators below in mind. Over the years, these indicators have enabled me to quickly identify malicious programs that are or were on a system (a small sketch following the list shows one way to encode them).


  • Programs executing from temporary or cache folders
  • Programs executing from user profiles (AppData, Roaming, Local, etc)
  • Programs executing from C:\ProgramData or All Users profile
  • Programs executing from C:\RECYCLER
  • Programs stored as Alternate Data Streams (i.e. C:\Windows\System32:svchost.exe)
  • Programs with random and unusual file names
  • Windows programs located in wrong folders (i.e. C:\Windows\svchost.exe)
  • Other activity on the system around suspicious files
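These indicators translate naturally into patterns that can be run against any list of execution paths, whether from parsed prefetch files, process listings, or a timeline. The sketch below is one possible encoding; the regular expressions are rough approximations of the indicators above, not a complete detection list, and the example path is hypothetical.

    import re

    # Rough regex approximations of the indicators above (not exhaustive).
    INDICATORS = [
        (r"\\temp\\|temporary internet files|\\cache", "temp/cache folder"),
        (r"\\users\\[^\\]+\\appdata\\", "user profile"),
        (r"\\programdata\\|\\all users\\", "ProgramData / All Users profile"),
        (r"\\recycler\\", "Recycle Bin"),
        (r"system32:[^\\]+$", "alternate data stream"),
        (r"\\windows\\(svchost|lsass|csrss)\.exe$", "Windows binary in wrong folder"),
    ]

    def flag_suspicious(path):
        # Return the indicator labels a path matches, if any.
        p = path.lower()
        return [label for pattern, label in INDICATORS if re.search(pattern, p)]

    # Hypothetical example path, not taken from the practical's data set:
    print(flag_suspicious(r"C:\Users\lab\AppData\Local\Temp\evil.exe"))
    # ['temp/cache folder', 'user profile']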


The collected prefetch files were parsed with Winprefetchview and I initially sorted by process path. I reviewed the parsed prefetch files using my general indicators and I found the suspicious program highlighted in red.



The program in question is suspicious for two reasons. First, it executed from the Temporary Internet Files folder. The second and more important reason was the name of the program, which was OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE (%20 is the encoding for a space). The name resembles a program trying to be disguised as a document, a social engineering technique used with phishing emails. To gain more context around the suspicious program I then sorted by the Last Run time to see what else was executing around this time.
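For reference, the percent-decoding is the same one Python's standard library performs, which is handy when cached file names show up encoded in parsed output:

    from urllib.parse import unquote

    cached = "OVERDUE%20INVOICE%20DOCUMENTS%20FOR%20PAYMENT%20082015[1].EXE"
    print(unquote(cached))  # OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE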



The OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE program executed on 8/15/15 at 5:33:55 AM UTC, which matches the time frame the junior security analyst mentioned. The file had a MD5 hash of ea0995d9e52a436e80b9ad341ff4ee62; this hash was used to confirm the file was malicious, as reflected in an available VirusTotal report. Shortly thereafter another executable named VBC.exe ran, but its process path was not reflected in the files referenced in the prefetch file itself. The other prefetch files did not show anything else I could easily tie to the malware event.
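Hashing a collected file so it can be searched on VirusTotal is a one-step operation. Below is a minimal sketch; the path is a placeholder for wherever a copy of the file was staged.

    import hashlib

    def md5_of(path, chunk=1024 * 1024):
        # Hash in chunks so large files never need to fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    # Placeholder path to a staged copy of the suspicious executable:
    print(md5_of(r"C:\triage\staging\suspicious.exe"))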


File System Metadata


At this point the IDS alert revealed the system in question had network activity related to the HawkEye Keylogger. The prefetch files revealed a suspicious program named OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE and it executed on 8/15/15 at 5:33:55 AM UTC. The next step in the triage process is to examine the file system metadata to identify any other malicious software on the system and to try to identify the initial infection vector. I reviewed the metadata in a timeline to make it easier to see the activity for the time of interest.

For this practical I leveraged the MFT2CSV program in the configuration below to generate a timeline. However, an effective technique - although not a free one - is using the home plate feature in EnCase Enterprise against a remote system. This enables you to see all files and folders while being able to sort in different ways. The EnCase Enterprise method is not as comprehensive as a $MFT timeline but works well for triaging.



In the timeline I went to the time of interest, which was 8/15/15 at 5:33:55 AM UTC, and then proceeded forward in time to identify any other suspicious files. A few files were created within seconds of the OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE program executing. The files’ hashes will need to be used to determine more information about them since I am unable to view the files themselves.
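Pulling a window of time out of the parsed $MFT is also easy to script. The sketch below filters MFT2CSV-style output to records within a minute of the execution time; the CSV file name, column names, and timestamp format vary with the MFT2CSV version and configuration, so treat them as assumptions to adjust against the actual header.

    import csv
    from datetime import datetime, timedelta

    TS_COL, NAME_COL = "SI_FileCreate", "FileName"  # assumed; check the header
    TS_FMT = "%Y-%m-%d %H:%M:%S"                    # assumed timestamp format

    pivot = datetime(2015, 8, 15, 5, 33, 55)        # time of interest (UTC)
    window = timedelta(seconds=60)

    with open("mft2csv-output.csv", newline="") as f:
        for row in csv.DictReader(f):
            try:
                ts = datetime.strptime(row[TS_COL][:19], TS_FMT)
            except (KeyError, ValueError):
                continue
            if abs(ts - pivot) <= window:
                print(ts, row.get(NAME_COL, ""))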



The timeline then shows the VBC.EXE program executing followed by activity associated with a user surfing the Internet.



The timeline was reviewed for about a minute after the suspicious program executed and nothing else jumped out. The next step was to go back to 8/15/15 at 5:33:55 AM UTC in the timeline to see what preceded this event. There was more activity related to the user surfing the Internet, as shown below.



I kept working my way through the web browsing files to find something to confirm what the user was actually doing. I worked through Yahoo cookies and cached web pages containing the word “messages”. There was nothing definite so I continued going back in time. I worked my way back to around 5:30 AM UTC, where cookies for Yahoo web mail were created. This activity was three minutes prior to the infection, and three minutes is a long time. At this point additional information is needed to definitively answer how the system became infected in the first place. At least I know that it came from the Internet using a web browser. (Note: in the scenario the pcap file was meant for IDS alerts only, so I couldn’t use it to answer the vector question.)


Researching Suspicious Files


The analysis is not complete without researching the suspicious files discovered through triage. I performed additional research on the file OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE using its MD5 hash ea0995d9e52a436e80b9ad341ff4ee62. I went back to its VirusTotal report and noticed there didn’t appear to be a common name across the various security product detections. However, there were unique detection names I used to conduct additional research. Microsoft’s detection name was TrojanSpy:MSIL/Golroted.B and their report said the malware “tries to gather information stored on your PC”. A Google search of the hash also located a Malwr sandbox report for the file. The report didn’t shed any light on the other files I found in the timeline.

The VBC.EXE file was no longer on the system, preventing me from performing additional research on it. The pid.txt and pidloc.txt files’ hashes were associated with a Hybrid Analysis report for a sample with the MD5 hash 242e9869ec694c6265afa533cfdf3e08. The report had a few interesting things: the sample also dropped the pid.txt and pidloc.txt files, as well as executing REGSVCS.EXE as a child process. This is the same behavior I saw in the file system metadata and prefetch files. The report provided a few other nuggets, such as the sample trying to dump Web browser and Outlook stored passwords.


Triage Analysis Wrap-up


The triage process did confirm the system was infected with malicious code. The infection was a result of the user doing something on the Internet, and additional information is needed to confirm what occurred on the system for it to become infected in the first place. The risk to the organization is that the malicious code tries to capture and exfiltrate information from the system, including passwords. The next step would be to escalate the malware event to the incident response process so a deeper analysis can be done to answer more questions: what data was potentially exposed, what did the user do to contribute to the infection, was the attack random or targeted, and what type of response should be done?

Triage Practical – Malware Event – Prefetch $MFT IDS

Sunday, November 22, 2015 Posted by Corey Harrell
Another Monday morning as you stroll into work. Every Monday morning you have a set routine and this morning was no different. You were hoping to sit down in your chair, drink some coffee, and work your way through the emails that came in over the weekend. This morning things were different. As soon as you entered the office, your ISO had a mandatory meeting going on and they were waiting for you to arrive. As you entered the meeting the ISO announced “each week it seems like another company is breached. The latest headline about Company XYZ should be our wake up call. The breach could have been prevented but it wasn’t, since their security people were not monitoring their security products and they never saw the alerts telling them they had a problem.” At this point you started to see where this was going; no one at your company pays any attention to all those alerts from the various security products deployed in your environment. Right on cue the ISO continued “what happened at Company XYZ can easily happen here. We don't have people looking at the alerts being generated by our security products and even if we had the bodies to do this we have no processes in place outlining how this can be accomplished.” As you sipped your coffee you came close to spitting it out after you heard what came next. The ISO continued “I directed junior security guy to look at the IDS alerts that came in over the weekend. He said something very suspicious occurred early Saturday morning on August 15, 2015.” Then the ISO looked directly at you: “I need you to look into whatever this activity is and report back what you find. Also, make sure you document the process you use since we are going to use it as a playbook for these types of security incidents going forward.”


Triage Scenario


The above scenario outlines the activity leading up to the current malware security event. Below are some of the initial questions you need to answer and report back to the ISO.

        * Is this a confirmed malware security event or was the junior analyst mistaken?
        * What type of malware is involved?
        * What potential risk does the malware pose to your organization?
        * Based on the available information, what do you think occurred on the system to cause the malware event in the first place?


Information Available


Within an organization’s network there is a wealth of information available for you to use while triaging a security incident. Despite this, to successfully triage an incident only a subset of the data is needed. In this instance, you are provided with the following artifacts to use during your triage. Please keep in mind, you may not even need all of these.

        * IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts; the pcap is not provided for you to use during triage and was only made available to enable you to generate the IDS alerts in question)
        * Prefetch files from the system in question (inside the Prefetch.ad1 file)
        * File system metadata from the system in question (the Master File Table is provided for this practical)


Supporting References


The below items have also been provided to assist you in working through the triage process.

        * The jIIr-Practical-Tips.pdf document shows how to replay the packet capture in Security Onion and how to mount the ad1 file with FTK Imager.
        * The file hash list from the system in question. This is being provided since you do not have access to the system nor a forensic image. It can help you confirm the security event and any suspicious files you may find. (A small lookup sketch follows this list.)
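One way to use that hash list during triage is a simple lookup: load the list once, then check any hash you encounter. This Python sketch assumes a plain text file with one hash and path per line separated by whitespace; adjust the file name and parsing to the actual layout. The MD5 shown is just the well-known empty-file hash as a placeholder, not a hash from the practical.

    # Build a dictionary of known hashes from the provided hash list.
    known = {}
    with open("hash-list.txt") as f:          # assumed file name
        for line in f:
            parts = line.split(None, 1)       # "hash<whitespace>path"
            if len(parts) == 2:
                known[parts[0].lower()] = parts[1].strip()

    # Placeholder MD5 (the empty-file hash), not taken from the practical:
    print(known.get("d41d8cd98f00b204e9800998ecf8427e", "not in hash list"))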


The 2015-11-22_Malware-Event Prefetch MFT IDS practical files can be downloaded from here

The 2015-11-22_Malware-Event Prefetch MFT IDS triage write-up is outlined in the post Triage Practical Solution – Malware Event – Prefetch $MFT IDS


For background information about the jIIr practicals please refer to the Adding an Event Triage Drop to the Community Bucket article


Adding an Event Triage Drop to the Community Bucket

Wednesday, November 18, 2015 Posted by Corey Harrell
By failing to prepare, you are preparing to fail.

~ Benjamin Franklin

Let's also stop saying if company X had looked into their alerts then they would have seen there was a security issue. We need to start providing more published information instructing others how to actually triage and build workflows to respond to those alerts. If we don’t share and publish practical information about triaging workflows then we shouldn’t be pointing out the failures of our peers.

~ Corey Harrell

As soon as you can get past the fact that I quoted myself in my own article, those two quotes really show a security predicament people and companies are facing today. Companies are trying to implement or improve their security monitoring capabilities to gain better visibility into threats in their environment. Defenders are looking to gain or improve the skills and knowledge that enable them to perform security monitoring and incident response activities for companies. On the one hand, according to Ben Franklin, we need to take steps to prepare ourselves, and if we don’t then we will fail. This means defenders need to constantly work to improve their knowledge, skills, and workflows so they are better prepared to perform security monitoring and incident response activities. If they aren’t preparing then most likely they will fail when they are called upon to respond to a security incident. On the other hand, according to myself, as a community we don’t publish and share a lot of resources that others can use to improve their knowledge, skills, and workflows related to performing security monitoring and incident response activities. Please don’t get me wrong. There is some great published information out there and there are those who regularly share information (such as Harlan and Brad Duncan), but these individuals are in the minority. This brings us to our current security predicament. We need to prepare. We lack readily available information to help us prepare. So numerous companies are failing when it comes to performing security monitoring and incident response activities.

As I was thinking about this predicament I was wondering how I can contribute to the solution instead of just complaining about the problem. I know my contribution will only be a drop in a very large bucket but it will be a drop nonetheless. This post outlines the few drops that will start appearing on jIIr.

A common activity defenders perform is event triage. Event triage is the assessment of a security event to determine if there is a security incident, its priority, and the need for escalation. A defender performs this assessment repeatedly as various technologies alert on different activity, which generates events for the defender to review. As I explored this area for my $dayjob I found that most published resources say you need to triage security events but don't provide practical information about how to actually do it. My hope is I can make at least a small contribution to this area.

Objective

My purpose is to provide resources and information to those seeking to improve their knowledge, skillsets, and workflows for event triage.

Method

I will periodically publish two posts on jIIr. The first post will outline a hypothetical scenario and a link will be provided to a limited data set. The data set will contain four or fewer artifacts that need to be analyzed to successfully complete the scenario. When performing event triage, most of the time only a subset of data needs to be examined to successfully assess the event. To encourage this line of thinking I’m limiting the dataset to at most four artifacts containing the information required to solve the scenario. The datasets will be pulled from the test systems I build to improve my own skills, knowledge, and workflows. If I’m building and deleting these systems I might as well use them to help others. I’ll try to make the datasets resemble what may be available in most environments, such as operating system artifacts, logs, and IDS alerts. Accompanying the dataset may be a document briefly explaining how to perform a specific task, such as generating IDS alerts by replaying a packet capture in Security Onion. The scenarios will reflect areas I have worked or am working on, so the types of simulated incidents will be limited. Please keep in mind, similar to performing event triage for a company, some of my scenarios may be false positives.

The second post will be published one to three weeks after the first post. It will outline a triage process one could use to assess the security event described in the scenario. At a minimum, the process will cover: where in a network this information can be found, how to collect it, free/open source tools to use, how to parse the artifacts in the provided dataset, and how to understand the data you are seeing. The triage process will be focused on being thorough but fast; the faster one can triage a single event, the more events they can process. If I come across any other DFIR bloggers who published how they triaged the security event then the post will contain links to their websites so others can see how they approached the same problem.

Summary

My hope is this small contribution adds to the resources available to other defenders. Resources they can use to improve their workflows, skills, and knowledge. Resources they can use to better prepare themselves instead of preparing to fail.


To anyone I do help better prepare themselves, I only ask for one thing in return: take a few minutes of your time to purposely share something you find useful/helpful with someone in your life. The person can be anyone you know, from a co-worker to a colleague to a fellow student to a complete stranger asking for help online. Take a few minutes of your life to share something with them. Losing a few minutes of our time has minimal impact on us, but it can make a huge difference in the lives of others and possibly help them become better prepared for what they may face tomorrow.

God bless and Happy Hunting.


Random Thoughts

Saturday, November 7, 2015 Posted by Corey Harrell
Things have been quiet on jIIr since I overcommitted myself. The short version is I had zero time for personal interests outside of my commitments, $dayjob, and family. Things are returning back to normal so it’s time to start working through my blog idea hopper. In the meantime, this post shares some of my recent random thoughts. Most of these thoughts came in response to reading an article/email, seeing a tweet, hearing a presentation, or conversing with others.


~ We need to stop looking to others (peers, vendors, etc) to solve our problems. We need to stop complaining about a lack of resources, information, training, tools, or anything else. We need to start digging into our issues to solve them ourselves instead of looking for the easy answers.

~ As we work to better defend our organizations, we need to take to heart R. Buckminster Fuller's advice. "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." Our focus needs to be on building and improving the new reality while ignoring the haters who are stuck in the past.

~ We need to stop saying we don't have enough resources. We need to focus on our workflows and seek out ways to improve, automate, and become more efficient. Slight changes on existing workflows can free up resources for other areas.

~ We need to start using this new technology called Google. There is no such thing as a stupid question but there are questions that can be easily answered by doing a simple Google search.

~ Let's get our current generation tools working properly before talking about next generation. If we can't properly configure and use our current tools then getting a so-called “next generation” tool won't solve anything.

~ We need to stop saying we need more training. We need to stop saying for me to do task X I need to be sent to training Y. We just need to realign our priorities to spend time on self-development. Turn off the TV, buy a book, build some virtual machines, conduct some tests, and analyze the results.

~ How about we talk more about detecting and responding to basic security threats. If we can't alert on commodity malware infections or web server compromises and have effective workflows triaging those alerts then hunting shouldn't even be in our vocabulary. Forget about hunting and focus on the basics.

~ Let's stop generalizing by saying if company X was monitoring their logs then they would have detected the compromise sooner. That is, until there is more published practical information telling organizations how they can actually set up their security monitoring capability. If there is very little practical information or assistance about building a security monitoring program then we shouldn't be surprised when organizations struggle with the same complicated process.

~ On the same note and while we are at it. Let's also stop saying if company X had looked into their alerts then they would have seen there was a security issue. We need to start providing more published information instructing others how to actually triage and build workflows to respond to those alerts. If we don’t share and publish practical information about triaging workflows then we shouldn’t be pointing out the failures of our peers.

~ Let's stop focusing our security strategy on the next new product instead of looking at how to better leverage our existing products. New products may address a need but we might be able to address the same need with existing products and use the money we save to address other needs.

~ Let's stop with the presentations and articles pretending to tell other defenders how to do something while the author says they won't reveal exactly how they do it, to prevent threats from knowing. This serves no purpose and is counterproductive since it’s actually not telling other defenders how to do anything. What’s the point of saying anything in the first place?

~ Please let's stop adding noise to the intelligence sharing echo chamber. Whether it's products or conferences, most say we need more threat intelligence and we need to start sharing more. No other specifics are added; just that we need it and others need to share it. In the end we are just echoing noise without adding any value.

~ We need to stop saying how we have a shortage of talented security staff to hire. It is what it is. We need to start talking about how we can develop highly motivated people who want security as their career. We may not be able to hire talented security staff but we can definitely grow them to meet our needs.

~ We need to expand our focus on detecting and responding to threats from being primarily end point focused to server focused. A good percentage of articles, intelligence sources, and products talk about end point clients with very little mention about servers. How about detecting and responding to compromised web servers? How about database servers? How about CMS servers such as Joomla, WordPress, and Drupal? Our conversations are only talking about a part of our IT infrastructures and not the entire infrastructures.

~ We need to stop complaining that our management just doesn't get or take security seriously. The issue can be one of two things. Maybe we aren't communicating in a way that makes them care. Maybe security really is not a high priority. Either way, we need to either fix it, move on to another organization, or just accept it and stop complaining about it.

A Warning about Hidden Costs

Sunday, August 23, 2015 Posted by Corey Harrell
I saw the excitement in my son's eyes as the biggest smile was stretching from ear to ear. He slowly stretched out his arm to show me what he got at camp that day. He was extremely excited and I could sense his happiness as I heard him say "I won it with only one dollar. I did it on my first try. Can we keep it?" My eyes focused on what was in his hand. It was a plastic bag with a small goldfish swimming around. "I won it at the fair today. Can we keep it?" In that split second I quickly ran through what owning a fish might entail and it was very similar to the picture used in this post. I then said "yes, we can keep it". My son excitedly ran to his summer camp counselor with so much excitement to tell her the fish was going home with him.

As we were walking to pick up my youngest son I realized the first thing I didn't think about. My five year old would be upset seeing his brother with a goldfish and knowing he doesn't have one. I thought problem solved; we'll just buy him one at the pet store while we are there getting supplies. We reached my five year old in his camp and his eyes grew bigger and bigger as he saw the bag. "Is that a fish" he asked and my seven year old replied "Daddy is getting you one too".  At that moment both kids had smiles as they kept staring at the little fish swimming in the bag. As we were walking down the hall we walked past another parent. She saw the bag with the fish and nervously said "Oh lucky you". I laughed and I could see she was a bit nervous walking down the hall to pick up her kid.

On the drive home, I remembered what my wife said at one point. Damn, my wife. Make that item number two that didn’t cross my mind when my son asked me if we could keep the fish. She has been dead set against owning a fish, and this time playing like I misunderstood or didn’t hear her won’t work. “Absolutely no fish" is pretty clear. I knew I wasn’t getting out of this one so I thought I might as well get something out of it. I sent her a text message saying the boys had a big surprise for her. Despite her continued texts trying to guess the surprise on my drive home, I wouldn't answer them and only deflected, saying she had to wait.

As my wife opened the door both of my sons went running up to her. They said guess what a few times trying to gather their thoughts from their excitement. Then my seven year old says "at the fair I won a fish on my first try. I did it with only one dollar. Daddy said we could keep it and he is getting Gab one too." She started to give me that stare until she walked over and started watching the fish swim around in its bag of water. Maybe she ran through what a fish would entail too but maybe not. Whatever it was I wasn't going to ask when she said it looks like we are making a trip to the pet store.

On the drive to the pet store my wife and I were on the same page. We would get to the store then buy a basic tank, a second fish, and some food. As we walked up and down the aisle there were tanks of all sizes. Not sure what size we needed, we asked the store for assistance. The cashier said he would send over the fish lady. I gave him a puzzled look and was like "fish lady?" He said that's what we call her since she knows everything about fish.

We continued walking up and down the aisle waiting for the fish lady while continuously stopping my boys from wrestling each other. A younger girl was walking towards us and I asked if she was the fish lady. She laughed and then explained all the tanks and fish she owns. I told her we were looking for a tank to hold two goldfish. She said each fish should have at least 10 gallons of water, and then I glanced at the shelf. At that moment I knew the small basic tank we thought would work was no longer an option. Nope, we had to get a real fish tank. As we continued listening to the fish lady she started going down the list of things we would need: water conditioner, food, gravel for the bottom of the tank, a filter, vegetation (fake or real), a stand to keep the tank level, structures for the fish to hide in, and the list went on. My wife and I both reached for our phones to confirm what she was saying without her noticing (we research everything before buying something). We were making sure she wasn't trying to pull a fast one on us, and our quick research confirmed what the fish lady said. I even saw the weekly work that owning a fish entails. I stopped counting all of the things I didn’t think about when I quickly ran through the list of what I thought owning a fish entailed.

After hearing the fish lady out I said that's a lot more than I was expecting; my kid won a fish at the fair and we thought we would only need a basic tank. She cracked a smile and then said "oh, a fair fish huh". After she helped us and was walking away I got the feeling this must happen a lot: parents getting a fish at the fair, going to the fish store, and then getting hit over the head with what it really entails to own a fish. We grabbed a shopping cart, loaded up all of our supplies and the fish my five year old picked out, and selected the stand for our 20 gallon tank. As we left the store I kept thinking about the dollar fish that just cost us hundreds of dollars. That evening I spent hours putting together the stand and tank while my wife cleaned all the items going into the tank (another thing we weren't expecting).

What I thought owning a fish entailed was nothing close to what is actually involved with owning a fish. Spending a dollar to win a fish was nothing compared to the hundreds of dollars needed to take care of the fish. The weekly work I envisioned was a lot less than the actual work I have been doing for weeks.

If I could do it again, knowing now what I didn't know when we sent our son to camp that day, I would do things differently. I would have told him to save his dollar and not bring home any fish; Mommy and I are doing some research, and next weekend we will go get the supplies and fish to set up a nice tank. It will be better than just watching two goldfish swimming around in a 20 gallon tank. This is the approach I would have taken: the approach of not trying to make things work with a dollar fish, because in the end I still paid the same amount as I would have paid going with the better option in the first place.

My guess is this story plays out every year at a lot of organizations. The only exception is organizations are not dealing with goldfish but tools.


Go Against the Grain

Wednesday, August 12, 2015 Posted by Corey Harrell
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.” —Richard Buckminster Fuller

It's very easy to accept the way things are and say in response "it is what it is". It's easy to say "I tried" and give up when others push back against the things you want to change. It's easy to say this is how we always did it, so why change anything now. Now let's put this into the context of information security. It's easy to accept the thinking that "no one gets security" and then take on the mentality of not doing anything to change it by saying "it is what it is". It's easy to say "I tried" and give up when you make an attempt to change how people approach security but get pushback from others. It's easy to say this is how we always approached securing our organization, so why change anything now.

The quote I opened this post with nicely summarizes how you can go against the grain and put an organization on a better path to addressing its security risks; how you can change an existing security strategy focused on prevention to one focused on a balance between prevention, detection, and response. Start building the better approach (model) to enable others to see the value it adds. Continue building out the better approach and showing value to others. Showing the value and benefits results in people buying into the new approach. Eventually the change will take hold, putting the organization on the better path. Building the better approach is more effective than fighting against the existing reality and those who are complacent with the way things are. Changing the security strategy won't occur without some resistance. There will be remnants of those who resist your changes and will fight to make things go back to the way they were. Those remnants won't be as successful in influencing change because they will be fighting against a new reality, and they will lack the motivation and/or determination to go against the grain to build a better model.

Minor Updates to Auto_rip

Monday, August 10, 2015 Posted by Corey Harrell
This is a quick post to pass along that I updated my auto_rip script. For those who may not know, auto_rip is a wrapper script for Harlan Carvey's RegRipper program and it executes RegRipper’s plug-ins based on categories and in a specific order. To learn more about my script please see my previous post Unleashing auto_rip. The auto_rip updates are pretty minor. I separated out the changes to a change log instead of documenting changes in the script itself, added a device category (due to a new plug-in), and I added most of the new RegRipper plug-ins Harlan created (as of 7/30/15). The download location can be found on the right of my blog or using this link to its Google drive location.


****** 08/11/2015  Update *******

At this time I removed the compiled executable from auto_rip. The compiled executable is having issues and I'm working to resolve them. However, the Perl script is present and works fine. As soon as I'm able to compile the script into an exe I'll add it back to the auto_rip archive.

SIEM – One Year Later

Sunday, July 26, 2015 Posted by Corey Harrell
“We are overwhelmed with data and are not sure what to look at or collect.” I came across this paraphrased comment in a SIEM forum and it echoes a sentiment I have seen about SIEM: deploying the technology results in a ton of noise and alerts that are hard to make sense of. Some organizations struggle to use SIEM effectively and at times their staff are drowning in a sea of logs and alerts. The comment is also one foreign to me. I’ve read about it and seen others say this but I never witnessed it for myself. My path, my journey was a different one. This post reflects on my SIEM journey for the past year in hopes that it can help others who are either taking their first step or are already on their SIEM journeys.

Disclosure: jIIr is my personal blog and is not affiliated with my employer. This post only covers my personal experience and does not go in to details related to SIEM implementation in my employer’s environment. Any questions along these lines will unfortunately go unanswered. Some lines are not meant to be crossed and this is one of them.

Start with Why It Is Needed


My journey didn’t start when the SIEM was acquired; it began long before then, when my perspective on security strategies changed. The security strategy is critical to explore since the strategy is the force pushing organizations down the SIEM path in the first place. Let’s go back in time to when I was in a unit performing vulnerability assessments against other public sector organizations. Over time I began to see fundamental problems with the various security strategies I encountered.

Some strategies were completely centered on prevention. Almost every security resource – including funding, staffing, etc. – was applied to tasks and projects related to preventing attacks. In these organizations we always found something; in every organization we always found something. With each finding came the task of explaining to the auditors on my side the cause of the finding. Auditors see things in black and white but security findings are not clear cut. The truth is we probed the target’s environment and security program, moving from well protected areas to areas not well protected. These were the areas our findings came from. Organizations cannot protect everything in a manner that prevents everything; it’s an impossible task, and even with an unlimited budget it is not achievable.

Some strategies were centered on compliance. Security resources and priorities are focused on the findings in the latest audit. As time goes by the security strategy is to address and reduce those findings without taking into consideration the areas posing the highest risks to the organization. At times during our engagements we were welcomed. The expectation was we would highlight areas they should focus on and help the security folks convince management to allocate the appropriate resources to address those areas. For a while I thought the work we were doing accomplished this as well. In time I came to see things differently. No matter how effective an audit is, this security strategy will never work since there is a fundamental problem. Audits only confirm if something complies with criteria outlined in a regulation, policy, or standard. If something has no criteria then it is very difficult for an audit to list it as a finding, since each finding needs to be tied back to a criterion. If the criteria (regulation or policy) don’t address something that is a high risk to the organization then the audit/assessment will miss it as well. Making matters worse, it is very difficult to determine how effective something is. If something exists, has supporting documentation, and passes some tests, in most cases this satisfies the audit requirement. Audits don’t truly evaluate how effective processes and controls are. They check a box as something being present then move on to other areas not well protected (remember, it’s impossible to protect everything). This results in significant gaps in security strategies driven by compliance, and the news headlines of breaches at organizations compliant with regulation X highlight this.

Then some security strategies were reactive. This is when the security strategy is based on the latest event, both inside the organization and outside of it. With each new event the organization switches focus and resources to address it, even if it is not the highest risk to the organization. This leads down a path of random controls put in place to combat random threats, and what little security resources are available are used in an ad-hoc manner. Reactive security strategies in my opinion are not even a strategy and are doomed to failure.

Over time, the fundamental problems with the various security strategies I encountered made me ask myself a single question: if I ever found myself in a position to lead an organization’s immature security program, how should I approach building their program from scratch? Exploring this question brought me to various information security resources. It even led me to obtaining my Masters of Science in Information Assurance. In time I came to the following conclusions:

1. There are fundamental problems with security strategies based on prevention, compliance, and reaction.

2. Most information security decisions I witnessed in my entire career were not based on actual data to support the decisions. Decisions were based on experience, intuition, what someone else recommended, or shiny new objects. At times, decisions not based on actual data resulted in bad choices, wasted what little security resources are available, and didn’t address the actual threats the organizations faced.

3. What security strategies need is “an intelligence-driven approach, whereby decisions are made based on real-time knowledge regarding the cyber adversaries and their attack methods, and the organization’s security posture against them”.

The fundamental problems with security strategies based on prevention, compliance, and reaction would be addressed by an intelligence-driven approach. Security decisions would not only address the most significant risks an organization faces but would be influenced by facts and information. The path to intelligence-driven security needs to start with the foundation mentioned in the quote below.

“To be ready to take on an intelligence program, the organization needs to have a foundation in place for monitoring the network for intrusions and a workflow process for responding to incidents.” 

This became my perspective on how security strategies needed to be and I ended up in a security office who agreed with the strategy. This strategy started me down the SIEM path as part of laying a foundation for security monitoring. The driving force behind SIEM was security monitoring and it influenced what logs were collected and how they were analyzed.

Expect a Significant Time Commitment


I knew there was going to be a significant time requirement at my $dayjob but I didn’t know about the impact on my personal time. I had a well-rounded background to take on a SIEM project but I had never built the equivalent of a SOC. I read the often quoted statistic that “somewhere between 20% and 30% of SIEM deployments among his client base fail, meaning not only do they not meet predefined goals, but also that many organizations don't even bother using the product”. I also read the articles and comments about how difficult SIEM deployments are, how complicated SIEM management is, and how companies frequently end up in a sea of alerts with no clue what to do about them. I could guess what the impact on an organization would be for a failed security investment: after getting buy-in, making a sizable investment in technology, and allocating staff, ending up with something that doesn’t meet any goals would be devastating. Not only would this fail to provide the needed foundation for intelligence driven security, but the failure would linger for a long time in the organization. Any other request for security resources would be even more difficult because it would be looked upon as another wasted investment, since the investment in SIEM failed. Any other security initiative might be looked at with doubt about whether it could even be successful, since the security office failed with the SIEM initiative. Needless to say, failure was possible but it wasn’t an option in my opinion.

I put most of the personal time I allocate for research, writing, and reading on hold to allow me to focus on building the SOC. I spent my time instead researching and learning from others how to build an effective security monitoring capability. A small portion of what I explored was mentioned in the posts: Linkz for SIEM, Linkz for Detection and Response, Making Incident Response a Security Program Enabler, and Linkz for Intelligence Driven Security and Threat Intelligence. In essence, my time at the $dayJob was spent implementing SIEM capabilities, building processes, training staff, and managing security monitoring, while my personal time was spent exploring all aspects related to security monitoring and intelligence driven security.

It was a personal sacrifice I made for my employer to ensure our SIEM project would be successful, but in the past year my knowledge and skills have grown by leaps and bounds. There’s a personal sacrifice in leading a SIEM implementation, but making that sacrifice is worth it in the end.

Prepare, Prepare, and Prepare Some More


In John Boyd’s biography Boyd: The Fighter Pilot Who Changed the Art of War, one lesson we all can learn is how he approached things. Take the following quote about someone Boyd influenced who was a member of his team:

“His attitude was “Maybe so. But if not me, who?” He was the right man in the right place at the right time. He had done his homework and knew his briefing was rock solid.”

The lesson we can all learn is to do our homework; to prepare for each possibility and the things that can and will go wrong. This is probably the best advice I could give anyone traveling down this path. You might be the right man in the right place at the right time to lead an organization as it deploys SIEM technology, but you need to do your homework. You need to research and explore the issues to be solved, develop a detailed project plan with phases, get buy-in by explaining the issues and why SIEM is the technology to solve them, identify the exact logs required and what in those logs is needed, etc. The list goes on, but the point of this advice is to prepare. Prepare, and then prepare some more, since the effort and time spent doing this will ensure a smooth deployment. I spent a considerable amount of time preparing upfront and during the deployment, and this helped me avoid pitfalls that could have impacted the project.

Leverage Use Cases


As I started this journey one of the first things I did was learn from others who took this journey before me. The main person who influenced my thoughts, and thus my approach, was Anton Chuvakin. In most articles he advocates leveraging use cases when deploying a SIEM, and hands down this is the best advice for a successful SIEM project. He authored a lot of posts on the subject but the best one as it relates to a SIEM project is the one hyperlinked in my quote below:

“The best reference I found to help architect a SIEM solution is a slide deck by Anton Chuvakin. The presentation is Five Best and Five Worst Practices for SIEM and it outlines the major areas to include in your SIEM project (16 to be exact). It may not cover everything -such as building rules, alarms, and establishing triage processes - but it does an outstanding job outlining how to avoid the pitfalls others have fallen in. If anyone is considering a SIEM deployment or in the midst of a SIEM deployment then this is the one link they will want to read.”

His slide deck influenced my SIEM project plan from the point of getting buy-in to the actual implementation. There wasn’t a lot of information on implementing a use case, so I put together the SIEM Use Case Implementation Mind Map to have a repeatable process for each use case.

The beauty of leveraging use cases is twofold. Not only does it make building a SOC more manageable, by focusing on detecting certain activity in smaller chunks and building the processes around those chunks, but it also makes it very easy to show others the value SIEM adds. If the SIEM deployment takes one year to complete, then using multiple use cases can show progress and what was accomplished throughout the year. The value added is shown throughout the year instead of waiting until the end. That is, if the proper preparation was done in advance.

Focus on Triage


Thinking back over the past year, what I found to be the most challenging with SIEM technology, or any detection technology for that matter, is how events/alerts need to be handled. I found bringing in logs, creating custom detection rules, and tuning rules to be easy compared to developing the triage processes surrounding each category of events/alerts. My previous thoughts on the subject still ring true today:

In my opinion establishing triage processes is the second most critical step (detection rules are the first.) Triage is what determines what is accepted as "good" behavior, what needs to be addressed, and what needs to be escalated. After the rules are implemented take some time reviewing the rules that fired. Evaluate the activity that triggered the rule and try out different triage techniques. This is repeated until there is a repeatable triage process for the rules. Continue testing the repeatable triage process to make it more efficient and faster. Look at the false positives and determine if there is a way to identify them sooner in the process? Look at the techniques that require more in-depth analysis and move them to later in the process? The triage process walks a fine line between being as fast as possible and using resources as efficient as possible. Remember, the more time spent on one alarm the less time is spent on others; the less time on one alarm increases malicious activity being missed.

The triage processes are critical and can help prevent the paraphrased comment I opened this post with: "We are overwhelmed with data and are not sure what to look at or collect." Unfortunately, when it comes to triage, for the most part you are on your own. This was one area I found to be truly lacking in documentation, and the few cheat sheets I found didn't account for the majority of things one needs to consider. Determining what triage processes were needed, developing those processes, and then training others was definitely a tough challenge. It was made tougher since it also involved honing those triage processes to improve their efficiency and speed. Despite the difficulty, focusing on triage enables you to see through the noise and know what to look for.
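
To illustrate the idea of honing a triage process for speed, the sketch below runs the cheapest checks first so false positives fall out early and analyst time is reserved for alerts that survive every check. The checks and values are hypothetical examples, not a process from any particular product.

# A repeatable triage process sketched as an ordered set of checks:
# cheap, high-confidence checks run first and expensive analysis is
# deferred to the end. All checks and values are hypothetical.

KNOWN_GOOD_SOURCES = {"10.0.5.15"}          # e.g., vulnerability scanner
WATCHLISTED_DESTINATIONS = {"203.0.113.7"}  # e.g., vetted indicator

def triage(alert):
    # Step 1: weed out known-good activity immediately.
    if alert["src_ip"] in KNOWN_GOOD_SOURCES:
        return "close as false positive"
    # Step 2: escalate quickly on high-fidelity matches.
    if alert["dst_ip"] in WATCHLISTED_DESTINATIONS:
        return "escalate to incident response"
    # Step 3: only what remains gets time-consuming analysis.
    return "queue for in-depth analysis"

print(triage({"src_ip": "10.0.9.9", "dst_ip": "203.0.113.7"}))

The design point is the ordering: every false positive identified at step 1 never consumes the time that steps 2 and 3 require, which is exactly the efficiency the triage process needs to walk that fine line.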

Metrics – These Are Needed


If someone had told me two years ago that I would be reading about security metrics, I would have laughed out loud. Two years ago I didn't see the value metrics offer until my eyes were opened exploring intelligence driven security. Another comment I saw in the SIEM forum was the following: “what information should be provided to the management on the daily basis which can justify the purchase of SIEM”. This is where metrics can come into play. By recording certain information it becomes easier to communicate what one is detecting and responding to. Communicating metrics and trends shows value to management, highlights weak areas in the security program, and uncovers patterns in attacks. During the past year I explored various information security metrics. As it relates to the SIEM, the VERIS schema is probably the best one I found for recording the information from detecting and responding to security events. Once you have the documented information, the fun part is finding creative ways to communicate it to others.
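
As a rough illustration, recording a single security event in a VERIS-style structure might look like the sketch below. The structure is simplified and the values are invented; consult the actual VERIS schema before adopting any field names or enumerations.

import json

# A simplified record loosely modeled on the VERIS schema's major
# sections (timeline, actor, action, asset, attribute). The values
# below are invented for illustration only.
event = {
    "timeline": {"incident": {"year": 2015, "month": 6}},
    "actor": {"external": {"variety": ["Unaffiliated"]}},
    "action": {"malware": {"variety": ["Downloader"]}},
    "asset": {"assets": [{"variety": "Desktop"}]},
    "attribute": {"integrity": {"variety": ["Software installation"]}},
    "discovery_method": "internal security alarm",
}

# Consistently recorded events can later be counted and trended,
# which is what makes communicating metrics to management easy.
print(json.dumps(event, indent=2))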

You Are Alone Standing on the Shoulders of Others


As I embarked on this journey I learned from others who had dealt with either SIEM technology or security monitoring. Initially, I read what they had published and made available to the public, then reached out to a few people with the additional questions I had. I also looked around locally and reached out to others who were managing their own security monitoring capabilities. Needless to say, my SIEM journey was successful because I was standing on the shoulders of those who came before me.

However, at the end of the day each organization is different. Others can provide you with advice and share their experience, but what you actually encounter is unique. The environment is unique, and certain aspects of the use cases are unique as well. Due to this uniqueness, most of the time throughout the year we (my team and I) were on our own. I mention this as a word of caution to others who may expect a vendor or anyone else to solve their SIEM challenges for them. Others and vendors can help to a certain extent, but you and your team are the ones who need to solve the challenges you face. At that point you will find yourself alone, standing on the shoulders of others.
 

Looking Forward to SIEM Year 2


Reflecting back, the past year has been a demanding challenge. The journey hasn't ended since work still needs to be done; there will always be work to do and new challenges to work through. Hopefully, the experience I shared will keep someone from stepping on a SIEM land mine and yelling out for help as they drown in a sea of logs and alerts.

Villager or Special Forces - That Is The Question

Friday, July 10, 2015 Posted by Corey Harrell 0 comments
At certain times we will find ourselves being like Special Forces going up against what seems like a villager with a pitchfork. We are better equipped, better trained, possess more technical knowledge, and have more advanced skills. Despite their best efforts, the pitchfork and the one wielding it don't stand a chance against our arsenal and our ability to use it.

At other times we find ourselves as the villager holding the pitchfork, going up against what feels like Special Forces. They are smarter, have more resources, and possess more advanced skills. Despite our best efforts and our ability to use the pitchfork, it's still a pitchfork going up against a Special Forces arsenal and people who can use it.

The pendulum swings between the villager and Special Forces in the information security field. Between the two, I'd rather be the villager. The villager is the one facing the constant challenge. Unless, of course, the pendulum only contains Special Forces; Special Forces against Special Forces would be the ultimate challenge.

Linkz for Intelligence Driven Security and Threat Intelligence

Tuesday, June 30, 2015 Posted by Corey Harrell 4 comments
What strategy should one use when trying to defend an organization against the threats we face today? At times the security strategy has been reactive: decisions and the direction forward are based on the latest incident the organization experienced. This approach is not effective since it is the equivalent of firefighting, where resources are spent addressing the latest fire without identifying the underlying issues causing the fires in the first place. At other times the security strategy is based on compliance: decisions and the direction forward are based on regulations or standards the organization must comply with. This approach is not as effective either; it will provide an organization with some minimum security controls, but it may not help with defending against the threats we face today (the news highlights organizations that are compliant but compromised anyway). One security strategy that has gained traction over the years and is more effective than the previous two is intelligence driven security. The direction forward and “decisions are made based on real-time knowledge regarding the cyber adversaries and their attack methods, and the organization’s security posture against them”. This approach is more effective since it enables an organization to allocate security resources to address the highest risks and threats they face.

In this post I'm sharing linkz to various resources I found useful over the past few years related to intelligence driven security, threat intelligence, threat intelligence data, consuming threat intelligence data, and threat intelligence sharing.

Intelligence Driven Security Links


These links are related to intelligence driven security, which RSA defined as “developing real-time knowledge on threats and the organization’s posture against those threats in order to prevent, detect, and/or predict attacks, make risk decisions, optimize defensive strategies, and enable action.”

Achieving Intelligence-Driven Information Security


The first link is one that will always hold a certain personal value since it was one of the first papers I read on the topic years ago. The RSA paper Getting Ahead of Advanced Threats: Achieving Intelligence-Driven Information Security discusses how an organization can approach managing its security program in this manner. The paper addresses: what organizations need to know, categories of cyber-risk data, intelligence driven information security, a roadmap to achieving intelligence driven information security, opportunities for quick wins, and information sharing. I spent years performing vulnerability assessments against other organizations, and with each engagement it became clearer and clearer that the traditional approaches to security management were no longer effective. What was needed was an approach where factual information influenced decisions, instead of decisions being based solely on someone's judgment or gut feeling. The approach in this paper is very light on details, but it does address the thought process behind it, and to me this was very helpful. The paper did nail the foundation one needs to have in place, as seen in the following quote: “to be ready to take on an intelligence program, the organization needs to have a foundation in place for monitoring the network for intrusions and a workflow process for responding to incidents.”

Strategic Cyber Intelligence


The reason behind leveraging intelligence in security management is to help people make better security decisions, whether those decisions relate to addressing risks, security strategies, or resource usage. Despite this being the driving force behind intelligence driven security, a good percentage of the material I've seen on the topic focuses on real-time intelligence about threats rather than on the intelligence an organization needs to make better security decisions. The next link I picked up from Richard Bejtlich; it's a document titled Strategic Cyber Intelligence. If there is only one link to read in this post, this document is it. My words wouldn't do justice to this document, so instead I'm opting to use part of its executive summary to describe it:

While there has been much emphasis on tactical cyber intelligence to help understand the “on-the-network” cyber-attacks so frequently in the news, there has been little discussion about the strategic and operational levels in order to better understand the overall goals, objectives, and inter-relationships associated with these tactical attacks. As a result, key consumers such as C-suite executives, executive managers, and other senior leaders are not getting the right type of cyber intelligence to efficiently and effectively inform their organizations’ risk management programs. This traditionally tactical focus also hampers the capability of the cyber intelligence function to communicate cyber risks in a way that leaders can fully interpret and understand.

Adopting Intelligence Driven Security


I found the next links helpful since they have some good talking points and a nice diagram for getting buy-in to approach security in a more intelligent manner. The RSA Adopting Intelligence Driven Security paper provides a high-level overview of adopting an intelligence driven security strategy. Topics discussed include: visibility, analysis, action, and the difference between today's security strategies and intelligence driven security. The RSA blog post What is Intelligence Driven Security? provides very similar, but less detailed, information than the paper.

Threat Intelligence Links


Threat intelligence is a needed component in achieving intelligence driven security, but the two are not the same. This can be seen in the iSightPartners threat intelligence definition: “cyber threat intelligence is knowledge about adversaries and their motivations, intentions, and methods that is collected, analyzed, and disseminated in ways that help security and business staff at all levels protect the critical assets of the enterprise”. These links provide information about threat intelligence.

Threat intelligence: Collecting, Analysing, and Evaluating


The MWR InfoSecurity whitepaper Threat intelligence: Collecting, Analysing, and Evaluating provides an excellent overview of threat intelligence and of building a threat intelligence program. Topics include: what threat intelligence is, building a threat intelligence program, strategic/operational/tactical threat intelligence, and technical threat intelligence. The paper is well worth taking the time to read since the overview touches on most components of a threat intelligence program.

Definitive Guide to Cyber Threat Intelligence


iSIGHT Partners is a vendor providing threat intelligence services. They released a short ebook titled Definitive Guide to Cyber Threat Intelligence (at the time of this post the link for the PDF is here; if it no longer works, you'll need to provide them your email address to receive the download link). In their own words, the following is why they wrote the book: "We figured that since we wrote the book on cyber threat intelligence, we might as well write the book on cyber threat intelligence". The book itself is 74 pages and addresses the following: defining cyber threat intelligence, developing threat intelligence requirements, collecting threat information, analyzing and disseminating threat intelligence, using threat intelligence, implementing an intelligence program, and selecting the right threat intelligence partner. The one area where I think this book shines is in outlining the components a commercial threat intelligence service should have.

Actionable information for Security Incident Response


ENISA released the Actionable Information for Security Incident Response document, which is “intended as a good practice guide for the exchange and processing of actionable information”. The document discusses the following points: properties of actionable information, levels of information, collecting information, preparing information, storing information, analyzing information, and case studies. It does an outstanding job outlining the characteristics of actionable intelligence as well as a process one could use to handle it.

Threat Intelligence for Dummies


Another ebook, released by the threat intelligence vendor Norse, is Threat Intelligence for Dummies. The book is a short read (52 pages) and touches on the following areas: understanding threat intelligence, gathering threat intelligence, scoring threat intelligence, supporting incident response, threat mitigation, and buying criteria for threat intelligence solutions. The book is another option for those looking for a more general overview of threat intelligence.

Five Steps to Build an Effective Threat Intelligence Capability


The next link is a Forrester report about building an effective threat intelligence capability. The first half of the report makes the case for needing a threat intelligence capability, while the second half discusses the capability itself. Topics include: the intelligence cycle, intelligence disciplines, and five steps to build the intelligence capability. This report is another approach to building the capability, and I find it beneficial to see different approaches for accomplishing the same thing; it makes it easier to pick and choose aspects from the various approaches to find what works best for you.

Open Source Threat Intelligence Data Feeds Links


Data about threats, adversaries, and the methods they use can be obtained from various sources. One source of regularly updated threat data is publicly available feeds. Despite the data being freely available, care must be taken in selecting the data feeds to use. Each feed's characteristics must be evaluated to determine the value it adds to an organization's security monitoring and response process. (Side note: consuming as many feeds as possible is counterproductive and could actually impede security monitoring.)

Evaluating Threat Intelligence Data Feeds


These links are a bit dated but they are as relevant today as when they were published. David Bianco's posts The Pyramid of Pain and What Do You Get When You Cross a Pyramid With A Chain? outline an approach to evaluating the value of indicators. The Pyramid of Pain is a versatile model: it can be used not only to evaluate indicators in open source threat intelligence feeds but also to assess coverage in a security monitoring program.
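
As a rough illustration of applying the Pyramid of Pain to a feed, the sketch below ranks indicators by how much pain detecting them causes an adversary, from hash values at the bottom of the pyramid to TTPs at the top. The levels come from Bianco's post; the scoring code and sample feed entries are my own invention.

# Pyramid of Pain levels from David Bianco's post, ranked from the
# least pain for an adversary (bottom) to the most (top). The scoring
# approach and the sample feed entries are illustrative only.
PAIN = {
    "hash": 1,
    "ip": 2,
    "domain": 3,
    "artifact": 4,  # network/host artifacts
    "tool": 5,
    "ttp": 6,
}

feed = [
    {"type": "hash", "value": "d41d8cd98f00b204e9800998ecf8427e"},
    {"type": "domain", "value": "bad.example.com"},
    {"type": "ttp", "value": "spearphish with macro-enabled document"},
]

# Prioritize indicators higher on the pyramid when deciding which
# ones are worth building detections around.
for ioc in sorted(feed, key=lambda i: PAIN[i["type"]], reverse=True):
    print(PAIN[ioc["type"]], ioc["type"], ioc["value"])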

Feeds, Feeds, and More Feeds


The next link is a word of caution from Jack Crook about using threat intelligence data feeds. In his post Feeds, feeds and more feeds he provides some food for thought for those looking to start consuming feeds. Below is a very telling quote from his post:

“By blindly alerting on these types of indicators you also run the risk of cluttering your alert console with items that will be deemed, 99.99% of the time, false positive. This can cause your analysts to spend much unneeded time analyzing these while higher fidelity alerts are sitting there waiting to be analyzed.”

Threat Data Feeds


With the links about evaluating data feeds and a word of caution out of the way, I can now provide links to websites that aggregate publicly available sources of threat data. It's an easy way to provide a wealth of feed options by linking to work done by others.

Introducing the Active Threat Search
Critical Stack Bro Intelligence Framework (need to register but it is well worth it)
Collective Intelligence Framework Data Sources
Threat Intelligence Feeds Gathered by Combine
Opensource intel links
uiucseclab cif-configs


Consuming Threat Intelligence Data Links


One of the ENISA actionable information characteristics is ingestibility: the ability of the organization receiving the data to "consume it painlessly and automatically, including correlating and associating it with other information". The consumption is what makes the information useful to an organization, whether to identify vulnerabilities, mitigate an ongoing attack, or detect a new threat.
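
As a bare-bones illustration of ingesting a feed and correlating it with other information, the sketch below pulls a newline-delimited list of bad IP addresses and checks proxy log entries against it. The feed URL and log format are hypothetical placeholders.

import urllib.request

# Hypothetical newline-delimited feed of known-bad IP addresses.
FEED_URL = "https://feeds.example.com/bad-ips.txt"

with urllib.request.urlopen(FEED_URL) as resp:
    bad_ips = {line.strip() for line in resp.read().decode().splitlines()
               if line.strip() and not line.startswith("#")}

# Hypothetical proxy log entries: timestamp, source IP, destination IP.
logs = [
    ("2015-06-30 09:12:01", "10.0.4.21", "203.0.113.7"),
    ("2015-06-30 09:12:05", "10.0.4.33", "198.51.100.2"),
]

# Correlate: flag any connection to an address appearing in the feed.
for ts, src, dst in logs:
    if dst in bad_ips:
        print(f"{ts} {src} -> {dst} matches threat feed")

This is the whole of ingestibility in miniature: the feed is consumed automatically and associated with the organization's own data, which is where the detection value comes from.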

Leveraging Threat Intelligence in Security Monitoring


Securosis published a decent paper titled Leveraging Threat Intelligence in Security Monitoring. The paper discusses threat intelligence sources (mostly focused on malware) and the network security monitoring process before going into detail on integrating threat intelligence with the security monitoring process. The part I really liked about the paper is that it outlines a process for managing security monitoring that consumes threat intelligence, and it takes the time to explain each component. Even if an organization doesn't use this process, it's helpful to see how someone else approached consuming threat intelligence and what can be borrowed to improve one's own security monitoring processes.

How to Use Threat Intelligence with Your SIEM?


The next link is really a bunch of links. Anton Chuvakin is a Gartner analyst who focuses on SIEM, security monitoring, and incident response. His analysis reports require a Gartner account to access, but he does share some of his research on his blog. Anton wrote numerous posts addressing consuming threat intelligence, threat intelligence itself, and threat intelligence data. His post How to Use Threat Intelligence with Your SIEM? talks about how SIEMs can consume threat intelligence data, and the post really hits home since this is one way I consume TI data. He also released the following posts related to threat intelligence:

Threat Intelligence

On Internally-sourced Threat Intelligence
Delving into Threat Actor Profiles
On Threat Intelligence Use Cases
On Broad Types of Threat Intelligence
Threat Intelligence is NOT Signatures!
The Conundrum of Two Intelligences!


Threat Intelligence Data

On Threat Intelligence Sources
How to Make Better Threat Intelligence Out of Threat Intelligence Data?
On Comparing Threat Intelligence Feeds
Consumption of Shared Security Data
From IPs to TTPs


McAfee SIEM and Open Source Intelligence Data Feeds


An easy way to consume open source threat intelligence data is by feeding it into a properly configured SIEM and correlating the data across an organization's logs. The next few links explain one method to accomplish this with the McAfee SIEM (formerly known as Nitro). The SIEM stores intelligence data inside items called watchlists, and these watchlists can be dynamically updated with intelligence feeds. The post Creating a Watchlist from Malc0de shows how to create a dynamic watchlist and populate it with the Malc0de feed. I populate my dynamic watchlists using a script; there are always different ways to arrive at the same destination. A watchlist containing threat intelligence data can then be used in correlation. The next post, SIEM Foundations: Threat Feeds, walks you through creating a static watchlist (I don't recommend this approach with intelligence feeds) and then shows different ways to use the watchlist.

Splunk and Open Source Intelligence Data Feeds


Different SIEMs are able to consume threat intelligence data in different ways. The previous links were for the McAfee SIEM and the next links are for Splunk. The Deep Impact post Splunk and Free Open-Source Threat Intelligence Feeds “is a write-up for integrating some readily available free and open-source threat intelligence feeds and block lists into Splunk”. What I really liked about this post is that the author not only explained how to perform the integration but also released a script to help others do the same.
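
In the same spirit (and to be clear, this is not the author's script), a minimal sketch of turning a feed into a Splunk lookup might fetch the indicators and write them to a CSV file that Splunk's lookup facilities can reference. The feed URL and file paths below are hypothetical placeholders.

import csv
import urllib.request

# Hypothetical feed of known-bad domains, one per line.
FEED_URL = "https://feeds.example.com/bad-domains.txt"
# Hypothetical destination; Splunk lookup files conventionally live
# under $SPLUNK_HOME/etc/apps/<app>/lookups/.
LOOKUP_CSV = "/opt/splunk/etc/apps/search/lookups/bad_domains.csv"

with urllib.request.urlopen(FEED_URL) as resp:
    domains = [line.strip() for line in resp.read().decode().splitlines()
               if line.strip() and not line.startswith("#")]

# Write a header row plus one row per indicator so searches can match
# a "domain" field against the list.
with open(LOOKUP_CSV, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["domain"])
    for d in domains:
        writer.writerow([d])

Scheduled regularly, a script like this keeps the lookup current, and searches can then reference the file through Splunk's lookup commands (for example, inputlookup) to tag events whose domain appears in the list.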

Bro and Open Source Intelligence Data Feeds


To make use of open source intelligence data feeds you don't need SIEM technology; all you need is technology that can consume the data feeds you selected. The next link is a great example of that. Critical Stack has put together their Threat Intelligence for the Bro Platform. I don't use Bro, but I find this idea really slick. They set up a portal where you can log in, review numerous open source intelligence feeds, select the feeds you want, and then create a subscription that gets ingested into Bro. This has really lowered the bar for using open source threat intelligence, and even if you don't use Bro the portal is a nice reference for available data feeds.

Threat Intelligence Sharing Link


Approaching security in an intelligence driven manner provides an organization with visibility into its environment: the threats it faces, the actual attacks conducted against it, and its security posture to defend against those threats. Not only does intelligence driven security result in the organization consuming external threat intelligence, it also enables the organization to develop and maintain its own threat intelligence based on what it is seeing. Internally developed intelligence can then be shared with others. The last link is the only one I have for intelligence sharing.

NIST Guide to Cyber Threat Information Sharing


The NIST Special Publication 800-150 Guide to Cyber Threat Information Sharing (in draft form at the time of this post) expands on the NIST Computer Security Incident Handling Guide by exploring "information sharing, coordination, and collaboration as part of the incident response life cycle". The guide is broken down into the following parts: an incident coordination and information sharing overview; understanding current cybersecurity capabilities; and establishing, maintaining, and using information sharing relationships. The guide might be of value for those interested in a more formalized approach to intelligence sharing.

***** 07/01/15 Addendum *****


In response to this post, the author of the CYINT Analysis blog pointed me to the threat intelligence resources webpage they put together. The webpage contains additional resources I didn't discuss in this post and numerous others I wasn't aware of. I wanted to update this post to point to the CYINT Analysis resources webpage.