Sunday, November 22, 2015
Posted by
Corey Harrell
It's another Monday morning as you stroll into work. Every Monday morning you have a set routine, and this morning was no different. You were hoping to sit down in your chair, drink some coffee, and work your way through the emails that came in over the weekend. This morning things were different. As soon as you entered the office, your ISO had a mandatory meeting going on and they were waiting for you to arrive.

As you entered the meeting, the ISO announced, "Each week it seems like another company is breached. The latest headline about Company XYZ should be our wake-up call. The breach could have been prevented, but it wasn't, since their security people were not monitoring their security products and they never saw the alerts telling them they had a problem." At this point you started to see where this was going; no one at your company pays any attention to all those alerts from the various security products deployed in your environment. Right on cue the ISO continued, "What happened at Company XYZ can easily happen here. We don't have people looking at the alerts being generated by our security products, and even if we had the bodies to do this we have no processes in place outlining how this can be accomplished."

As you sipped your coffee, you came close to spitting it out after you heard what came next. The ISO went on, "I directed the junior security guy to look at the IDS alerts that came in over the weekend. He said something very suspicious occurred early Saturday morning on August 15, 2015." Then the ISO looked directly at you: "I need you to look into whatever this activity is and report back what you find. Also, make sure you document the process you use, since we are going to use it as a playbook for these types of security incidents going forward."
Triage Scenario
The above scenario outlines the activity leading up to the current malware security event. Below are some of the initial questions you need to answer and report back to the ISO.
* Is this a confirmed malware security event or was the junior analyst mistaken?
* What type of malware is involved?
* What potential risk does the malware pose to your organization?
* Based on the available information, what do you think occurred on the system to cause the malware event in the first place?
Information Available
An organization's network offers a wealth of information you can draw on while triaging a security incident. Despite this, successfully triaging an incident usually requires only a subset of that data. In this instance, you are provided with the following artifacts to use during your triage. Please keep in mind, you may not even need all of them (a short sketch after this list illustrates one way to narrow the file system metadata down to the timeframe in question).
* IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts; the pcap itself is not a triage artifact and is only made available so you can generate the IDS alerts in question)
* Prefetch files from the system in question (inside the Prefetch.ad1 file)
* File system metadata from the system in question (the Master File Table is provided for this practical)
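As a starting point, the file system metadata is often the quickest artifact to narrow down. Below is a minimal Python sketch, assuming the $MFT has already been parsed to CSV (for example with a tool such as analyzeMFT); the file name, column names, and timestamp format used here are assumptions and should be adjusted to whatever your parser actually produces.

```python
# Minimal sketch: filter a parsed $MFT CSV down to the triage timeframe.
# Assumptions (adjust to your parser's output): the CSV has "Filename" and
# "Std Info Creation date" columns, the latter formatted "YYYY-MM-DD HH:MM:SS".
import csv
from datetime import datetime

MFT_CSV = "mft_parsed.csv"              # hypothetical name for the parsed $MFT output
START = datetime(2015, 8, 15, 0, 0, 0)  # early Saturday morning, August 15, 2015
END = datetime(2015, 8, 15, 6, 0, 0)

def in_window(timestamp_text):
    """Return True when the timestamp string falls inside the triage window."""
    try:
        ts = datetime.strptime(timestamp_text[:19], "%Y-%m-%d %H:%M:%S")
    except ValueError:
        return False
    return START <= ts <= END

with open(MFT_CSV, newline="") as handle:
    for row in csv.DictReader(handle):
        if in_window(row.get("Std Info Creation date", "")):
            print(row["Std Info Creation date"], row.get("Filename", ""))
```

The same time window can then be used to pivot into the Prefetch data and the IDS alerts.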
Supporting References
The below items have also been provided to assist you in working through the triage process.
* The jIIr-Practical-Tips.pdf document shows how to replay the packet capture in Security Onion and how to mount the ad1 file with FTK Imager.
* The file hash list from the system in question. This is provided since you do not have access to the system or a forensic image. It can help you confirm the security event and any suspicious files you may find (a short sketch below shows one way to search it).
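If a suspicious file name does turn up in the Prefetch or $MFT data, the hash list is a quick way to confirm the file existed on the system and to grab its hash for further research. Below is a minimal sketch, assuming the hash list is plain text with one hash-and-path entry per line; the actual layout of the provided file may differ, as may the hypothetical script and file names used here.

```python
# Minimal sketch: search the provided hash list for a file name or hash.
# Assumption: each line looks like "<hash>  <full path>"; adjust the matching
# if the provided list uses a different layout.
import sys

HASH_LIST = "hash_list.txt"   # hypothetical file name for the provided hash list

def search(term):
    """Print every hash-list line whose hash or path contains the search term."""
    term = term.lower()
    with open(HASH_LIST, errors="replace") as handle:
        for line in handle:
            if term in line.lower():
                print(line.rstrip())

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: python search_hash_list.py <file name or hash>")
    search(sys.argv[1])
```

Running it with a suspicious file name prints any matching entries, and the returned hash can then be researched further (for example, against a service like VirusTotal).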
The 2015-11-22_Malware-Event Prefetch MFT IDS practical files can be downloaded from here
The 2015-11-22_Malware-Event Prefetch MFT IDS triage write-up is outlined in the post Triage Practical Solution – Malware Event – Prefetch $MFT IDS
For background information about the jIIr practicals please refer to the Adding an Event Triage Drop to the Community Bucket article
Wednesday, November 18, 2015
Posted by
Corey Harrell
By failing to prepare, you are preparing to fail.
~ Benjamin Franklin
Let's also stop saying if company X looked into their alerts then they would have seen there was a security issue. We need to start providing more published information instructing others how to actually triage and build workflows to respond to those alerts. If we don't share and publish practical information about triaging workflows then we shouldn't be pointing out the failures of our peers.
~ Corey Harrell
As soon as you can get past the fact that I quoted myself in my own article, those two quotes really show a security predicament people and companies are facing today. Companies are trying to implement or improve their security monitoring capabilities to gain better visibility into threats in their environment. Defenders are looking to gain or improve the skills and knowledge that enable them to perform security monitoring and incident response activities for companies. On the one hand, according to Ben Franklin, we need to take steps to prepare ourselves, and if we don't then we will fail. This means defenders need to constantly work to improve their knowledge, skills, and workflows so they are better prepared to perform security monitoring and incident response activities. If they aren't preparing, then most likely they will fail when they are called upon to respond to a security incident. On the other hand, according to myself, as a community we don't publish and share a lot of resources that others can use to improve their knowledge, skills, and workflows for security monitoring and incident response. Please don't get me wrong. There is some great published information out there and there are those who regularly share information (such as Harlan and Brad Duncan), but these individuals are in the minority. This brings us to our current security predicament. We need to prepare. We lack readily available information to help us prepare. So numerous companies are failing when it comes to performing security monitoring and incident response activities.
As I was thinking about this predicament, I wondered how I could contribute to the solution instead of just complaining about the problem. I know my contribution will only be a drop in a very large bucket, but it will be a drop nonetheless. This post outlines the few drops that will start appearing on jIIr.
A common activity defenders perform is event triage. Event triage is the assessment of a security event to determine whether there is a security incident, its priority, and the need for escalation. A defender performs this assessment repeatedly as various technologies alert on different activity, which generates events for the defender to review. As I explored this area for my $dayjob, I found that most published resources say you need to triage security events but few provide practical information about how to actually do it. My hope is that I can make at least a small contribution to this area.
Objective
My purpose is to provide resources and information to those seeking to improve their knowledge, skillsets, and workflows for event triage.
Method
I will periodically publish two posts on jIIr. The first post will outline a hypothetical scenario, and a link will be provided to a limited data set. The data set will contain four or fewer artifacts that need to be analyzed to successfully complete the scenario. When performing event triage, most of the time only a subset of data needs to be examined to successfully assess the event. To encourage this line of thinking I'm limiting each dataset to at most four artifacts containing the information required to solve the scenario. The datasets will be pulled from the test systems I build to improve my own skills, knowledge, and workflows; if I'm building and deleting these systems anyway, I might as well use them to help others. I'll try to make the datasets resemble what may be available in most environments, such as operating system artifacts, logs, and IDS alerts. Accompanying the dataset may be a document briefly explaining how to perform a specific task, such as generating IDS alerts by replaying a packet capture in Security Onion. The scenarios will reflect areas I have worked on or am working on, so the types of simulated incidents will be limited. Please keep in mind, just as when performing event triage for a company, some of my scenarios may be false positives.
The second post will be published between one and three weeks after the first. It will outline a triage process one could use to assess the security event described in the scenario. At a minimum, the process will cover: where in a network this information can be found, how to collect it, which free/open source tools to use, how to parse the artifacts in the provided dataset, and how to understand the data you are seeing. The triage process will focus on being thorough but fast; the faster one can triage a single event, the more events they can process. If I come across any other DFIR bloggers who have published how they triaged the security event, this post will contain links to their websites so others can see how they approached the same problem.
Summary
My hope is this small contribution adds to the resources available to other defenders. Resources they can use to improve their workflows, skills, and knowledge. Resources they can use to better prepare themselves instead of preparing to fail.
To anyone I do help better prepare themselves, I only ask for one thing in return: take a few minutes of your time to purposely share something you find useful or helpful with someone in your life. That person can be anyone, from a co-worker to a colleague to a fellow student to a complete stranger asking for help online. Take a few minutes of your life to share something with them. Losing a few minutes of our time has minimal impact on us, but it can make a huge difference in the lives of others and possibly help them become better prepared for what they may face tomorrow.
God bless and Happy Hunting.
Saturday, November 7, 2015
Posted by
Corey Harrell
Things have been quiet on jIIr since I overcommitted myself. The short version is I had zero time for personal interests outside of my commitments, $dayjob, and family. Things are returning to normal, so it's time to start working through my blog idea hopper. In the meantime, this post shares some of my recent random thoughts. Most of these thoughts came in response to reading an article or email, seeing a tweet, hearing a presentation, or conversing with others.
~ We need to stop looking to others (peers, vendors, etc) to solve our problems. We need to stop complaining about a lack of resources, information, training, tools, or anything else. We need to start digging into our issues to solve them ourselves instead of looking for the easy answers.
~ As we work to better defend our organizations, we need to take to heart R. Buckminster Fuller's advice. "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." Our focus needs to be on building and improving the new reality while ignoring the haters who are stuck in the past.
~ We need to stop saying we don't have enough resources. We need to focus on our workflows and seek out ways to improve, automate, and become more efficient. Slight changes to existing workflows can free up resources for other areas.
~ We need to start using this new technology called Google. There is no such thing as a stupid question but there are questions that can be easily answered by doing a simple Google search.
~ Let's get our current generation tools working properly before talking about next generation. If we can't properly configure and use our current tools then getting a so-called "next generation" tool won't solve anything.
~ We need to stop saying we need more training. We need to stop saying for me to do task X I need to be sent to training Y. We just need to realign our priorities to spend time on self-development. Turn off the TV, buy a book, build some virtual machines, conduct some tests, and analyze the results.
~ How about we talk more about detecting and responding to basic security threats? If we can't alert on commodity malware infections or web server compromises and have effective workflows for triaging those alerts, then hunting shouldn't even be in our vocabulary. Forget about hunting and focus on the basics.
~ Let's stop generalizing by saying if company X was monitoring their logs then they would have detected the compromise sooner. That is, until there is more published practical information telling organizations how they can actually set up their security monitoring capability. If there is very little practical information or assistance about building a security monitoring program, then we shouldn't be surprised when organizations struggle with such a complicated process.
~ On the same note, and while we are at it: let's also stop saying if company X looked into their alerts then they would have seen there was a security issue. We need to start providing more published information instructing others how to actually triage and build workflows to respond to those alerts. If we don't share and publish practical information about triaging workflows then we shouldn't be pointing out the failures of our peers.
~ Let's stop focusing our security strategy on the next new product instead of looking at how to better leverage our existing products. New products may address a need but we might be able to address the same need with existing products and use the money we save to address other needs.
~ Let's stop with the presentations and articles that pretend to tell other defenders how to do something while the author says they won't explain exactly how they do it, to keep threats from knowing. This serves no purpose and is counterproductive, since it's actually not telling other defenders how to do anything. What's the point of saying anything in the first place?
~ Please let's stop adding noise to the intelligence sharing echo chamber. Whether it's products or conferences, most say we need more threat intelligence and we need to start sharing more. No other specifics are added; just that we need it and others need to share it. In the end we are just echoing noise without adding any value.
~ We need to stop saying we have a shortage of talented security staff to hire. It is what it is. We need to start talking about how we can develop highly motivated people who want security as their career. We may not be able to hire talented security staff, but we can definitely grow them to meet our needs.
~ We need to expand our focus on detecting and responding to threats from being primarily endpoint focused to also being server focused. A good percentage of articles, intelligence sources, and products talk about endpoint clients with very little mention of servers. How about detecting and responding to compromised web servers? How about database servers? How about CMS servers such as Joomla, WordPress, and Drupal? Our conversations cover only a part of our IT infrastructures, not the entire infrastructure.
~ We need to stop complaining that our management just doesn't get security or take it seriously. The issue is usually one of two things. Maybe we aren't communicating in a way that makes them care. Maybe security really is not a high priority. Either way, we need to either fix it, move on to another organization, or just accept it and stop complaining about it.