Detect Fraud Documents 360 Slides

Friday, June 29, 2012 Posted by Corey Harrell 2 comments
I recently had the opportunity to attend the SANS Digital Forensics and Incident Response Summit in Austin, Texas. The summit was a great conference, from the outstanding presentations to networking with others in the field. I gave a SANS 360 talk about my technique for finding fraudulent Word documents (I previously gave a preview of my talk). I wanted to release my slide deck for anyone who wants to use it as a reference before my paper is completed. You can grab it from my Google Sites page, listed as “SANS 360 Detect Fraudulent Word Documents.pdf”.

Those who were unable to attend the summit can still read all of the presentations. SANS updated their Community Summit Archives to include the Forensics and Incident Response Summit 2012. I highly recommend checking out the work shared by others; a lot of it was pretty amazing.

Computers Don’t Get Sick – They Get Compromised

Sunday, June 10, 2012 Posted by Corey Harrell 3 comments
Security awareness campaigns have done an effective job of educating people about malware. The campaigns have even reached the point where certain words conjure images in people's minds. Say viruses, and a picture of a sick computer pops into their heads. Say worms, and there's an image of a computer with critters and germs crawling all over it. People from all walks of life have been convinced malware is similar to real-life viruses; the viruses that make things sick. This point of view can be seen at all levels: within organizations, among family members, from the average Joe who walks into the local computer repair shop to the person responsible for dealing with an infected computer. It's no wonder that when malware ends up on a computer, people are more likely to think ER than CSI. More likely to do what they can to make the “sickness” go away than to figure out what happened. To me, people's expectations about what to do and the actions most people take resemble how we deal with the common cold. The issue with this is that computers don't get sick – they get compromised.

Security awareness campaigns need to move beyond imagery and associations showing malware as something that affects health. Malware should instead be called what it is: a tool. A tool someone is using in an effort to take something from us, our organizations, our families, and our communities. Taking anything they can, whether it's money, information, or computer resources. The burglar picture is a more accurate illustration of what malware is than any of the images showing a “sick” computer. It's a tool in the hands of a thief. This is the image we need people to picture when they hear the words: malware, viruses, or worms. Those words need to be associated with tools used by criminals and hostile entities. Maybe then their expectations will change about malware on their computer or within their network. Malware is not something that needs to be “made better” with a trip to the ER but something we need to get to the bottom of to better protect ourselves. It's not something that should be made to go away so things can go on as normal but something we need to get answers and intelligence from before moving forward. People need to associate malware with coming home one day and finding burglary tools sitting in their living room. Seeing the tools in their house should make them want to ask: what happened, how did this occur, and what was taken, before they ask when they can carry on as normal.

Those entrusted with dealing with malware on computers and networks need to picture malware the same way. It's not some cold we keep throwing medicine at (antivirus scan after antivirus scan). It's a tool someone placed there to do a specific thing. Making the malware go away is not the answer, the same way that making the tools in the living room disappear doesn't address the issue. Someone figured out a way to put a tool on the computer and/or network, and it's up to us to figure out how. The tool is a symptom of a larger issue, and the intelligence we gain from answering how the malware got onto a system can go a long way toward better protecting the ones relying on our expertise. We need to perform analysis on the systems in order to get to the compromise's root cause.

The approach going forward should not be to continue with the status quo: doing whatever it takes to make the “sickness” go away without any thought given to answering the questions “how”, “when”, and “what”. Those tasked with dealing with a malware-infected computer should no longer accept it either, removing the malware without any investigative actions to determine the “how”, “when”, and “what”. The status quo views malware on a computer as if the computer is somehow “sick”. It's time to change the status quo to reflect what malware on a computer actually is. The computer isn't “sick” but compromised.

Compromise Root Cause Analysis Model

Sunday, June 3, 2012 Posted by Corey Harrell 3 comments
A common question runs through my mind every time I read an article about another targeted attack, a mass SQL injection attack, a new exploit being rolled into exploit packs, or a new malware campaign. The question I ask myself is how the attack would look on a system/network from a forensic perspective. What do the artifacts look like that indicate the attack method used? Unfortunately, most security literature doesn't include this kind of information for people to use when responding to or investigating attacks. This makes it harder to see what evidence is needed to answer the “how” and “when” of the compromise. I started researching attack vector artifacts so I could get a better understanding of the way different attacks appear on a system/network. What started out as a research project to fill the void I saw in security literature grew into an investigative model one can use to perform compromise root cause analysis. A model I've been successfully using for some time to determine how systems became infected. It's been a while since I blogged about attack vector artifacts, so I thought what better way to revisit the subject than sharing my model.

DFIR Concepts


Before discussing the model I think it's important to first touch on two key concepts: Locard's Exchange Principle and Temporal Context. Both are key to understanding why attack vector artifacts exist and how to interpret them.

Locard's Exchange Principle states that when two objects come into contact, something is exchanged from one to the other. Harlan's writing about Locard's Exchange Principle is what first exposed me to how this concept applies to the digital world. The principle is alive and well when an attacker goes after another system or network; the exchange will leave remnants of the attack on the systems involved. There is a transfer between the attacker's systems and the targeted network/systems. An excellent example of Locard's Exchange Principle occurring during an attack can be seen in my write-up Finding the Initial Infection Vector. The target system contained a wealth of information about the attack. There was malware, indications Java executed, Java exploits, and information about the website that served up the exploits. All this information was present because the targeted system came into contact with the web server used in the attack. The same will hold true for any attack; there will be a transfer of some sort between the systems involved, and the transfer will result in attack vector artifacts being present on the targeted network.

Temporal Context is the age/date of an object and its temporal relation to other items in the archaeological record (as defined by my Google search). As it relates to the digital world, temporal context means comparing a file or artifact to other files and artifacts on a timeline of system activity. When there is an exchange between the attacking and targeted systems, the information left on the targeted network will be related, and very closely related within a short timeframe. Temporal Context can also be seen in my post Finding the Initial Infection Vector. The attack activity I first identified was at 10/16/2011 6:50:09 PM, and I traced it back to 10/8/2011 23:34:10. This was an eight-day timeframe in which all the attack vector artifacts were present on the system. It's a short timeframe when you take into consideration that the system had been in use for years. The timeframe grows even smaller when you only look at the attack vector artifacts (exploit, payload, and delivery mechanisms). Temporal Context will hold true for any attack, with the attack vector artifacts all occurring within a short timeframe.
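Temporal Context can be approximated programmatically by pulling the events that cluster around a suspect artifact's timestamp. The sketch below is a minimal, hypothetical example; the timeline entries and the five-minute window are made up for illustration and are not from any real case:

```python
from datetime import datetime, timedelta

# Hypothetical mini-timeline: (timestamp, artifact) pairs such as a
# timeline tool might produce. The entries below are illustrative only.
timeline = [
    (datetime(2011, 10, 8, 23, 34, 10), "browser visits site serving exploit"),
    (datetime(2011, 10, 8, 23, 34, 45), "Java exploit written to cache"),
    (datetime(2011, 10, 8, 23, 35, 2), "payload dropped to temp folder"),
    (datetime(2011, 10, 16, 18, 50, 9), "malware activity first noticed"),
]

def events_near(timeline, anchor, window=timedelta(minutes=5)):
    """Return events within `window` of the anchor time, oldest first."""
    return sorted((ts, a) for ts, a in timeline if abs(ts - anchor) <= window)

# Anchor on the earliest attack artifact; the three clustered events
# surface while the unrelated later activity falls outside the window.
cluster = events_near(timeline, datetime(2011, 10, 8, 23, 34, 10))
```

Narrowing a years-long timeline to a window like this is what makes the closely related attack artifacts stand out.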

What Are Attack Vector Artifacts?


In my previous post (Attack Vector Artifacts) I explained what an attack vector is and the three components it consists of. I think it’s important to briefly revisit the definition and components. An attack vector is a path or means somebody can use to gain access to a network/system in order to deliver a payload or malicious outcome. Based on that definition, an attack vector can be broken down into the following components: delivery mechanisms, exploit, and payload. The diagram below shows their relationship.

Attack Vector Model

At the core is the delivery mechanism, which sends the exploit to the network or system. The mechanisms can include email, removable media, network services, physical access, or the Internet. Each one of those delivery mechanisms will leave specific artifacts on a network and/or system indicating what initiated the attack.

The purpose of the inner delivery mechanism is to send the exploit. An exploit is something that takes advantage of a vulnerability. Vulnerabilities can be present in a range of items: from operating systems to applications to databases to network services. When a vulnerability is exploited, it leaves specific artifacts on a network/system, and those artifacts can identify the weakness targeted. To see what I mean about exploit artifacts, you can refer to the ones I documented for Java (CVE-2010-0840 Trusted Methods and CVE-2010-0094 RMIConnectionImpl), Adobe Reader (CVE-2010-2883 PDF Cooltype), Windows (Autoplay and Autorun and CVE-2010-1885 Help Center), and social engineering (Java Signed Applet Exploit Artifacts).

A successful exploit may result in a payload being sent to the network or system. This is what the outer delivery mechanism is for. If the payload has to be sent, then there may be artifacts showing this activity. One example can be seen in the system I examined in the post Finding the Initial Infection Vector. The Java exploit used the Internet to download the payload, and there could have been indications of this in logs on the network. I said “if the payload has to be sent” because there may be instances where the payload is part of the exploit. In those cases there won't be any outer delivery mechanism artifacts.

The last component in the attack vector is the desired end result of any attack: to deliver a payload or malicious outcome to the network/system. The payload can include a number of actions ranging from unauthorized access to denial of service to remote code execution to escalation of privileges. The payload artifacts left behind will depend on what action was taken.
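The three components can be captured in a simple structure for note-taking during an examination. This is a hypothetical sketch of my own; the class and the sample artifact strings are illustrative, not part of the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class AttackVector:
    """One bucket per attack vector component; each holds artifact notes."""
    delivery_mechanisms: list = field(default_factory=list)
    exploit: list = field(default_factory=list)
    payload: list = field(default_factory=list)

# Hypothetical findings filed into the matching component.
vector = AttackVector()
vector.delivery_mechanisms.append("Internet: drive-by from compromised site")
vector.exploit.append("Java exploit artifacts in browser cache")
vector.payload.append("downloader written to user temp folder")
```

Keeping the artifacts separated this way makes it obvious when a component has no supporting evidence yet.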

Compromise Root Cause Analysis Model


To identify the root cause of an attack, you need to understand the exploit, payload, and delivery mechanism artifacts that are uncovered during an examination. The Attack Vector Model does an outstanding job of making sense of the artifacts by categorizing them. The model makes it easier to understand the different aspects of an attack and the artifacts created by it. I've been using the model for the past few years to get a better understanding of different types of attacks and their artifacts. Despite the model's benefits for training and research purposes, I had to extend it for investigative purposes by adding two additional layers (source and indicators). The end result is the Compromise Root Cause Analysis Model shown below.


Compromise Root Cause Analysis Model

At the core of the model is the source of the attack: where the attack originated from. Attacks can originate from outside or within a network; it all depends on who the attacker is. An external source is anything residing outside the control of an organization or person. A few examples are malicious websites, malicious advertisements on websites, or email. On the opposite end are internal sources, which are anything within the control of an organization or person. A few examples include infected removable media and malicious insiders. The artifacts left on a network and its systems can be used to tell where the attack came from.

The second layer added was indicators. This layer is not only where the information and artifacts about how the attack was detected go; it also encompasses all of the artifacts showing post compromise activity. A few examples of post compromise activity include: downloading files, malware executing, network traversal, or data exfiltration. The layer is pretty broad because grouping the post compromise artifacts together makes it easier to spot the attack vector artifacts (exploit, payload, and delivery mechanisms).
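Putting the two added layers together with the attack vector components, the full model amounts to a layered set of buckets. The layer names below follow the model; the sample entries are hypothetical:

```python
# Layers ordered from the outermost (indicators) to the core (source).
root_cause = {
    "indicators": [],        # detection info + post compromise activity
    "payload": [],           # initial payload artifacts only
    "delivery_outer": [],    # what sent the payload
    "exploit": [],           # vulnerability taken advantage of
    "delivery_inner": [],    # what sent the exploit
    "source": [],            # where the attack originated
}

# Examination findings get filed into the matching layer as they surface.
root_cause["indicators"].append("IDS alert: outbound beaconing")
root_cause["source"].append("malicious ad on legitimate site")
```

Working the dictionary top to bottom mirrors the investigative flow: start at indicators and peel inward until the source is reached.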

Compromise Root Cause Analysis Model In Action


The Compromise Root Cause Analysis Model is a way to organize information and artifacts to make it easier to answer questions about an attack; more specifically, to answer: how and when did the compromise occur? Information and artifacts about the compromise are discovered by completing examination steps against any relevant data sources. The model is only used to organize what is identified, and modeling discovered information is an ongoing process during an examination. The approach is to start at the indicators layer and then proceed inward until the source is identified.

The first information to go in the indicators layer is whatever prompted the suspicion that an attack occurred. Was there an IDS alert, malware indications, or intruder activity? The next step is to start looking for post compromise activity. There will be more artifacts associated with the post compromise activity than there will be for the attack vector. While looking at the post compromise artifacts, keep an eye out for any attack vector artifacts. The key is to look for any artifacts resembling a payload or exploit. Identifying the payload can be tricky because it may resemble an indicator. Take a malware infection as an example. At times an infection only drops one piece of malware on the system, while at other times a downloader is dropped which then downloads additional malware. In the first scenario the single malware is the payload, and any artifacts of the malware executing are also included in the payload layer. In the second scenario the additional malware is an indicator that an attack occurred; the downloader is the payload of the attack. See what I mean; tricky right? Usually I put any identified artifacts in the indicators layer until I see payload and exploit information that makes me move them into the payload layer.
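That back-and-forth (parking artifacts in the indicators layer, then promoting them once payload evidence appears) is easy to mirror in code. A minimal sketch; the function name and sample entries are hypothetical:

```python
def reclassify(model, artifact, from_layer, to_layer):
    """Move an artifact between layers as understanding of it improves."""
    model[from_layer].remove(artifact)
    model[to_layer].append(artifact)

# Artifact starts out parked in the indicators layer.
model = {"indicators": ["prefetch entry for dropper.exe"], "payload": []}

# Later evidence shows the dropper was the payload itself, not mere
# post compromise fallout, so the artifact is promoted.
reclassify(model, "prefetch entry for dropper.exe", "indicators", "payload")
```

The point is that the layers are working notes, not a one-shot classification; artifacts move as the examination sharpens the picture.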

Once the payload is identified, it's time to start looking for its artifacts. The focus needs to be on the initial artifacts created by the payload. Any additional artifacts created by the payload should be lumped into the indicators layer. To illustrate, let's continue with the malware scenario. The single malware and any indications of it first executing on the system would be organized in the payload layer. However, if the malware continued to run on the system, creating additional artifacts such as modifying numerous files, then that activity goes into the indicators layer.

The outer delivery mechanism is pretty easy to spot once the payload has been identified. Just look at the system activity around the time the payload appeared. This is the reason the payload layer only contains the initial payload artifacts: to make it easier to see the layers beneath it.

Similar to the delivery mechanism, spotting the exploit artifacts is also done by looking at the system activity around the payload and whatever delivered the payload to the system. The exploit artifacts may include: the exploit itself, indications an exploit was used, and traces of a vulnerable application running on the system.

The inner delivery mechanism is a little tougher to see once the exploit has been identified. You may have a general idea about where the exploit came from, but actually finding evidence to confirm your theory requires some legwork. The process to find the artifacts is the same: look at the system activity around the exploit and inspect anything of interest.

The last thing to do is identify the source. Finding the source is done a little differently than finding the artifacts in the other layers. The previous layers require looking for artifacts, but the source layer involves spotting when the attack artifacts stop. Usually there is a flurry of activity showing the delivery mechanisms, exploit, and payload, and at some point the activity just stops. In my experience, this is where the source of the attack is located. As soon as the activity stops, everything before the last identified artifact should be inspected closely to see if the source can be found. In some instances the source can be spotted, while in others it's not as clear. Even if the source cannot be identified, at least some avenues of attack can be ruled out based on the lack of supporting evidence. Lastly, if the source points to an internal system, then the whole root cause analysis process starts all over again.
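Spotting where the activity "just stops" can also be approximated on a timeline: walk backwards from the attack burst until the gap between consecutive events exceeds some threshold. A hedged sketch, with a made-up one-hour gap threshold and sample timestamps:

```python
from datetime import datetime, timedelta

def earliest_in_burst(timestamps, gap=timedelta(hours=1)):
    """Walk the timeline backwards from the newest event and return the
    earliest event still in the contiguous burst; activity just before
    that point is where the source should be hunted."""
    ts = sorted(timestamps, reverse=True)
    earliest = ts[0]
    for t in ts[1:]:
        if earliest - t > gap:
            break  # the burst stops here; older events predate the attack
        earliest = t
    return earliest

# Three clustered attack events plus unrelated activity hours earlier.
events = [
    datetime(2011, 10, 8, 9, 0, 0),    # normal use, well before the attack
    datetime(2011, 10, 8, 23, 34, 10),
    datetime(2011, 10, 8, 23, 34, 45),
    datetime(2011, 10, 8, 23, 35, 2),
]
stop = earliest_in_burst(events)
```

The one-hour threshold is arbitrary; in practice the analyst judges where the flurry ends, and the events immediately before that point are the candidates for the source.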

Closing Thoughts


I've spent my personal time over the past few years helping combat malware. One thing I always do is determine the attack vector so I can at least provide sound security recommendations. I used the compromise root cause analysis process to answer the questions “how” and “when”, which enabled me to figure out the attack vector. Besides identifying the root of a compromise, there are a few other benefits to the process. It's scalable, since it can be used against a single system or a network with numerous systems. Artifacts located in different data sources across a network are organized to show the attack vector more clearly. Another benefit is that the process uses layers. Certain layers may not have any artifacts or could be missing artifacts, but the ones remaining can still be used to identify how the compromise occurred. This is one of the reasons I've been able to figure out how a system was infected even after people attempted to remove the malware, destroying in the process the most important artifacts showing how the malware got there in the first place.