Ripping Volume Shadow Copies – Introduction

Sunday, January 29, 2012 Posted by Corey Harrell
Windows XP is the operating system I mostly encounter during my digital forensic work. Over the past year I've been seeing more and more systems running Windows 7. 2011 brought with it my first few cases where the corporate systems I examined (at my day job) were all running Windows 7. The change was even more drastic for the home users I assisted with cleaning malware infections; towards the end of the year all of those cases involved Windows 7 systems. I foresee Windows XP slowly becoming a relic as the corporate environments I face start upgrading the clients on their networks to Windows 7. One artifact that will be encountered more frequently in Windows 7 is Volume Shadow Copies (VSCs). VSCs can be a potential gold mine, but for them to be useful one must know how to access and parse the data inside them. The Ripping Volume Shadow Copies series discusses another approach to examining VSCs and the data they contain.

What Are Volume Shadow Copies

VSCs are not new to Windows 7 and have actually been around since Windows Server 2003. Others in the DFIR community have published a wealth of information on what VSCs are, their forensic significance, and approaches to examine them. I'm only providing a quick explanation since Troy Larson's presentation slides provide an excellent overview of what VSCs are, as does Lee Whitfield's Into the Shadows blog post. Basically, the Volume Shadow Copy Service (VSS) can back up data on a Windows system. VSS monitors a volume for any changes to the data stored on it and creates backups containing only those changes. These backups are referred to as shadow copies. According to Microsoft, the following activities will create shadow copies on Windows 7 and Vista systems (a command-line example of the manual trigger follows the list):

        -  Manually (Vista & 7)
        -  Every 24 Hours (Vista)
        -  Every 7 Days (7)
        -  Before a Windows Update (Vista & 7)
        -  Unsigned Driver Installation (Vista & 7)
        -  A program that calls the Snapshot API (Vista & 7)
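
As a side note on the first item in the list, a shadow copy can be created manually from an elevated command prompt. The one-liner below is offered as a reference rather than something from Microsoft's list; on client versions of Windows the WMI route is used since vssadmin create shadow is limited to the server editions.

    wmic shadowcopy call create Volume=C:\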

Importance of VSCs

The data inside VSCs may have a significant impact on an examination for a couple of reasons. The obvious benefit is the ability to recover files that may have been deleted or encrypted on the system. This rang true for me on the few cases involving corporate systems; if it wasn't for VSCs then I wouldn't have been able to recover the data of interest. The second and possibly even more significant benefit is the ability to see how systems and/or files evolved over time. I briefly touched on this in the post Ripping Volume Shadow Copies Sneak Peek. I mentioned how parsing the configuration information helped me know what file types to search for based on the installed software. Another example was how the user account information helped me verify a user account existed on the system and narrow down the timeframe when it was deleted. A system's configuration information is just the beginning; documents, user activity, and programs launched are all great candidates for seeing how they changed over time.

To illustrate I'll use a document as an example. When a document is located on a system without VSCs, for the most part the only data that can be viewed is what is currently in the document. Previous data inside the document might be recoverable from copies of the document or temporary files, but that won't completely show how the document changed over time. To see how the document evolved would require trying to recover it at different points in time from system backups (if they were available). Now take that same document located on a system with VSCs. The document can be recovered from every VSC and each copy can be examined. The data will only be what was inside the document when each VSC was created, but it could cover a time period of weeks to months. Examining each copy of the document from the VSCs will shed light on how the document evolved. Another possibility is the potential to recover data that was in the document at some point in the past but isn't in the document that was located on the system. If system backups were available then they could provide additional information since more copies of the document could be obtained at other points in time.

Accessing VSCs

The Ripping Volume Shadow Copies approach works against mounted volumes. This means a forensic image or hard drive has to be mounted to a Windows system (Vista or 7) in order for the VSCs in the target volume to be ripped. There are different ways to see a hard drive or image's VSCs and I highlight some options below:

        -  Mount the hard drive by installing it inside a workstation (option will alter data on the hard drive)

        -  Mount the hard drive by using an external hard drive enclosure (option will alter data on the hard drive)

        -  Mount the hard drive by using a hardware writeblocker

        -  Mount the forensic image using Harlan Carvey’s method documented here, here, and the slide deck referenced here

        -  Mount the forensic image using Guidance Software’s Encase with the PDE module (option is well documented in the QCCIS white paper Reliably recovering evidential data from Volume Shadow Copies)

Regardless of the option used to mount the hard drive or image, the Windows vssadmin command or the Shadow Explorer program can show what VSCs, if any, are available for a given mounted volume. The pictures below show the Shadow Explorer program and the vssadmin command displaying some VSCs for the mounted volume with drive letter C.

Shadow Explorer Displaying C Volume VSCs

VSSAdmin Displaying C Volume VSCs
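
For reference, the vssadmin query shown in the second screenshot is a single command run from an elevated prompt; substitute whatever drive letter the mounted volume was assigned for C.

    vssadmin list shadows /for=C: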

Picking which VSCs to examine depends on the examination goals and what data is needed to accomplish those goals. However, time will be a major consideration. Does the examination need to review an event, document, or user activity for specific times or for all available times on a computer? Answering that question will help determine whether certain VSCs covering specific times are picked or every available VSC should be examined. Once the VSCs are selected they can be examined to extract the information of interest.

Another Approach to Examine VSCs

Before discussing another approach to examining VSCs it's appropriate to reflect on the approaches practitioners are currently using. The first approach is to forensically image each VSC and then examine the data inside each image. Troy's slide deck referenced earlier has a slide showing how to image a VSC, and Richard Drinkwater's Volume Shadow Copy Forensics post from a few years ago shows imaging VSCs as well. The second popular approach skips imaging: data is copied out of each VSC and then examined. The QCCIS white paper referenced earlier outlines this approach using the robocopy program, as does Richard Drinkwater in his posts here and here. Both approaches are feasible for examining VSCs, but another approach is to examine the data directly inside VSCs, bypassing the need for imaging and copying. The Ripping VSCs approach examines data directly inside VSCs, and the two methods for implementing it are the Practitioner Method and the Developer Method.

Ripping VSCs: Practitioner Method

The Practitioner Method uses one's existing tools to parse data inside VSCs. This means someone doesn't have to learn a new tool or learn a programming language to write their own tools. All that's required is for the tool to be command line and for the practitioner to be willing to execute the tool multiple times against the same data. The picture below shows how the Practitioner Method works.

Practitioner Method Process

Troy Larson demonstrated how a symbolic link can be used to provide access to VSCs. The mklink command can create a symbolic link to a VSC, which then provides access to the data stored in the VSC. The Practitioner Method uses the access provided by the symbolic link to execute one's tools directly against the data. The picture above illustrates a tool executing against the data inside Volume Shadow Copy 19 by traversing through a symbolic link. One could quickly determine the differences between VSCs, parse registry keys in VSCs, examine the same document at different points in time, or track a user's activity to see what files were accessed. Examining VSCs can become tedious when one has to run the same command against multiple symbolic links to VSCs; this is especially true when dealing with 10, 20, or 30 VSCs. A more efficient and faster way is to use batch scripting to automate the process. Only a basic understanding of batch scripting (knowing how a For loop works) is needed to create powerful tools for examining VSCs. In future posts I'll cover how simple batch scripts can be leveraged to rip data from any VSCs within seconds.
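
To make the idea concrete, below is a minimal sketch of the Practitioner Method (an illustration only, not one of the scripts released later in the series). It assumes an elevated prompt on the Vista/7 analysis system and that the shadow copy numbers to process are passed as arguments, e.g. practitioner.bat 16 19.

    @echo off
    REM For each shadow copy number passed on the command line: create a
    REM symbolic link to the VSC, run a command-line tool through the link,
    REM then remove the link (rmdir removes the link, not the shadow copy).
    for %%i in (%*) do (
        mklink /d C:\vsc%%i \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy%%i\
        REM Any command-line tool can go here; a directory listing of the
        REM user profiles is used only as a stand-in.
        dir /s /a C:\vsc%%i\Users > dir-vsc%%i.txt
        rmdir C:\vsc%%i
    )

Swapping the dir command for a registry parser, a timeline tool, or anything else that reads files through a path is all it takes to repurpose the loop.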

Ripping VSCs: Developer Method

I've been using the Practitioner Method for some time now against VSCs on live systems and forensic images. The method has enabled me to see data in different ways, which was vital for some of my work involving Windows 7 systems. Recently I figured out a more efficient way to examine data inside VSCs. The Developer Method examines data inside VSCs directly, which bypasses the need to go through a symbolic link. The picture below shows how the Developer Method works.

Developer Method Process

The Developer Method programmatically accesses the data directly inside of VSCs. The majority of existing tools cannot do this natively, so one must modify existing tools or develop new ones. I used the Perl programming language to demonstrate that the Developer Method for ripping VSCs is possible. I created simple Perl scripts to read files inside a VSC and I modified Harlan's lslnk.pl to parse Windows shortcut files inside a VSC. Unlike the Practitioner Method, at the time of this post I have not extensively tested the Developer Method. I'm discussing the Developer Method not only for completeness when explaining the Ripping VSCs approach; my hope is that by releasing my research early it can help spur the development of DFIR tools for examining VSCs.

What’s Up Next?

Volume Shadow Copies have been a gold mine for me on the couple of corporate cases where they were available. The VSCs enabled me to successfully process those cases, and that experience is what pushed me towards a different approach to examining VSCs: parse the data while it is still stored inside the VSCs. I'm not the only DFIR practitioner looking at examining VSCs in this manner. Stacey Edwards shared in her post Volume Shadow Copies and LogParser how she runs the program logparser against VSCs by traversing through a symbolic link. Rob Lee shared his work on Shadow Timelines, where he creates timelines and lists deleted files in VSCs by executing the Sleuthkit directly against VSCs. Accessing VSCs' data directly can reduce examination time while enabling a DFIR practitioner to see data temporally. Ripping Volume Shadow Copies is a six-part series and the remaining five posts will explain the Practitioner and Developer methods in-depth.

        Part 1: Ripping Volume Shadow Copies - Introduction
        Part 2: Ripping VSCs - Practitioner Method
        Part 3: Ripping VSCs - Practitioner Examples
        Part 4: Ripping VSCs - Developer Method
        Part 5: Ripping VSCs - Developer Example
        Part 6: Examining VSCs with GUI Tools

Dual Purpose Volatile Data Collection Script

Monday, January 2, 2012 Posted by Corey Harrell
When responding to a potential security incident a capability is needed to quickly triage the system to see what's going on. Is a rogue process running on the system? Who's currently logged onto the system? What other systems are trying to connect over the network? How do I document the actions I took on the system? These are valid questions during incident response whether the response is for an actual event or a simulation. One area to examine to get answers is the system's volatile data. Automating the collection of volatile data can save valuable time, which in turn helps analysts examine the data faster in order to get answers. This post briefly describes (and releases) the Tr3Secure volatile data collection script I wrote.

Tr3Secure needed a toolset for responding to systems during attack simulations and one of the tools had to quickly collect volatile data on a system (I previously discussed what Tr3Secure is here). However, the volatile data collection tool had to serve dual functions. First and foremost it had to properly preserve and acquire data from live systems. The toolset is initially being used in a training environment, but the tools and processes we are learning need to translate over to actual security incidents. What good is mastering a collection tool that can't be used during live incident response activities? The second required function was that the tool had to help with training people on examining volatile data. Tr3Secure members come from different information security backgrounds so not every member will be knowledgeable about volatile data. Collecting data is one thing, but people will eventually need to know how to understand what the data means. The DFIR community has a few volatile data collection scripts but none of the scripts I found provided the dual functionality for practical and training usage. So I went ahead and wrote a script to meet our needs.

Practical Usage

The following considerations were taken into account to ensure the script scales to meet the needs of volatile data collection during actual incident response activities.

        Flexibility

Different responses will have different requirements on where to store the volatile data that's collected. At times the data may be stored on the same drive where the DFIR toolset is located, while at other times the data may be stored on a different drive. I took this into consideration, and the volatile data collection script allows the output data to be stored on a drive of choice. If someone prefers to run their tools from a CD-ROM while someone else works with a large USB removable drive, the script can be used by both of them.

        Organize Output
 
Troy Larson posted a few lines of code from his collection script to the Win4n6 group some time ago. One thing I noticed about his script was that he organized the output data based on a case number. I incorporated his idea into my script; a case number needs to be entered when the script is run on a system. A case folder enables data collected from numerous systems to be stored in the same folder (the folder is named Data-Case#). In addition to organizing data into a case folder, the actual volatile data is stored in a sub-folder named after the system the data came from (the system's computer name is used to name the folder). To prevent overwriting data by running the script multiple times on the same system, I incorporated a timestamp into the folder name (two-digit month, day, year, hour, and minute). Appending a timestamp to the folder name means the script can execute against the same system numerous times and all of the volatile data is stored in separate folders. Lastly, the data collected from the system is stored in separate sub-folders for easier access. The screenshot below shows the data collected for Case Number 100 from the system OWNING-U on 01/01/2012 at 15:46.
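
A rough sketch of how that folder layout can be built in batch is below. The variable names, prompts, and date parsing are my own illustration rather than the actual Tr3Secure code, and the %date% substring offsets assume a US-style "Ddd MM/DD/YYYY" locale.

    @echo off
    REM Build <drive>:\Data-<Case#>\<ComputerName>-<MMDDYYHHMM> and create it.
    set /p CaseNumber=Enter the case number: 
    set /p DataDrive=Enter the drive letter for the collected data (e.g. F): 

    REM Assemble a MMDDYYHHMM stamp from %date% and %time%; the last line pads
    REM a leading-space hour (e.g. " 9") with a zero.
    set TimeStamp=%date:~4,2%%date:~7,2%%date:~12,2%%time:~0,2%%time:~3,2%
    set TimeStamp=%TimeStamp: =0%

    set CaseFolder=%DataDrive%:\Data-%CaseNumber%
    set OutputFolder=%CaseFolder%\%COMPUTERNAME%-%TimeStamp%
    mkdir "%OutputFolder%"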


        Documentation

Automating data collection means that documentation can be automated as well. The script documents everything in a collection log. Each case has one collection log, so regardless of whether data is collected from one system or ten, an analyst only has to review one log.


The following information is documented both on screen for the analyst to see and in a collection log file: case number, examiner name, target system, user account used to collect data, drives for tools and data storage, time skew, and program execution. The script prompts the analyst for the case number, their name, and the drive to store data on. This information is automatically stored in the collection log so the analyst doesn't have to worry about maintaining documentation elsewhere. In addition, the script prompts the analyst for the current date and time, which is used to record the time difference between the system and the actual time. Every program executed by the script is recorded in the collection log along with a timestamp of when the program executed. This will make it easier to account for artifacts left on a system if the system is examined after the script is executed. The screenshot below shows part of the collection log for the data collected from the system OWNING-U.
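
As a small, hypothetical illustration of that logging (the log name Collection.log, the folder variables from the earlier sketch, and the tasklist example are my assumptions, not the script's actual contents), each command can be wrapped like this:

    REM Record what is about to run, with a timestamp, then run it.
    echo %date% %time% - %COMPUTERNAME% - executing: tasklist /v >> "%CaseFolder%\Collection.log"
    tasklist /v > "%OutputFolder%\1_tasklist.txt"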


        Preservation

RFC 3227's Order of Volatility outlines that evidence should be collected starting with the most volatile and proceeding to the least volatile. The script takes the order of volatility into account during data collection. When all data is selected for collection, memory is imaged first, then volatile data is collected, followed by non-volatile data. The volatile data collected is: process information, network information, logged on users, open files, clipboard, and then system information. The non-volatile data collected is installed software, security settings, configured users/groups, the system's devices, auto-run locations, and applied group policies. Another item the script incorporated from Troy Larson's comment in the Win4n6 group is preserving the prefetch files before volatile data is collected. I never thought about this before I read his comment but it makes sense. Volatile data gets collected by executing numerous programs on a system, and these actions can overwrite the existing prefetch files with new information or files. Preserving the prefetch files upfront ensures analysts will have access to most of the prefetch files that were on the system before the collection occurred (four prefetch files may be overwritten before the script preserves them). The script uses robocopy to copy the prefetch files so the file system metadata (timestamps, NTFS permissions, and file ownership) is collected along with the files themselves. The screenshot below shows the preserved files for system OWNING-U.
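
A hedged example of the prefetch preservation step is below (the destination and log names are illustrative). Robocopy's /B and /COPYALL switches pull the files in backup mode along with their data, attributes, timestamps, NTFS permissions, ownership, and auditing information, which is part of why the script needs administrative rights.

    robocopy C:\Windows\Prefetch "%OutputFolder%\Prefetch" *.pf /B /COPYALL /R:0 /W:0 /LOG+:"%OutputFolder%\robocopy_prefetch.log"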


        Tools Executed

The readme file accompanying the script outlines the various programs used to collect data. The programs include built-in Windows commands and third party utilities. The screenshot below shows the tools folder where the third party utilities are stored.




I'm not going to discuss every program but I at least wanted to highlight a few. The Windows diskpart command allows disks, partitions, and volumes to be managed through the command line. The script leverages diskpart to make it easy for an analyst to see what drives and volumes are attached to a system. Hopefully, the analyst won't need to open Windows Explorer to see what the removable media drive mappings are since the script displays the information automatically as shown below. Note: to make diskpart work, a text file named diskpart_commands.txt needs to be created in the tools folder and the file needs to contain these two commands on separate lines: list disk and list volume (see the example below).
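
For illustration, the helper file and the diskpart call might look like the following (the output file name is an assumption; diskpart's /s switch runs the commands in the text file non-interactively):

    REM Contents of Tools\diskpart_commands.txt (created once, by hand):
    REM   list disk
    REM   list volume
    diskpart /s Tools\diskpart_commands.txt > "%OutputFolder%\diskpart_drives.txt"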


Mandiant's Memoryze is used to obtain a forensic image of the system's memory. Memoryze supports a wide range of Windows operating systems, which makes the script more versatile for dumping RAM. The key reason the script uses Memoryze is that it's the only free memory imaging program I found that allows an image to be stored in a folder of your choice. Most programs will place the memory image in the same folder where the command line is opened. This wouldn't work because the image would be dropped in the folder where the script is located instead of on the drive the analyst wants. Memoryze uses an XML configuration file to image RAM, so I borrowed a few lines of code from the MemoryDD.bat batch file to create the XML file for the script. Note: the script only needs memoryze.exe; to obtain it, install Memoryze on a computer and then copy memoryze.exe to the Tools folder.

PXServer’s Winaudit program obtains the configuration information from a system and I first became acquainted with the program during my time performing vulnerability assessments. The script uses Winaudit to collect some non-volatile data including the installed software, configured users/groups, and computer devices. Winaudit is capable of collecting a lot more information so it wouldn’t be that hard to incorporate the additional information by modifying the script.

Training Usage

These were the two items put into the script to assist with training members on performing incident response system triage.

        Ordered Output Reports

The script collects a wealth of information about a system and this may be overwhelming to analysts new to examining volatile data. For example, the script produces six different reports about the processes running on a system. A common question when faced with so many reports is how they should be reviewed. The script's output reports are numbered in the suggested order for them to be reviewed. This provides a little assistance to analysts until they develop their own process for examining the data. The screenshots below show the process reports in the output folder and those reports opened in Notepad++.




        Understanding Tool Functionality and Volatile Data

The script needs to help people better understand what the collected data means about the system it came from. Two great references for collecting, examining, and understanding volatile data are Windows Forensic Analysis, 2nd edition and Malware Forensics: Investigating and Analyzing Malicious Code. I used both books when researching and selecting the script's tools to collect volatile data. What better way to help someone understand the tools or the data than by directing them to references that explain them? I placed comments in the script containing the page numbers in both books where a specific tool is discussed and the data explained. The screenshot below shows the portion of the script that collects process information with the references highlighted in red.



Releasing the Tr3Secure Volatile Data Collection Script

There are very few things I do forensically that I think are cool; this script happens to be one of them. There are not many tools or scripts that work as intended while at the same time providing training. People who have more knowledge about volatile data can hit the ground running with the script investigating systems. The script automates imaging memory, collecting volatile/non-volatile data, and documenting every action taken on the system. People with less knowledge can leverage the tool to learn how to investigate systems. The script collects data, and then the ordered output and the references in the comments can be used to interpret the data. Talk about killing two birds with one stone.

The following is the location of the zip file containing the script and the readme file (download link is here). Please be advised, a few programs the script uses require administrative rights to run properly.

Enjoy and Happy Hunting ...

***** Update *****

The Tr3Secure script has been updated and you can read about the changes in the post Tr3Secure Data Collection Script Reloaded.

***** Update *****

Ripping Volume Shadow Copies Sneak Peek

Monday, December 19, 2011 Posted by Corey Harrell
I was hesitant to do a sneak peek about a different approach to examining Volume Shadow Copies (VSCs). I personally don't like sneak peeks and would rather wait to see the finished product. I think it's along the lines of starting a movie, stopping it after 15 minutes, and being forced to finish watching months later. If I don't like sneak peeks then why am I putting others through it? I previously mentioned how I wanted to spend my furlough days putting together some posts about another approach to examining VSCs. Well, last week was my furlough week and my family wrote a new version of the carol The Twelve Days of Christmas. Four out of town trips, three sick kids, two family emergencies, and one blogger quarantined to his room. Needless to say I had to spend my time focused on my family. I won't have time to write the VSC blog posts until next month, so I at least wanted to show one example of how I use this method.

There are times when I get a system that has been altered, and one of the changes is the removal of financial software from the system. This is pretty important because if I'm trying to locate financial data then I need to know what software is on the system so I know what kind of files to look for. There is a chance some file types might initially be missed if I'm not aware a certain program was installed at some point in the past. Different registry keys can help determine what programs were installed or executed, but you can get a more complete picture about a system by looking at those same registry keys at different points in time. Performing registry analysis in this manner has allowed me to quickly identify uninstalled financial applications, which reduced the time needed to find the data. Anyone who has used Harlan's RipXP understands the value in seeing registry keys at different points in time. I used the same concept with one exception: numerous registry keys can be queried at the same time when dealing with VSCs.

The system I used for this demonstration was a live Windows 7 Ultimate 32 bit system. In the past I have also used the technique against Windows 7 and Vista forensic images.

Obtaining General Operating System Information

I discussed previously how one initial examination step is to get a better understanding of the system I'm facing. I use a batch script with Regripper to obtain a wealth of information about how the system was configured when it was last powered on. The configuration information is from only one point in time, but if the system has VSCs then the same information can be obtained from different points in time. Seeing the same configuration information enables you to see how the system changed slightly over time, including what software was installed or uninstalled. To do this I made some modifications to the general operating system batch script which let me run it against any VSCs I have access to.

I'm not going to discuss accessing VSCs in this post. For information on how to access VSCs I'd check out Harlan's Even More Stuff post since he provides a link to the slide deck he gave at the online DFIR meet-up on the topic. My Windows 7 system had 19 VSCs and for the demonstration I only used the following:

        - ShadowCopy19 12/13/2011 6:13:35 PM

        - ShadowCopy16 12/01/2011 8:08:50 AM

        - ShadowCopy3 11/28/2011 11:19:40 AM

        - ShadowCopy1 8/26/2011 12:15:34 PM

The screenshot below shows the main menu of the vsc-parser (most selections have sub-menus). To review the system and identify software of interest, I'm interested in selection 2: “Obtain General Operating System Information from Volume Shadow Copies”.


The selection will immediately execute my Regripper batch file against every VSC I have access to. The picture below shows the script running against my four VSCs. I highlighted the samparse and uninstall plug-ins that executed.
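
To give a feel for what the batch file does under the hood, here is a hedged sketch rather than the actual vsc-parser code. It assumes rip.exe and its plugins are in the path and that a symbolic link named C:\vsc<number> already points at each of the four shadow copies listed above; writing one report per shadow copy keeps the VSC number in the file name.

    REM In a batch file: run the uninstall plugin against each VSC's SOFTWARE
    REM hive and the samparse plugin against each SAM hive.
    for %%i in (1 3 16 19) do (
        rip.exe -r C:\vsc%%i\Windows\System32\config\SOFTWARE -p uninstall > uninstall-vsc%%i.txt
        rip.exe -r C:\vsc%%i\Windows\System32\config\SAM -p samparse > samparse-vsc%%i.txt
    )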


The output from the script is nicely organized into different folders based on what the information is.


I’m interested in the software on the system which means I need the reports in the software-information folder. A report was created for each VSC I had access to (notice how the file name contains the VSC number it came from).


Now at this point I can review the reports and notice the slight differences between the VSCs. I tend to look at the most recent VSC and then work my way to the oldest. It makes it easier to see how the system slightly changed over time from the forensic image I examined first.


On a case I used this technique and it helped me identify a financial application that had been removed from the system. In the end it saved a lot of time because this was one of my initial steps and I knew right off the bat I was looking for specific file types. Some may be wondering why I decided to highlight the samparse plug-in as well. At another time the same technique helped me verify a user account existed on the system and narrow down the timeframe when it was removed from the system.

I showed an example of running Regripper against registry hives stored in VSCs on a live Windows 7 system. However, the approach is not limited to registry hives or Regripper since you can parse pretty much any data stored in a VSC.

A Time of Reflection

Tuesday, December 13, 2011 Posted by Corey Harrell
Certain events in life cause you to reflect on humility and put back into perspective the meaningful things in life. You remember that in time almost everything is replaceable. Another forensicator will fill your shoes at work and your organization will continue to go on. Another researcher will continue your research and the little that you did accomplish will eventually just be a footnote. Another person will step up to provide assistance to others in forensic forums and listservs. Your possessions and equipment will become someone else’s to enjoy and use. When looking at the big picture, the work we do and value will eventually fade away and life will go on as if we were never there. One of the only things remaining will be the impact we make on others in the little time we have available to us. One doesn’t need a lot of time or resources to make an impact; all that’s needed is having a certain perspective.

Everyone should look out not [only] for his own interests, but also for the interests of others. Philippians 2:4

Having an outlook that looks beyond one's own self-interest can positively impact others, and I think the statement holds true regardless of religious beliefs. A perspective that takes into consideration others' interests is displayed every day in the Digital Forensic and Incident Response (DFIR) community. DFIR forums have thousands of members but there are only a few who regularly take the time to research and provide answers to others' questions. DFIR listservs are very similar: despite their membership, only a minority regularly try to help others. Look at the quality information (books, articles, blogs, white papers, etc.) available throughout the community and its authorship is only a small fraction of the people in the community. These are just a few examples out of many of how individuals within the DFIR community use their time and resources in an effort to not only better themselves but to educate others as well.

When I look at the overall DFIR community I think there’s only a minority who are looking beyond their own interests in an effort to help others. A few people have helped me over my career which contributed to where I am today. They never asked for anything in return and were genuinely interested in trying to help others (myself included). If the DFIR community is what it is because of a few people giving up their time and resources to make a positive impact on others, then I can only wonder what our community would look like if the majority of people looked beyond their own interests to look after the interests of others. In the meantime all I can do is to continue to try to remember to look beyond myself in every aspect of my life. To try to consider those around me so I can help whoever crosses my path needing assistance. When the day is over one of the only things remaining will be the impact I have on others.

Don’t Overlook Simulations

Monday, December 5, 2011 Posted by Corey Harrell
A few weeks ago my family and I were eating dinner at our dining room table. A car alarm started going off outside so I went to the window to see what was going on. I first checked to make sure our cars weren't the ones making the noise and then saw it was my neighbor's car across the street. I went back to the dining table when my three year old said "the car is saying there is a fire drill". Laughing aside, his statement made a lot of sense. Before that moment the only time he had heard loud sirens was during fire drills. Naturally, his first thought when he heard something similar was that a fire drill was happening. Fire drills are one simulation people have practiced (most of the time forced) over and over again to help them know how to proceed when the real thing occurs. Simulations in DFIR work the same way in helping educate ourselves on how to proceed in certain types of scenarios.

Most trainings I attended reinforced learning by having the attendees practice on test images or data. The attendees don't just stumble around in the data since their objective is dictated by working through a simulation based on some case scenario. The simulation training approach even carries over to when people want to improve their skills on their own. Similar to the fire drill, different DFIR scenarios can expose people to different types of cases so they are more aware of their options and what to do when a real case comes up. The choices one has available for scenarios are to either use a test case put together by someone else or create your own. I found the latter option to be extremely effective at preparing me since I can focus on areas I want to improve on. Simulations are how I developed my skills to investigate malware infected systems.

Forensicator Readiness is the thought process I use to develop and implement different scenarios. The process focuses my efforts on the exact skills or knowledge I want to learn more about. One simulation I've been working on for some time is answering these two questions about infected systems: is the system infected and how did the system get infected. All the different scenarios I developed over time and some research I conducted were a direct result of trying to answer those two questions.

My scenarios started out by manually infecting systems with different malware to develop my skills in finding malware both in memory and on disk. Once I was effective at quickly locating the malware - without scanning - the next step was to purposely attack systems. Some attacks I conducted myself, such as running Metasploit against systems with malware as the payload, while other attacks involved finding malicious spam emails or active drive-by attacks. In all the scenarios I simulated infections with different initial infection vectors on systems to provide myself with test cases. I improved my skills by examining the infected systems so I could answer whether the systems were infected and how it occurred.

My neighbor's car alarm put my three year old in fire drill mode. He didn't get up and start walking towards the door because it was my neighbor's car. The drill was for my neighbor and not us; otherwise our cars would have told us. :) Putting ourselves through our own simulations in advance increases our ability to be in the right mode when we need it. I was better prepared when I took on my first infected system. Not only did I locate the malware (without av scanning) but I was successful in tracing the infection back to a drive-by against an Adobe Reader vulnerability. It wasn't luck that I was able to do this right out of the gate. Nor was it luck that I have continued to do this on system after system. This ability is a direct result of honing my skills in advance by putting together my own simulations focused on areas I want to improve on.

People are not able to just magically figure out how to exit a building in chaos. It takes practice, and when chaos occurs the training kicks in to help people know how to proceed. DFIR is the same way; we won't magically know how to process certain cases or answer certain questions. It takes practice, and over time we develop the knowledge on how to proceed with certain cases. Practicing can take the form of trainings or self simulations. Trainings are one size fits all where the content is the same across the board. An advantage that self simulations have over trainings is one's ability to focus on whatever area one wants. Time can be better spent focusing on the areas one doesn't have knowledge about while trainings can be used to supplement other areas (this approach is a better way to use training dollars as well). The next time one wants to develop their DFIR skills, self simulations shouldn't be overlooked as a viable option.

jIIr Updates

Saturday, December 3, 2011 Posted by Corey Harrell
A few quick updates about some things related to the blog …

Digital Forensic Search (DFS) Updates

I updated the Digital Forensic Search's index today. Eight new blogs were added and I updated the URL for an existing blog. In no particular order the new additions are: Sketchymoose's Blog, Forensics For the Newbs, WriteBlocked, Hexacorn Blog, Zena Forensics, Taksati, Chris Sanders, and the SANS Penetration Testing Blog. As usual, the Introducing the DFS blog post has been updated to reflect the changes.

I’m going to continue documenting the sites in the index on the Intro to DFS post. However, I’m probably going to stop posting updates on the blog since I’m leaning towards mentioning the changes through my twitter account.

I’m Now on Twitter

Earlier in the week I finally finished setting up my Twitter account and actually started to use it. As my profile indicates Twitter is my platform to share random thoughts which will mostly be focused on information security. I said mostly because the account won’t solely be used to discuss security. Please feel free to hit me up at corey_harrell.

A Different Approach to Analyzing Volume Shadow Copies

In a few weeks I'm going to have some time off from work since I'm taking some "furlough" days. My plan is to spend the time putting together some material (blog posts and videos) to further demonstrate a different approach to analyzing the data stored in volume shadow copies.

Before discussing my approach I'm pointing out two current approaches. One is to image each VSC and then examine the data in the images. Another approach is to copy the data - including metadata - from all or select VSCs so it can be examined outside the VSCs. The approach I've been using is to examine the data while it's still stored in the volume shadow copies. There are numerous benefits to doing it this way, such as reducing the amount of time needed or being able to work on both live systems and forensic images. I think the technique's true power is the ability to see the same data at different points in time since it shows how the data changed over time. This has been critical for me on a few different cases.

To help me examine VSCs in this manner I wrote a few different scripts. The material I'm putting together will not only explain my logic behind the scripts' functionality but will show how they can be easily extended by anyone to meet their own needs. Yes, I'll release the scripts as well. Plus, if I can pull off a video or two it should be cool for people to see it in action.

What’s TR3Secure?

At some point over the next few months you may see me start referencing and sharing some work I completed for something called TR3Secure. I'll be the sole author of any work I share (mostly scripts) but I wanted to briefly discuss what TR3Secure is since I'll be tagging my work with it. A few co-workers and a colleague of mine are working on setting up a training group for us to collaborate and develop our information security skills together. We are trying to create an environment to bring together security testers, incident responders, and digital forensic practitioners. We envision doing different activities including conducting live simulations, and this is where bringing together the three different skillsets will shine. The live simulations will be conducted with select people attacking a test network while a second group responds, triages the situation, and if necessary contains the attack. Afterwards, the examiners will collect and examine any evidence to document the attack artifacts. When it's all said and done, everyone will share their experiences and knowledge about the attack and if necessary train other members on any actions they completed during the simulation.

We are still in the early stages of setting the group up and once established it initially has to be a closed group. I'm only mentioning TR3Secure here because I'm going to write various scripts (Perl and Batch) to help with certain aspects of the live simulations. If my scripts work well, especially for training, then I'll share them for others to use for self-training purposes. The scripts will solely be my own work but I'm still tagging everything with TR3Secure since I'm working with some great individuals. The first item coming down the pipeline is a cool dual purpose volatile data collection script that doubles as a training and incident response tool.

Linkz 4 Exploits to Malware

Wednesday, November 30, 2011 Posted by Corey Harrell
In this edition of linkz the theme goes from exploitation to infection to detection. Some linkz discussed include: providing clarity about my exploit artifacts, a spear-phishing write-up, a malware analysis checklist, and thoughts about automated vs in-depth malware analysis.

Picking Vulnerabilities

Over the past year I've been conducting research to document attack vector artifacts. Vulnerabilities and the exploits that target them are one component of an attack vector. Some may have noticed I initially focused most of my efforts on vulnerabilities present in Adobe Reader and Java. I didn't pick those applications by flipping a coin or doing "eeny, meeny, miny, moe". It is not a coincidence that I'm seeing exploit artifacts on systems targeting those applications. This has occurred because I pick vulnerabilities based on the exploits contained in exploit packs.

Exploit packs are toolkits that automate the exploitation of client-side vulnerabilities in applications such as browsers, Adobe Reader, and Java. Mila Parkour over at Contagio maintains an excellent spreadsheet outlining the exploits available in the different exploit packs on the market. The reference by itself is really informative. The screenshot below shows part of the vulnerabilities section in the spreadsheet.


Notice how many Java and Adobe vulnerabilities are on the list. Maybe now it's a little clearer why I wrote about Adobe/Java exploits and why I wasn't surprised when, system after system, I kept finding artifacts associated with those exploits. The spreadsheet shows what applications exploit packs are targeting. I've been using the document as a reference to help me decide what exploit artifacts to document. Down the road when I start looking into Word, Excel, and flash exploits there will at least be a little more clarity as to what I'm choosing to document.

Another Adobe Flash Exploit

Since I mentioned Adobe, flash, and exploit in the same paragraph I might as well mention them in the same sentence. The Zscaler ThreatLab blog recently posted Adobe Flash “SWF” Exploit still in the Wild. The short write-up discusses how a vulnerability in Adobe Flash (CVE-2011-0611) is being exploited by embedding a .swf file into Microsoft Office documents or HTML pages. I wanted to highlight one specific sentence: "this exploit code embeds a “nb.swf” flash file into a webpage, which is then executed by the Adobe Flash player". That one sentence identifies numerous potential artifacts one could find on a system indicating this attack vector was used. First there will be Internet browser activity followed by a flash file being accessed. The system may then show a swf file being created in a temporary folder, and there may also be indications that Adobe Flash executed shortly thereafter. The write-up doesn't go into what artifacts are left on a system since its focus is on how the attack worked. At this point those potential artifacts are just that: potential. However, flash exploits are third on my list of what I'm going to start documenting. It shouldn't come as a surprise by now that Mila's spreadsheet also shows a few exploit packs targeting the flash vulnerability.

Walking through a Spearphishing Attack

The Kahu Security blog post APEC Spearfish does an excellent job walking the reader through an actual spearphishing attack. The spearphish was an email targeted at a single individual and contained a malicious PDF attachment. There was a flash object in the PDF file that exploited a vulnerability in Adobe Flash Player (yup … CVE-2011-0611). The end result was a malware infection providing backdoor access for the attackers. The post isn't written from the DF perspective; it doesn't outline the artifacts on a system indicating a spearphish occurred or that a flash exploit caused the malware infection. However, it does a great job breaking down the attack from the person receiving the email to the PDF file launching to malware getting dropped. One image I liked was the one showing "what's happening behind the scenes" since it helps DF readers see the potential artifacts associated with the method used to infect the system.

Finding Malware Checklist

Last month Harlan posted about his experience at PFIC 2011. At the end of his post he shared his Malware Detection Checklist, which outlines the examination steps to locate malware on a system. I think it's a great list and I like how the checklist is focused on a specific task: finding malware. I added some of the activities in Harlan's checklist to my own because I either wasn't doing them or I wasn't deliberate about doing them. One step I wasn't doing was scanning for packed files and, thinking about it, I can see how it can help reduce the number of binaries to examine initially. On the other end, one step I wasn't deliberate about was examining the user's temporary directories. I was examining these directories through timeline analysis but I wasn't deliberate about searching the entire folder for malware or exploit artifacts.

The cool thing about Harlan’s checklist is that he already did the heavy lifting. He put together a process that works for him (including the tools he uses) and is sharing it with the community. It wouldn’t be hard for anyone to take what he already did and incorporate it into their own examination process. Plus, his blog post mentioned the checklist came from Chapter 6 in WFA 3 so the book (once released) can be a reference to better understand how to go about finding malware on a system.

Another Angle at Finding Malware

Along the same lines of trying to identify malware on a system, Mark Morgan over at My Stupid Forensic Blog discussed the topic in his post How to Identify Malware Behavior. Mark first explained the four main characteristics of malware, which are: an initial infection vector, malware artifacts, a propagation mechanism, and a persistence mechanism. (Harlan also described these characteristics on his malware webpage.) The characteristics are important since there are artifacts associated with them and those artifacts can help identify the malware. Mark provided a great example about how the persistence mechanism played a role in one of his cases. He even went on to explain a few different ways to track down malware and its persistence mechanism.

Analyzing Malware

At some point during the examination malware will be identified on the system. Some can just start analyzing the malware since they are fortunate enough to know how to reverse engineer its functionality or know someone who can. The rest of us may not have that luxury, so we should just upload the sample to online scanners such as VirusTotal or to sandboxes, right? Well, I wouldn't be too fast in pulling the trigger without understanding the risks involved. The Hexacorn blog put together the excellent post Automation vs. In-depth Malware Analysis. In the author's own words the "post is my attempt to summarize my thoughts on the topic of both automated malware analysis in general and consensual submission of files to a web site owned by a third party". There are times when submitting samples to a third party service is not the best choice to make. I first learned about the risk when a discussion occurred in the Win4n6 group some time ago, but the post goes into more depth. For anyone dealing with malware and considering using third party services - such as VirusTotal or ThreatExpert - I highly recommend reading this post to help you make an informed decision.

On a side note, the Hexacorn blog started to post forensic riddles every Friday followed by posting the answer every Monday. The riddles are entertaining and educational. My stat count so far is zero for two (I wasn’t even close).