Man versus AntiVirus Scanner

Wednesday, August 22, 2012 Posted by Corey Harrell
Knowing what programs ran on a system can answer numerous questions about what occurred. What was being used to communicate, what browsers are available to surf the web, what programs can create documents, were any data spoliation programs run, or is the system infected? These are only a few of the questions that can be answered by looking at program execution. There are different artifacts showing program execution; one of which is the application compatibility cache. Mandiant’s whitepaper Leveraging the Application Compatibility Cache in Forensic Investigations (blog post is here and paper is here) explains in detail what the cache is and why it’s important to digital forensics. One important aspect of the cache is that it stores information about files such as names, sizes, and last modified times; all of which may be useful during a digital forensic examination. The application compatibility cache has provided me additional information I wouldn’t have known about without it. As such I’m taking some time to write about this important new artifact.

I wanted to highlight the significance of the cache but I didn’t want to just regurgitate what Mandiant has already said. Instead I’m doing the DFIR equivalent of man versus the machine. I’m no John Henry, but like him we are witnessing the impact modernization has on the way people do their jobs. One such instance is the way people try to determine if a system is infected with malware. A typical approach is to scan a system with antivirus software to determine if it is infected. There is a dependency on the technology (the antivirus software) to do the work, and in essence the person is taken out of the process. That seems very similar to what John Henry witnessed when the steam-powered hammer replaced the human steel drivers. John Henry decided to demonstrate man’s might by taking the steam-powered hammer head on in a race. I opted to do the same: to take on one of my most reliable antivirus scanners (Avast) in a head-on match to see who could first locate and confirm the presence of malware on a system. I didn’t swing a hammer though. My tools of choice were RegRipper with the new appcompatcache plugin to parse the application compatibility cache, along with the Sleuthkit and Log2timeline to generate a timeline containing filesystem metadata. Maybe, just maybe, in some distant future in IT and security shops across the land people will be singing songs about the race of the century: when Man took on the Antivirus Scanner.

The Challenge


The challenge was to find malware that an organization somewhere in the land is currently facing. Before worrying about what malware to use I first configured the test system. The system was a fresh Windows XP install with Service Pack 3. I only installed Adobe Reader version 9.3 and Java version 6 update 27. These applications were chosen to make it easier to infect the system through a drive-by. I wanted to use unknown malware as a way to level the playing field; I didn’t need nor want any advantages over the antivirus scanner. To find the malware I looked at the recently listed URLs on the Malware Domain List for any capable of performing a drive-by. I found two potential URLs as shown below.


The first URL pointed to a Blackhole exploit pack. I entered the URL into Internet Explorer and after waiting for a little bit the landing page appeared as captured below.


I gave Blackhole some more time to infect the computer before I entered the second URL. That was when I saw the first indication that the system was successfully infected with unknown malware.


The race was now officially on. Whoever first found the malware, along with any other information about it, would win.

On Your Mark, Get Set


I mounted the system to my workstation using FTK Imager so tools could be run against it. I downloaded and installed the latest Avast version, then updated to the latest virus signature definitions. I configured Avast to scan the mounted image and all that was left was to click “Scan”. With my challenger all set, I made sure I had the latest RegRipper appcompatcache plugin. Next I fired up the command prompt and entered the following command:

rip.pl -p appcompatcache -r F:\Windows\System32\config\system > C:\appcompt.txt

The command is using RegRipper’s command-line version and says to run the appcompatcache plugin against the system registry hive in the mounted image’s config folder. To make it easier to review the output I redirected it to a text file.

My challenger is all set waiting at the starting line. I’m all set just waiting for one little word.

Go!


The Avast antivirus scan was started as I pressed enter to run the RegRipper’s appcompatcache plugin against the system registry hive.

0 minutes 45 seconds


I opened the text file containing the parsed application compatibility cache. One cool thing about the plugin is that Harlan highlights any executables located in a temporary folder. In the past I have quickly found malware by looking at executables present in temp folders, so I went immediately to the end of the output. I found the following suspicious files, which I inspected more closely.

Temp paths found:

C:\Documents and Settings\Administrator\Local Settings\Temp\gtbcheck.exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flash_player_ax.exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].exe
C:\Documents and Settings\Administrator\Local Settings\Temp\gccheck.exe
C:\Documents and Settings\Administrator\Local Settings\Temporary Internet Files\Content.IE5\4967GLU3\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].bat

3 minutes 4 seconds


My hopes of a quick win came crashing down when I found out the executables in the temporary folders were no longer present on the system. I went back to the beginning of the application compatibility cache’s output and started working my way through each entry one at a time. Avast was scanning the system at a fast pace because the image was so small.

5 minutes 10 seconds


Avast was still scanning the system and had yet to find the malware. That was good news for me because I found another suspicious entry in the application compatibility cache.

C:\Documents and Settings\Administrator\Local Settings\Application Data\armfukk.exe
ModTime: Tue Aug 21 20:34:04 2012 Z
UpdTime: Tue Aug 21 20:38:03 2012 Z
Size : 495616 bytes

The file path drew my attention to the program and a check on the system showed it was still there. I quickly uploaded armfukk.exe to VirusTotal and stared at the Avast scan, waiting to see if it would flag the file before the VirusTotal scan completed.


VirusTotal delivered the verdict: 9 out of 42 antivirus scanners detected the armfukk.exe file as malware. Going head to head against Avast, I located a piece of malware in about 5 minutes while Avast was still scanning. As you probably expected, Avast had still not flagged any files as malicious.

Avast was still running the race as it kept scanning the system. I continued my examination by turning to my next tool of choice: a timeline. A timeline would provide a wealth of information by showing the activity around the time the armfukk.exe file was created on the system. I ran the following Sleuthkit command to create a bodyfile containing the filesystem metadata:

fls.exe -m C: -r \\.\F: > C:\bodyfile
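For reference, each line in the resulting bodyfile is pipe-delimited in the Sleuthkit 3.x format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime. A made-up entry for the suspicious file would look something like the line below; the inode and the Unix epoch timestamps are purely illustrative.

0|C:/Documents and Settings/Administrator/Local Settings/Application Data/armfukk.exe|12345-128-1|r/rrwxrwxrwx|0|0|495616|1345581843|1345581244|1345581244|1345581244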

9 minutes 30 seconds


Avast was still chugging along but had yet to flag any files. The bodyfile was finally created, but I needed to convert it into a more readable format. I wanted the timeline in log2timeline’s csv format so I next ran the command:

log2timeline.pl -z local -f mactime -w timeline.csv C:\bodyfile
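The resulting file uses log2timeline’s l2t_csv layout, so the first row of the timeline is the header below; every timestamped artifact becomes one row under it.

date,time,timezone,MACB,source,sourcetype,type,user,host,short,desc,version,filename,inode,notes,format,extra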

11 minutes 22 seconds


I imported the timeline into Excel and sorted the output. Just as I was getting ready to search on the “armfukk.exe” keyword, Avast finally completed its scan with zero detections.
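As an aside, a spreadsheet isn’t required for a quick keyword hit; on Windows the built-in findstr can pull the matching rows straight out of the csv before any sorting is done:

findstr /i armfukk timeline.csv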


Shortly Thereafter


The race was over but I wasn’t basking in the glory of winning. I wanted to know how the malware actually infected the computer since I was so close to getting the answer. I searched on the armfukk.exe filename and found the entry showing when the file was created on the system.


There was activity showing Java running, and five seconds before the armfukk.exe file was created I came across an interesting file in the Java cache. VirusTotal gave me all the confirmation I needed.


Moral of the Story


As I said before, maybe, just maybe, in some distant future in IT and security shops across the land people will be singing songs about the race of the century. Remembering the day when man demonstrated that people are still needed in the process to locate malware on a system. Remembering the day that put antivirus technology into perspective as a tool; a great tool to have available in the fight against malware, but a tool nonetheless. Remembering the day when man stood up and said "antivirus technology is not a replacement for having a process to respond to malware incidents nor is it a replacement for the people who implement that process".

Linkz for Tools

Wednesday, August 15, 2012 Posted by Corey Harrell
In this Linkz edition I’m mentioning write-ups discussing tools. A range of items are covered from the registry to malware to jump lists to timelines to processes.

RegRipper Updates


Harlan has been pretty busy updating RegRipper. First RegRipper version 2.5 was released, then there were some changes to where RegRipper is hosted, along with some nice new plugins. Check out Harlan’s posts for all the information. I wanted to touch on a few of the updates though. The updates to RegRipper included the ability to run directly against volume shadow copies and to parse big data. The significance of parsing big data is apparent in his new plugin that parses the shim cache, which is an awesome artifact (link up next). Another excellent addition to RegRipper is the shellbags plugin, since it parses Windows 7 shell bags. Harlan’s latest post Shellbags Analysis highlights the forensic significance of shell bags and why one may want to look at the information they contain. I think these are awesome updates; now one tool can be used to parse registry data where it used to take three separate tools. Not to be left out, the community has been submitting some plugins as well. To mention only a few, Hal Pomeranz provided some plugins to extract Putty and WinSCP information and Elizabeth Schweinsberg added plugins to parse the different Run keys. The latest RR plugin download has the plugins submitted by the community. Seriously, if you use RegRipper and haven’t checked out any of these updates then what are you waiting for?
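As a quick illustration of the volume shadow copy support, the -r switch can reportedly be pointed directly at a hive inside a shadow copy’s device path. This is a sketch only; the shadow copy index is hypothetical and would come from something like vssadmin list shadows:

rip.pl -r \\.\HarddiskVolumeShadowCopy1\Windows\System32\config\SYSTEM -p appcompatcache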

Shim Cache


Mandiant’s post Leveraging the Application Compatibility Cache in Forensic Investigations explained the forensic significance of the Windows Application Compatibility Database. Furthermore, Mandiant released the Shim Cache Parser script to parse the appcompatcache registry key in the System hive. The post, script, and information Mandiant released speak for themselves. Plain and simple, it rocks. So far the shim cache has been valuable for me on fraud and malware cases. Case in point: at times when working malware cases, programs execute on a system but the usual program execution artifacts (such as prefetch files) don’t show it. I see this pretty frequently with downloaders, which are programs whose sole purpose is to download and execute additional malware. The usual program execution artifacts may not have shown those programs running, but the shim cache was a gold mine. Not only did it reflect the downloaders executing, but the information provided more context to the activity I saw in my timelines. What’s even cooler than the shim cache? Well, there are now two different programs that can extract the information from the registry, as shown below.
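For reference, here is how each of the two can be run against a System hive from a mounted image. The rip.pl syntax matches the command used in the post above; the ShimCacheParser switches are from memory, so double-check them against Mandiant’s documentation:

python ShimCacheParser.py -i SYSTEM -o shimcache.csv
rip.pl -p appcompatcache -r SYSTEM > shimcache.txt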

Searching Virus Total


Continuing on with the malware topic, Didier Stevens released a virustotal-search program. The program searches for VirusTotal reports using a file’s hash (MD5, SHA1, SHA256) and produces a csv file showing the results. One cool thing about the program is that it only performs hash searches against VirusTotal, so a file never gets uploaded. I see numerous uses for this program since it accepts a file containing a list of hashes as input. One way I’m going to start using virustotal-search is for malware detection. One area I tend to look at for malware and exploits is the temporary folders in user profiles. It wouldn’t take too much to search those folders for any files with executable, Java archive, or PDF file signatures. Then for each file found, perform a search on the file’s hash to determine if VirusTotal detects it as malicious. Best of all, this entire process could be automated and run in the background as you perform your examination.
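Here is a minimal Python sketch of that idea. The mounted-image path and the output filename are hypothetical, and the hash list it produces would then be handed to virustotal-search (check Didier’s documentation for the exact input options):

import hashlib
import os

# File signatures for executables (MZ), Java/ZIP archives (PK), and PDFs (%PDF)
SIGNATURES = (b'MZ', b'PK\x03\x04', b'%PDF')
# Hypothetical temp folder on a mounted image
TEMP_DIRS = [r'F:\Documents and Settings\Administrator\Local Settings\Temp']

def hash_suspicious(dirs, out_path):
    with open(out_path, 'w') as out:
        for top in dirs:
            for root, _, files in os.walk(top):
                for name in files:
                    try:
                        with open(os.path.join(root, name), 'rb') as f:
                            header = f.read(8)
                            # Skip anything that does not carry one of the signatures
                            if not header.startswith(SIGNATURES):
                                continue
                            # Hash the whole file: the header plus the remainder
                            md5 = hashlib.md5(header + f.read())
                    except IOError:
                        continue
                    out.write(md5.hexdigest() + '\n')

hash_suspicious(TEMP_DIRS, 'hashes.txt')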

Malware Strings


Rounding out my linkz about malware related tools comes from the Hexacorn blog. Adam released Hexdrive version 0.3. In Adam’s own words, the concept behind Hexdrive is to “extract a subset of all strings from a given file/sample in order to reduce time needed for finding ‘juicy’ stuff – meaning: any string that can be associated with a) malware b) any other category”. Using Hexdrive makes reviewing strings so much easier. You can think of it as applying a filter across the strings to initially see only the relevant ones typically associated with malware. Afterwards, all of the strings can be viewed using something like BinText or Strings. It’s a nice data reduction technique and Hexdrive is now my first go-to tool when looking at strings in a suspected malicious file.
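To get a rough feel for that kind of data reduction before grabbing Hexdrive, a crude stand-in is piping a full strings listing through a keyword filter. This is not how Hexdrive works internally, and the keyword list below is made up:

strings.exe suspect.exe | findstr /i "http .exe .dll CreateRemoteThread"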

Log2timeline Updates


Log2timeline has been updated a few times since I last spoke about it on the blog. The latest release is version 0.64. There have been quite a few updates, ranging from small bug fixes to new input modules to changes in the output of some modules. To see all the updates check out the changelog.

Most of the time when I see people reference log2timeline they are creating timelines using either the default module lists (such as winxp) or log2timeline-sift. Everyone does things differently and there is nothing wrong with these approaches. Personally, though, neither approach exactly meets my needs. The majority of the systems I encounter have numerous user profiles stored on them, which means those profiles contain files with timestamps log2timeline extracts. Running a default module list (such as winxp) or log2timeline-sift against all the user profiles is an issue for me. Why should I include timeline data for all user accounts instead of just the one or two user profiles of interest? Why include the Internet history for 10 accounts when I only care about one user? Not only does it take additional time for timeline creation, but it results in a lot more data than I need, thus slowing down my analysis. I take a different approach; an approach that better meets my needs for all types of cases.

I narrow my focus down to specific user accounts. Either I confirm who the person of interest is, which tells me what user profiles to examine, or I check the user profile timestamps to determine which ones to focus on. What exactly does this have to do with log2timeline? The answer lies in the -e switch, since it can exclude files or folders. The -e switch can be used to exclude all the user profiles I don’t care about. Say there are 10 user profiles and I only care about 2 of them, but I only want to run one log2timeline command. No problem if you use the -e switch. To illustrate, let’s say I’m looking at the Internet Explorer history on a Windows 7 system with five user profiles: corey, sam, mike, sally b, and alice. I only need to see the browser history for the corey user account but I don’t want to run multiple log2timeline commands. This is where the -e switch comes into play, as shown below:

log2timeline.pl -z local -f iehistory -r -e Users\\sam,Users\\mike,"Users\\sally b",Users\\alice,"Users\\All Users" -w timeline.csv C:\

The exclusion switch eliminates anything containing the text used in the switch. I could have used sam instead of Users\\sam, but then I might miss some important files, such as anything containing the text “sam”. Using a file path limits the amount of data that is skipped but will still eliminate any file or folder that falls within those user profiles (actually, anything falling under the C root directory containing the text Users\username). Notice the use of the double backslashes (\\) and the quotes; these are needed for the command to work properly. What’s the command’s end result? The Internet history from every profile stored in the Users folder except for the sam, mike, sally b, alice, and All Users profiles is parsed. I know most people don’t run multiple log2timeline commands when generating timelines since they only pick one of the default module lists. Taking the same scenario where I’m only interested in the corey user account on a Windows 7 box, check out the command below. This will parse every Windows 7 artifact except for the excluded user profiles (note the command will impact the filesystem metadata for those accounts if the MFT is parsed as well).

log2timeline.pl -z local -f win7 -r -e Users\\sam,Users\\mike,"Users\\sally b",Users\\alice,"Users\\All Users" -w timeline.csv C:\

The end result is a timeline focused only on the user accounts of interest. Personally, I don't use the default module lists in log2timeline but I wanted to show different ways to use the -e switch.

Time and Date Website


Daylight Saving Time does not start and end on the same dates each year. One day I was looking around the Internet for a website showing the exact dates when previous Daylight Saving Time changes occurred, and I came across the timeanddate.com website. The site has some cool things. There’s a converter to change the date and time from one timezone to another. There’s a timezone map showing where the various timezones are located. A portion of the site even explains what Daylight Saving Time is. The icing on the cake is the world clock, where you can select any timezone to get additional information including the historical dates of when Daylight Saving Time occurred. Here is the historical information for the Eastern Timezone for the time period from the year 2000 to 2009. This is a useful site when you need to make sure your timestamps properly take Daylight Saving Time into account.
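When the same correction has to be applied programmatically instead of looked up by hand, a timezone-aware library carries those historical DST dates for you. A minimal Python sketch, assuming the third-party pytz package is installed:

from datetime import datetime
import pytz

# pytz applies the zone's historical DST rules, so the printed offset
# comes out as EDT (-4) or EST (-5) depending on the date in question
eastern = pytz.timezone('US/Eastern')
utc_time = pytz.utc.localize(datetime(2012, 8, 21, 20, 34, 4))
print(utc_time.astimezone(eastern))  # 2012-08-21 16:34:04-04:00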

Jump Lists


The day has finally arrived: over the past few months I’ve been seeing more Windows 7 systems than Windows XP. This means the artifacts available in the Windows 7 operating system are playing a greater role in my cases. One of those artifacts is jump lists, and Woanware released a new version of Jumplister, which parses them. The new version has the ability to parse out the DestList data and performs a lookup on the AppID.

Process, Process, Process


Despite all the awesome tools people release, they won’t be of much use if there isn’t a process in place for using them. I could buy the best saws and hammers but they would be worthless to me for building a house since I don’t know the process one uses to build a house. I see digital forensics tools in the same light, and in hindsight maybe I should have put these links first.

Lance is back blogging over at ForensicKB and he posted a draft of the Forensic Process Lifecycle. The lifecycle covers the entire digital forensic process from the preparation steps to triage to imaging to analysis to report writing. I think this one is a gem and it’s great to see others outlining a digital forensic process to follow.

If you live under a rock then this next link may be a surprise, but a few months back SANS released their Digital Forensics and Incident Response poster. The poster has two sides; one outlines various Windows artifacts while the other outlines the SANS process to find malware. The artifact side is great and makes a good reference hanging on the wall. However, I really liked seeing and reading about the SANS malware detection process since I’ve never had the opportunity to attend their courses or read their training materials. I highly recommend that anyone get a copy of the poster (paper and/or electronic versions).

I’ve been slacking on updating my methodology page, but over the weekend I updated a few things. The most obvious is adding links to my relevant blog posts. The other, maybe less obvious, change is that I moved around some examination steps so they are more efficient for malware cases. The steps reflect the fastest process I’ve found yet to not only find malware on a system but to determine how the malware got there. Just an FYI, the methodology is not limited to malware cases since I use the same process for fraud and acceptable use policy violations.

Welcome to Year 2

Sunday, August 12, 2012 Posted by Corey Harrell
This past week I was vacationing with my family when my blog surpassed another milestone. It has been around for two years and counting. Around my blog’s anniversary I like to reflect back on the previous year and look ahead at the upcoming one. Last year I set out to write about various topics including: investigating security incidents, attack vector artifacts, and my methodology. It shouldn’t be much of a surprise then when you look at the topics in my most read posts from the past year:

1. Dual Purpose Volatile Data Collection Script
2. Finding the Initial Infection Vector
3. Ripping Volume Shadow Copies – Introduction
4. Malware Root Cause Analysis
5. More About Volume Shadow Copies
6. Ripping VSCs – Practitioner Method

Looking at the upcoming year there’s a professional change impacting a topic I’ve been discussing lately. I’m not talking about a job change but an additional responsibility in my current position. My casework will now include a steady dose of malware cases. I’ve been hunting malware for the past few years so now I get to do it on a regular basis as part of my day job. I won’t directly discuss any cases (malware, fraud, or anything else) that I do for my employer. However, I plan to share the techniques, tools, or processes I use. Malware is going to continue to be a topic I frequently discuss from multiple angles in the upcoming year.

Besides malware and any other InfoSec or DFIR topics that have my interest, there are a few research projects on my to-do list. First and foremost is to complete my finding fraudulent documents whitepaper and scripts. The second project is to expand on my current research about the impact virtual desktop infrastructure will have on digital forensics. There are a couple of other projects I’m working on and in time I’ll mention what those are. Just a heads up, at times I’m going to be focusing on these projects so expect some time periods when there isn’t much activity with the blog. As usual, my research will be shared either through my blog or another freely available resource to the DFIR community.

Again, thanks to everyone who links back to my blog and/or publicly discusses any of my write-ups. Each time I come across someone who says that something I wrote helped them in some way, it makes all the time and work I put into the blog worth the effort. Without people forwarding along my posts, others may never become aware of information that could help them. For this I’m truly grateful. I couldn’t end a reflection post without thanking all the readers who stop by jIIr. Thank you, and you won’t be disappointed with what I’m gearing up to release over the next year.