Triaging with the RecentFileCache.bcf File

Monday, April 21, 2014 Posted by Corey Harrell
When you look at papers outlining how to build an enterprise-scale incident response process, they show the textbook picture of what it should look like. It's not until you start building out the incident response (IR) process and/or responding to security events/incidents that you can truly see what the critical pieces in the process are. Most of the "text book style" IR documentation I've read (for my Masters, enjoyment, and research) tends to gloss over triage. Triage is only mentioned as necessary to confirm an indicator and determine if a security event is an incident. Whether you are building out an IR process or doing internal IR work, triage is not an activity that should be glossed over. In my opinion, it is one of the most important steps. Not only does it need to confirm an indicator and determine if a security event is an incident, but it also needs to provide guidance to staff for security events that are not incidents. In addition, a good percentage of one's work will be triaging indicators/events to determine what needs to be done - if anything. Seeing how important triage is, I dedicated a lot of time to improving and refining techniques for triaging security indicators and events. In this post I'm explaining a single technique: how the RecentFileCache.bcf file can be used to quickly identify an infected system.

It's All About Triage


I'm not going to disclose my exact triage process nor any other specifics about activities conducted during triaging. However, I will explain the logic behind it to show my thought process and how it was structured. My goal for triage was to find a balance in the amount of resources and time one spends on triaging indicators and alerts. Too much time results in fewer items being looked at and resources being wasted on trivial things. Not enough time results in items potentially slipping through the cracks. The balance is just enough time spent triaging an indicator to perform a sufficient analysis, so that as many indicators/events as possible can be triaged. Lastly, the triage process has to be understandable since I'm teaching others how to do it.

To satisfy these goals I went with a tiered approach to triage. There are different levels in the triage process that perform different activities. The lower levels are used to initially look into an indicator. What is learned determines the next step: either stop work on the indicator (and update monitoring as needed) or do a more in-depth triage. At the next level the process repeats: continue triaging to determine the next steps and take action as necessary. There are four layers in the triage process, with the top one being the most in-depth.

Each layer leverages various data and examines that data using various techniques. This way multiple techniques and various data sources are used to provide a more accurate picture; if attempts are made to bypass one technique, one of the others will catch it. One technique from one of the layers is what I'm discussing in this post: collecting and examining the RecentFileCache.bcf file from a system suspected of being impacted with malicious code to confirm whether it is infected and where malware may be located.

Revisiting the RecentFileCache.bcf File


It's probably important to first revisit what the RecentFileCache.bcf file is and why it is important. In my post Revealing the RecentFileCache.bcf File I explained the significance of this file as it relates to digital forensics and incident response. The file records executables that executed on a Windows 7 system, and the relevance of the listed executables means the following:

        1.  The program is probably new to the system.
        2.  The program executed on the system.
        3.  The program executed on the system some time after the ProgramDataUpdater task was last run.

A good percentage of the time when a machine is compromised with malicious code, an executable is initially created on the system and executed. This is how both downloaders and droppers typically work and why both of these tools end up listed in the RecentFileCache.bcf file. So when you are looking into a malicious code indicator, checking this artifact is a fast way to determine if a new executable was created and executed on the system. I've been using this technique for some time and it is very effective at confirming an infection that occurred fairly recently (within the last 24 hours, or since the ProgramDataUpdater task last ran).

Still Digesting Harlan Carvey's Latest Work


If you haven't noticed, Harlan Carvey has released some new things for the community. There is the new RegRipper package, which includes a bunch of new plug-ins. His new book Windows Forensic Analysis Toolkit: Advanced Analysis Techniques for Windows 8 was published and he released the book materials online. Needless to say, I'm still digesting all of this. However, I wanted to take a look at what is new. After reading the RegRipper update file to see the new plug-ins, I went through his book archive. That's when I saw one of the tools in the Chapter 4 folder named rfc.exe. To see what the tool does you can read the Perl source code (wfa4e\ch4\source rfc.pl), but I'll save you the trouble: it parses the RecentFileCache.bcf file.

Until now, there have not been many (if any) tools that parse this file. You can read the programs listed in this artifact with a hex editor, but doing so takes time. Using a parser is faster since all you need to do is execute one command to see the listed programs. For example, running the rfc.exe tool against a RecentFileCache.bcf file from a recently infected machine shows the following:

rfc.exe G:\Windows\AppCompat\Programs\RecentFileCache.bcf

c:\windows\system32\schtasks.exe
c:\program files\common files\java\java update\jusched.exe
c:\users\lab\appdata\local\microsoft\windows\temporary internet files\content.ie5\i87xk24w\n6up2014[1].com
c:\program files\common files\java\java update\jucheck.exe



The one suspicious file stands out: the .com file that executed from a user profile's temporary internet files folder. As you can see, rfc.exe makes an already quick way to locate malware even quicker.
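
If rfc.exe isn't handy, the file is simple enough to script against. Below is a minimal parser sketch in Python, assuming the layout described in public write-ups: a fixed-size header followed by records consisting of a 4-byte little-endian character count and a null-terminated UTF-16LE path. The 20-byte header size is an assumption, so verify it against your own sample in a hex editor.

import struct
import sys

HEADER_SIZE = 20  # assumption based on public descriptions of the format

def parse_bcf(path):
    with open(path, "rb") as f:
        data = f.read()
    offset = HEADER_SIZE
    entries = []
    while offset + 4 <= len(data):
        # Each record: 4-byte little-endian character count, then the path.
        (char_count,) = struct.unpack_from("<I", data, offset)
        offset += 4
        raw = data[offset:offset + char_count * 2]
        if len(raw) < char_count * 2:
            break  # truncated record
        entries.append(raw.decode("utf-16-le", errors="replace"))
        offset += char_count * 2 + 2  # skip the UTF-16 null terminator
    return entries

if __name__ == "__main__":
    for entry in parse_bcf(sys.argv[1]):
        print(entry)

Running it is the same one-command affair: python parse_bcf.py G:\Windows\AppCompat\Programs\RecentFileCache.bcf.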

Triaging with the RecentFileCache.bcf File and rfc.exe


As I demonstrated, combining the RecentFileCache.bcf file with Harlan's rfc.exe (or rfc.pl) tool is a quick way to locate malware. This technique is excellent for use in triage to determine if a system is infected. All that needs to be done is to access or collect the RecentFileCache.bcf file then parse it to see what is listed in it.

Another useful thing to know about the RecentFileCache.bcf file is that it is not a locked file. This means the file can be accessed, collected, and parsed without another tool to unlock it. If you are triaging an indicator on a live system, all you need to do is run the following command:

E:\DFIR-Tools\rfc.exe C:\Windows\AppCompat\Programs\RecentFileCache.bcf

The command will work, but who wants to interact with the live system through its keyboard? That requires finding the system, walking to the system, explaining things to a person, and then triaging the system. It's not a fast method. The better approach is to do your triage remotely against the system; it works regardless of whether the system is on a different floor or in a different city. This is made even easier if the environment you are in has Windows administrative shares. These shares "are hidden network shares created by Windows NT family of operating systems that allow system administrators to have remote access to every disk volume on a network-connected system." You can access them by typing the full UNC path into Windows Explorer or the Run dialog box. This brings up the remote system's volume in Windows Explorer, which is a GUI and not a fast method. The faster method is to access the remote system's volume by creating a mapped network drive to it. After the drive is mapped you can either parse the RecentFileCache.bcf file directly or copy the file to your analysis machine for offline parsing. Let's walk through how to do this.

The command below maps the remote C volume to the drive letter R; the mapped drive is not persistent.

net use R: \\192.168.1.1\c$ /PERSISTENT:NO


The command below parses the RecentFileCache.bcf file remotely.

C:\DFIR-Tools\rfc.exe R:\Windows\AppCompat\Programs\RecentFileCache.bcf


The command below collects the file for offline parsing (the asterisk tells xcopy the destination is a file and the /h switch copies hidden and system files):

xcopy R:\Windows\AppCompat\Programs\RecentFileCache.bcf C:\Collection\RecentFileCache.bcf* /h


After the RecentFileCache.bcf file is either parsed or copied, the drive letter mapped to the remote C volume can be deleted with the following command:

net use R: /delete /y


If you have access to EnCase Enterprise (or EnCase Forensic with F-Response) and prefer to stay within EnCase, then use Lance's RecentFileCache.bcf EnScript.

Wrapping Things Up


What I illustrated is the basic technique for leveraging the RecentFileCache.bcf file to quickly confirm whether a system is infected and where malware may be located. The technique becomes even faster when automated in a script, such as the sketch below. As you might have guessed, I use this technique remotely across the enterprise when triaging indicators. It can be a standalone technique, but I don't use it as such; it's just one technique out of many I incorporated into my layered triage process. Hopefully, people find this useful and it provides a little more context to my tweet last week.
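
Here is a rough sketch of what that automation could look like in Python, wrapping the same net use and rfc.exe commands shown above. The tool path and host list are placeholders; adjust them for your own toolkit and environment.

import subprocess

RFC = r"C:\DFIR-Tools\rfc.exe"  # placeholder path to Harlan's parser
BCF = r"R:\Windows\AppCompat\Programs\RecentFileCache.bcf"
HOSTS = ["192.168.1.1", "192.168.1.2"]  # placeholder systems to triage

for host in HOSTS:
    # Map the remote C$ share to R: (non-persistent), same as the manual steps.
    mapped = subprocess.run(
        ["net", "use", "R:", rf"\\{host}\c$", "/PERSISTENT:NO"])
    if mapped.returncode != 0:
        print(f"could not map {host}, skipping")
        continue
    try:
        print(f"=== {host} ===")
        subprocess.run([RFC, BCF])  # parse the file over the mapped drive
    finally:
        # Always remove the mapped drive before moving to the next host.
        subprocess.run(["net", "use", "R:", "/delete", "/y"])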



Holding the Line

Sunday, April 6, 2014 Posted by Corey Harrell
You end up having to talk to a range of people when building out an internal incident response process. It's a natural consequence: the way people did things in the past is changing, and these changes will impact the way they do things going forward. The people you need to communicate with depend upon the organization and what the changes actually are. In my case, I ended up discussing incident response with a cross section of the information technology department including: helpdesk, server groups, security units, and management. At some point during the discussions, the "why" incident response is needed has to be addressed in order to get buy-in to implement the changes. Thinking about the "why" and the various audiences who need to hear it makes clear how concise your explanation needs to be. The message has to be clear and convey the reality we find ourselves in with the threats we face, without adding any additional FUD (fear, uncertainty, and doubt) in an attempt to influence people's decisions.

Prevention Will Eventually Fail


Despite our best efforts and everything we try to do to secure organizations, the end result will be the same: the preventative controls we put in place to protect organizations from the threats we face will fail. The gravity of the situation is illustrated in a three-year-old FireEye quote:

"today’s cyber criminals are nearly 100% effective at breaking through traditional security defenses in every organization and industry, from the security savvy to security laggards"

The defense-in-depth strategy of applying layers of security controls to protect data is incapable of preventing compromises and data breaches. The continuous stream of news about breached organizations, from retail stores to universities to public sector organizations, is our constant reminder that prevention will not stop threats from accomplishing what they are trying to do.

Incident Response - The Last Line of Defense


Anton Chuvakin framed the conundrum we find ourselves in with his paper Security Incident Response in the Age of APT (behind a pay wall):

"First, prevention and preventative security controls will fail. Prevention fails on a daily basis at many organizations; it will suffice to look at antivirus tools and contrast their 99%-plus deployment rates with widespread ongoing malware infection rates."

"Second, detection also fails on a frequent basis. A copy of Verizon Data Breach Investigations Report reveals plentiful evidence of that."

"What remains of the entire realm of information security. Only incident response."

"Thus, IR simply has to be there because this is where the security of an organization will fall after all else fails - and it will."

In essence, after every other security control fails an organization's last line of defense is incident response. A line that needs to: investigate, contain, remediate, detect further compromises across the enterprise, and reduce future compromises. This last line of defense is solely dependent on having the right people (incident responders) to carry it out. As incident responders, our job is to hold this line against the threats our organizations face on a daily basis despite everything else failing around us.

To be a catalyst for change this is the message we must convey:

Prevention will fail and when it does, the last line of defense to thwart the threats we are up against is the incident response process and its staff.

Exploring the Program Inventory Event Log

Sunday, March 23, 2014 Posted by Corey Harrell
The Application Experience and Compatibility feature ensures compatibility of existing software between different versions of the Windows operating system. The implementation of this feature results in some interesting program execution artifacts that are relevant to Digital Forensic and Incident Response (DFIR). I spent a lot of time talking about these artifacts in my posts: Revealing the RecentFileCache.bcf File, Revealing Program Compatibility Assistant HKCU AppCompatFlags Registry Keys, and Exploring Windows Error Reporting. In this short post I'm discussing another source containing program execution information, which is the Application-Experience Program Inventory Event Log.

Where Is the Program Inventory Event Log


Similar to the other event logs on a Windows system, the program inventory event log (Microsoft-Windows-Application-Experience%4Program-Inventory.evtx) is located in the C:\Windows\System32\winevt\Logs folder as shown below.


In the Windows event viewer the log can be found at: Applications and Services Logs\Microsoft\Application-Experience\Program-Inventory as shown below.



Program Inventory Event Log Relevance to DFIR


The DFIR relevance of the events recorded in this log has been mentioned by others. The Cylance Blog briefly mentions it in their post Uncommon Event Log Analysis for Incident Response and Forensic Investigations. The NSA document Spotting the Adversary with Windows Event Log Monitoring references the log in the Recommended Events to Collect section (pg 27). The document outlined the following event IDs: 800 (summary of software activities), 903 & 904 (new application installation), 905 & 906 (updated application), and 907 & 908 (removed application). Harlan provides more context on how the events in this log can be useful in his post HowTo: Determine Program Execution, where he shared how he used this log to determine that an intruder installed a tool on a compromised system. Now let's take a closer look at these event IDs to see what information they contain; a query sketch for pulling them during triage follows the summaries.

Event ID 800 (summary of software activities)


Event IDs 900 & 901 (new Internet Explorer add-on)



Event IDs 903 & 904 (new application installation)



Event ID 905 (updated application)


Event IDs 907 & 908 (removed application)
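
If you want to pull the install and remove events called out above during triage, the built-in wevtutil utility can query the log directly. Below is a minimal sketch wrapping it in Python; adding wevtutil's /r:<hostname> switch to the argument list would query a remote system instead.

import subprocess

LOG = "Microsoft-Windows-Application-Experience/Program-Inventory"
# Event IDs 903/904 (new application installation) and 907/908 (removed).
QUERY = "*[System[(EventID=903 or EventID=904 or EventID=907 or EventID=908)]]"

result = subprocess.run(
    ["wevtutil", "qe", LOG, f"/q:{QUERY}", "/f:text", "/rd:true", "/c:50"],
    capture_output=True, text=True)
print(result.stdout)  # newest 50 matching events, most recent first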


 

Lose Yourself in the DFIR Music

Sunday, March 9, 2014 Posted by Corey Harrell
"Look, if you had one shot, or one opportunity,
To seize everything you ever wanted. One moment
Would you capture it or just let it slip?"


~ Eminem


Everybody has a story. Everybody has a reason why they ended up in the Digital Forensic and Incident Response (DFIR) field. Sharing these experiences is beneficial to those looking to follow in your footsteps: students fresh out of college, career changers, or people looking to do something different in DFIR. In this post I'm sharing my story, my story about how I became an incident responder. A path that has been very challenging while rewarding at the same time. A path that started with the mindset seen in the "Lose Yourself" lyrics below.


"You better lose yourself in the music, the moment
You own it, you better never let it go
You only get one shot, do not miss your chance to blow
The opportunity comes once in a lifetime"


Formulate Your Plot


At the time I was working in a security unit doing network penetration testing and digital forensics support for investigations. How I ended up in this unit in the first place was due to the same mindset I'm about to describe. I enjoyed the offensive side of the house but I knew it wasn't my passion. Digital forensics was at one point challenging, but it became very repetitive, mostly working fraud investigations. I wanted something more. I wanted something where you are constantly challenged; I wanted to do incident response. I set my sights on incident response as the end goal and knew everything I did was to help me reach that goal. I didn't know where this path would lead, but I thought about my preferences, which were in this order: incident responder in my own organization, incident responder with a specific organization in the NYS public sector, or joining an established rock solid IR team.

Focus on the Process


In DFIR and information security in general, people have a tendency to focus on the tools one should use. The better approach and the one I take is to initially focus on the process one uses to leverage tools to accomplish something. Within incident response there are numerous processes that are dictated by an incident's classification. To make it more manageable as I started my journey into incident response I focused on one specific incident type (malicious code incidents). I set out to learn everything about what examination steps one uses to investigate a machine compromised with malicious code, what artifacts to parse, and the tools one uses.

My plan wasn't to only be skilled at malicious code incidents since my focus was on the larger incident response field. In addition to learning the technical skills and knowledge, I spent considerable time better understanding the larger incident response process: how the process should work, how to design the process, how to build and manage a CSIRT, and how to manage incidents. I even focused on incident response while I was going for my Master of Science in Information Assurance. I took the incident response management track as well as making incident response my focus on assignments where we had flexibility in choosing our own topics.

Focus on the Skill Set


Learning the processes is only the first step; my next step was to develop my skill set by carrying out those processes. I spent considerable time practicing the malicious code investigation process by compromising test systems and then examining them. In a future post I'll share how I did this so others can follow suit. I did this for months. In the beginning it was to learn the process, then to be more efficient, then to be faster.

As I was working towards my goal I kept my eyes open for the opportunities that come once in a lifetime. I knew I wasn't ready to approach my organization about doing IR work since I had to own it when I did. However, other opportunities presented themselves when family members and friends reached out to me as their "IT support guy" because their systems were infected. This allowed me to continue building my skill set while helping others. In addition to practicing on test systems, I began making it known to family and friends that I would fix their infected computers for free.

Search for the Opportunity


Opportunities have a tendency to just appear, but sometimes you have to seek them out. At the time I was well prepared with my knowledge and skill set in incident response, so I was confident I could own certain opportunities if I found them. I started to pursue my first preference for doing IR work, which was for my current organization. I didn't ask them to send me to training or to let me help them with their incident response process. Instead I wanted them to see the value in what IR can do for an organization besides putting out fires, but I had to do it in a way that complemented my skills.

I got the word out to the other security units that I could assist them with any infected systems. I made two things clear. First, I would tell them what the root cause was so they could start to mitigate infections by strengthening their controls. I knew root cause analysis wasn't consistently being done, and for the security units to have access to this new skill set was instant value for them. My second point was a calculated risk, but I made it clear I would be faster than their current process as well as the IT shops who re-image infected systems. If I was going to be doing the work it had to be faster than their current processes; if it wasn't, then why should they even bother with me. I knew being faster would add value to the organization by freeing up FTEs (full time employees) to do other work.

I kept putting out occasional reminders to the security units about my offer, as well as keeping my supervisor on board for me to do this work. I can't remember how long this selling went on for (maybe a month or two) but my opportunity finally presented itself. There was an infected machine and they wanted to know the root cause. This was my shot and I knew there were two outcomes. If I came back with nothing, or if my response was that I couldn't do this work without training, then they probably wouldn't have come back to me for help again. If I nailed it and showed them the value in root cause analysis for minor malicious code events, then maybe I would do this work more frequently. Needless to say, the preparation I did on my own enabled me to nail the examination and I came through on the two points I sold to them to get their buy-in. Nailing the first examination wasn't enough because I had to own this and lose myself in the DFIR music.

Own the Opportunity


My organization and I had a taste of using the IR skill set for security events that were not considered to be incidents. Now I had to own this opportunity. I continued working to improve my skill set through compromising test systems and helping anyone who asked. I continued buying and reading DFIR books as well as blogs, papers, articles, etc. I continued to hone my process to make it faster. I sacrificed my free personal time to live and breathe DFIR. The requests for malicious code assistance kept coming in, and each time I was better than the last. I kept getting faster and I kept showing my organization more value in what IR can do.

As I said, opportunities have a tendency to present themselves. After some time building up this working relationship there was a priority security incident. A highly visible website was potentially compromised and a determination about what happened had to be made as soon as possible. The case was mine if I wanted it, and I knew I was prepared due to the months I lost myself in the DFIR music. This opportunity was different and had more at stake. My organization leveraged a third party IR service for priority incidents, and in this incident my organization used this service in addition to my assistance. To make the stakes even higher, initially we (myself and the third party) were not allowed to communicate with each other. This was an opportunity for me not only to reassure myself of my place in the IR field but to own my place in my organization's incident response process. I worked the case with my co-worker (a network penetration tester with zero DFIR experience) and we were able to come back with answers before the third party service. In the end, the server wasn't compromised and everyone could stand down.

I continued losing myself in the DFIR music and owned each new opportunity that presented itself. This journey has led to where I am today. I'm building out my organization's enterprise-wide incident response capability, developing our CSIRT, and improving our response capability by making it faster. I'm improving our detection capability by architecting and managing our SIEM deployment as well as combining our detection and response capabilities.

Lose Yourself in the DFIR Music


The path that led me to become an incident responder has been very challenging but rewarding. It required sacrifices and a lot of work to be prepared for the opportunities that God put in my path. It requires constant motivation so I will be better tomorrow than I am today. It requires me to approach my career as if each opportunity may be the last. It requires me to have the mindset seen in the "Lose Yourself" lyrics.

"You better lose yourself in the music, the moment
You own it, you better never let it go
You only get one shot, do not miss your chance to blow
The opportunity comes once in a lifetime"



Exploring Windows Error Reporting

Monday, February 24, 2014 Posted by Corey Harrell
The Application Experience and Compatibility feature ensures compatibility of existing software between different versions of the Windows operating system. The implementation of this feature results in some interesting program execution artifacts that are relevant to Digital Forensic and Incident Response (DFIR). I already highlighted a few of these in my posts Revealing the RecentFileCache.bcf File and Revealing Program Compatibility Assistant HKCU AppCompatFlags Registry Keys. There are more artifacts associated with this feature, and Windows Error Reporting (WER) is one of them. Over the past few months WER has been discussed frequently due to the potential data it exposes when data is sent to Microsoft. However, WER can be a useful program execution artifact for incident response since malicious code - such as malware and exploited applications - can crash on systems. This short post discusses WER and illustrates how it is helpful for tracking malware on a system.

What is Windows Error Reporting


Windows Error Reporting is basically a feature to help solve problems associated with programs crashing on the Windows operating system. Windows Internals, Part 1 (covering Windows Server 2008 R2 and Windows 7) goes into more detail by stating:

"WER is a sophisticated mechanism that automates the submission of both user-mode process crashes as well as kernel-mode system crashes."

The service analyzes the crashed application's state and builds context information surrounding the crashed program. The book continues by saying:

On default configured systems, an error report (a minidump and XML file with various details, such as the DLL version numbers loaded in the process) is sent to Microsoft's online crash analysis server. Eventually, as the service is notified of a solution for a problem, it will display a tooltip to the user informing her of steps that should be taken to solve the problem.

How Does Windows Error Reporting Work?


There are two registry keys responsible for WER's configuration. These keys are listed below; the first key affects system-wide behavior while the second is user specific.

HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\Windows Error Reporting
HKEY_CURRENT_USER\Software\Microsoft\Windows\Windows Error Reporting
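
To see what is actually configured under these keys on a given system, you can simply enumerate their values. Below is a minimal sketch using Python's winreg module; value names vary by system and configuration, so it just dumps whatever is present rather than assuming specific settings.

import winreg

SUBKEY = r"Software\Microsoft\Windows\Windows Error Reporting"

for hive, name in ((winreg.HKEY_LOCAL_MACHINE, "HKLM"),
                   (winreg.HKEY_CURRENT_USER, "HKCU")):
    try:
        with winreg.OpenKey(hive, SUBKEY) as key:
            print(f"[{name}\\{SUBKEY}]")
            i = 0
            while True:
                try:
                    # EnumValue raises OSError once all values are listed.
                    value_name, value_data, _type = winreg.EnumValue(key, i)
                    print(f"  {value_name} = {value_data}")
                    i += 1
                except OSError:
                    break
    except OSError:
        print(f"[{name}\\{SUBKEY}] not present")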


The best resource I found explaining how WER works is a paper written by 0xdabbad00. Their paper is titled Notes on Windows Error Reporting and the actual PDF can be found here. The paper "attempts to better explain what is and is not possible and to generalize the attack classes for all error reporting" and touches on the following key points:

        - What traffic is sent unencrypted and what is sent encrypted
        - What data is in the unencrypted traffic

I won't try to rehash what is written in the paper since it really goes into great detail. Anyone who wants to know more about WER should read this.

What Artifacts Are Left By Windows Error Reporting?


One item I really liked about the Notes on Windows Error Reporting paper is its Appendix. The focus of the paper is on explaining the WER feature but the Appendix provides some useful DFIR tidbits about the WER artifacts present on the system. These artifacts are important because they show a program was running on the system and it eventually crashed. In the past, WER artifacts have given me more context about the other program execution artifacts located on a system. The WER artifacts outlined in the Appendix include: event logs, WER folder, AppCompat.txt file, and WERInternalMetadata.xml file.

WER records an entry in the event log when a crashed application is analyzed and then another event log entry is recorded if information is sent to Microsoft. The Appendix shows what this event log looks like including the information it contains. The event log also shows that the WER folder is located at C:\Users\username\AppData\Local\Microsoft\Windows\WER.

The paper also explains what the AppCompat.txt and WERInternalMetadata.xml files are, while the Appendix shows the information stored in these files. Either one of the files provides a wealth of information about the program that crashed, such as the parent process, parent process command line, and process path.

Additional Information about Windows Error Reports


I wanted to provide additional information about one WER artifact mentioned in the paper: the actual Windows Error Reports themselves. A Windows Error Report records a ton of information about a program that was running at some point in the past. To illustrate, I'll walk through a WER for a piece of malware that crashed on a system. The screenshot below shows the beginning of a report; some of the information shown is when the program crashed and that the program was 32-bit (notice the WOW64).



The next portion of the report starts to provide information about the crashed program.


A little bit further down in the report you can see part of the user interface message as shown below.


The report even recorded the program's loaded modules at the time of the crash. This section contains the file path to the crashed application and in this instance the program is highly suspicious (executable launching from a temp folder).


The end of the report contains the last piece of useful information about the crash.


A search on the AppName in the Malware Analysis Search provides some leads about what malware was present on the system. It leads to VirusTotal reports and sandbox reports showing malware crashing, such as this one.
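
If you want to sweep a system for these reports rather than eyeball them one at a time, the report files can be scanned with a short script. Below is a sketch assuming the commonly seen layout: Report.wer files under the user's WER ReportArchive folder, stored as UTF-16 key=value text. The key names it looks for are assumptions, so adjust to what your reports actually contain.

import os

# Per-user WER reports; ReportQueue holds reports not yet sent (assumption
# based on commonly documented locations).
wer_root = os.path.expandvars(
    r"%LOCALAPPDATA%\Microsoft\Windows\WER\ReportArchive")

for dirpath, _dirs, files in os.walk(wer_root):
    for fname in files:
        if not fname.lower().endswith(".wer"):
            continue
        report = os.path.join(dirpath, fname)
        print(f"--- {report}")
        with open(report, encoding="utf-16") as f:
            for line in f:
                key = line.split("=", 1)[0]
                # Keys naming the crashed program (e.g., AppName, AppPath)
                # are the quick wins; the exact names are assumptions.
                if "app" in key.lower():
                    print("   ", line.strip())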

WER's Relevance


WER provides more artifacts that show program execution. Overall this artifact is not as beneficial as the other program execution artifacts, but once in a while malicious code will crash or cause an application to crash. When this occurs, WER provides more context about what occurred on the system, and the relevance of the listed executable means the following:

1.  The program executed on the system.
2.  The program crashed on the system.
3.  The data in the WER artifacts is information about the program at the time it was running and crashed on the system.


Linkz 4 Mostly Malware Related Tools

Tuesday, February 11, 2014 Posted by Corey Harrell
It's been a while, but here is another Linkz edition. In this edition I'm sharing information about the various tools I came across over the past few months.

Process Explorer with VirusTotal Integration

By far the most useful tool released this year is the updated Process Explorer, since it now checks running processes against VirusTotal. This added feature makes it very easy to spot malicious programs and should be a welcome toolset addition for those who are constantly battling malware. To turn on the functionality all you need to do is select the "Check Virustotal" option from the Options menu.

 
After it is selected, the new VirusTotal column appears, showing the results as shown below:


The hyperlinks in the VirusTotal results can be clicked, which brings you to the VirusTotal report. The new functionality is really cool and I can see it making life easier for those who don't have DFIR skills to find malware, such as IT folks. Hence my comment about this being the most useful tool released. The one thing I should warn others about is to think very hard before enabling the "Submit Unknown Executables" option, since this will result in files being uploaded to VirusTotal (and thus available for others to download).

Making Static Analysis Easier

I recently became aware of this tool from people tweeting about it. PEStudio "is a tool that can be used to perform the static investigation of any Windows executable binary." It quickly parses an executable file, presenting you with indicators, VirusTotal results, imports, exports, strings, and a whole lot more as shown below.




Automating Researching URLs, Domains, and IPs

The next tool up to bat automates researching domains, IPs, hashes, and URLs. It's a pretty slick tool and I can see it being an asset when you need to get information quickly. TekDefense describes Automater as "a URL/Domain, IP Address, and Md5 Hash OSINT tool aimed at making the analysis process easier for intrusion Analysts." If you are tasked with doing this type of analysis then you will definitely want to check out this tool. The screenshot below is part of the report generated for the MD5 hash ae2fc0c593fd98562f0425a06c164920; the hash was easily obtained from PEStudio.


Noriben - Portable Dynamic Analysis Tool

The next tool makes the dynamic analysis process a little easier. "Noriben is a Python-based script that works in conjunction with Sysinternals Procmon to automatically collect, analyze, and report on runtime indicators of malware. In a nutshell, it allows you to run your malware, hit a keypress, and get a simple text report of the sample's activities." To see this tool in action you can check out Brian Baskin's post Malware with No Strings Attached - Dynamic Analysis; it's an excellent read. In order to get a screenshot, I ran the previous sample inside a virtual machine with Noriben running.


I'm Cuckoo for Cocoa Puffs

You can probably guess what my kids ate for breakfast, but this next tool is not child's play. Version 1 of the Cuckoo Sandbox has been released; the download is available on their download page. For those who don't want to set up their own in-house sandbox, you can use Malwr (the online version).

Pinpointing

The next tool comes courtesy of Kahu Security. The best way to explain the tool is to use the author's own words from his post Pinpoint Tool Released.

"There are many times where I come across a drive-by download, especially malvertisements, and it takes me awhile to figure out which file on the compromised website is infected. I wrote Pinpoint to help me find the malicious objects faster so I can provide that info to webmasters for clean-up purposes."

The post provides some examples of the tool's use, as does their most recent post Pinpointing Malicious Redirects (a nice read, by the way). You can grab the tool from the tools page.

What You See May Not Be What You Should Get

I thought I'd share a post highlighting shortcomings in our tools while I'm on the topic of malware. Harlan posted his write-up Using Unicode to hide malware within the file system and it is a well written post. The post discusses an encounter with a system impacted by malware and the anti-forensic techniques used to better hide on the system. One method used was to set the file attributes to hidden and system, to hide a folder from the default view settings. The second method, and the more interesting of the two, was the use of Unicode in the file name path. What the post really highlighted was how multiple tools - tools that are typically in the DFIR toolset - do not show the Unicode in the file path. This would make it possible for anyone looking at a system to overlook the directory and possibly miss an important piece of information. This is something to definitely be aware of with the tools we use to process our cases.

Do You Know Where You Are? You're In The NTFS Jungle Baby

If you haven't visited Joakim Schicht's MFT2CSV website lately then you may have missed the tools he updated on his downloads page. The tools include: LogFileParser, which parses the NTFS $LogFile (the only open source $LogFile parser available), mft2csv, which parses the $MFT file, and UsnJrnl2Csv, which parses the Windows Change Journal. The next time you find yourself in the NTFS jungle you may want to visit the mft2csv site to help you find your way out.

Still Lost in the NTFS Jungle

Rounding out this linkz post is a collection of tools from Willi Ballenthin. Willi had previously released tools such as his INDX parser and python-registry. Over the past few months he has released some more NTFS tools. These include: list-mft to timeline NTFS metadata, get-file-info to inspect $MFT records, and fuse-mft to mount an $MFT. I haven't had the time to test out these tools yet, but it's at the top of my list.

My Journey into Academia Part Two

Tuesday, January 28, 2014 Posted by Corey Harrell
I have always maintained a strong separation between jIIr - which is my personal blog - and the work I do for my employers. For one time, for one post, I'm blurring the lines between my personal publishing platform and work I did for an employer. I previously posted about My Journey into Academia where I discussed why I went from DFIR practitioner to DFIR educator. I'm continuing the journey by discussing the course I designed for Champlain College's Master of Science in Digital Forensic Science (MSDFS) program. My hope is that by sharing this information it will be beneficial to those developing DFIR academia curriculum and those going from DFIR practitioner to DFIR educator.

Why Academia Recap


My Journey into Academia post goes into detail about the issue where some academic programs are not preparing their students for a career in the digital forensic and incident response fields, and how students coming out of these programs are unable "to analyze and evaluate DFIR problems to come up with solutions." In the words of Eric Huber in his post Ever Get The Feeling You’ve Been Cheated?:

"It’s not just undergraduate programs that are failing to produce good candidates. I have encountered legions of people with Masters Degrees in digital forensics who are “unfit for purpose” for entry level positions much less for positions that require a senior skill level."

As I said before, my choice was simple: to use my expertise and share my insight to improve curriculum; to put together a course to help students in their careers in the DFIR field.

Why Champlain College


Years ago I went to my first DFIR conference where I met Champlain's MSDFS program director at the time. One of the discussions we had was about the program being put together. A program that was supposed to not only help those looking to break into the field but to benefit those already in the field by covering advanced forensic topics.

Years later, when an opportunity presented itself for me to develop the Malware Analysis course for Champlain's MSDFS program, I remembered this discussion. I always heard great things about Champlain's programs and it was humbling to be offered this chance. It was even better when I took a look at the MSDFS curriculum and saw courses like Operating System Analysis, Scripting for Digital Forensics, Incident Response and Network Forensics, and Mobile Device Analysis, with Malware Analysis now added to the mix. The curriculum looks solid, covering a range of digital forensic topics; a program I could see myself associated with.

Those who don't want to pursue a Master's degree can have access to the same curriculum through the Certificates in Digital Forensic Science option, an option I learned about recently.

DFS540 - Malware Analysis Course


How can you reverse malware if you are unable to find it? How can you find malware without identifying what the initial infection vector is? How can you identify the initial infection vector without knowing what artifacts to look for? How can you consistently explore artifacts without a repeatable process to follow? How can you carry out a repeatable process without having the tools to do so? I could go on, but this adequately reflects the thought process I went through when developing the curriculum. Basically, the course had to explore malware in the context in which a DFIR practitioner will encounter it. To me, the best way to approach building the course was by starting with the course objectives, and the following were the ones created (posted with permission):

     -  Evaluate the malware threat facing organizations and individuals
     -  Identify different types of malware and describe their capabilities including propagation and persistence mechanisms, payloads and defense strategies
     -  Categorize the different infection vectors used by malware to propagate
     -  Examine an operating system to determine if it has been compromised and evaluate the method of compromise
     -  Use static and dynamic techniques to analyze malware and determine its purpose and method of operation
     -  Write reports evaluating malware behavior, methods of compromise, purpose and method of operation

Course Textbooks


After the course objectives were created, the next item I took into consideration was the reading materials. One common question people ask when discussing academic courses is what the courses' textbooks are. I usually have mixed feelings about this question since the required readings always extend beyond textbooks. The Malware Analysis course is no different; the readings include white papers, articles, blogs, reports, and research papers. However, knowing the textbooks does provide a glimpse of a course, so the ones I selected (as of the date this post was published) were:

Szor, P. (2005). The Art of Computer Virus Research and Defense. Upper Saddle River: Symantec Press.

Russinovich, M., Solomon, D., & Ionescu, A. (2012). Windows Internals, Part 1 (6th ed.). Redmond: Microsoft Press.

Honig, A., & Sikorski, M. (2012). Practical Malware Analysis: The Hands-On Guide to Dissecting Malicious Software. San Francisco: No Starch Press.

Ligh, M., Adair, S., Hartstein, B., & Richard, M. (2011). Malware Analyst’s Cookbook and DVD: Tools and Techniques for Fighting Malicious Code. Indianapolis: Wiley Publishing.

Course Curriculum


I started creating the curriculum after the objectives were all set, the readings were identified, and most of my research was completed (trust me, I did plenty of research). When building out a curriculum it's helpful to know what format you want to use. Similar to my collegiate experience (both undergraduate and graduate), I selected traditional teaching methods for the course such as: lectures, readings, discussions, laboratories, and assignments. I made every effort to make sure the weekly material covered by each teaching method tied together, as well as tying into the material from the other weeks.

Developing Weekly Content


To illustrate how the weekly material ties together I thought it would be useful to discuss one week in the course: the second week's material, which explores anti-forensics and Windows internals. Each week (eight in total) begins with a recorded lecture. The lectures either convey information that is not discussed in the readings - such as a process - or re-enforce the information found in the readings and lab. The week two lecture explores anti-forensics, including what it is, its strategies, and its techniques to defeat post mortem analysis, live analysis, and executable analysis. When dealing with malware, whether on a live system (including memory images), a powered down system, or the malware itself, it's critical to know and understand the various anti-forensics techniques it may leverage. Not doing so may result in an analyst missing something or reaching false conclusions. Plus, anti-forensics techniques provide details about malware functionality and can be used as indicators to find malware.

The week two readings continue exploring anti-forensics as well as the self protecting strategies used by malware. The readings also go in-depth on Windows internals, covering items such as system architecture, the registry, processes, and threads. All of the readings explore topics crucial for malware forensics and analysis.

Even though this is an online course, I placed a strong emphasis on the weekly labs so students explore processes, techniques, and tools. The week two lab ties together the week's other material by exploring how two malware samples use the Windows application programming interface to conceal data with different anti-forensic techniques.

The remaining weeks in the course tie the material together in a similar way to what I described for the second week. (There is more to the second week's material but I didn't think it was necessary to disclose every detail.) The topics explored in the other weeks include but aren't limited to:

     -  Malware trends
     -  Malware characteristics
     -  Memory forensics
     -  Malware forensics
     -  Timeline analysis
     -  Static and dynamic malware analysis

Assignments


The curriculum wouldn't be complete without requiring the students to complete work. In addition to weekly discussions about engaging topics, there need to be assignments that engage and challenge students. The assignments need to tie back to the course objectives, and this was another area I made sure of. I'd rather not disclose the assignments I put together for the course or my thought process behind creating them. However, I will discuss one assignment that I think truly reflects the course: the final assignment. Students are provided with a memory image and a system forensic image and then are tasked with meeting certain objectives. Some of these objectives are: finding the malware, identifying the initial infection vector, analyzing any exploits, analyzing the malware, and conveying everything they did along with their findings in a digital forensic report.

Summary


In the end, the Malware Analysis course is just one of the courses in Champlain's MSDFS and certificate in digital forensic science programs; a course I would have killed to take in my collegiate career. A course with material that cannot be found in any training.

Developing the course has been one of the most challenging and rewarding experiences in my DFIR career. Not only did I develop this course but I'm also the instructor. One of the best experiences so far in my journey into academia has been watching the growth of students: seeing students who never worked a malware case successfully find malware, identify the initial infection vector, analyze the malware, and then communicate their findings; and seeing students who already had DFIR experience become better at working a malware case and explaining everything in a well written report. My posts about academia may come to a close but the journey is only just beginning.

 