Building Timelines – Tools Usage

Sunday, September 25, 2011 Posted by Corey Harrell
Tools are defined as anything that can be used to accomplish a task or purpose. For a tool to be effective, some thought has to go into how to use it. I have a few saws in my garage, but before I try to cut anything with them I first come up with a plan for what I’m trying to accomplish. Timeline tools are no different, and their usage shouldn’t solely consist of running commands. The post Building Timelines – Thought Process Behind It discusses an approach to develop a plan for how timeline tools will be used. This post is the second part, where the tools used to build timelines are discussed.

There is not a single tool for building timelines since tools vary based on the DFIR practitioner’s needs and preferences. When I first started learning about timeline analysis I read as much as I could about the technique and downloaded various tools to test their capabilities to see what worked best for me. I’m discussing my current method and a few tools that I build timelines with. The method is different from what I was doing last month and will probably change down the road as tools are updated, new tools are released, and my needs/preferences vary.

I’m trying to show different ways timelines can be built in addition to building my own timeline for an infected Windows XP SP3 test system. The artifacts selected for my timeline are: event logs, Internet Explorer history, XP firewall logs, prefetch files, Windows restore points, select registry keys, entire registry hives, and the file system metadata. The user-specific artifacts (i.e. the Internet Explorer history and the registry keys from the NTUSER.DAT hive) only need to be parsed for the administrator user account. The extraction of the timestamps from those artifacts will be accomplished in the following activities:

        -  Artifact Timestamps
        -  File System Timestamps
        -  Registry Timestamps

Tools’ Output

Before a timeline can be created one must first choose what format to use for the tools’ output. Selecting the format up front ensures multiple tools’ outputs can go into the same timeline. Three common output types are: bodyfile, TLN, and comma-separated value (csv). The bodyfile format shows file activity and separates the output into different fields. The version in use will determine what the fields are, but the Sleuthkit Wiki bodyfile page explains the differences and provides an example. The TLN format breaks the data up into five fields: time, source, host, user, and description. Harlan provided a great description of his format in the post Timeline Analysis...do we need a standard? and in Appendum for the post TimeLine Analysis, pt III. The csv format stores data so it is separated by rows and columns. This format works well for viewing timeline data in spreadsheets. However, unlike the bodyfile and TLN formats, csv is not a standard format. The csv schemas used by different tools may differ, resulting in the need for additional processing before the outputs can go into the same timeline. Kristinn’s post Timeline Analysis 201 – review the timeline explains the csv schema used in his Log2timeline tool.
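To make the differences concrete, below is a rough sketch of what a single entry could look like in the bodyfile and TLN formats. The field layouts follow the references above, but the values (path, host, timestamps) are made up for illustration:

0|C:/WINDOWS/Prefetch/EXAMPLE.EXE-12345678.pf|5123|r/rrwxrwxrwx|0|0|12288|1316476800|1316476800|1316476800|1316390400

1316476800|FILE|XP-TESTBOX|Administrator|MACB C:/WINDOWS/Prefetch/EXAMPLE.EXE-12345678.pf

The bodyfile line is pipe-delimited as MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime (the Sleuthkit 3.x layout), while the TLN line only carries time|source|host|user|description.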

I mostly review timelines with spreadsheet programs so I opted for Log2timeline’s csv format. I use Log2timeline to convert other tools’ outputs into that csv schema. My timeline in this post uses the csv format, and I demonstrate how to convert between the different formats.

Artifact Timestamps

I couldn’t come up with a good name when I was thinking about how to explain the different activities I do when creating timelines. What I mean when I say artifact timestamps is everything except for the last write times from dumped registry hives and the timestamps from the file system. The different tools to extract timestamps from artifacts include Harlan’s timeline tools and Log2timeline. Harlan’s tools are posted on the Win4n6 Yahoo group and come with a great step-by-step guide on building timelines with them. I cover how to use Log2timeline and the following is a brief explanation of the tool’s syntax:

log2timeline.pl -z timezone -f plugin/plugin_file -r -w output-file-name log_file/log_dir

        -z defines the timezone for the computer where the artifacts came from
        -f specifies the plugin or plugin file to run against the file/directory
        -w specifies the file to write the output to
        -r makes log2timeline work in recursive mode so the folder specified and its subfolders are all examined for artifacts

Options to Extract Timestamps with Single Plugin or Default Plugin File

Log2timeline is plugin based, and the tool can execute a single plugin against a single file/directory or execute a plugin file against multiple files across directories. I prefer to use custom plugin files for my timelines, but first I wanted to show the single plugin and default plugin file methods. The command below will execute the evt plugin to parse the Security Windows event log, and the output will be written to a file named fake-timeline.csv.

log2timeline.pl -z local -f evt -w fake-timeline.csv F:\WINDOWS\system32\config\SecEvent.Evt
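As an aside, if you are unsure which plugins or plugin files your install supports, the tool can list its input modules. I believe the 0.60-era versions did this with the command below; if the switch differs on your copy, log2timeline.pl -h will show the right one:

log2timeline.pl -f list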

The single plugin method requires multiple commands to extract timestamps from different artifacts on a system. Plugin files address the multiple command issue since a plugin file contains a list of plugins to run. Log2timeline comes with a few default plugin files, and the one that best fits my selected artifacts is the winxp plugin file. The command below runs the winxp plugin file against the entire mounted forensic image (the plugin file, the -r switch, and the target are what differ from the previous command).

log2timeline.pl -z local -f winxp -w fake-timeline.csv -r F:\

The winxp plugin file makes things a lot easier since only one command has to be typed. However, the file parses a lot more data than I actually need. The plugins executed are: chrome, evt, exif, ff_bookmark, firefox3, iehistory, iis, mcafee, opera, oxml, pdf, prefetch, recycler, restore, setupapi, sol, win_link, xpfirewall, wmiprov, ntuser, software, and system. I only wanted to parse IE history, but winxp runs every browser plugin supported by log2timeline. I only wanted to parse artifacts in the administrator’s user profile, but the above command parses artifacts from every profile on the system. I wanted to limit my timeline to specific artifacts, but winxp gives me everything. Not exactly what I’m looking for.

Single plugins and default plugin files are viable methods for building timelines. However, neither lets me easily build a timeline containing only my selected artifacts tailored to the case and system I’m processing. This is where custom plugin files come into play and why I use them instead.

Extracting Timestamps for my Timeline with Custom Plugin Files

Kristinn deserves all the credit for why I know about the ability to create custom plugin files. I’m just the guy who asked him the question and decided to blog the answer he gave me. A custom plugin file is a text file that lists one plugin per line and is saved with the .lst file extension. The picture below shows a custom file named test.lst; it contains plugins for prefetch files, event logs, and system restore points.

Custom Plugin File Example
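In case the picture doesn’t come through, the contents of test.lst are nothing more than the plugin names listed one per line. Based on the description above, the file would look roughly like this:

prefetch
evt
restore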

The custom file is placed in the same directory where the default plugin files are located. On a Windows system with Log2timeline 0.60 installed the directory is C:\Perl\lib\Log2t\input\.

I only want to parse artifacts in the administrator user profile instead of all user profiles stored on the system. At the time I wrote this post, Log2timeline doesn’t have the ability to exclude full paths (such as unwanted user profiles) when running in recursive mode. As a result I create two custom plugin files; one file parses the artifacts in a user profile while the other parses the remaining artifacts throughout the system. This lets me control which user profiles to extract timestamps from since I can run the user plugin file against the exact ones I need.

The user custom plugin file is named custom_user.lst and contains the iehistory and ntuser plugins. The other custom plugin file is named custom_system.lst and contains the evt, xpfirewall, prefetch, and restore plugins. The two commands below execute the custom_user.lst against the administrator’s user account profile and custom_system.lst against the entire drive while saving the output to the file timeline.csv.

log2timeline.pl -z local -f custom_user -w C:\win-xp\timeline.csv -r "F:\Documents and Settings\Administrator"

log2timeline.pl -z local -f custom_system -w C:\win-xp\timeline.csv -r F:\
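For reference, the two plugin files themselves are just as simple as the earlier example, one plugin name per line:

custom_user.lst:
iehistory
ntuser

custom_system.lst:
evt
xpfirewall
prefetch
restore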

The commands extracted the timestamps from all of the artifacts on my list except for the registry hives’ last write times and the file system timestamps. The picture shows the timeline built so far. The timeline is sorted and the section shown is where the prefetch file I referenced in the post What’s a Timeline is located.

Timeline Data Added by Custom Plugin File

Filesystem Timestamps

The filesystem timestamps step adds the activity involving files and directories to the timeline. There are different tools that extract this information, including FTK Imager, AnalyzeMFT, Log2timeline, and the Sleuthkit. I’m demonstrating two different methods of adding the data to my timeline to show how their results differ. The first method uses the Sleuthkit and Log2timeline while the second method only uses Log2timeline.

The fls.exe program in the Sleuthkit will list the files and directories in an image. The command below creates a bodyfile containing the files/directories’ activity in the test forensic image and stores the output in the file named fls-bodyfile.txt (the -m switch outputs in the mactime/bodyfile format with the given mount point prepended to each path, -r is for recursive mode, and -o is the sector offset where the filesystem starts).

fls.exe -m C: -r -o 63 C:\images\image.dd >> C:\win-xp\fls-bodyfile.txt
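If the sector offset isn’t already known, the Sleuthkit’s mmls tool will print the image’s partition layout so the filesystem’s starting sector can be read from it. A quick sketch against the same test image (your offsets will differ):

mmls.exe C:\images\image.dd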

Fls.exe’s output is in the bodyfile format but my timeline is in Log2timeline’s csv format. Log2timeline has plugins to parse output files in the TLN and bodyfile formats. This means the tool can be used to convert one format into another. The command below parses the fls-bodyfile.txt file and adds the data to my timeline.

log2timeline.pl -z local -f mactime -w C:\win-xp\timeline.csv C:\win-xp\fls-bodyfile.txt

The picture highlights the new entries to the section of my timeline. Doesn’t the story about what occurred become clearer?

Timeline Data Added by fls.exe

The file system in the Windows XP test system is NTFS. NTFS stores two sets of timestamps: the $FILE_NAME attribute timestamps and the $STANDARD_INFORMATION timestamps. Fls.exe, along with the majority of other forensic tools, shows the $STANDARD_INFORMATION timestamps. However, there may be times when it’s important to include both sets of timestamps in a timeline. One such occurrence is when there’s a concern that timestamps might have been altered. Parsing the Master File Table ($MFT) can add both sets of timestamps to a timeline. The command below shows Log2timeline parsing the $MFT and adding the output to the file timeline-copy.csv.

log2timeline.pl -z local -f mft -w timeline-copy.csv F:\$MFT

The picture below highlights the new entries for the data extracted from the $MFT. Notice the difference between the timeline containing only the $STANDARD_INFORMATION timestamps and the one containing both sets. Quick side note: the mft plugin could be added to a custom plugin file.

Timeline Data Added by $MFT

Registry Timestamps

In the artifact timestamps section Log2timeline extracted data from select registry keys. However, there are times when I want all registry keys’ last write times from the registry hives. So far I want this ability when dealing with malware infections since it helps identify the persistence mechanism and registry modifications. The tools to extract the last write times from registry hives include Harlan’s regtime.pl script (I obtained it from the SIFT 2.0 workstation) and Log2timeline. For my timeline I’m interested in the System, Software, and administrator’s NTUSER.DAT registry hives. The commands below have regtime.pl extracting the last write times from each hive and storing them in the bodyfile named reg-bodyfile.txt (the -m switch prepends the text to each line and the -r switch is the path to the registry hive).

regtime.pl -m HKLM/system -r F:\Windows\System32\config\system >> C:\win-xp\reg-bodyfile.txt

regtime.pl -m HKLM/software -r F:\Windows\System32\config\software >> C:\win-xp\reg-bodyfile.txt

regtime.pl -m HKCU/Administrator -r "F:\Documents and Settings\Administrator\NTUSER.DAT" >> C:\win-xp\reg-bodyfile.txt

Regtime.pl’s output is in the bodyfile format so Log2timeline makes the format conversion as shown in the command below.

log2timeline.pl -z local -f mactime -w C:\win-xp\timeline.csv C:\win-xp\reg-bodyfile.txt

The picture highlights the new data regtime.pl and Log2timeline added to the timeline. The timeline now shows the malware’s persistence mechanisms (run and services registry keys).

Timeline Data with Registry Keys' Last Write Times

Sorting the Timeline

When new data is added to a timeline it’s placed at the end of the file, which means the timeline needs to be sorted prior to viewing it. There are different sorting options, such as the mactime.exe program in the Sleuthkit for timelines in the bodyfile format (an example is shown after the picture below). A quick method I use is my spreadsheet program’s sort feature. The settings below will make Excel sort from the oldest time to the most recent.

Excel 2007 Sort Feature
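For timelines kept in the bodyfile format, mactime.exe can handle the sorting instead of a spreadsheet. Below is a rough sketch using the fls output created earlier; the -b switch points at the bodyfile and -d writes comma-delimited output, but double-check the switches against your Sleuthkit version:

mactime.exe -b C:\win-xp\fls-bodyfile.txt -d > C:\win-xp\fls-sorted.csv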

Summary

The approach described in my Building Timeline series is just one way out of many to create timelines. The DFIR community has provided a wealth of information on the topic. Look at the following examples, which are only a drop in the bucket of knowledge. Harlan Carvey created and released tools for creating timelines in addition to regularly posting on his blog (a few posts are HowTo: Creating Mini-Timelines and A Bit More About Timelines...). Kristinn Gudjonsson is very similar in that he created and released log2timeline in addition to providing information on his websites (a few posts are Timeline Analysis 101 and Timeline Analysis 201 – review the timeline). Rob Lee has shared his approach to the way he builds timelines, and two of his posts are SUPER Timeline Analysis and Creation and Shadow Timelines And Other VolumeShadowCopy Digital Forensics Techniques with the Sleuthkit. Chris Pogue has shared his method to create timelines on his blog, and a few posts are Log2Timeline and Super Timelines and Time Stomping is for Suckers. The last author I’ll directly mention is Don Weber, who released his scripts for creating timelines and blogged about creating them (one post is Hydraq Details Revealed Via Timeline Analysis). These are only a few tools, blog posts, and authors who have taken the time to share their thoughts on timeline analysis. To see more, try the keyword “timeline” in the Digital Forensic Search.

For anyone looking to become more proficient at timeline analysis, I recommend doing what I did. Read everything you can find on the topic, download and test the different tools people talk about, and try out different approaches to see how the resulting timelines differ. It won’t only teach you about timeline analysis but will also help identify what method and tools work best for you.

Building Timelines – Thought Process Behind It

Saturday, September 17, 2011 Posted by Corey Harrell
Timelines are a valuable technique to have at your disposal when processing a case. They can reveal activity on a system that may not be readily apparent, or show the lack of certain activity, helping rule theories out. Timelines can be used on cases ranging from human resource policy violations to financial investigations to malware infections to even auditing. Before the technique can be used one must first know how to build them.

This is the first post in my two part series on building timelines. Part 1 discusses the thought process behind building timelines while Part 2 demonstrates different tools and methods to build them.

Things to Consider

There are two well known approaches to building timelines. On one hand is the minimalist approach: only include the exact data needed. On the other hand is the kitchen sink approach: include all the data that is available. My approach falls somewhere in the middle. I put the data I definitely need into timelines plus some data I think I may need. The things I take into consideration when selecting data are:

        - Examination’s Purpose
        - Identify Data Needed
        - Understand Tools’ Capabilities
        - Tailor Data List to System

Examination’s Purpose

The first thing I consider when building timelines is the examination’s purpose. Every case should have a specific purpose or purposes the DF analyst needs to accomplish. For example: did an employee violate an acceptable usage policy, how was a system infected, how long was a web server compromised, or where are all the Word documents on a hard drive?

Identify Data Needed

The next area to consider is what data is needed to accomplish the purpose(s). This is where I make a judgment about the artifacts I think will contain relevant information and the artifacts that could contain information of interest. A few potential data sources and their artifacts are:

        - Hard drives: file system, web browsing history, registry hives, Windows short cut files, firewall logs, restore points, volume shadow copies, prefetch files, email files, or Office documents

        - Memory: network connections, processes, loaded dlls, or loaded drivers

        - Network shares: email files (including archives), office documents, or PDFs

        - Network logs: firewall logs, IDS logs, proxy server logs, web server logs, print/file server logs, or authentication server logs

I take into account the case type and examination's purpose(s) when picking the artifacts I want. To illustrate the effect case type has on my choice I'll use a malware infected system and an Internet usage policy violation as examples. For the malware infected system I'd definitely be interested in the artifacts showing program execution, firewall logs, antivirus logs, and file system metadata. The additional items I'd throw into a timeline would be the user's web browsing history, removable media usage, and registry keys' last write times since those artifacts might show information about the initial infection vector and persistence mechanism. For an Internet usage policy violation I'd only include the file system metadata and web browsing history since my initial interest is limited to the person’s web browsing activities.

The examination purpose(s) will point to other artifacts of interest. Let's say the Internet usage policy violation's purpose was to determine whether an employee was surfing pornographic websites and whether they were saving pornographic images to the company issued thumb drive. In addition to file system metadata and web history, I’d now want to include artifacts showing recent user activity such as Windows shortcut files or the userassist registry key.

I try to find a balance between the data I know I'll need and the data that may contain relevant information. I don't want to put everything into the timeline (kitchen sink approach) but I'm trying to avoid frequently adding more data to the timeline (minimalist approach). Finding a balance between the two lets me create one main timeline with the ability to create mini timelines using spreadsheet filters. Making the call about what data to select is not going to be perfect initially. Some data may not contain any information related to the examination while other left out data is going to be important. The important thing to remember is building timelines is a process. Data can be added or removed at later times which means thinking about data to incorporate into a timeline should occur continuously. This is especially true as more things are learned while processing the case.

Understand Tools’ Capabilities

After the examination’s purpose(s) are understood and the potential data required to accomplish them is identified, the next consideration is understanding my tools’ capabilities. Timeline tools provide different support for the artifacts they can parse. I review the items I want to put into my timeline against the artifacts supported by my tools to identify what in my list I can’t parse. If any items are not supported then I decide if the item is really needed and whether there is a different tool that will work. Another benefit to making this comparison is that it helps identify artifacts I might not have thought about. The picture below shows some artifacts supported by the tools I’ll discuss in the post Building Timelines – Tools Usage.


Some may be wondering why I don’t think about the tools’ capability before I consider the data I need to accomplish the examination’s purpose(s). My reason is that I don’t want to restrict myself to the capability provided by my tools. For example, none of my commercial tools are able to create the timelines I’m talking about. If I based my decision on how to accomplish what I need to do solely on my commercial tools then timelines wouldn’t even be an option. I’d rather first identify the data I want to examine and then determine if my tools can parse it. This helps me see the shortcomings in my tools and lets me find other tools to get the job done.

Tailor Data List to System

At this point in the thought process potential data has been identified to put into a timeline. A timeline could be built now even though the artifact list is pretty broad. My preference is to tailor the list to the system under examination. To see what I mean I’ll discuss a common occurrence I encounter when building timelines, which is including a user account’s web browser history. Based on my tools’ supported artifacts, the web browsing artifacts could be from: Google Chrome, Firefox 2, Firefox 3, Internet Explorer, Opera, or Safari. Is it really necessary to have my tools search for all these artifacts? If the system only has Internet Explorer (IE) installed then why spend time looking for the other items? If the same system has 12 loaded user profiles but the examination is only looking at one user account then why parse the IE history for all 12 user profiles? To minimize the time building timelines and reduce the amount of data in them the artifact list needs to be tailored to the system. A few examination checks will be enough to narrow down the list. The exact checks will vary by case but one step that holds across all cases is obtaining information about the operating system (OS) and its configuration.

I previously discussed this examination step in the post Obtaining Information about the Operating System and it covers the three different information categories impacting the artifact list. The first category is the General Operating System Information and it shows the operating system version. The version will dictate whether certain artifacts are actually on the system since some are OS specific. The second category is the User Account Information which shows the user accounts (local accounts as well as accounts that logged on) associated with the system. When building a timeline it’s important to narrow the focus to the user accounts under examination; this is even more so on computers shared by multiple people. Identifying the user accounts can be done by confirming the account assigned to a person, looking at the user account names, or looking at when the user accounts were last used. The third and final category is the Software Information. The category shows information about programs installed and executed on the system. The software on a system will dictate what artifacts are present. Quickly review the artifacts supported by my tools (picture above) to see how many are associated with specific applications. This one examination step can take a broad list and make it more focused to the environment where the artifacts are coming from.
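To give a feel for how little effort that examination step takes, those three categories can be pulled from the registry hives with RegRipper’s command line tool. The sketch below is how I’d approach it against a mounted image; the plugin names are from memory and should be checked against the plugin list that ships with your copy (rip.pl -l prints it):

rip.pl -r F:\WINDOWS\system32\config\software -p winver
rip.pl -r F:\WINDOWS\system32\config\software -p profilelist
rip.pl -r F:\WINDOWS\system32\config\software -p uninstall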

Select Data for the Timeline

I reflect on the things I considered when coming up with a plan for how to build the timeline. The examination's purpose outlined what I need to accomplish, potential data I want to examine was identified, my tools' capabilities were reviewed to see what artifacts can be parsed, and then checks were made to tailor the artifact list to the system I’m looking at. The list I’m left with afterwards is what gets incorporated into my first timeline. Working my way through this thought process reduces the amount of artifacts going into a timeline, thus reducing the amount of data I’ll need to weed through.

Thought Process Example

The thought process I described may appear to be pretty extensive but that is really not the case. The length is because I wanted to do a good job explaining it since I feel it’s important. The process only takes a little time to complete and most of it is already done when processing a case. Follow along with a DF analyst on a hypothetical case to see how the thought process works in coming up with a plan to build the timeline. Please note, the case only mentions a few artifacts to get my point across but an actual case may use more.

Friend: “Damn … Some program keeps saying I’m infected and won’t go away. Let me call the DF analyst since he does something with computers for a living. He can fix it.”

Phone rings and DF analyst picks up

Friend: “DF analyst … Some program keeps saying I’m infected with viruses and blocks me from doing anything.”

DF analyst: “Do you have any security programs installed such as antivirus software, and if so is that what you’re seeing?”

Friend: “I think I have Norton installed but I’ve never seen this program before. Wait … hold on … Oh man, now pornographic sites are popping up on my screen!”

DF analyst: “Yup, sounds like you’re infected.”

Friend: “I know I’m infected. That’s what I told you this program has been telling me.”

DF analyst: “Umm .. The program saying you are infected is actually the virus.”

Friend: “Hmmmm….”

DF analyst: “Just power down the computer and I’ll take a look at it later today.”

Computer powering down

DF analyst: “When did you start noticing the program?”

Friend: “Today when I was using the computer.”

DF analyst: “What were you doing?”

Friend: “Stuff… Surfing the web, checking email, and working on some documents. I really need my computer. Can you just get rid of the virus and let me know if my wife or kids did this to my computer?”

Later that day

DF analyst has the system back in the lab. He thinks about what he needs to do which is to remove the malware from the system and determine how it got there. The potential data list he came up with to accomplish those tasks was: known malware files, system’s autostart locations, programs executed (prefetch, userassist, and muicache), file system metadata, registry hives, event logs, web browser history, AV logs, and restore points/volume shadow copies.

Wanting to know what launches when his friend logs onto the computer, the DF analyst uses the Sysinternals Autoruns utility in offline mode to find out. Sitting in one run key was an executable with a folder path to his friend’s user profile. A Google search using the file’s MD5 hash confirmed the file was malicious and his friend’s system was infected. DF analyst decided to leverage a timeline to see what else was dropped onto the system and what caused it to get dropped in the first place.

DF analyst pulls out his reference showing the various artifacts supported by his timeline tools. He confirms that all the potential data he identified is supported. Then he moves on to his first examination step, which is examining the hard drive’s layout. Two partitions: one is the Dell recovery partition formatted with FAT32 while the other holds the operating system and is formatted with NTFS. DF analyst just added NTFS artifacts ($MFT) to his potential data list. To get a better idea about the system he uses RegRipper to rip out the general operating system information. Things he learned from the RegRipper reports and the decisions he made based on the information:

         - OS version is XP (restore points are in play while shadow copies are out. Need to parse event logs with evt file extensions)

        - Three user accounts were used in the past week (initial focus for certain artifacts will be from friend’s user account since malware was located there. The two other user accounts may be analyzed depending on what the file system metadata shows)

        - Internet Explorer was the only web browser installed (all other web browser artifacts won’t be parsed at this time)

        - Kaspersky antivirus software was installed (tools don’t support this log format. AV log will be reviewed and entries will be put into the timeline manually)

DF analyst performs a few other checks. The Prefetch folder has files in it and his friend’s user account recycle bin has numerous files in it. Both were added to the timeline artifact list. The final list contains items from the system and one user account. The system data has: prefetch files, event logs (evt), system restore points, and the Master File Table. The artifacts from the one user account are: userassist registry key, muicache registry key, IE history, and the Recycle Bin contents. DF analyst is ready to build his timeline …. Stay tuned for the post "Building Timelines – Tools Usage" to see one possible way to do it.


I'd like to hear feedback about how others approach building timelines, especially if it's different from what I wrote. It's helpful to see how other analysts are building timelines.

Linkz 4 Advice

Monday, September 12, 2011 Posted by Corey Harrell
There won’t be any links pointing to Dr. Phil, Dear Abby, or Aunt Cleo. Not that there’s anything wrong with that… They just don’t provide advice on a career in DFIR.

Getting Started in DFIR

Harlan put together the post Getting Started which contains great advice for people looking to get into DF. I think his advice even applies to folks already working in the field. DF is huge with a lot of areas for specialization. Harlan’s first tip was to pick something and start there. How true is that advice for us since we aren’t Abby from NCIS (a forensic expert in everything)? People have their expertise: Windows, Macs, cell phones, Linux, etc. but there is always room to expand our knowledge and skills. The best way to expand into other DF areas is to “pick something and start there”.

Another tip is to have a passion for the work we do. In Harlan’s words, “in this industry, you can't sit back and wait for stuff to come to you...you have to go after it”. I completely agree with this statement, and DF is not the field to get complacent in. There needs to be a drive deep down inside to continuously want to improve your knowledge and skills. For example, it would be easy to become complacent and maintain knowledge only about the Windows XP operating system if that’s the technology normally faced. However, that would ignore the fact that at some point in the near future encounters with Windows 7 boxes and non-Windows systems will be the norm. A passion for DF is needed to push yourself so you can learn and improve your skills on your own without someone (i.e. an employer) telling you what you should be doing.

I wanted to touch on those two tips but the entire post is well worth the read, regardless of whether you are looking to get into DF or have already arrived.

Speaking about a Passion

Little Mac over at the Forensicaliente blog shared his thoughts about needing a drive to succeed in DF. I’m not musically inclined but he uses a good analogy to explain what it takes to be successful. Check out his post Is Scottish Fiddle like Digital Forensics?.

Breaking into the Field

Lenny Zeltser discussed How to Get Into Digital Forensics or Security Incident Response on his blog last month. One issue facing people looking to break into the field is that organizations may not be willing to spend the time and resources to train a person new to the field. Lenny suggested people should leverage their current positions to acquire relevant DFIR skills.

Lenny’s advice doesn’t apply to how I broke into the field since DFIR was basically dropped into my lap when I was tasked with developing the DF capability for my organization. However, his advice is spot on for how I was able to land my first position in the information security field (which is what led me into DFIR). I was first exposed to security during my undergraduate studies when I took a few courses on the topic. It was intriguing but the reality was there weren’t a lot of security jobs in my area, which meant my destination was still IT operations. I continued down the track pushing me further into IT but I always kept my desire for security work in mind. After graduation I took a position in an IT shop where I had a range of responsibilities including networking and server administration. In this role, I wanted to learn how to secure the technology I was responsible for managing and what techniques to use to test security controls. That’s due diligence for a system admin, but it also allowed me to gain knowledge and some skills in the security field. In addition to operational security, I even tried to push an initiative to develop and establish an information security policy. Unfortunately, the initiative failed and it was my first lesson that nothing will be successful without management’s support. All was not lost because the experience and my research taught me a lot about security being a process that supports the business. This is a key concept about security and up until that point my focus had been on security's technical aspects.

I leveraged the position I was in to acquire knowledge and skills in my chosen field (security). My actions weren’t completely self-serving since my employer benefited from having someone to help secure their network. I didn’t realize how valuable it was to expand my knowledge and skills until my first security job interview. Going in I thought I lacked the skills and knowledge, but over the course of the interview I realized I had a lot more to offer. I took the initiative to expand my skillset and it was an important factor in helping me land in the security field. My experience is very similar to Lenny’s advice except his post is about getting into the DFIR field.

Get a plan before going into the weeds

Rounding out the links providing sound guidance, Bill over at the Unchained Forensics blog gave some good advice in his recent post Explosions Explosions. He shared his thoughts on how he approaches examinations. One comment he made that I wanted to highlight was “more and more of my most efficient time is being used at the case planning stage”. He mentions how he thinks about his plan to tackle the case, including identifying potential data of interest, before he even starts his examination. I think it’s a great point to keep reinforcing for people new and old to DFIR.

I remember when I was new to the field. I had a newly established process and skillset but I lacked certain wisdom in how to approach cases. As expected, I went above and beyond in examining my first few cases. I even thought I was able to do some “cool stuff” the person requesting DF assistance would be interested in. There was one small issue I overlooked. The person was only interested in specific data’s content while I went beyond that, way beyond that. I wasted time and the cool stuff I thought I did was never even used. I learned two things from the experience. First was to make sure I understand what I’m being asked to do; even if it means asking follow-up questions or educating the requestor about DF. The second lesson was to think about what I’m going to do before I do it. What data do I need? What steps in my procedures should I complete? What procedural steps can be omitted? What’s my measure for success telling me when the examination is complete? Taking the time beforehand to gather your thoughts and develop a plan helps to keep the examination focused on the customer’s needs while limiting the “cool stuff” that’s not even needed.

Books On demand

If someone were to ask me what the best training I have ever taken was, I know exactly what I would say. A book, computer, Google, and time. That’s it, and the cost is pretty minimal since only a book needs to be purchased. I’m not knocking training courses but classes cannot compare to educating yourself through reading, researching, and testing. I never heard about Books24x7 until I started working for my current employer. Books24x7 is a virtual library providing access to “in-class books, book summaries, research reports and best practices”. The books in my subscription include topics on: security, DFIR, certification, business, programming, operating systems, networking, and databases. I can find the information I’m looking for by searching numerous books whether I’m researching, testing, or working. A quick search for DFIR books located: Malware Forensics: Investigating and Analyzing Malicious Code, Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry, Windows Forensic Analysis Toolkit Second Edition, Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code, EnCase Computer Forensics: The Official EnCE: EnCase Certified Examiner Study Guide, and UNIX and Linux Forensic Analysis Toolkit. That’s only a few books from the pages and pages of search results for DFIR. Talk about a wealth of information at your fingertips.

The cost may be a little steep for an individual but it might be more reasonable for an organization. If an organization’s employees have a passion for their work and take the initiative to acquire new skills then Books24x7 could be an option as a training expense. Plus, it could save money from not having to purchase technical books for staff. Please note, I don’t benefit in any way by mentioning this service on my blog. I wanted to share the site since it’s been a valuable resource when I’m doing my job or self training to learn more about DFIR and security.

What’s a Timeline

Wednesday, September 7, 2011 Posted by Corey Harrell
Timeline analysis is a great technique to determine the activity that occurred on a system at a certain point in time. The technique has been valuable for me on examinations ranging from human resource policy violations to financial investigations to malware infections. Here is an analogy I came up with to explain what timelines are.

Not Even Close To a Timeline

The picture below shows how data looks on a hard drive using the operating system. It does a decent job if you are using the computer but the method doesn’t work for a forensic examination. There’s a lot of missing data such as: file system artifacts, hidden files/folders, and the metadata stored in files/folders.


In technical books, cabinets are used to explain how hard drives function since they store items similar to how a drive stores data. Using the operating system to view data on a hard drive is the equivalent of looking at the cabinet as pictured below. You are unable to see what lies beneath.


Getting Closer To a Timeline

The picture below shows how data on a hard drive looks using a digital forensic tool. The tool does a better job than the operating system since it displays a lot more data. File system artifacts, hidden files/folders, and file system metadata can now be examined. However, the tool does not readily show some data such as the metadata stored inside of files. The picture highlights the need for additional steps to extract the data inside prefetch files.


The cabinet’s contents can now be seen since the doors are opened. There are containers, pots, and pans. However, additional steps need to be taken to determine what is inside those items. Just like more steps are required in EnCase to see prefetch files’ metadata.


This is What I’m Talking About

The picture below shows how data looks on a hard drive using a timeline. It might not look as pretty as a Graphical User Interface but it provides so much more data. The timeline section shown contains: both timestamps from the Master File Table (MFT), data stored in prefetch files, events from an event log, and registry keys.


The opened cabinet doors allowed the pots, pans, and containers’ contents to be examined. To the untrained eye it might look like chaos but to the knowledgeable observer they can now see what was stored in the cabinet including the now visible measuring cups. It's kind of like how a timeline makes visible activity on a system that may not have been readily apparent.
