Cleaning Out the Linkz Hopper
Wednesday, April 25, 2012
Volume Shadow Copies have been my main focus on the blog for the past few months. I took the time needed to share my research because I wanted to be thorough so others could use the information. As a result, the interesting linkz I’ve been coming across have piled up in my hopper, so in this Linkz post I’m cleaning it out. There are linkz about: free DFIR e-magazines, volume shadow copies, triage, timeline analysis, malware analysis, malware examinations, Java exploits, and an interesting piece on what you would do without your tools. Phew… let’s roll.
Into The Boxes has Returned
Into The Boxes is an e-magazine discussing topics related to Digital Forensics and Incident Response. When the magazine was first released a few years ago I saw instant value in something like this for the community: a resource that not only provides excellent technical articles about DFIR but also complements what is already out there. I really enjoyed the first two editions, but a third issue was never released… that is, until now. The ITB project is back up and running as outlined in the post Into The Boxes: Call for Collaboration 0x02 – Second Try.
It looks like ITB won’t be the only free DFIR magazine on the block. Lee Whitfield is starting up another free magazine project called Forensic 4cast Magazine, which will also cover Digital Forensics and Incident Response topics.
It’s great to see projects like these, but they will only be successful with community support such as feedback and, more importantly, writing articles. Without that support, efforts like these will go where great ideas go to die. I’m willing to step up to the plate to be a regular contributor of original content. I’ll be writing for ITB, and my first article discusses how to find out how a system was infected after I.T. tried to clean the infection. Cleaning a system makes it harder to answer the question of how, but it doesn’t make it impossible. Stay tuned to see what artifacts are left on a cleaned system in an upcoming ITB edition.
RegRipper Plugins Maintenance Perl Script
This link is cool for a few reasons. Some time ago Cheeky4n6Monkey sent me an email introducing himself and asking if I had any project ideas. I knew who Cheeky was even before his introductory email because I’ve been following his outstanding blog. I thought this was really cool; he is looking to improve his DFIR skills by reaching out and helping others rather than passively waiting for someone to contact him. I went through my idea hopper, and one item had been on my to-do list for some time: reviewing the RegRipper profiles to update the plugins listed in them. However, I didn’t want to manually review every plugin to determine what a profile was missing. A better approach would be to flag each plugin not listed, which would reduce the number of plugins I had to manually review. I mentioned the idea to Cheeky and he ran with it. Actually, he went warp speed with the idea because he completed the script within just a few days. To learn more about his script and how to use it, check out the post Creating a RegRipper Plugins Maintenance Perl Script.
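If you want the gist of what the script checks, here’s a quick Python sketch of the same idea (Cheeky’s actual script is Perl, and the paths below are just examples). A RegRipper profile is a plain-text file listing one plugin per line, so finding unlisted plugins comes down to a set difference against the .pl files in the plugins folder:

```python
from pathlib import Path

def unlisted_plugins(plugins_dir, profile_path):
    """Return plugins present in plugins_dir but absent from the profile."""
    available = {p.stem for p in Path(plugins_dir).glob("*.pl")}
    listed = {line.strip() for line in open(profile_path)
              if line.strip() and not line.startswith("#")}
    return sorted(available - listed)

# Hypothetical paths; point these at your own RegRipper install.
for name in unlisted_plugins(r"C:\tools\regripper\plugins",
                             r"C:\tools\regripper\plugins\ntuser"):
    print(name)
```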
VSC Toolset
The one thing I like about the DFIR community is the people who willingly share information. Sharing information not only educates us all, making us better at our jobs, but it provides opportunities for others to build on that work. Case in point: I didn’t start from scratch with my Ripping VSCs research since I looked at and built on the work done by Troy Larson, Richard Drinkwater, QCCIS, and Harlan. I was hoping others would take my research another step forward, and that is exactly what Jason Hale from Digital Forensics Stream did. Jason put together the VSC Toolset: A GUI Tool for Shadow Copies and has since added functionality, as outlined in the post VSC Toolset Update. The VSC Toolset makes it extremely easy for anyone to rip VSCs, and extending the tool takes only one line in a batch file. Jason lowered the bar for anyone wanting to examine VSCs using this technique.
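For anyone new to the underlying technique, ripping VSCs boils down to exposing each shadow copy through a directory symlink and pointing your tools at it. Here’s a bare-bones Python sketch of that step; it assumes an elevated prompt, and the link paths are just examples:

```python
import re
import subprocess

# List shadow copies and pull out their device object paths.
out = subprocess.run(["vssadmin", "list", "shadows"],
                     capture_output=True, text=True, check=True).stdout
devices = re.findall(r"\\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy\d+", out)

for i, device in enumerate(devices, 1):
    link = rf"C:\vsc{i}"
    # mklink is a cmd builtin; the trailing backslash on the target is
    # required for the symlink to resolve.
    subprocess.run(["cmd", "/c", "mklink", "/d", link, device + "\\"], check=True)
    print(f"{link} -> {device}")
```

Once the links exist, any tool that reads a directory tree (RegRipper included) can be run against each shadow copy as if it were a mounted volume.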
Triage Script
When I put together the Tr3Secure Data Collection script I was killing two birds with one stone. First and foremost, the script had to work when responding to security incidents. Secondly, it had to work for training purposes, so I built the script around two different books people could reference if they had questions about the tools or the tools’ output. One limitation of the Tr3Secure Data Collection script, however, is that it doesn’t work remotely against systems. Michael Ahrendt (from Student of Security) released his Automated Triage Utility and has since updated the program; one capability it has is being able to run against remote systems. To see how one organization benefited from Michael’s work, check out Ken Johnson’s (from Random Thoughts of Forensic) post Tools in the Toolbox – Triage. If you are looking for triage scripts to collect data remotely then I wouldn’t overlook Kludge 3.0; the feedback about Kludge in the Win4n6 Yahoo group has been very positive.
HMFT – Yet Another $MFT extractor
Speaking of triage, Adam over at Hexacorn recently released his HMFT tool in the post HMFT – Yet Another $MFT extractor. When I tested it, it grabbed the $MFT off a live Windows 7 Ultimate 32-bit system within a few seconds. One area where I think HMFT will be helpful is in triage scripts: having the ability to grab an $MFT could provide useful filesystem information, including the ability to see activity on a system around a specific time of interest. I plan on updating the Tr3Secure Data Collection script to incorporate HMFT, along the lines of the sketch below.
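To give an idea of what I mean, here’s a rough sketch of wiring an $MFT extractor into a triage script. Fair warning: the binary path and argument order below are placeholders for illustration, not HMFT’s actual command line syntax:

```python
import socket
import subprocess
import time

# Collect the $MFT to external media, tagging the output with the
# hostname and collection time like the rest of the triage output.
host = socket.gethostname()
dest = rf"F:\triage\{host}_{time.strftime('%Y%m%d_%H%M%S')}_mft.bin"

# NOTE: tool path and arguments are assumptions for illustration only;
# check the HMFT post for its documented usage.
subprocess.run([r"F:\tools\hmft.exe", r"\\.\C:", dest], check=True)
print("MFT collected to", dest)
```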
Strings for Malware Analysis
While I’m talking about Adam, I might as well mention another tool he released some time ago: HAPI – API extractor. The tool identifies the Windows APIs present in a file’s strings. I’ve been working my way through Practical Malware Analysis (expect a full review soon), and one of the steps during static analysis is reviewing a file’s strings. Identifying the Windows APIs among the strings may give a quick indication of the malware’s functionality, and HAPI makes it much easier to find those APIs. I added the tool to my toolbox, and it will be one of the tools I run whenever I’m performing static analysis against malware.
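For anyone curious what that check looks like under the hood, here’s a minimal Python sketch of the idea: pull the printable strings out of a binary and flag the ones matching known Windows API names. The API list below is a tiny illustrative sample, not HAPI’s database:

```python
import re

# A few APIs commonly seen in malware; HAPI ships its own full list.
KNOWN_APIS = {"CreateRemoteThread", "WriteProcessMemory", "VirtualAllocEx",
              "InternetOpenUrlA", "RegSetValueExA", "SetWindowsHookExA"}

def api_strings(path, min_len=5):
    """Extract printable ASCII strings and return those matching known APIs."""
    data = open(path, "rb").read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    strings = {s.decode("ascii") for s in re.findall(pattern, data)}
    return sorted(strings & KNOWN_APIS)

print(api_strings("sample.exe"))  # hypothetical sample
```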
Need for Analysis on Infected Systems
Harlan recently discussed the need to perform analysis on infected systems as a means to gather actionable intelligence. The first post where this was mentioned was The Need for Analysis in Intelligence-Driven Defense, while the second was Updates and Links. Harlan made a lot of great points in both posts besides the need to analyze infected systems, and they are both definitely worth the read. I’ve heard discussions among digital forensic practitioners about performing analysis on infected systems to determine how the infection occurred. A few responses included: it’s too hard, too time consuming, or most of the time you can’t tell how the infection occurred. People see the value in the information learned by performing an examination, but there is no follow-through by actually doing the exam. It makes me wonder if one of the roadblocks is that people aren’t really sure what they should be looking for, since they don’t know what the Attack Vector Artifacts look like.
NTFS INDX Files
Some time ago William Ballenthin released his INDXParse script, which can be used to examine NTFS INDX files. To get a clearer picture of the forensic significance of INDX files, check out Chad Tilbury’s post NTFS $I30 Index Attributes: Evidence of Deleted and Overwritten Files in addition to the information provided by William Ballenthin. INDXParse comes with an option to output a bodyfile (the -b switch), which can be used to add the parsed information to a timeline. Don’t forget that next week William Ballenthin is presenting his INDX file research in a DFIROnline special edition.
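To show how the bodyfile output can feed a timeline, here’s a small Python sketch that pulls entries created within a window of interest. The filename and epoch values are just examples:

```python
from datetime import datetime, timezone

# Bodyfile fields: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime
def entries_in_window(bodyfile, start, end):
    """Yield (ISO timestamp, name) for entries created inside [start, end]."""
    with open(bodyfile) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            name, crtime = fields[1], int(fields[10])
            if start <= crtime <= end:
                ts = datetime.fromtimestamp(crtime, tz=timezone.utc)
                yield ts.isoformat(), name

# Hypothetical bodyfile and window (epoch seconds).
for ts, name in entries_in_window("indx.body", 1325376000, 1335312000):
    print(ts, name)
```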
Colorized Timeline Template
Rob Lee created a timeline template to automate colorizing a timeline when it is imported into Excel. His explanation of the template can be found in his post Digital Forensic SIFTing: Colorized Super Timeline Template for Log2timeline Output Files. Template aside, the one thing I like about the information Rob shared is the color coding scheme that groups similar artifacts: red for program execution, orange for browser usage, and yellow for physical location, to name a few. Using color in a timeline is a great idea and makes it easier to see what was occurring on a system at a quick glance.
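The grouping idea is easy to reproduce outside of Excel too. Here’s a small Python sketch that tags timeline rows with color categories like Rob describes; the source names in the mapping are my own examples, not his full scheme:

```python
# Illustrative mapping of timeline source types to color categories.
COLOR_SCHEME = {
    "prefetch": "red",       # program execution
    "userassist": "red",     # program execution
    "webhist": "orange",     # browser usage
    "lnk": "yellow",         # physical location (paths, drive letters)
}

def tag_rows(rows):
    """rows: iterable of (timestamp, source, message) tuples."""
    for ts, source, message in rows:
        yield ts, source, message, COLOR_SCHEME.get(source.lower(), "none")

for row in tag_rows([("2012-04-25 09:30:00", "Prefetch", "CMD.EXE was run")]):
    print(row)
```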
Checklist to See If A System's Time Was Altered
Rounding out the posts about time is Lee Whitfield’s slide deck Rock Around the Clock. In the presentation, Lee talks about numerous artifacts to check to help determine if the time on a system was altered. The information in his slides makes a great checklist to follow if a system’s time comes into question, and the next time I need to verify whether someone changed the system clock I’ll follow these steps. I copied and pasted my personal checklist below, so if anything listed didn’t come from Lee’s slide deck then I picked it up somewhere else. (A small sketch of the first check appears after the list.)
- NTFS MFT entry number
* New files are usually created in sequence. Order files by creation date, then review the entry numbers. Small discrepancies are normal, but large ones require further investigation
- Technology Advancement
* Office, PDF, Exif images, and other items' metadata show the program used to create them. Did that program exist at that time?
- Windows Event Logs
* Sort event log records in sequence, then review any date/time stamps that are out of order
* XP: Event ID 520 in the Security log, "the system time was changed" (off by default). Vista/7: Event ID 1 in the System log, "the system time has changed to ...", and Event ID 4616 in the Security log, "the system time was changed"
- NTFS Journal
* Located in the $J stream of $UsnJrnl; may hold a few hours or days of data. Entries are stored sequentially
- Link files
* XP: each link file has a sequence number (file object ID). Sort by creation date, then review the sequence numbers
- Restore Points
* XP: restore points are named sequentially. Sort by creation date, then review RP names for ones out of sequence
- Volume Shadow Copies
* VSC GUIDs are similarly named for specific times
* Sort by creation date and then review the VSC names to identify ones out of place
- Web pages (forums, blogs, or news/sports sites)
* Cached web pages may have a date/time
- Email header
- Thumbnails
* XP keeps one repository for each folder; Vista/7 keep one for all folders. Both store items sequentially
* Sort by file offset, then review for out-of-place dates
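Here’s the sketch I mentioned for the first check: order records by creation time and flag spots where the MFT entry numbers run noticeably backwards. The tolerance is arbitrary since small dips are normal (MFT entries get reused):

```python
def flag_sequence_breaks(records, tolerance=1000):
    """records: (mft_entry_number, creation_time) tuples.

    Orders records by creation time and flags spots where the entry
    number jumps backwards by more than the tolerance, which can point
    to a rolled-back clock.
    """
    ordered = sorted(records, key=lambda r: r[1])
    breaks = []
    for (prev_entry, prev_time), (entry, ts) in zip(ordered, ordered[1:]):
        if entry < prev_entry - tolerance:  # large backwards jump
            breaks.append(((prev_entry, prev_time), (entry, ts)))
    return breaks
```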
Attackers Are Beating Java Like a Red Headed Stepchild
I don’t have much narration about Java exploits since I plan on blogging about a few case experiences involving them. I had these links under my exploits category and wanted to clear them out so I can start fresh. Towards the end of last year a new Java vulnerability started being targeted by numerous attacks. DarkReading touched on this in the article The Dark Side Of Java, and Brian Krebs did as well in the post New Java Attack Rolled Into Exploit Kits. The one interesting thing about the new Java attack from the DFIR perspective is that it looks the same on a system as other Java exploits going after different vulnerabilities; it’s still good to be informed about the methods attackers are using. Another link about Java was over at the Zscaler Threatlab blog: an excellent write-up showing what a Java drive-by attack looks like from the packet capture perspective.
What Can You Do Without Your Tools
The Security Shoggoth blog's post Tools and News provided some food for thought. The post goes into more depth on the author’s tweet: Want to find out how good someone is? Take away all their tools and say, "Now do it." When I first got started in DFIR I wanted to know the commercial tool I had available inside and out, and I learned as much as I could about it short of writing EnScripts. Then one day I asked myself: could I do forensics for another shop if they didn’t have EnCase? The answer was, unfortunately, no. I think there are a lot of people in our field who fall into the one-commercial-tool boat. They can do wonders with their one tool, but if they don’t have access to it, or the tool can’t do something, they get stuck. I made the decision to improve my knowledge and skills so I could do my job regardless of the tools available. The change didn’t happen overnight, and it took dedication to learn how to do my job using various tools for each activity. Try to answer the two questions the author posed; if you are unable to fully answer them, then at least you know an area needing improvement.
Imagine for a moment that you didn't have the tool(s) you use most in your job - how would you perform your job? What alternatives are available to you, and how familiar are you with them?
Labels: drive-by, examination steps, exploits, java, malware analysis, regripper, timeline, triage, volume shadow copies