Tuesday, August 30, 2011
Posted by Corey Harrell
“Give a man a fish; you have fed him for today. Teach a man to fish; and you have fed him for a lifetime”—Author unknown.
My ability to use my weak kung fu to put together batch scripts is a recent development. For the most part I was always constrained by my tools. If my tool wasn't able to automate a process then I'd adapt and take a little more time to complete the task. If my tools didn't perform a task then I'd search for another tool or script to accomplish what I needed. Basically, I had to adapt myself to my tools instead of making my tools adapt to the task at hand. Things changed during a week spent working on a case, when I realized that knowing how to script was a necessity. I'm sharing the references I came across that did a decent job of teaching me how to write batch files.
The first reference was the one that taught me how to fish. Batch Files (Scripts) in Windows provides an introductory overview of batch files. The article starts out explaining what a batch file is and how to construct one before covering more advanced topics, including how to use if statements and for loops in scripts. The author provides links to explanations of terms the reader may want more information on. The article taught me the basics of writing batch files, and afterwards I was able to write simple scripts without needing to do any more research. In a way the article converted me from a person who receives fish from others (scripts) into one who can catch his own fish (write my own scripts).
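To make that concrete, here's a minimal sketch of the kind of batch file the article equips you to write. The usage message and folder listing are my own made-up example, not code from the article:

@echo off
REM Bail out with a usage message if no target folder was passed in
if "%~1"=="" (
    echo Usage: %~nx0 target_folder
    exit /b 1
)
REM Move into the target folder, then loop over every file in it,
REM printing each file's name and size
pushd "%~1"
for %%F in (*) do echo %%~nxF is %%~zF bytes
popd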
The scripts I've been writing automate repetitive tasks such as running the same command against different folders. The for loop is one option for completing repetitive tasks, and this is where the next reference comes into play. ss64.com's for loop pages break down the syntax for the different ways to implement a for loop. The information on the site gave me a better understanding of how to write for loops. If Batch Files (Scripts) in Windows taught me how to fish then ss64 helped me improve my casting.
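For instance, the running-the-same-command-against-different-folders case boils down to a single for /d loop. The folder layout and mytool.exe below are placeholders I made up for illustration:

@echo off
REM Run the same command against every subfolder under an evidence folder.
REM The paths and mytool.exe are placeholders, not a real tool.
for /d %%D in (C:\cases\evidence\*) do (
    echo Processing %%D
    mytool.exe -input "%%D" -output "C:\cases\reports\%%~nxD.txt"
)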
Despite having a pretty decent cast, I'm still fishing with a bobber. Beginner fishermen may have a tough time knowing when to set the hook in the fish's mouth, so a bobber helps them. Bobbers are a visual indicator that a fish is biting your line, alerting the fisherman that it's time to set the hook. Like a beginner fisherman, I still have a lot to learn. Rob van der Woude's Scripting Pages website has a number of pages discussing batch scripting. So far the site has helped me solve a few scripting problems I encountered, but there's still a wealth of information I haven't even read.
One thing that makes batch scripting a little easier is that native Windows commands can be used in addition to third-party tools. Microsoft's Command-line reference A-Z is a great resource for learning about those commands. The command-line reference A-Z is the equivalent of adding more lures and bait to your tackle box so you can catch bigger and better fish.
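As a small illustration of mixing native commands into a script (the registry key and output file names are just examples I picked):

@echo off
REM Native commands slot straight into scripts: reg dumps a Run key and
REM findstr trims the output, with nothing extra to download or install.
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" > autoruns.txt
findstr /i "exe dll" autoruns.txt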
The last reference, and one that shouldn't be overlooked, is having a person to bounce ideas off of. The person doesn't need to be an expert either. My coworker is in the same boat as me and is trying to learn how to write batch files. It's been helpful to have someone to provide feedback on what I'm trying to do and to help me work through complex code. That person is like a fishing buddy who can give you some tips and better ideas, or help you become a better fisherman.
Learning how to write batch scripts has been an awakening. I'm leveraging my tools to extract data in different ways and I'm cutting the time required to complete some tasks in half. I constantly reflect on what tasks can be automated with scripting and how I can present extracted data to better suit my needs. Paraphrasing the quote I referenced throughout my post is the best way to illustrate how I benefited from learning how to script.
“Give a man a script; you have solved his issue for today. Teach a man to script; and you help him solve his own issues for a lifetime.”
Monday, August 22, 2011
Posted by Corey Harrell
Every year brings a new round of reports outlining the current trends. Information security threats, data breaches, and even cyber crime are covered in the reports. The one commonality across every report is that they lack the digital forensic perspective. The reports address the question of what current threats could affect your information and systems. However, the DFIR point of view asks the follow-up questions: how would you investigate a current threat that materialized on your systems, and what would the potential artifacts look like? If a DF Threat Report existed then I think those two questions would be answered.
What Could the DF Threat Report Contain?
I'd like to see the report use case examples to illustrate specific threats on a system. I find it easier to understand an investigative method and the potential artifacts by following an examination from start to finish. A simple way to demonstrate the threats would be to replicate them on test systems. Using test systems would enable each threat to be discussed in detail without revealing any specific case details. The current trend reports would just be guides highlighting which threats to focus on.
To show what I'm talking about, I'll walk through the process of how threats could be identified for the DF Threat Report. The threats can then be simulated against test systems in order to answer the questions I'm raising. In the past two weeks I read the Sophos Security Threat Report Mid-Year 2011 and the Securelist Exploit Kits Attack Vector – Mid-year Update. I'm using both reports since they are fresh in my mind, but the gaps I'm highlighting are common to most threat reports I've read (I'm not trying to single out these two organizations).
Example DF Threat Report Topics
The Sophos Security Threat Report Mid-Year 2011 talked about the different ways malware is distributed. The threats covered included web threats, social networking, and email SPAM/spearphishing. The Securelist Exploit Kits Attack Vector Mid-year Update discussed the popular exploit kits in use and which vulnerabilities are targeted by the two new kits in the list (Blackhole and Incognito). There are threats in both reports that merit further discussion and would fit nicely in a DF Threat Report.
Sophos stated they “saw an average of 19,000 new malicious URLs every day,” with more than 80% of the URLs belonging to legitimate companies whose websites were hacked. The report provided some statistics on the URLs before moving on to the next threat. The DF Threat Report could take two different angles in explaining the web threat: the server angle or the client angle. If the phone rang at your company and the person on the other end said your website was serving up malware, how would you investigate that? What are the potential artifacts to indicate whether malware is actually present? How would you determine the attack vector used to compromise the website? Now for the client angle: a customer comes up to you saying a rogue program is holding their computer hostage. What approach would you use to identify the initial infection vector? What are the potential artifacts on the system to indicate the malware came from a compromised website as opposed to an email? These are valid follow-up questions that should be included in the web threat's explanation.
The next threat in the Sophos report was Blackhat search engine optimization (SEO). SEO is a marketing technique to draw visitors to companies' websites, but the same technique can be used to lure people to malicious websites: “Attackers use SEO poisoning techniques to rank their sites highly in search engine results and to redirect users to malicious sites”. As expected, the report doesn't identify the potential artifacts on a system that would indicate SEO poisoning. I could guess what the system would look like based on my write-up on the potential artifacts from Google image search poisoning. However, answering the question by examining a test system is a better option than making an assumption.
Another threat in the Sophos report was the ongoing attacks occurring on Facebook, Twitter, and LinkedIn. “Scams on Facebook include cross-site scripting, clickjacking, survey scams and identity theft”. On Twitter, attackers are using shortened URLs to redirect people to malicious websites, while malicious LinkedIn invitation reminders contain links that do the same. Again, the investigative method and the artifacts on a system were missing. The same question applies to this threat as well: what are the potential artifacts on a system to indicate an attack that originated from a social networking site?
Rounding out the Sophos threats I'm discussing is SPAM/spearphishing. The report covered a few of this year's high profile breaches, several of which involved spearphishing attacks. Unfortunately, there was no mention of the artifacts tying malware to a specific email containing an exploit, nor of how your investigative method should differ if there is even a possibility spearphishing was involved.
The Securelist Exploit Kits Attack Vector Mid-year Update report identified the top exploit kits used in the first half of the year. One interesting aspect of the report was the comparison between the vulnerabilities targeted by the Blackhole and Incognito exploit kits, which showed the kits pretty much target the same vulnerabilities. The DF Threat Report may not be able to cover every vulnerability in the list, but it could dissect one or two of them to identify the potential artifacts left on a system by exploitation.
Conclusion
The process I walked through to identify content for the DF Threat Report used reports related to security threats. However, a DF Threat Report could cover various topics ranging from security to cybercrime. The report's sole purpose would be to make people more aware of how to investigate a threat that materialized on their systems and what the potential artifacts might look like. In my short time researching and documenting attack vector artifacts I've found the information valuable when examining a system. I'm more aware of what certain attacks look like on a system, and this helps me determine the attack vector used (and rule out the ones that weren't). I think the DF Threat Report could have a similar effect on the people who read it.
It would take an effort to get an annual or semi-annual DF Threat Report released. People would be needed to organize its creation, research/test/document the threats, edit the report, and release it. I wouldn't only be an occasional author researching, testing, and documenting threats; I'd also be a reader eager to see the DF Threat Report each new year. Maybe it's just wishful thinking on my part that one day, when reading a report outlining the year's trends, there will actually be useful DFIR information that could be applied when investigating a system.
Wednesday, August 10, 2011
Posted by Corey Harrell
In this edition of Links I think there is a little bit of something for everyone, regardless of whether your interest is forensics, malware, InfoSec, security auditing, or even a good rant….
Digital Forensic Search Updates
The Digital Forensic Search index has been slowly growing since it was put together four months ago. Last Sunday's update brought the sites in the index to 103 DFIR blogs, 38 DFIR websites, 13 DFIR web pages, and 2 DFIR groups. The initial focus of DFS was locating information related to specific artifacts as opposed to locating tools to parse those artifacts. My reasoning was that I didn't want to weed through a lot of irrelevant search hits. Most tool websites only provide a high-level overview of the artifacts a tool parses instead of in-depth information. It made sense to leave out tool-specific sites to reduce the amount of noise, but things change.
A question I ask myself at times is what tool can parse artifact XYZ. I'm not alone in asking the question because I see others asking the same thing. To make it easier to locate tools I'm now adding tool-specific sites to the Digital Forensic Search. So far 15 websites and 7 web pages are indexed. I ran a few tests and the search results seem to be a good mixture of hits for information and tools. My testing was limited, so if anyone sees too much noise then just shoot me an email telling me who the culprit is.
Let me know of any links missing from DFS
Windows Shortcut File Parser Update
My post Triaging My Way mentions a need I had for a command-line tool to parse Windows shortcut files. In my quest for a tool I modified the lslnk.pl Perl script to produce the output I wanted. One of the modifications was to have the script examine all of the files in a folder and only parse files with the lnk file extension. I was running lslnk-directory-parse.pl (the modified script) against some shortcut files when the script abruptly stopped. The parsed information from the last file only contained the file system timestamps. Examination of the file showed that it was empty, and this was what caused lslnk-directory-parse.pl to die. I made a slight modification so the script checks each file's header to confirm it is indeed a Windows shortcut file. I uploaded the new scripts (lslnk-directory-parse.pl and lslnk-directory-parse2.pl) to the Yahoo Win4n6 group and added a version number (v1.1) in the comments.
There are always different ways to accomplish something. When faced with parsing all of the Windows shortcut files in a folder, I opted to modify an existing script to meet my needs. The Linux Sleuthing blog took a different approach in the post Windows Link Files / Using While Loops: the author uses a while loop with an existing script to parse all of the shortcut files in a folder. Their approach is definitely simpler and quicker than what I tried to do (the sketch below shows the cmd.exe equivalent). Still, I learned a lot from the approach I took since I had to understand what modifications to make to an existing script in order to get the output I wanted.
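For anyone working from a Windows box, the same loop-an-existing-script idea translates to a one-line for loop in a batch file. The folder path is an example, and I'm assuming the unmodified lslnk.pl takes a single shortcut file as its argument:

REM Hand every .lnk file in a folder to an existing parser, one at a time
for %%F in (C:\cases\export\*.lnk) do perl lslnk.pl "%%F"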
How to Mount a Split Image
Speaking of the Linux Sleuthing blog, they provided another useful tip in the post Mounting Split Raw Images. As the name implies, the post is about how to mount a split image in a Linux environment. I can't remember the last time I dealt with a split image since I no longer break up images. However, back when I created split images I remember asking myself how to mount one in Linux. To others the question may be simple, but I didn't have a clue beyond concatenating the segments into a single image. The Mounting Split Raw Images post shows that sharing information, no matter how simple it may appear, will benefit someone at some point in time.
$UsnJrnl Goodness
Bugbear over at Security Braindump put together a great post, Dear Diary: AntiMalwareLab.exe File_Created. I recommend that anyone who will be encountering Windows Vista or 7 systems read the post, even if malware is not typically encountered during their examinations. The $UsnJrnl is an NTFS file system artifact that is turned on by default in Vista and 7. Bugbear discusses what the $UsnJrnl is and how to manually examine it before discussing tools to automate the examination.
What I really like about the post is the way he presented the information. He explains an artifact, how to parse it, a tool to automate the parsing, and then shares an experience of how the artifact factored into one of his cases. I think the last part is important since sharing his experience provides context for why the artifact matters. His experience involved files created and deleted on the system as a result of a malware infection. Providing context makes it easier to see the impact of $UsnJrnl on other types of investigations. For example, a recurring task on my cases is determining what files were deleted from a system around a certain time. Data in the $UsnJrnl may not only show when the files of interest were deleted but could highlight what other files were deleted around the same time.
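As a live-response aside (and not a substitute for the offline parsing Bugbear covers), the native fsutil command can confirm the journal is active and dump raw records on a running Vista or 7 box. Run these from an elevated prompt:

REM Check whether the change journal is active and see its size and USN range
fsutil usn queryjournal C:
REM Dump raw change journal records to a file for review
fsutil usn enumdata 1 0 1 C: > usn-records.txt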
Memory Forensic Image for Training
While I'm on the topic of malware, I wanted to pass along a gem I found in my RSS feeds and have seen others mention. The MNIN Security Blog published Stuxnet's Footprint in Memory with Volatility 2.0 back in June, but I didn't read it until recently. The post demonstrates Volatility 2.0's usage by examining a memory image of a system infected with Stuxnet. A cool thing about the write-up is the author makes the memory image available. This means the write-up and the memory image can be used as a guide to better understand how to use Volatility. Just download Volatility, download the memory image, read the post, and follow along by running the same commands against the memory image. Not bad for a free way to improve your Volatility skills.
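To give a feel for the follow-along, here's a minimal session using two standard Volatility 2.0 plugins; I'm assuming the downloaded image extracts to a file named stuxnet.vmem:

REM List processes from the active process list, then via pool scanning
python vol.py -f stuxnet.vmem pslist
python vol.py -f stuxnet.vmem psscan
REM Comparing the two outputs (for example, the extra lsass.exe processes)
REM is the kind of analysis the MNIN post walks through in detail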
Easier Way to Generate Reports from Vulnerability Scans
Different methods are used to identify known vulnerabilities on systems; running vulnerability scanners, web application scanners, and port scanners are all options. One of the more tedious but important steps in the process is correlating all of the tools' output to identify what vulnerabilities are present, their severity, and their exposure on the network. Pulling this kind of information together from the scans has been a manual process since there wasn't a way to automate it. James Edge over at Information Systems Auditing is trying to address the issue with something he calls the RF Project (Reporting Framework Project). The RF Project can take scans from Nessus, eEye Retina, Nmap, HP WebInspect, AppScan, AppDetective, Kismet, and GFI LANguard so custom reports can be created. Want to know the potential vulnerabilities detected by Nessus, Retina, and Nmap against server XYZ? Upload the scans to the reporting framework and create a custom report showing the answer instead of manually going through each report to identify the vulnerabilities. A few years ago I tested an earlier version of the framework, when it only supported Nessus and Retina. It's great to see he continued with the project and added support for more scanners.
James's site has some useful stuff besides the RF Project. He has a few hacking tutorials and some technical assessment plans for external enumeration, Windows operating system enumeration, and Windows passwords.
Good InfoSec Rant
I like a good rant every once in a while. Assuming the Breach's I do it for the Lulz explains the reason the author works in security. It's not about the money, job security, or prestige; he works in security because it's a calling. The post was directed at the InfoSec field but I think the same thing applies to digital forensics. Take the following quote:
“Technology, and especially information security has always been more than a job to me. More than even a career. It's a calling. Don't tell my boss, but I'd do this even if they didn't pay me. It's what I do. I can't help it.”
I can't speak for others, but digital forensics is the fastest-changing field I've ever worked in. Technology (hardware and software) is constantly changing how it stores data, and the tools I use to extract information are also evolving. Digital forensics can't be treated as a normal 8-to-4 job with any chance of being successful. Five days a week and eight hours each day is not enough time for me to keep my knowledge and skills current on the latest technology, tool update, threat, or analysis technique. It's not a job; it's my passion. That passion enables me to immerse myself in DFIR so I can learn constantly and apply my skills in different ways outside of the work I do for my employer.
I wouldn't last if digital forensics was only a day job. Seriously, how could I put myself through some of the things we do if there were no passion? We read whitepapers dissecting artifacts and spend countless hours researching and testing to improve our skills. Doing either of those things would be brutal for someone who lacks passion for the topic. For example, I couldn't hack it as a dentist because I lack the passion for dentistry. I wouldn't have the willpower to read a whitepaper explaining some gum disease or spend hours studying different diagnoses. Dentistry would just be an 8-to-4 job that pays the bills until I could find something else. DFIR, on the other hand, is another story, as I spend my evenings blogging about it after spending the day working on a case.
Saturday, August 6, 2011
Posted by Corey Harrell
It’s hard to believe a year has gone by since I launched my blog. I didn’t know what to expect when I took an idea and put it into action. All I knew was I wanted to talk about investigating security incidents but at the time I didn’t have the IR skillset. I also wanted to provide useful content but I was short on personal time to research, test, and write. I went ahead anyway despite the reasons discouraging me from blogging.
The experience has been rewarding. I'm a better writer from explaining various topics in a way that others can learn from my successes and failures. I have a better understanding of DFIR from the feedback I've received, and the feedback also helps to validate what I'm thinking and doing. Different opportunities arose, such as talking with other forensicators, as a direct result of my willingness to share information.
The top six posts of the year covered a range of topics from detecting security incidents to examining an infected system to a book review. The most read posts of the year were:
1. Google the Security Incident Detector
2. Introducing the Digital Forensics Search
3. Reviewing Timelines with Excel
4. Review of Digital Forensics with Open Source Tools
5. Smile for the Camera
6. Anatomy of a Drive-by Part 2
I’m looking forward to another year and there is a range of ideas in the hopper. I’ll still touch on investigating security incidents as well as researching attack vector artifacts. However, my focus will gradually extend from the artifacts on a single system to the artifacts located on different network devices. Besides IR, I’m planning on talking about supporting financial investigations, Windows 7 (and Server 2008) artifacts, my methodology, different information security topics, and random DFIR thoughts inspired by things I come across along the way.
Thanks to everyone who keeps stopping by jIIr. There’s no need to be a stranger when there’s a comment feature to let me know what you think. ;) A special thank you to all of the other bloggers and authors who link to my blog and share their thoughts about my posts. I'm thankful for the additional traffic you send my way since it helps to let others know about the blog.