Changing Perspectives

Tuesday, January 3, 2017 Posted by Corey Harrell 1 comments
In the Fall I was staring out my back window at my yard covered in orange leaves. This is a sight I see each year, and I have always viewed it as my yearly chore: cleaning up the leaves that have fallen from the trees. At times I would see some of the joy the leaves could bring as my kids played in them, but mostly I viewed the leaves with disdain, knowing I would be spending hours upon hours cleaning them up. I came to accept this yearly chore as something that doesn’t change since it comes with the territory of owning a property with trees. That was until I became more knowledgeable about a subject, and this knowledge changed my perspective on how I see these leaves.

Over the past year I took some time to get refocused in life. During this time I was reflecting on different things; one of those was that I had never grown my own food. My food typically came from stores, farmers markets, or local farmers. Thinking about it, I realized my food has always come from someone else’s labor. I had no clue how to grow food nor what was involved in growing it. I decided I wanted to change this and jumped head first into becoming more knowledgeable about organic gardening.

I won’t go into detail about my approach; basically I read books, researched websites, spoke to friends who garden, and spoke to the local farmers I buy food from. I tried to cover all of my bases to know as much as I could about the entire plant life cycle. My goal is to be fully self-reliant, so to avoid having to constantly buy compost I started to learn about composting. As I went deeper into the art of composting by reading and seeing what others have done before me, the more knowledgeable I became. The more knowledgeable I became, the more my perspective started to change. Staring out of my back window each Fall I used to see only a chore. This year, as I stared out of the window, I saw something else. I saw enough brown material to make compost the next spring: the rich compost loaded with nutrients to feed my vegetable plants. I saw the potential cover material I could put on my raised beds to protect the soil during the winter months. I saw what a blessing each Fall is, since it is when nature provides you with a wealth of material you can use to improve your soil to grow better vegetable plants.

As I stared out the window I also reflected on the similarities between my journey into composting and a security analyst’s journey into DFIR. When I’m building up a security analyst to do DFIR work the approach is the same. For the first few months I allow them to be paid to learn; their job is to gain knowledge so their perspective when looking at data changes. I want to give them knowledge about what they are looking for, the different files and folders on the system, the different log sources, and the analysis process. I try to give them enough knowledge to change how they see data and what that data means. To change them from seeing just a bunch of file and folder names to seeing select artifacts and log files. To change them from seeing just a bunch of activity to seeing the malicious activity. To change them from seeing alerts and alarms to seeing what the exact attack vector is.

Knowledge is the key to changing one’s perspective; applying the knowledge is what makes the change reality.

"Knowledge without application is like a book that is never read"

~ Christopher Crawford


Thanks a Million

Tuesday, May 24, 2016 Posted by Corey Harrell 5 comments
Last week a new member of my $DayJob’s team reached the point in his in-house training where he started to read articles on jIIr. After I cracked a joke about the blog’s author, he mentioned how my blog had over one million page views. To be honest, I hadn’t looked at jIIr’s statistics for months and I didn’t even know about the page views. The milestone really made me reflect on my journey and how it wouldn’t have been possible without others, so I wanted to take the time to say thank you.

Thanks to everyone who has stopped by jIIr to read my content. Thanks to all the other bloggers who have linked back to my site or posted links directing their readers to my site. Thanks to everyone who posted links to my content on websites, social media, forums, and DFIR email lists to direct people to my posts. I especially wanted to thank those who took the time to leave a comment or contact me by email about something I wrote, whether it was positive or critical. I wanted to give a shout out to Harlan for the advice he provided to me. I was just a random person who reached out to him looking for advice on starting a blog. Not only did he provide me with great advice (which showed me I was really overthinking things) but he also mentioned jIIr on his own blog, which helped my content gain more exposure. Lastly, I wanted to thank the Christian men’s group I was in all those years ago, who walked with me on how we could use the passions God blessed us with to serve others.

In addition to saying thanks I also wanted to apologize. I wanted to apologize to those who left comments on my blog over the past few months that I never responded to. To those who contacted me by email and I either took an extremely long time to respond to or never responded to at all. To those who may have visited my blog only to be disappointed by the lack of new content posted on jIIr since last September. This was not the way I would have preferred to hit this milestone, compared to hitting it on the back of a great article that pushed me over a million page views. But sitting where I am today, I wouldn’t have done it any other way. I needed some time to focus on my walk with Christ and spend more time in God’s word. In essence, I realigned the priorities in my life and how I was spending my time. Outside of my commitments (family, $DayJob, $AcademiaJob, and church) I pretty much disconnected from everything else to focus on my faith. The DFIR community and jIIr were part of this “everything else” category that I temporarily put on hold while I spent time refocusing. Stay tuned as I start working my way through the blog idea hopper that has built up over the months.

It’s been a long journey to reach this milestone. I started out as a digital forensic analyst/vulnerability assessor looking to get into the incident response field and became a security analyst who built and manages a Computer Security Incident Response Team (CSIRT) performing security monitoring and incident response. jIIr has been a place where I have shared my thoughts during this journey in the hope that someone somewhere would find the content useful and helpful. God willing, I’ll continue publishing content and my research for another six years to help those on their own journeys.


But He answered and said, “It is written, ‘MAN SHALL NOT LIVE ON BREAD ALONE, BUT ON EVERY WORD THAT PROCEEDS OUT OF THE MOUTH OF GOD.’”

~ Matthew 4:4

Breaking Out of Routines

Thursday, May 19, 2016 Posted by Corey Harrell 0 comments
I was digging a hole to plant my blackberry plants when I kept hearing the noise of something moving around the corner of my house. I stopped digging and walked around the house to see what was making the noise. I didn’t see anything, so I shrugged it off and went back to digging the hole. Shortly thereafter I heard the noise again, so I went back to look around the corner. Again, I didn’t see anything, so I went back to work thinking maybe it was the wind. After a few minutes I heard the noise for a third time, and this time I was determined to figure out what was making it. I went around the corner of my house but still didn’t see anything. Then I looked down to my right at my basement window well that sits below ground and saw what was making the noise. Sitting next to my window inside the window well was a squirrel, which wasn’t moving since it saw me standing right above it.

I walked a few feet away so the squirrel couldn’t see me but I could still see it. I stood on top of my air conditioning unit to see what the squirrel was doing. After a minute, the squirrel started to move around. Not in just any manner; it started to walk the boundary of the window well, making a circle. As I stood there watching the squirrel I realized what had occurred. I had built up the soil on that side of my house to prepare for our garden, which brought the soil close to the top of my window well. The squirrel must have been walking along and fallen into the window well before I was able to buy window well covers. The trapped squirrel’s search for a way out had turned into a routine: walking in circles trying to find an escape but never finding one. The squirrel kept walking, searching for a way out. In the end, it was just walking in a small circle. As I watched the squirrel I could see it had been trapped for some time; maybe for hours or maybe the entire day.

I thought about how I could help the squirrel escape without it biting me. My first attempt was to put a branch into the window well so the squirrel could climb up the branch to escape. I dropped the branch down into the window well and went back to my spot to watch what happened. The squirrel started to walk its circle and approached the branch. Then the squirrel walked over the branch and continued looking for a way out. My first thought was maybe the branch was too small, so I replaced it with a piece of lumber. The same thing occurred, with the squirrel walking right over the lumber and not seeing that the wood was its way out of being trapped. I stood there watching the squirrel and thought to myself that the squirrel was trapped in its own routine. For hours the branch and lumber were not there, so the squirrel walked right past them since it was not expecting them. My neighbor came over to help me get the squirrel out. It took a few minutes but he managed to lift the now freaked out squirrel out of the window well with a shovel. The squirrel panicked and jumped right back down into the window well. However, this time the squirrel was no longer trapped in its routine since the experience with the shovel was a jolt to its senses. My neighbor now struggled to get the squirrel back on the shovel, so he decided to set a brick on the bottom of the window well. The squirrel immediately saw the brick and used it to jump out of the window well and free itself.

At times we can find ourselves trapped in our routines, and this is especially true when performing analysis for security monitoring, digital forensics, or incident response. Routines make our job easier because we can perform certain actions without having to think very hard about how to do them. The downside of routines is they tend to put us on auto-pilot, which blinds us to seeing something new that is right in front of us; similar to the squirrel’s routine blinding it to the way to escape. Every now and then, when you are performing routine analysis tasks, take the time to stop and think about what you are doing, what you are trying to accomplish, and what you are seeing. If you don’t, then you may never see what you are missing, because we don’t have the luxury of someone giving us a jolt to break us out of our routines.

Triage Practical Solution – Malware Event – Proxy Logs Prefetch $MFT IDS

Tuesday, April 5, 2016 Posted by Corey Harrell 6 comments
Staring at your Mountain Dew you think to yourself how well your malware triage process worked on triaging the IDS alert. It’s not perfect and needs improvement to make it faster but overall the process worked. In minutes you went from IDS alerts to malware on the system. That’s a lot better than what it used to be; where IDS alerts went into the black hole of logs never to be looked at again. Taking a sip of your Mountain Dew you are ready to provide your ISO with an update.


Triage Scenario


To fill in those readers who may not know what is going on, the following is the abbreviated practical scenario (for the full scenario refer to the post Triage Practical – Malware Event – Proxy Logs Prefetch $MFT IDS):

The junior security guy noticed another malware infection showing up in the IDS alerts. He grabbed a screenshot of the alerts and sent it to you by email. As soon as you received the email containing the screenshot shown below you started putting your malware triage process to the test.




Below are some of the initial questions you had to answer and report back to the ISO.

        * Is this a confirmed malware security event or was the junior analyst mistaken?
        * What do you think occurred on the system to cause the malware event in the first place?
        * What type of malware is involved and what capabilities does it have?
        * What potential risk does the malware pose to your organization?
        * What recommendation(s) do you make to the security team to strengthen its security program to reduce similar incidents occurring in the future?


Information Available


Despite the wealth of information available to you within an enterprise, only a subset of data was provided for you to use while triaging this malware event. The following artifacts were made available:

        * IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts. The pcap is not meant to be used during triage and was only made available to enable you to generate the IDS alerts in question)

        * Parsed index.dat files to simulate proxy web logs (the parsed index.dat information was modified to remove items not typically found in a web server’s proxy logs)

        * Prefetch files from the system in question (inside the Prefetch.ad1 file)

        * Filesystem metadata from the system in question (the Master File Table is provided for this practical)


Information Storage Location within an Enterprise


Each enterprise’s network is different and each one offers different information for triaging. As such, it is not possible to outline all the possible locations where this information could be found across enterprises. However, it is possible to highlight common areas where this information can be found. If your environment does not reflect the locations I mention, then you can evaluate your network environment for a similar system containing similar information, or better prepare your network environment by making sure this information starts being collected.

        * Proxy web logs within an enterprise can be stored on the proxy server itself and/or in a central logging system. In addition to proxy web logs, the potentially infected system will have web usage history for each web browser on the system

        * IDS alerts within an enterprise can be stored on the IDS/IPS sensors themselves or centrally located through a management console and/or central logging system (i.e. SIEM)

        * Prefetch files within an enterprise can be located on the potentially infected system

        * File system metadata within an enterprise can be located on the potentially infected system


Collecting the Information from the Storage Locations


Knowing where information is available within an enterprise is only part of the equation. It is necessary to collect the information so it can be used for triaging. Similar to all the differences between enterprises’ networks, how information is collected varies from one organization to the next. Below are a few suggestions for how the information outlined above can be collected.

        * Proxy web logs will either be located on the proxy server itself or a centralized logging solution. The collection of the logs can be as simple as running a filter in a web graphical user interface to export the logs, copying an entire text file containing log data from the server, or viewing the logs using the interface to the central logging system.

        * IDS alerts don’t have to be collected. They only need to be made available so they can be reviewed. Typically this is accomplished through a management console, security monitoring dashboard, or a centralized logging solution.

        * Prefetch files are stored on the potentially infected system. The collection of this artifact can be done by pulling the files off either remotely or locally. Remote options include enterprise forensic tools such as F-Response, EnCase Enterprise, or GRR Rapid Response, triage scripts such as the Tr3Secure collection script, or using the admin share since Prefetch files are not locked files. Most of the same options work locally as well.

        * File system metadata is very similar to Prefetch files because the same collection methods work for collecting it. The one exception is that the $MFT file can’t be pulled off by using the admin share.
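As an illustration of the admin-share option mentioned above, here is a minimal sketch of a Prefetch collection helper. The UNC path in the docstring is a hypothetical example; a real collection would more likely use one of the tools named above:

```python
import shutil
from pathlib import Path

def collect_prefetch(source_dir, dest_dir):
    """Copy all Prefetch (.pf) files from source_dir into dest_dir.

    For a remote pull over the admin share, source_dir would be a UNC
    path such as r"\\\\HOSTNAME\\C$\\Windows\\Prefetch" (hypothetical
    host name). Locally it would be C:\\Windows\\Prefetch.
    """
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for pf in Path(source_dir).glob("*.pf"):
        shutil.copy2(pf, dest / pf.name)  # copy2 preserves timestamps
        copied.append(pf.name)
    return copied
```

The same pattern works for any artifact that isn't a locked file; the $MFT is the exception noted above and needs a forensic tool to acquire.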


Potential DFIR Tools to Use


The last part of the equation is what tools one should use to examine the information that is collected. The tools I’m outlining below are the ones I used to complete the practical.

        * Excel to view the text based proxy logs
        * Winprefetchview to parse and examine the prefetch files
        * MFT2CSV to parse and examine the $MFT file


Others’ Approaches to Triaging the Malware Event


Placeholder since none were known at the time of this post


Partial Malware Event Triage Workflow


The diagram below outlines the jIIr workflow for confirming malicious code events. The workflow is a modified version of the Securosis Malware Analysis Quant; I modified the Securosis process to make it easier to use for security event analysis.



Detection: the malicious code event is detected. Detection can be a result of technologies or a person reporting it. The workflow starts in response to a potential event being detected and reported.

Triage: the detected malicious code event is triaged to determine if it is a false positive or a real security event.

Compromised: after the event is triaged the first decision point is to decide if the machine could potentially be compromised. If the event is a false positive or one showing the machine couldn’t be infected then the workflow is exited and returns back to monitoring the network. If the event is confirmed or there is a strong indication it is real then the workflow continues to identifying the malware.

Malware Identified: the malware is identified in two ways. The first is identifying what the malware is, including its purpose and characteristics. The second is identifying and obtaining the malware from the actual system.

Root Cause Analysis: a quick root cause analysis is performed to determine how the machine was compromised and to identify indicators to use for scoping the incident. This root cause analysis does not call for a deep dive analysis taking hours and/or days but one only taking minutes.

Quarantine: the machine is finally quarantined from the network it is connected to. This workflow takes into account performing analysis remotely, so disconnecting the machine from the network is done at a later point in the workflow. If the machine is initially disconnected after detection then analysis cannot be performed until someone either physically visits the machine or ships it to you. If an organization’s security monitoring and incident response capability is not mature enough to perform root cause analysis in minutes and analyze live over the wire, then the Quarantine activity should occur once the decision is made about the machine being compromised.


Triage Analysis Solution


I opted to triage this practical like a real security event. As a result, the post doesn’t use all of the supplied information and the approach is more focused on speed. The triage process started with the IDS alert screenshot the junior security analyst saw, then proceeded to the proxy logs before zeroing in on the system in question.


IDS alerts


The screenshot below is the one supplied by the junior security analyst. In this practical it was not necessary to replay the packet capture to generate these IDS alerts since the screenshot supplied enough information.



Typically you can gain a lot of context about a security event by first exploring the IDS signatures themselves. Gaining context around a security event solely from the IDS signature names becomes second nature when doing event triage analysis on a daily basis. Analysts tend to see similar attacks triggering similar IDS alerts over time, making it easier to remember what the attacks are and the traces they leave in networks. For other analysts this is where Google becomes their best friend. The screenshot shows three distinct events related to the malware in this security incident.

1. ET CURRENT_EVENTS Possible Dyre SSL Cert: these signatures indicate a possible SSL certificate for the Dyre malware. Dyre is a banking Trojan and it uses SSL to encrypt its communication. The practical did not include this, but another way to detect this activity is by consuming the SSL Blacklist and comparing it against an organization’s firewall logs or netflow data to see if any internal hosts are communicating with IP addresses known to be associated with Dyre

2. ET POLICY Internal Host Retrieving External IP via icanhazip[.]com: this signature flags an internal host that contacts a known website associated with identifying the public IP address. Depending on the organization this may or may not be normal behavior for web browsing and/or end users. However, some malware tries to identify the public facing IP address, which will trigger this IDS signature

3. ET TROJAN Common Upatre Header Structure: this signature flags traffic associated with the Upatre Trojan. Upatre is a downloader that installs other programs.
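The SSL Blacklist comparison mentioned for the Dyre signatures boils down to a set intersection against firewall or netflow records. A minimal sketch (the `dst_ip` field name is an assumption, not tied to any specific firewall product):

```python
def blacklisted_connections(firewall_events, blacklist_ips):
    """Return firewall/netflow events whose destination IP appears on a
    blacklist such as abuse.ch's SSL Blacklist.

    firewall_events is an iterable of dicts; each is assumed to carry a
    'dst_ip' key holding the destination IP as a string.
    """
    bad = set(blacklist_ips)  # set membership keeps the scan O(n)
    return [ev for ev in firewall_events if ev["dst_ip"] in bad]
```

Feeding the result back into the triage process gives a list of internal hosts worth checking for Dyre infections.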

One of the IDS alerts could have been a false positive, but it is unlikely for this sequence of alerts to all be false positives. This confirms what the junior analyst believed about the machine being compromised. Specifically, the machine was infected with the Upatre downloader, which then proceeded to install the Dyre banking Trojan.


Web Proxy Logs


IDS alerts provide additional information that can be used against other data sources. The practical doesn’t provide the date and time when the IDS signatures fired, but it does provide the destination IP addresses and the domain name the machine communicated with. These IP addresses were used to correlate the IDS alerts with activity recorded in the web proxy logs. The web_logs.csv file was imported into Excel and the data was sorted by date. This puts the log entries in chronological order, making it easier to perform analysis.
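The import-and-sort step can be scripted instead of done in Excel. A minimal sketch using only the standard library; the column names and the timestamp format are assumptions about web_logs.csv, not confirmed from the practical's files:

```python
import csv
from datetime import datetime

def load_sorted_weblogs(path):
    """Load a web log CSV and return its rows in chronological order.

    Assumes columns named 'date', 'url', and 'user' (hypothetical
    headers) and timestamps formatted like '08/15/2015 05:49:51'.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: datetime.strptime(r["date"], "%m/%d/%Y %H:%M:%S"))
    return rows

def hits_for(rows, needle):
    """Return log rows whose URL contains the given IP or domain."""
    return [r for r in rows if needle in r["url"]]
```

With the logs loaded, the IDS indicators become simple substring searches, e.g. `hits_for(rows, "icanhazip.com")`.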

The web logs provided with the practical were very basic. The only information recorded was the date/time, URL, and username. Unfortunately, the destination IP address was not recorded, as is typical with web proxies. As a result, the logs did not contain any entries for the IP addresses 72.175.10.116 and 104.238.141.75.

The search on the domain icanhazip[.]com also came up empty. At this point the web proxy logs provide no additional information, and there is no date and time to go on. This analysis did reveal that these web proxy logs suck and that the organization needs to make it a priority to record more information to make analysis easier. The organization also needs to ensure the network routes all HTTP/HTTPS web traffic through the proxy so it gets recorded, and prevent users and programs from bypassing it.


Prefetch files


At this point in the analysis the machine in question needs to be triaged. Reviewing the programs executing on a system is a quick technique to identify malicious programs. The high level indicators I tend to look for are below:

        * Programs executing from temporary or cache folders
        * Programs executing from user profiles (AppData, Roaming, Local, etc)
        * Programs executing from C:\ProgramData or All Users profile
        * Programs executing from C:\RECYCLER
        * Programs stored as Alternate Data Streams (i.e. C:\Windows\System32:svchost.exe)
        * Programs with random and unusual file names
        * Windows programs located in wrong folders (i.e. C:\Windows\svchost.exe)
        * Other activity on the system around suspicious files
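Most of the path-based indicators above can be checked mechanically. A rough sketch that flags some of them; the patterns are illustrative, not exhaustive, and cover only the location-based bullets (random names and "other activity" still need an analyst's eye):

```python
import re

# Lowercase regexes for suspicious execution locations; backslashes are
# escaped for regex use. This is a partial, illustrative list.
SUSPICIOUS_PATTERNS = [
    r"\\temp\\",                       # temporary folders
    r"\\temporary internet files\\",   # browser cache
    r"\\appdata\\", r"\\application data\\",  # user profiles
    r"\\programdata\\", r"\\all users\\",
    r"\\recycler\\",
    r":[^\\]+\.exe$",                  # alternate data stream, e.g.
                                       # C:\Windows\System32:svchost.exe
]

def is_suspicious_path(process_path):
    """Return True if the process path matches a suspicious location."""
    path = process_path.lower()
    return any(re.search(p, path) for p in SUSPICIOUS_PATTERNS)
```

Running a parsed prefetch listing through a filter like this surfaces candidates quickly, but every hit still needs manual review; plenty of legitimate installers also run from Temp.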

The collected prefetch files were parsed with Winprefetchview and I initially sorted by process path. I reviewed the parsed prefetch files using the general indicators mentioned previously and found the suspicious program highlighted in red.



The first suspicious program was SCAN_001_140815_881[1].SCR executing from the Temporary Internet Files directory. The program was suspicious because it was executing from the lab user’s profile and its name resembles a document name instead of a screensaver name. To gain more context around the suspicious program I then sorted by the Last Run time to see what else executed around this time.



The SCAN_001_140815_881[1].SCR program executed at 8/15/2015 5:49:51 AM UTC. Shortly thereafter another program named EJZZTA8.EXE executed from the user’s Temp directory at 8/15/2015 5:51:03 AM UTC. Neither prefetch file referenced any other suspicious executables in its file handles. At this point not only do I have two suspicious files of interest but I have also identified the exact date and time when the security event occurred.


Web Proxy Logs Redux


The date and time of the incident obtained from the Prefetch files can now be used to correlate the IDS alerts and suspicious programs with the activity in the web proxy logs. The picture below shows that, leading up to when the SCAN_001_140815_881[1].SCR program executed, the user was accessing Yahoo email.



The rest of the web logs continued to show the user interacting with Yahoo email around the time the infection occurred. However, the web logs don’t contain an entry showing where the SCAN_001_140815_881[1].SCR program came from. Either the web proxy didn’t see it or the web proxy sucks by not recording it. I’m going with the latter since the web proxy logs are missing a lot of information.




File system metadata


At this point the IDS alerts revealed the system in question had network activity related to the Upatre downloader and Dyre banking Trojans. The prefetch files revealed suspicious programs named SCAN_001_140815_881[1].SCR, which executed at 8/15/2015 5:49:51 AM UTC, and EJZZTA8.EXE, which executed at 8/15/2015 5:51:03 AM UTC. The web proxy logs showed the user was accessing Yahoo email around the time the programs executed. The next step in the triage process is to examine the file system metadata to identify any other malicious software on the system and to try to confirm the initial infection vector. I reviewed the metadata in a timeline to make it easier to see the activity for the time of interest.

For this practical I leveraged the MFT2CSV program in the configuration below to generate a timeline. However, an effective and faster technique - but not a free one - is using the home plate feature in EnCase Enterprise against a remote system live. This enables you to triage the system live instead of collecting files from the system for offline analysis.



In the timeline I went to the time of interest, which was 8/15/2015 5:49:51 AM UTC. I then proceeded forward in time to identify any other suspicious files. The first portion of the timeline didn’t show any new activity of interest around the SCAN_001_140815_881[1].SCR file.
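Pivoting on a time of interest can also be scripted once the timeline data is parsed. A minimal sketch, assuming the MFT2CSV output has already been reduced to (timestamp, description) pairs:

```python
from datetime import timedelta

def window_after(events, pivot, minutes=5):
    """Return timeline events within `minutes` after the pivot time,
    sorted chronologically.

    events is an iterable of (datetime, description) pairs, e.g. rows
    parsed out of a MFT2CSV-generated timeline.
    """
    end = pivot + timedelta(minutes=minutes)
    return sorted(ev for ev in events if pivot <= ev[0] <= end)
```

Walking forward in small windows from the pivot keeps the review focused on activity that could be related to the initial execution rather than the whole timeline.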



Continuing forward in time through the timeline led me to the next file, EJZZTA8.EXE. The activity between these two files only showed files being created in the Temporary Internet Files and Cookies directories, indicating the user was surfing the Internet.



At this point the timeline did not provide any new information and the last analysis step is to triage the suspicious programs found.


Researching Suspicious Files


The first program, SCAN_001_140815_881[1].SCR, was no longer on the system but the second program (EJZZTA8.EXE) was. The practical’s file_hash_list.csv showed EJZZTA8.EXE’s MD5 hash was f26fd37d263289cb7ad002fec50922c7. The first search was to determine if anyone had uploaded the file to VirusTotal, and a VirusTotal report was available. Numerous antivirus detections confirmed the program was Upatre, which matches one of the triggered IDS signatures.
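Checking a hash against VirusTotal can also be automated through its v3 API. A sketch that only builds the file-report request rather than sending it (the API key is a placeholder, and the report-parsing helper assumes the v3 `last_analysis_stats` layout):

```python
import urllib.request

VT_FILE_URL = "https://www.virustotal.com/api/v3/files/{}"

def build_vt_request(file_hash, api_key):
    """Build (but do not send) a VirusTotal v3 file-report request.

    file_hash can be an MD5, SHA-1, or SHA-256; api_key is your own
    VirusTotal API key (placeholder here).
    """
    req = urllib.request.Request(VT_FILE_URL.format(file_hash))
    req.add_header("x-apikey", api_key)
    return req

def detection_count(report):
    """Pull the malicious-detection count out of a v3 file report dict."""
    stats = report["data"]["attributes"]["last_analysis_stats"]
    return stats["malicious"]
```

Sending the request with `urllib.request.urlopen(req)` and parsing the JSON body would return the same detection numbers seen in the web report.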

A Google search of the hash located a Hybrid Analysis report and a malware report. The Hybrid Analysis report confirmed the sample sends network traffic. This information could then be used to scope the incident and identify other potentially infected machines.



The resources section in the program contains an icon confirming the file tried to mimic a document. This leads me to conclude it was a social engineering attack against the user.



The reports contained other information that one could use to identify other infected systems in the enterprise. However, as it relates to the practical, there wasn’t much additional information I needed to complete my triage analysis. The next step would be to contact the end user to get the phishing email and then request that the email be purged from all users’ inboxes.


Triage Analysis Wrap-up


The triage process confirmed the system was infected with malicious code. The evidence present on the system and the lack of other attack vector artifacts (i.e. exploits, vulnerable programs executing, etc.) leads me to believe the system was infected due to a phishing email. The email contained some mechanism to get the user to initiate the infection. The risk to the organization is twofold. One of the malicious programs downloads and installs other malware. The other malicious program tries to capture and exfiltrate credentials from the system. The next step would be to escalate the malware event to the incident response process so the system can be quarantined and cleaned, the end user can be contacted to get the phishing email, and it can be determined why end users have access to web email instead of using the organization’s email system.

Blaming Others

Monday, February 8, 2016 Posted by Corey Harrell 0 comments
As we marched across the parade deck, from the side we looked as one. The sound of about 70 Marines’ heels hitting the pavement sounded as one. The sound of the hoarse drill instructor’s voice echoed throughout the 3rd Battalion. The sight from the side must have been one to see: 70 Marines appearing as only a few walking in a single line. In one instant, in one brief moment, the few became many. The drill instructor echoed one command, followed by quickly correcting himself with a different command. The 70 Marines who were marching as one became many as they tried to adjust. The stress of making a mistake on his first platoon must have added to the pressure. As the Marines marched across the parade deck, the drill instructor kept echoing the wrong commands, forcing the Marines to adjust. The stress of the Marines striving to take first place must have added to the pressure. They lost their focus and were no longer in sync with the Marines standing next to them. It must have been a sorry sight from the side, seeing close to 70 arms and legs marching with the sound of 70 heels hitting the parade deck at different times. Cluster is the most G-rated description one can give of seeing the Marines march across the parade deck that afternoon.

The evaluation was over and the 70 Marines filed back into their barracks. The brief moment of reflection in their minds was broken as the sound of a footlocker being kicked broke the silence. The roar of the two other drill instructors’ hoarse voices followed the loud bang of more footlockers being kicked. The blame for the cluster on the parade deck was placed squarely on the recruits. That afternoon the Marines spent quality time sandpit hopping across 3rd Battalion on Parris Island. For those not acquainted with this tradition, the following is what occurs. Recruits are forced to exercise in what seems like a giant sandbox by following the orders barked by their drill instructor. Jumping jacks, mountain climbers, jumping jacks, push ups, mountain climbers, etc. This goes on for a period of time before the recruits run to the next sandbox to be smoked again in the same manner, and then on to the next sandbox after that. This continues until the drill instructors get bored or the recruits need to be somewhere. Words don’t do justice to getting smoked, so please take a few minutes to see a Pit Stop in action. In the sweltering heat of South Carolina, the recruits had sweat pouring down their faces as they were covered in sand with sandfleas biting them. As much as they tried to ignore it, they could only focus on the feeling of bugs feasting on them and not being able to do anything about it (one scratch typically ends with a lot longer time being smoked). That afternoon the recruits (me being one of them) thought to ourselves: why are we being punished when our drill instructor messed up?

It was easier to blame even though it was hard to tell what had even happened. It was easier to blame than it was to take responsibility so it wouldn't happen again. It was easier to blame than it was to admit we messed up; despite the circumstances we lost focus and resembled nasty civilians instead of Marines marching in sync. It was easier to blame to distract us from our current reality of shit.


Moral of the Story

It is wise to direct your anger towards problems - not people; to focus your energies on answers - not excuses.

- William Arthur Ward

Labels:

Triage Practical – Malware Event – Proxy Logs Prefetch $MFT IDS

Wednesday, January 6, 2016 Posted by Corey Harrell 0 comments
The ISO was thrilled and excited about the possibilities after you successfully triaged the previous suspicious network activity. They got a glimpse of the visibility one attains through security monitoring and the information one can get leveraging incident response. As you sit at your desk drinking a Mountain Dew you don’t have time to reflect on the days when your security team was like an ostrich with its head buried in the sand. You are slowly working on improving and formalizing your organization’s security monitoring and detection capabilities as you detect and respond to security events. In the background you hear the junior security guy say “we got another one.” You already know he is referring to a malware infection so you say to him “Grab a screenshot of the alerts and send it to me in an email.” As you wait for the email to arrive you start to wonder whether it is wrong to get excited and look forward to an alert that means your organization may have a problem. You brush the thought aside as the email arrives and you see the screenshot below (dates and times have been censored). You put down the Mountain Dew and put your hands to the keyboard as you start putting your malware triage process to the test.




Triage Scenario


The above scenario outlines the activity leading up to the current malware security event. Below are some of the initial questions you need to answer and report back to the ISO.

        - Is this a confirmed malware security event or was the junior analyst mistaken?
        - What do you think occurred on the system to cause the malware event in the first place?
        - What type of malware is involved and what capabilities does it have?
        - What potential risk does the malware pose to your organization?
        - What recommendation(s) do you make to the security team to strengthen its security program to reduce similar incidents occurring in the future?


Information Available


In an organization’s network you have a wealth of information available to use while triaging a security incident. Despite this, only a subset of the data is needed to successfully triage an incident. In this instance, you are provided with the artifacts below to use during your triage. Please keep in mind, you may not even need all of these.

        - IDS alerts for the timeframe in question (you need to replay the provided pcap to generate the IDS alerts; the pcap is not meant to be used during triage and was only made available so you can generate the IDS alerts in question)
        - Parsed index.dat files to simulate proxy web logs (the parsed index.dat information was modified to remove items not typically found in a web server’s proxy logs)
        - Prefetch files from the system in question (inside the Prefetch.ad1 file)
        - Filesystem metadata from the system in question (the Master File Table is provided for this practical)


Supporting References


The below items have also been provided to assist you in working through the triage process.

        - The jIIr-Practical-Tips.pdf document shows how to: update the IDS signatures in Security Onion, replay the packet capture in Security Onion, and mount the ad1 file with FTK Imager.

        - The file hash list from the system in question. This is being provided since you do not have access to the system nor a forensic image. This can help you confirm the security event and any suspicious files you may find.

        - The file hashes of the practical files for verification purposes
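Since you have hashes rather than the system itself, the hash list can be put to work with a small script. Below is a minimal sketch in Python; the function names are my own, and the hash list is assumed to be a simple set of lowercase MD5 hex strings:

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading in chunks so
    memory use stays flat even on large files."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def confirm_suspicious(path, known_hashes):
    """Return True when a file's MD5 appears in the supplied hash list."""
    return md5_of_file(path) in known_hashes
```

Hashing in chunks matters when you run this across every file pulled from a system, since some will be far too large to read into memory at once.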



The 2016-01-06_Malware-Event Web Logs Prefetch MFT IDS practical files can be downloaded here

The 2016-01-06_Malware-Event Web Logs Prefetch MFT IDS triage write-up is outlined in the post Triage Practical Solution – Malware Event – Proxy Logs Prefetch $MFT IDS 


For background information about the jIIr practicals please refer to the Adding an Event Triage Drop to the Community Bucket article
Labels:

Triage Practical Solution – Malware Event – Prefetch $MFT IDS

Wednesday, December 9, 2015 Posted by Corey Harrell 1 comments
You are staring at your computer screen thinking about how you are going to tell your ISO what you found. Thinking about how this single IDS alert might have been overlooked; how it might have been lost among the sea of alerts from the various security products deployed in your company. Your ISO tasked you with triaging a malware event and now you are ready to report back.


Triage Scenario


To fill in those readers who may not know what is going on you started out the meeting providing background information about the event. The practical provided the following abbreviated scenario (for the full scenario refer to the post Triage Practical – Malware Event – Prefetch $MFT IDS):

The ISO continued “I directed junior security guy to look at the IDS alerts that came in over the weekend. He said something very suspicious occurred early Saturday morning on August 15, 2015.” Then the ISO looked directly at you “I need you to look into whatever this activity is and report back what you find.” “Also, make sure you document the process you use since we are going to use it as a playbook for these types of security incidents going forward.”

Below are some of the initial questions you need to answer and report back to the ISO.

        * Is this a confirmed malware security event or was the junior analyst mistaken?
        * What type of malware is involved?
        * What potential risk does the malware pose to your organization?
        * Based on the available information, what do you think occurred on the system to cause the malware event in the first place?


Information Available


Despite the wealth of information available to you within an enterprise, only a subset of data was provided for you to use while triaging this malware event. The following artifacts were made available:

        * IDS alerts for the time frame in question (you need to replay the provided pcap to generate the IDS alerts; the pcap was not provided for use during triage and was only made available so you can generate the IDS alerts in question)

        * Prefetch files from the system in question (inside the Prefetch.ad1 file)

        * File system metadata from the system in question (the Master File Table is provided for this practical)


Information Storage Location within an Enterprise


Each enterprise’s network is different and each one offers different information for triaging. As such, it is not possible to outline all the possible locations where this information could be located. However, it is possible to highlight common areas where it can be found. If your environment does not reflect the locations I mention, you can evaluate your network for a similar system containing similar information, or better prepare your environment by making sure this information starts being collected.

        * IDS alerts within an enterprise can be stored on the IDS/IPS sensors themselves or centrally located through a management console and/or logging system (i.e. SIEM)

        * Prefetch files within an enterprise can be located on the potentially infected system

        * File system metadata within an enterprise can be located on the potentially infected system


Collecting the Information from the Storage Locations


Knowing where information is available within an enterprise is only part of the equation. It is necessary to collect the information so it can be used for triaging. Similar to all the differences between enterprises’ networks, how information is collected varies from one organization to the next. Below are a few suggestions for how the information outlined above can be collected.

        * IDS alerts don’t have to be collected. They only need to be made available so they can be reviewed. Typically this is accomplished through a management console or security monitoring dashboard.

        * Prefetch files are stored on the potentially infected system. The collection of this artifact can be done by pulling the files off either remotely or locally. Remote options include enterprise forensic tools such as F-Response, Encase Enterprise, or GRR Rapid Response; triage scripts such as the Tr3Secure collection script; or the admin share, since Prefetch files are not locked files. The same tools can be used locally.

        * File system metadata is very similar to prefetch files because the same collection methods work for it. The one exception is that the NTFS Master File Table ($MFT) can’t be pulled off using the admin share.
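As a rough sketch of the admin-share option, the copy itself is easy to script. This is illustrative only: the host name in the comment is hypothetical, and the enterprise tools mentioned above additionally handle locked files, logging, and scale for you.

```python
import glob
import os
import shutil

# source_dir could be a local Prefetch folder or an admin share path
# such as \\HOST\C$\Windows\Prefetch (host name hypothetical).
def collect_prefetch(source_dir, dest_dir):
    """Copy every .pf file from source_dir into dest_dir and return
    the copied file names."""
    os.makedirs(dest_dir, exist_ok=True)
    collected = []
    for pf in sorted(glob.glob(os.path.join(source_dir, "*.pf"))):
        shutil.copy2(pf, dest_dir)  # copy2 preserves file timestamps
        collected.append(os.path.basename(pf))
    return collected
```

Preserving timestamps with copy2 matters here, since the last run times are part of what you are collecting the Prefetch files for.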


Potential DFIR Tools to Use


The last part of the equation is what tools one should use to examine the information that is collected. The tools I’m outlining below are the ones I used to complete the practical.

        * Security Onion to generate the IDS alerts
        * Winprefetchview to parse and examine the prefetch files
        * MFT2CSV to parse and examine the $MFT file
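Once Winprefetchview exports its results to CSV, a few lines of Python can order the entries by last run time, which is usually the first pivot I make. The sample data and column names below are assumptions for illustration; check them against your version of the tool:

```python
import csv
from io import StringIO

# Hypothetical Winprefetchview CSV export; real column names may differ.
SAMPLE = r"""Filename,Process Path,Last Run Time
CALC.EXE-AC08706A.pf,C:\Windows\System32\calc.exe,2015-08-15 05:20:01
SUSPECT.EXE-1B2C3D4E.pf,C:\Users\victim\AppData\Local\Temp\suspect.exe,2015-08-15 05:33:55
"""

def by_last_run(csv_text):
    """Return (process path, last run time) pairs, most recent first.
    Timestamps are assumed ISO formatted so they sort as strings."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    rows.sort(key=lambda r: r["Last Run Time"], reverse=True)
    return [(r["Process Path"], r["Last Run Time"]) for r in rows]
```

Sorting in code rather than in the GUI makes the step repeatable, which helps when you are documenting the process as a playbook.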


Others’ Approaches to Triaging the Malware Event


Before I dive into how I triaged the malware event I wanted to share the approaches used by others to tackle the same malware event. I find it helpful to see different perspectives and techniques tried to solve the same issue. I also wanted to thank those who took the time to do this so others could benefit from what you share.

Matt Gregory shared his analysis on his blog My Random Thoughts on InfoSec. Matt did a great job outlining not only what he found but by explaining how he did it and what tools he used. I highly recommend taking the time to read through his analysis and the thought process he used to approach this malware event.

An anonymous person (at least anonymous to me since I couldn’t locate their name) posted their analysis on a newly created blog called Forensic Insights. Their post goes into detail on analyzing the packet capture including what was transmitted to the remote device.


Partial Malware Event Triage Workflow


The diagram below outlines the jIIr workflow for confirming malicious code events. The workflow is a modified version of the Securosis Malware Analysis Quant. I modified the Securosis process to make it easier to use for security event analysis.



Detection: the malicious code event is detected. Detection can be a result of technologies alerting on it or a person reporting it. The workflow starts in response to a potential event being detected and reported.

Triage: the detected malicious code event is triaged to determine if it is a false positive or a real security event.

Compromised: after the event is triaged the first decision point is to decide if the machine could potentially be compromised. If the event is a false positive or one showing the machine couldn’t be infected then the workflow is exited and returns back to monitoring the network. If the event is confirmed or there is a strong indication it is real then the workflow continues to identify the malware.

Malware Identified: the malware is identified two ways. The first way is identifying what the malware is including its purpose and characteristics using available information. The second way is identifying and obtaining the malware sample from the actual system to further identify the malware and its characteristics.

Root Cause Analysis: a quick root cause analysis is performed to determine how the machine was compromised and to identify indicators to use for scoping the incident. This root cause analysis does not call for a deep dive analysis taking hours and/or days but one only taking minutes.

Quarantine: the machine is finally quarantined from the network it is connected to. This workflow takes into account performing analysis remotely so disconnecting the machine from the network is done at a later point in the workflow. If the machine is initially disconnected after detection then analysis cannot be performed until someone either physically visits the machine or ships the machine to you. If an organization’s security monitoring and incident response capability is not mature enough to perform root cause analysis in minutes and analysis live over the wire then the Quarantine activity should occur once the decision is made about the machine being compromised.


Triage Analysis Solution


Triaging the malware event outlined in the scenario does not require using all of the supplied information. The triage process could have started with either the IDS alert the junior security analyst saw or the prefetch files from the system in question to see what program executed early Saturday morning on August 15, 2015. For completeness, my analysis touches on each data source and the information it contains. As a result, I started with the IDS signature to ensure I included it in my analysis.


IDS alerts


The screenshot below shows the IDS signatures that were triggered by replaying the provided malware-event.pcap file in Security Onion. I highlighted the alert of interest.




The IDS alert by itself provides a wealth of information. The Emerging Threats (ET) signature that fired was "ET TROJAN HawkEye Keylogger FTP" and this occurred when the machine in question (192.168.200.128) made a connection to the IP address 107.180.21.230 on the FTP destination port 21. To determine if the alert is a false positive it’s necessary to explore the signature (if available) and the packet responsible for triggering it. The screenshot below shows the signature in question:



The signature is looking for a system on the $HOME_NET going to an external system on the FTP port 21 and the system has to initiate the connection (as reflected by flow:established,to_server). The packet needs to contain the string “STOR HAWKEye_”. The packet that triggered this signature meets all of these requirements. The system connected to an external IP address on port 21 and the picture below shows the data in the packet contained the string of interest.
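The content match portion of the signature is easy to reproduce outside the IDS, which is handy when you want to sanity-check a payload by hand. The sketch below mimics only the byte search; the flow state, direction, and port checks are omitted, and the captured log file name is made up for illustration:

```python
def matches_hawkeye_ftp(payload: bytes) -> bool:
    """Mimic only the content portion of the signature: a byte search
    for the STOR command string HawkEye uses when exfiltrating over
    FTP. A real IDS also checks flow direction, established state, and
    the destination port, all of which are omitted here."""
    return b"STOR HAWKEye_" in payload

# An illustrative FTP command line; the log file name is made up.
sample_packet = b"STOR HAWKEye_Keylogger_victim_records.txt\r\n"
```

Because FTP commands travel in cleartext, a plain substring match like this is all the signature needs to fire on the exfiltration attempt.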



Based on the network traffic and the packet data the IDS alert is not a false positive. I performed Internet research to obtain more context about the malware event. A simple Google search on HawkEye Keylogger produces numerous hits: from YouTube videos showing how to use it, to forums posting cracked versions, to various articles discussing it. One article is TrendMicro’s paper titled Piercing the HawkEye: Nigerian Cybercriminals Use a Simple Keylogger to Prey on SMBs Worldwide, and just the pictures in the paper provide additional context (remember, during triage you won’t have time to read a 34-page paper). The keylogger is easily customizable since it has a builder, and it can deliver logs through SMTP or FTP. Additional functionality includes: stealing clipboard data, taking screenshots, downloading and executing other files, and collecting system information.

Research on the destination IP address shows the AS is GODADDY and numerous domain names map back to the address.



Prefetch files


When I review programs executing on a system I tend to keep the high level indicators below in mind. Over the years, these indicators have enabled me to quickly identify malicious programs that are or were on a system.


  • Programs executing from temporary or cache folders
  • Programs executing from user profiles (AppData, Roaming, Local, etc.)
  • Programs executing from C:\ProgramData or All Users profile
  • Programs executing from C:\RECYCLER
  • Programs stored as Alternate Data Streams (i.e. C:\Windows\System32:svchost.exe)
  • Programs with random and unusual file names
  • Windows programs located in wrong folders (i.e. C:\Windows\svchost.exe)
  • Other activity on the system around suspicious files
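These indicators lend themselves to automation. Below is a rough sketch that encodes a few of them as path heuristics; the patterns are illustrative and far from exhaustive, so treat any hit as a lead to review rather than a verdict:

```python
import re

# A rough encoding of some of the indicators above as path heuristics.
SUSPICIOUS_PATTERNS = [
    r"\\temp\\",                       # temporary folders
    r"\\temporary internet files\\",   # browser cache
    r"\\appdata\\",                    # user profiles
    r"\\programdata\\",
    r"\\all users\\",
    r"\\recycler\\",
    r":[^\\]+\.exe$",                  # alternate data stream (file:name.exe)
]

def is_suspicious_path(path: str) -> bool:
    """Flag a file path that matches any of the high level indicators."""
    lowered = path.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)
```

The random-name and wrong-folder indicators still need human judgment, which is why a script like this narrows the review rather than replaces it.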


The collected prefetch files were parsed with Winprefetchview and I initially sorted by process path. I reviewed the parsed prefetch files using my general indicators and I found the suspicious program highlighted in red.



The program in question is suspicious for two reasons. First, the program executed from the Temporary Internet Files folder. The second and more important reason was the name of the program, which was OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE (%20 is the encoding for a space). The name resembles a program trying to be disguised as a document. This is a social engineering technique used with phishing emails. To gain more context around the suspicious program I then sorted by the Last Run time to see what else was executing around this time.
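The %20 encoding is worth noting because cached copies of downloads keep the URL-encoded name. Python’s standard library decodes it directly:

```python
from urllib.parse import unquote

# Cached copies of downloads keep the URL-encoded file name.
cached_name = "OVERDUE%20INVOICE%20DOCUMENTS%20FOR%20PAYMENT%20082015[1].EXE"
print(unquote(cached_name))  # OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE
```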



The OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE program executed on 8/15/15 at 5:33:55 AM UTC, which matches up to the time frame the junior security analyst mentioned. The file had an MD5 hash of ea0995d9e52a436e80b9ad341ff4ee62. This hash was used to confirm the file was malicious as reflected in an available VirusTotal report. Shortly thereafter another executable named VBC.exe ran, but its process path was not reflected in the files referenced in the prefetch file itself. The other prefetch files did not show anything else I could easily tie to the malware event.


File System Metadata


At this point the IDS alert revealed the system in question had network activity related to the HawkEye Keylogger. The prefetch files revealed a suspicious program named OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE and it executed on 8/15/15 at 5:33:55 AM UTC. The next step in the triage process is to examine the file system metadata to identify any other malicious software on the system and to try to identify the initial infection vector. I reviewed the metadata in a timeline to make it easier to see the activity for the time of interest.

For this practical I leveraged the MFT2CSV program in the configuration below to generate a timeline. However, an effective technique - but not free - is using the home plate feature in Encase Enterprise against a remote system. This enables you to see all files and folders while being able to sort different ways. The Encase Enterprise method is not as comprehensive as a $MFT timeline but works well for triaging.



In the timeline I went to the time of interest, which was 8/15/15 at 5:33:55 AM UTC. I then proceeded forward in time to identify any other suspicious files. A few files were created within seconds of the OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE program executing. The files’ hashes will need to be used to determine more information about them since I am unable to view them.
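Pivoting forward and backward around a known time like this can be sketched as a simple window filter over the parsed timeline. The entry format below is my own assumption for illustration, not the MFT2CSV layout:

```python
from datetime import datetime, timedelta

# The execution time recovered from the prefetch files (UTC).
PIVOT = datetime(2015, 8, 15, 5, 33, 55)

def around_pivot(entries, before=timedelta(minutes=3), after=timedelta(minutes=1)):
    """Return timeline entries inside a window around the pivot time.
    entries is assumed to be a list of (datetime, description) tuples
    parsed from a timeline export; the layout is illustrative."""
    return [(ts, desc) for ts, desc in entries
            if PIVOT - before <= ts <= PIVOT + after]
```

Keeping the window tight, a few minutes back and about a minute forward, mirrors the manual review described here and keeps the output small enough to read line by line.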



The timeline then shows the VBC.EXE program executing followed by activity associated with a user surfing the Internet.



The timeline was reviewed for about a minute after the suspicious program executed and nothing else jumped out. The next step is to go back to 8/15/15 at 5:33:55 AM UTC in the timeline to see what preceded this event. There was more activity related to the user surfing the Internet as shown below.



I kept working my way through the web browsing files to find something to confirm what the user was actually doing. I worked my way through Yahoo cookies and cached web pages containing the word “messages”. There was nothing definite so I continued going back in time. I worked my way back to around 5:30 AM UTC where cookies for Yahoo web mail were created. This activity was three minutes prior to the infection; three minutes is a long time. At this point additional information is needed to definitively answer how the system became infected in the first place. At least I know that it came from the Internet using a web browser. Note: in the scenario the pcap file was meant for IDS alerts only so I couldn’t use it to answer the vector question.


Researching Suspicious Files


The analysis is not complete without researching the suspicious files discovered through triage. I performed additional research on the file OVERDUE INVOICE DOCUMENTS FOR PAYMENT 082015[1].EXE using its MD5 hash ea0995d9e52a436e80b9ad341ff4ee62. I went back to its VirusTotal report and noticed there didn’t appear to be a common name in the various security product detections. However, there were unique detection names I used to conduct additional research. Microsoft’s detection name was TrojanSpy:MSIL/Golroted.B and their report said the malware “tries to gather information stored on your PC”. A Google search of the hash also located a Malwr sandbox report for the file. The report didn’t shed any light on the other files I found in the timeline.

The VBC.EXE file was no longer on the system, preventing me from performing additional research on it. The pid.txt and pidloc.txt files’ hashes were associated with a Hybrid Analysis report for a sample with the MD5 hash 242e9869ec694c6265afa533cfdf3e08. The report had a few interesting things. The sample also dropped the pid.txt and pidloc.txt files as well as executing REGSVCS.EXE as a child process. This is the same behavior I saw in the file system metadata and prefetch files. The report provided a few other nuggets, such as the sample trying to dump Web browser and Outlook stored passwords.


Triage Analysis Wrap-up


The triage process did confirm the system was infected with malicious code. The infection was a result of the user doing something on the Internet and additional information is needed to confirm what occurred on the system for it to become infected in the first place. The risk to the organization is the malicious code tries to capture and exfiltrate information from the system including passwords. The next step would be to escalate the malware event to the incident response process so a deeper analysis can be done to answer more questions. Questions such as what data was potentially exposed, what did the user do to contribute to the infection, was the attack random or targeted, and what type of response should be done.