Monday, 21 January 2013

SANS Forensic Artifact 7: Last Visited MRU

Welcome to 2013. I was fortunate to have some free time towards the end of last year which allowed me to catch up on some of my side projects such as the Malware Domain List script. Overall I had a great response from the community in regards to this script. I think a number of features and improvements could be made to it for added functionality and usability so I'll aim to get back to it at some stage soon.

Yesterday I found myself sitting in front of some ISA web proxy logs stored in the W3C format. My first thought was to use Log Parser for some initial analysis; however, I wondered what quick wins I might get if I tested my MDL parser over the log file. Surprisingly there were no errors and the tool ran successfully the first time. While I'd need to do some further testing to confirm that it discovered all the URLs, it took me less than five minutes to run and provided some instant results. Although in this instance none were significant in terms of pivot points, it was certainly worth the small effort I put in to achieve a result.

Harlan has recently asked "How well do you know what your tools do for you?", which drew a variety of responses. It's a great question, and one of the reasons I'm doing the SANS artifact blog series is to understand my tools and the underlying data set. One way to know your tools is to code your tools. In order to code a tool you'll need at least a basic understanding of the underlying data set / artifact, and most likely need to view and understand its raw format. Once you understand the data set, I think it's also beneficial to put some thought into potential pivot points for that data set: if you need to analyse that artifact in the future, what could assist you with identifying quick wins in your investigation? With that in mind, I'll attempt to identify pivot points in the SANS artifacts I write about from this point onwards, in the hope that it will assist myself or others in the future.

For anyone that is not aware, I've created a Twitter account @sploited which you may wish to follow. There are a number of interesting conversations on Twitter and I encourage people to get involved. With that said, let's move on with the artifact. SANS lists the following information within the poster.

Tracks the specific executable used by an application to open the files documented in the OpenSaveMRU key. In addition, each value also tracks the directory location for the last file that was accessed by that application.

Notepad.exe was last run using the
C:\Users\<Username>\Desktop folder
XP NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedMRU
Win7 NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedPidlMRU
Tracks the application executables used to open OpenSaveMRU and the last file path used.

We've previously covered SANS Forensic Artifact 1: Open/Save MRU, and this artifact is really a continuation of where we finished with Artifact 1. As per the description listed, this key tracks the executable that was used to open the files identified in the OpenSaveMRU key. I went back to my original approach of creating two text files (although I only really needed one) that I'd open through the Open/Save window. Once again, to ease my investigation, I created a quick script to parse my live ntuser.dat file and compare results. At this point I'm using an older Windows XP test machine for this analysis. The script is as follows:

 ::Copy each user's NTUSER.DAT using HoboCopy  
 FOR /F "tokens=*" %%G IN ('dir /b ^"C:\Documents and Settings\*^"') DO .\tools\hob\hobocopy.exe "c:\Documents and Settings\%%G" .\hives\%%G NTUSER.DAT  
 ::Parse ntuser.dat using RegRipper  
 FOR /F "tokens=*" %%G IN ('dir /b ^"C:\Documents and Settings\*^"') DO (  
      .\tools\rr\rip.exe -r .\hives\%%G\NTUSER.DAT -f ntuser >> output\rr-ntuser-%%G.txt  
 )  

I created two files in C:\temp\SANS LastVistedMRU named SANS LastVisitedMRU Test File *.txt, where * is the number 1 or 2. I ran the script above to dump my hive and analyse it with RegRipper. As expected, I received the following results.

The above results have been clipped to show just the necessary artifacts. I decided to do a word search of the RegRipper output for both the folder name and the text file name. What I found interesting was the effect I'd had on other artifacts in the registry just by performing this small number of actions on the system. Listed below is the output of that search:

 comdlg32 v.20110901  
 (NTUSER.DAT) Gets contents of user's ComDlg32 key  
 comdlg32 v.20110901  
 LastWrite Time Thu Mar 15 01:18:21 2012 (UTC)  
  MRUList = aygdxjewvoqtursfplicnmkbh  
  a -> EXE: notepad.exe  
   -> Last Dir: C:\temp\SANS LastVistedMRU  
 LastWrite Time: Mon Jan 21 23:44:27 2013 Z  
  MRUList = edagchijfb  
  e -> C:\temp\SANS LastVistedMRU\SANS LastVisitedMRU Test File 2.txt  
  d -> C:\temp\SANS LastVistedMRU\SANS LastVisitedMRU Test File 1.txt  
 LastWrite Time: Mon Jan 21 23:44:27 2013 Z  
  MRUList = gbifhadcje  
  g -> C:\temp\SANS LastVistedMRU\SANS LastVisitedMRU Test File 2.txt  
  b -> C:\temp\SANS LastVistedMRU\SANS LastVisitedMRU Test File 1.txt  
  Recentdocs v.20100405  
 (NTUSER.DAT) Gets contents of user's RecentDocs key  
 **All values printed in MRUList\MRUListEx order.  
 LastWrite Time Mon Jan 21 23:51:22 2013 (UTC)  
  87 = SANS LastVistedMRU  
  18 = SANS LastVisitedMRU Test File 2.txt  
  85 = SANS LastVisitedMRU Test File 1.txt  
 LastWrite Time Mon Jan 21 23:51:22 2013 (UTC)  
 MRUListEx = 6,2,3,4,5,7,1,8  
  6 = SANS LastVisitedMRU Test File 2.txt  
  2 = SANS LastVisitedMRU Test File 1.txt  
 LastWrite Time Mon Jan 21 23:51:22 2013 (UTC)  
 MRUListEx = 1,0,9,8,6,7,4,3  
  1 = SANS LastVistedMRU   
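
On XP, the raw LastVisitedMRU value data that RegRipper is decoding in the comdlg32 output above is commonly documented as two consecutive null-terminated UTF-16LE strings: the executable name followed by the last directory used. A minimal sketch of that decoding in Python (illustrative only, not the author's code):

```python
def decode_lastvisitedmru(raw):
    """Split an XP LastVisitedMRU value into (executable, last_dir).

    Assumes the value data is two consecutive null-terminated
    UTF-16LE strings: exe name, then the last directory used.
    """
    text = raw.decode("utf-16-le")
    exe, last_dir = text.split("\x00")[:2]
    return exe, last_dir

# Build a synthetic value matching the output above
sample = "notepad.exe\x00C:\\temp\\SANS LastVistedMRU\x00".encode("utf-16-le")
exe, last_dir = decode_lastvisitedmru(sample)
# exe is the executable name, last_dir the directory RegRipper reports
```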

Again I've clipped the results to the artifacts of interest. Recently Harlan posted about an Analysis Matrix where he discusses adding event categories to artifacts so that we can add additional context to our investigations, or clip our timelines so they only contain artifacts which relate to a particular category. This got me thinking about creating a "matrix" or "mapping" of the effect that events have on other artifacts within a system. Harlan additionally mentions that the presence, or lack, of an artifact can be an artifact in itself. To explain, I thought I'd look at the exercise above, where I completed the following actions:
  1. Created a new folder on my workstation
  2. Created two (2) new text files
  3. Opened notepad
  4. File -> Open and opened the two files created in step 2.
In this particular instance the following artifacts have been created / modified:

# File Open/Save (# is the appropriate file extension)
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\LastVisitedMRU
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSaveMRU\*
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSaveMRU\#
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs\.#
    ->  Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs\Folder
    ->  UserAssist - UEME_RUNPIDL:%csidl2%\Accessories\Notepad.lnk
    ->  B...  C:/Documents and Settings/username/Start Menu/Programs/Accessories/Notepad.lnk
    ->  B... C:/Documents and Settings/username/Recent/SANS LastVistedMRU.lnk 
    -> last run (1068)
    ->  M... .//Software/Microsoft/Notepad

By no means is the list above a definitive list and I also believe that it needs some further thought in order to shape it into something usable such as the SANS Forensic Poster.

I put some thought into how you might automate the discovery of potential pivot points with this category; however, again it is most likely determined by the incident at hand. For example, you might want to understand whether any of the files opened / saved contained a keyword such as a credit card number or other sensitive business information. There are a number of ways of doing this, but when reviewing just the output above you might do something like the following:
  1. Create a script that parses the output above identifying files or directories
  2. Based on the files found above automatically conduct keyword search across each of the files
  3. Conduct an md5 search of the files identified to see whether a match is found for a file provided by your customer
  4. Produce an output file containing the files discovered that contained the keyword so you can review at a later stage
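
The four steps above could be prototyped in a few lines. The sketch below is illustrative only (the path regex and helper names are my own assumptions, not a finished tool): it pulls file paths out of RegRipper output, then flags any file matching a keyword or a known MD5 hash.

```python
import hashlib
import re

def find_paths(regripper_output):
    # Crude extractor: anything that looks like a drive-letter path
    return re.findall(r"[A-Za-z]:\\[^\r\n]+", regripper_output)

def triage(paths, keywords, known_md5s):
    """Step 2-4: keyword search and MD5 comparison over discovered files."""
    hits = []
    for path in paths:
        try:
            with open(path, "rb") as f:
                data = f.read()
        except OSError:
            continue  # file no longer present on disk
        text = data.decode("utf-8", errors="ignore").lower()
        if any(k.lower() in text for k in keywords):
            hits.append((path, "keyword"))
        if hashlib.md5(data).hexdigest() in known_md5s:
            hits.append((path, "md5 match"))
    return hits
```

The output of triage() is the review list from step 4: only files that actually hit a keyword or hash are carried forward.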

The advantage is that you're only analysing a subset of data, which could mean you'll have results much sooner. Of course you could do this across a complete forensic image, but that might take hours before you achieve results. Something such as the above would certainly:
      a) be fairly simple to create,
      b) require minimal effort to execute quickly,
      c) identify pivot points for additional analysis, and
      d) potentially clip your timeline to something more manageable.

It's not going to work in every scenario, but what you're trying to do is identify quick wins with minimal effort.


Friday, 28 December 2012

Timeline Pivot Points with the Malware Domain List

I thought, as it's the end of the year, it would be a good opportunity to briefly break away from the SANS Forensic Artifact posts I've been writing. In my own time I've been playing around with some code that parses a timeline file for any URL discovered within it and then compares that with the URLs listed in the Malware Domain List (MDL).

If a match is found, it lists the malicious URL from the MDL and the description which explains why that URL has been listed. I'm creating this for a greater ability to find "Pivot Points", which both Rob Lee and Harlan Carvey mention serve as an anchor for our investigations. Pivot points can come in a variety of forms and will hopefully provide us with a starting point or area of focus. The less time we spend poking around an image, the more time we can spend providing value to our customers or employers.

So to get started, I first downloaded a copy of the Malware Domain List. You can get yourself a copy at the following location -> Once you have the list, create an SQLite database and import the MDL into it. You can easily install the Firefox addon SQLite Manager, which is the method I've used.
  1. Create a new database in the same directory as the script called malwaredomainlist.sqlite
  2. Import the MDL from CSV into new table called mdomain
  3. See screenshot below for appropriate field names to use for the table
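
The same import can be scripted rather than done through SQLite Manager. A rough Python equivalent (the two-column CSV layout here is a simplification for illustration; the real MDL export carries more fields, and my actual field names come from the screenshot):

```python
import csv
import sqlite3

def build_mdl_db(csv_path, db_path="malwaredomainlist.sqlite"):
    """Create the mdomain table and load MDL rows into it.

    Assumes a simplified two-column CSV (domain, description);
    adjust the column indices to match the real MDL export.
    """
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS mdomain (domain TEXT, description TEXT)"
    )
    with open(csv_path, newline="") as f:
        rows = [(r[0], r[1]) for r in csv.reader(f) if len(r) >= 2]
    con.executemany("INSERT INTO mdomain VALUES (?, ?)", rows)
    con.commit()
    return con
```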

Above are some basic steps to get you up and running, and if you review the screenshot you'll see the table and field names I've used. If you decide to use the tool that I post, you will want to ensure that your filename, table name and field names are the same as mine, otherwise you'll generate some errors.

I had a few attempts at tackling how I would compare the domains discovered within my timeline to those within the MDL. I felt the only way to do this accurately would be to reduce both URLs down to their domain name including the suffix / TLD / gTLD. I had a few attempts at coding this myself but always found that some domain would break the script at some point. In the end I went with a pre-packaged module -> To install the module, type the following command from a command prompt (ensuring, obviously, that you've installed Perl in the first place). Below is the command plus the output:

 ppm install Domain::PublicSuffix  
 Downloading Domain-PublicSuffix-0.07...done  
 Downloading Data-Validate-Domain-0.10...done  
 Downloading Net-Domain-TLD-1.69...done  
 Unpacking Domain-PublicSuffix-0.07...done  
 Unpacking Data-Validate-Domain-0.10...done  
 Unpacking Net-Domain-TLD-1.69...done  
 Generating HTML for Domain-PublicSuffix-0.07...done  
 Generating HTML for Data-Validate-Domain-0.10...done  
 Generating HTML for Net-Domain-TLD-1.69...done  
 Updating files in site area...done  
  11 files installed  

The above module makes use of a Firefox dat file which it uses to identify the TLD / suffix of the domain. So in order for the script to work you'll also need to download this dat file, which you can find at the following -> and save it within the same directory as the script.
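
What get_root_domain is doing can be illustrated with a tiny hand-rolled version. The real module walks the full Mozilla suffix list from that dat file; this Python sketch hardcodes a couple of suffixes purely for illustration:

```python
# Tiny illustrative subset of the public suffix list;
# Domain::PublicSuffix reads the full effective_tld_names.dat instead.
SUFFIXES = {"com", "net", "", "uk"}

def root_domain(host):
    """Reduce a hostname to registered-domain-plus-suffix."""
    labels = host.lower().split(".")
    # Walk from the longest candidate suffix down; keep one label before it
    for i in range(len(labels)):
        if ".".join(labels[i:]) in SUFFIXES and i > 0:
            return ".".join(labels[i - 1:])
    return host

# root_domain("") reduces to ""
# root_domain("") reduces to ""
```

This is why a naive "last two labels" approach breaks: would wrongly reduce to without suffix awareness.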

Now that we have our database sorted, I've created the following script. At this point it's still a work in progress and I haven't commented it very well. As always my code is taken "as is" and I provide no additional support or responsibility for the output it provides. I'm no coding guru and always appreciate feedback on a better or more efficient way of doing things, so feel free to shout out.

 #! c:\perl\bin\perl.exe  
 use Domain::PublicSuffix;  
 use DBI;  
 use strict;  
 use Getopt::Long;  
 use Regexp::Common qw /URI/;  
 use URI;  
 use List::MoreUtils qw/ uniq /;  
 my %config = ();  
 GetOptions(\%config, qw(file|f=s system|s=s user|u=s help|?|h));  
 if ($config{help} || ! %config) {  
   _syntax();  
   exit 1;  
 }  
 die "You must enter a path.\n" unless ($config{file});  
 #die "File not found.\n" unless (-e $config{file} && -f $config{file});  
 my $file = $config{file};  
 my @uniq_domains;  
 my $suffix = new Domain::PublicSuffix ({  
   'data_file' => 'effective_tld_names.dat'  
 });  
 open( my $fh, '<', $file ) or die "Can't open $file: $!";  
 while ( my $line = <$fh> ) {  
      # Grab any http/https URL on the line and reduce it to its root domain  
      my @url = $line =~ m/($RE{URI}{HTTP}{-scheme => qr(https?)})/g;  
      next unless @url;  
      my $temp_domain = URI->new( $url[0] );  
      my $domain = $temp_domain->host;  
      my $domain1 = getDomain($domain);  
      push( @uniq_domains, $domain1 );  
 }  
 close $fh;  
 my @unique = uniq @uniq_domains;  
 foreach ( @unique ) {  
      if($_) {  
           my $db = DBI->connect("dbi:SQLite:dbname=malwaredomainlist.sqlite","","") || die( "Unable to connect to database\n" );  
           my $all = $db->selectall_arrayref("SELECT domain,description from mdomain where domain LIKE '%$_%'");  
           foreach my $row (@$all) {  
                my ($maldomain,$description) = @$row;  
                # Strip any path and port from the MDL entry before comparing  
                my @splitdomain = split('/',$maldomain);  
                @splitdomain = split(':',$splitdomain[0]);  
                my $tempmdomain = getDomain($splitdomain[0]);  
                if($_ eq $tempmdomain) {  
                     print $_.",".$maldomain.",".$description."\n";  
                }  
           }  
      }  
 }  
 sub getDomain {  
      my $root = $suffix->get_root_domain($_[0]);  
      return $root;  
 }  
 sub _syntax {  
      print<< "EOT";  
 Produce list of malware domain hits from timeline output  
   -f file..................path to timeline file  
   -h ......................Help (print this information)  
 **All times printed as GMT/UTC  
 copyright 2012 Sploit  
 EOT  
 }  

At this point if you run a command such as the following:

 malwaredomainlist -f timeline.csv > output.txt  

You'll be presented with output in CSV format (assuming my instructions made sense) where the fields presented are the domain in question, the complete malware domain URL and the description / comments. Here is a sample output:

 ,|,RFI  
 ,,RFI  
 ,,RFI  
 ,,RFI  
 ,,Mebroot calls home  
 ,,Rogue  
 ,,compromised site directs to exploits  

As you can see from the above, there are some domains which will consistently generate false positives. My script grabs the unique URLs listed within a timeline, and certain common domains will almost always be listed.

At this point I'm not sure of the value of this tool. It's fairly quick to run, and if you find yourself with a massive timeline file and you're not sure where to start, then potentially this might be your next best bet. While I'm tweaking the code I haven't created the executable version of it yet; however, I have uploaded the code to my Google code repository to save you any issues with copying the source above.

Hopefully you get some value out of the tool; please let me know if you have any success with using it. In the meantime I'll continue to tweak and update the code. At this point it would be nice to have an option to download a fresh MDL and update the database. This wouldn't take long to do manually, but it would be nice for it to be automatic.

Thursday, 27 December 2012

SANS Forensic Artifact 6: UserAssist

I'm a little late to say this but firstly Happy Christmas to my readers out there. I've been fortunate enough to have a little time off but still find myself working the Christmas / New Year period. I hope some of you have more time off and can catch up on some of those tasks you've been avoiding.

Today we're moving onto a new category which I think everybody will find of interest: Program Execution. There have been a huge number of posts on these artifacts and just how valuable they can be. Once again we'll attempt to create a few of the artifacts in different ways and see how that is reflected in the output of our tools.

I still haven't forgotten about the artifacts we've missed so far and I'm currently working on some posts to cover those so that I have a complete series.

GUI-based programs launched from the desktop are tracked in the launcher on a Windows System.
All values are ROT-13 Encoded
  • GUID for XP 
    • 75048700 Active Desktop 
  • GUID for Win7 
    • CEBFF5CD Executable File Execution
    • F4E57C4B Shortcut File Execution
  • Program Locations for Win7 Userassist
    • ProgramFilesX64 6D809377-…
    • ProgramFilesX86 7C5A40EF-…
    • System 1AC14E77-…
    • SystemX86 D65231B0-…
    • Desktop B4BFCC3A-…
    • Documents FDD39AD0-…
    • Downloads 374DE290-…
    • UserProfiles 0762D272-…
Let's first take a look at what we see in my UserAssist registry key so we understand what our tool must export and parse, and so we can understand which applications have launched and from where. I browsed to "NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist" and found this

Within each of the Count keys are a number of values which, as mentioned above, are ROT13 encoded. To the human eye they don't make much sense, but once we decode them we'll easily see what they mean. To give you a feel for what the values look like compared to the decoded values, see the following output. I have grabbed a sample value from my own computer, where the first line is the ROT13 value and the second line is the decoded value.

 P:\Cebtenz Svyrf (k86)\Zbmvyyn Sversbk\bzav.wn  
 C:\Program Files (x86)\Mozilla Firefox\omni.ja  
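
The ROT13 decoding is trivial to reproduce; in Python, for example:

```python
import codecs

encoded = r"P:\Cebtenz Svyrf (k86)\Zbmvyyn Sversbk\bzav.wn"
decoded = codecs.decode(encoded, "rot_13")
print(decoded)  # C:\Program Files (x86)\Mozilla Firefox\omni.ja
```

Note that ROT13 only rotates A-Z and a-z; digits, backslashes and parentheses pass through untouched, which is why "86" survives unchanged while "k" becomes "x".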

You get the picture of what we are dealing with, and as mentioned above these are just a few samples of what I have in mine. You'll notice that there are a number of values prefixed with UEME. These can also add context to how an application may have been run. I've attempted to find a full list of these for both Windows 7 and Windows XP, however I've only been able to find bits and pieces. The following list is taken from Didier Stevens' blog at the following location (here).
In Windows 7 they've significantly reduced the number, as you can see below in the comparison. Many of the following are self explanatory and I won't be going into each for this particular tutorial.

 Windows 7  
 XP DLL (version 6.00.2900.3157):  

So let's try to generate some of our own values and see how they show within the output of RegRipper. To get started I began by running 'procexp.exe' from the Sysinternals suite. I picked this application because it is GUI based and it would be easy for me to copy it to different locations on my computer. I'd then once again use a combination of HoboCopy (to rip my active registry hive) and RegRipper to rip the UserAssist registry key and examine the contents. I ran procexp.exe from four different places: the Desktop, the root of my username folder, Documents, and finally from within the x64 Program Files location.

I ran the following command for HoboCopy

 HoboCopy.exe c:\Users\username c:\tmp\ ntuser.dat  

Then the following for RegRipper

 rip.exe -r c:\tmp\ntuser.dat -p userassist2 > c:\tmp\userassist.txt  

The above commands produced the following output

 Thu Dec 27 07:31:20 2012 Z  
  {6D809377-6AF0-444B-8957-A3773F02200E}\procexp.exe (1)  
 Thu Dec 27 07:30:57 2012 Z  
  C:\Users\username\Documents\procexp.exe (1)  
 Thu Dec 27 07:30:37 2012 Z  
  C:\Users\username\procexp.exe (1)  
 Thu Dec 27 07:30:11 2012 Z  
  C:\Users\username\Desktop\procexp.exe (1)  

As you can see from above, most of them make sense apart from the one we ran from within our x64 Program Files. I grabbed the code highlighted in red and Googled it. I found the following Microsoft site which explains each of the codes.

If you don't want to use the list I've posted above you can also do a find from within regedit and that will also find the code.

I decoded some of the values that I had listed in my output and placed them in the categories identified in the Microsoft article

           {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\NOTEPAD.EXE (19)  
           {1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}\cmd.exe (5)  
           {F38BF404-1D43-42F2-9305-67DE0B28FC23}\regedit.exe (1)  
           {7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}\Notepad++\notepad++.exe (1)  
           {7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}\Microsoft Office\Office12\OUTLOOK.EXE (11)  
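
To make that decoding repeatable, the GUID-to-folder mappings can be dropped into a small lookup table. A Python sketch covering the GUIDs seen in my output (the folder expansions shown are the documented known-folder defaults, so treat them as illustrative rather than guaranteed for every system):

```python
# Known-folder GUIDs seen in the UserAssist output above, mapped to
# their default locations per Microsoft's known-folder documentation.
KNOWN_FOLDERS = {
    "{1AC14E77-02E7-4E5D-B744-2EB1AE5198B7}": r"C:\Windows\System32",
    "{6D809377-6AF0-444B-8957-A3773F02200E}": r"C:\Program Files",
    "{7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}": r"C:\Program Files (x86)",
    "{F38BF404-1D43-42F2-9305-67DE0B28FC23}": r"C:\Windows",
}

def resolve_guid(path):
    """Replace a leading known-folder GUID with its default location."""
    for guid, folder in KNOWN_FOLDERS.items():
        if path.startswith(guid):
            return folder + path[len(guid):]
    return path  # already a plain path
```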

Hopefully I've explained the artifact and you can take away a better understanding. This artifact has had countless articles written about it and its importance to your investigations. If you're not reviewing it then you should get started and make sure it's part of all your investigations.

Below are some key references that I've found while researching this artifact which you might find valuable.


Monday, 3 December 2012

SANS Forensic Artifact 5: Downloads.sqlite

I thought I'd get through this next artifact fairly quickly, as I've already done some prior work with my Firefox script, which has an option to parse the information out of the Downloads.sqlite database.

Please note that the last category should have been posted as Artifact 4, I've adjusted that, and therefore this makes Artifact number 5 on the poster.

SANS lists the following information within the poster within their File Download Category

Firefox has a built-in download manager application which keeps a history of every file downloaded by the user. This browser artifact can provide excellent information about what sites a user has been visiting and what kinds of files they have been downloading from them.

Location: Firefox
XP %userprofile%\Application Data\Mozilla\Firefox\Profiles\<random text>.default\downloads.sqlite
Win7 %userprofile%\AppData\Roaming\Mozilla\Firefox\Profiles\<random text>.default\downloads.sqlite
Downloads.sqlite will include:
• Filename, Size, and Type
• Download from and Referring Page
• File Save Location
• Application Used to Open File
• Download Start and End Times

While we are on this topic, I thought it might be timely to touch on a recent post by Patrick Olsen over at the System Forensics blog. Patrick posted this week about a new tool he's been working on named BARFF, which stands for Browser Artifact Recovery Forensic Framework. This tool is relevant to both my last and current posts, and in particular to the SANS poster category of "Browser Forensics". I haven't had the chance to download a copy myself as yet, but I encourage anyone to give it a go and provide him with feedback.

In terms of the structure of the Downloads.sqlite database, and the other databases associated with Firefox, David Koepi has an excellent resource available here which will provide a strong starting point for those wanting to get into browser forensics. I thought it would be beneficial to first download a number of applications through Firefox and then, using SQLite Manager (a plugin for Firefox), run an initial query and take a look at what we see.

From the above screenshot there are a number of items we can use from a forensic perspective:
  • The name field, which contains the name of the executable
  • The source field, which contains where the file was downloaded from
  • The target file path
  • The start and end times, which are what we'll use within our timelines
  • The state of the download, as mentioned by David Koepi:
    • "0" in the state object indicates the download is in progress
    • "1" indicates the download was successful
    • "3" indicates the download was cancelled
    • "4" indicates the download is paused
  • A referer field for the referring site
  • Although not shown well in the above screenshot, two important fields are preferredApplication and preferredAction, which record how the file is handled once downloaded:
    • "0" states that the file has been saved
    • "4" I believe states that it was opened with a preferred application, but more testing is required
    • In my tests I was unable to populate the preferredApplication field, and again some further testing is required
  • Lastly, the currBytes and maxBytes fields, which can be used to compare how large the file is against what has actually been downloaded.
In my example in the screenshot above I cancelled the download of FTK 4.1, and that is reflected by the state of 3, while maxBytes lists it as -1. It is important to note that this database is updated to reflect the same view as the graphical downloads window: should a user delete all of the entries, or remove individual downloads, this will also remove them from the database. As well as the tool mentioned above, I've also created a number of Perl scripts (and their converted executables) to parse this information. Let's take a look at how we'd run those tools and compare the output.
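
Pulling those fields out yourself is a one-line SQL query. A Python sketch (column names are from the moz_downloads schema as I understand it; always run this against a copy of downloads.sqlite, never the live file, and note that startTime/endTime are stored in microseconds since the epoch):

```python
import sqlite3

def list_downloads(db_path):
    """Dump the key moz_downloads fields from a copied downloads.sqlite."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT name, source, target, startTime, endTime, state, "
        "currBytes, maxBytes FROM moz_downloads ORDER BY startTime"
    ).fetchall()
    con.close()
    return rows
```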

To run the command you can run something like the following; obviously be aware that your profile will be in a different location to mine.

 firefox.exe -d -p C:/Documents and Settings/username/Application Data/Mozilla/Firefox/Profiles/fd9zh9ag.default -s WORKSTATION -u USERNAME > c:\temp\events.txt  

Again this parses to Harlan's TLN timeline format, and you can then convert it with the script that Harlan provides and turn it into a spreadsheet for your analysis. The output is the following.

 1353362936|FIREFOX|WORKSTATION|USERNAME|dl:winscp511setup.exe src: cB:4854080 mB:4854080  
 1353364779|FIREFOX|WORKSTATION|USERNAME|dl:sav32sfx(1).exe src: cB:72805712 mB:72805712  
 1353364806|FIREFOX|WORKSTATION|USERNAME| src: cB:3656900 mB:3656900  
 1354486120|FIREFOX|WORKSTATION|USERNAME|dl:googletalk-setup.exe src: cB:1606064 mB:1606064  
 1354493698|FIREFOX|WORKSTATION|USERNAME|dl:FTK 4.1.0 Intl.iso src: cB:0 mB:-1  
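
The first field in each TLN line is a Unix epoch in UTC, so spot-checking a download time against your notes is straightforward:

```python
from datetime import datetime, timezone

line = "1353362936|FIREFOX|WORKSTATION|USERNAME|dl:winscp511setup.exe"
epoch = int(line.split("|", 1)[0])
when = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(when.isoformat())  # the download start time, in UTC
```

Remember that this is UTC; compare it to your local notes only after applying your timezone offset.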

Although Chrome is not specifically mentioned, I felt it was of equal importance in this category, and therefore it was best I showed examples for both. Again with these examples it's important that, when testing these tools, you note the time you download each of the files and confirm in the output, as we did in the last post, that your timeline produces the correct time, while at the same time understanding any conversions required from UTC to local time.

Again I opened the database, the History file, using SQLite Manager

In this case we don't have as much detail in the downloads table as we do in the downloads database within Firefox. Once again I ran a similar command to the one we used above, this time using my Chrome script

 chrome -d -p "C:\Documents and Settings\username\Local Settings\Application Data\Google\Chrome\User Data\Default" -s WORKSTATION -u USERNAME > c:\temp\chrome_events.txt  

The output is the following.

 1347264951|CHROME|WORKSTATION|USERNAME|dl:C:\Documents and Settings\username\My Documents\Downloads\ChromeSetup (1).exe src: cB:739808 mB:739808  
 1354584249|CHROME|WORKSTATION|USERNAME|dl:C:\Documents and Settings\username\My Documents\Downloads\sav32sfx (1).exe src: cB:72805712 mB:72805712  
 1354584279|CHROME|WORKSTATION|USERNAME|dl:C:\Documents and Settings\username\My Documents\Downloads\googletalk-setup (1).exe src: cB:1606064 mB:1606064  
 1354584297|CHROME|WORKSTATION|USERNAME|dl:C:\Documents and Settings\username\My Documents\Downloads\sav32sfx (2).exe src: cB:72805712 mB:72805712  

Well, I think I've discussed this topic enough. Again, not overly complex, but like everything you'll get a better return on your tools if you understand what they do, have some assurance that the output is as expected, and are aware of any adjustments you may need to make to ensure you're looking at the correct local time. If you're looking for the tools mentioned above that I've written, you can find them at the following location


Monday, 12 November 2012

SANS Forensic Artifact 4: Index.dat / Places.sqlite

You may be wondering why at this point we've moved on from artifact 1 to artifact 4. I spent some time thinking about what I wanted to discuss for PST/OST files and Skype logs, and felt I needed some more time to make those posts more beneficial to everyone. This doesn't mean I won't be completing them, just that I'll be coming back to them after we explore some of the other artifacts first. The category for today is in the File Download category: Index.dat / Places.sqlite.

This should be a fairly easy category to post about as I've already posted some information and tools on how to parse this information.

SANS lists the following information within the poster.

Not directly related to “File Download”. Details stored for each local user account. Records number of times visited (frequency).
Location: Internet Explorer
XP %userprofile%\Local Settings\History\History.IE5
Win7 %userprofile%\AppData\Local\Microsoft\Windows\History\History.IE5
Win7 %userprofile%\AppData\Local\Microsoft\Windows\History\Low\History.IE5
Location: Firefox
XP %userprofile%\Application Data\Mozilla\ Firefox\Profiles\<random text>.default\places.sqlite
Win7 %userprofile%\AppData\Roaming\Mozilla\ Firefox\Profiles\<random text>.default\places.sqlite

For today we can easily test the tools we have by browsing to certain sites and noting the time we viewed each site. When we run our tools we should see the same times referenced in the output. Other areas we can look at are what happens when we delete the history, and whether entries remain or are removed. Finally, I wanted to touch on some of the different entries you'll find within the index.dat file, which are URL / LEAK / REDR entries. I'll touch briefly on LEAK entries and link to a practical example of how one of these entries is created. This should help us in future incidents where we may be responding to systems that contain these artifacts.

Firstly, let's start by browsing three websites in both Internet Explorer and Firefox, and I'll show how to parse the data using Harlan Carvey's open source Perl scripts and some of my own. We'll then convert these into TLN format and confirm the times against those we noted when we visited each site.

Here are the times I noted for each site I visited with both IE and Firefox. I know this is an unrealistic example because nobody ever uses Bing or Yahoo, right? A poor attempt at humour; stay with me, it can only get better.

Firefox - 10:12am - 10.12am - 10.13am

Internet Explorer - 10.13am - 10.14am - 10.15am

I started by parsing my index.dat file with the following command, which uses Harlan's urlcache script, and producing my initial events.txt file -f "C:/Documents and Settings/username/Local Settings/History/History.IE5/index.dat" -l >> events.txt  

Following that I used my own script and ran the following command -p "C:/Documents and Settings/username/Application Data/Mozilla/Firefox/Profiles/fd9zh9ag.default" -d >> events.txt  

Finally I ran Harlan's parse script to create my timeline file:

-f events.txt -c > timeline.csv
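The intermediate events file feeds Harlan's timeline tools, which use the five-field, pipe-delimited TLN format (Time|Source|Host|User|Description, with the time as a Unix epoch in UTC). As a rough sketch of what one of those event lines looks like (the host name, user and URL here are made up for illustration, not taken from my actual data):

```python
from datetime import datetime, timezone

def tln_line(when, source, host, user, description):
    """Build a single TLN event line: Time|Source|Host|User|Description.

    TLN times are Unix epoch values expressed in UTC; `when` is treated
    as a naive UTC timestamp.
    """
    epoch = int(when.replace(tzinfo=timezone.utc).timestamp())
    return "|".join([str(epoch), source, host, user, description])

# Hypothetical Firefox visit at 10:12 UTC.
line = tln_line(datetime(2013, 1, 20, 10, 12), "FIREFOX",
                "WORKSTATION01", "username",
                "URL visited: http://www.bing.com/")
print(line)  # 1358676720|FIREFOX|WORKSTATION01|username|URL visited: http://www.bing.com/
```

Each of the scripts above appends lines of this shape to events.txt, which is why they can all be merged into one timeline.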

Let's compare our timeline results with the times listed above. It's important to note that Harlan's timeline tools produce output in UTC. So, depending on which timezone you are in, you may need to add a column and adjust the time to your local time. I typically use a cell value such as =A1+TIME(#,0,0), where '#' should be replaced with the number of hours you wish to add.
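The same adjustment the spreadsheet formula performs can be sketched in a few lines of Python. The 11-hour offset below is only an example (adjust it for your own timezone, including daylight saving):

```python
from datetime import datetime, timedelta

# Example only: a fixed UTC offset of +11 hours.
UTC_OFFSET_HOURS = 11

def utc_to_local(utc_time, offset_hours=UTC_OFFSET_HOURS):
    """Shift a naive UTC timestamp into local time by a fixed hour offset,
    exactly as the spreadsheet formula =A1+TIME(#,0,0) does."""
    return utc_time + timedelta(hours=offset_hours)

print(utc_to_local(datetime(2013, 1, 20, 23, 12)))  # 2013-01-21 10:12:00
```

Note that a fixed offset, like the spreadsheet formula, does not account for daylight saving transitions within the timeline; for long timelines a proper timezone library is safer.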

First, I'll add that I've already had some value out of checking my own tool: as you can see from the screenshot above, the Firefox history is in the wrong cell compared to Harlan's script output. This is easy to fix and I'll resolve it as soon as I can. It shouldn't affect your timelines in any way, however.

If we compare our results with the times noted above:

Firefox - 10:12am - tln: 10:12am - 10:12am - tln: 10:12am - 10:13am - tln:

Internet Explorer - 10:13am - tln: 10:13am - 10:14am - tln: 10:14am - 10:15am - tln: 10:15am

The above makes it pretty clear that our tools are producing the output we expect. Again, this is all fairly basic, but it shows we now have an understanding of what the output of our tools looks like and we're confident they're producing the correct times. This can be critical during incident response: you do not want to miss an artifact because you thought your tool was reporting the correct time when it was in fact off by a number of minutes or hours.

So what happens when we delete the history? I presume we lose this information and our timeline should be empty. Let's test the theory by clearing all of the IE history and four hours of Firefox history.

I ran the same commands as above and, as expected, the last Firefox data I had was from the previous time I used the workstation; there was nothing from the current day. There was also no information for Internet Explorer at all. This is worth keeping in mind during an investigation: if the user has deleted their history, you may need to go to other areas to identify sites visited. The guys at Volatility have written an excellent article on how to scan for Internet history using one of their plugins.

Finally, some further discussion around LEAK entries within the IE history. What are LEAK entries? Mike Murr from SANS defines them as follows:

"Essentially, a LEAK record is created when a cached URL entry is deleted (by calling DeleteUrlCacheEntry) and the cached file associated with the entry (a.k.a. "temporary internet file" or TIF) can not be deleted."

The above article discusses an easy way to create a LEAK entry. Mike also discusses in more detail at the following link how to create LEAK entries using some Python scripts, which are unfortunately unavailable due to dead links, but you should be able to recreate them if required.
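Since Mike's original scripts are gone, here is my own rough, untested sketch of the core idea in Python, using the WinInet API his definition names (DeleteUrlCacheEntry). The call itself is real; everything else (the placeholder URL, the surrounding setup of holding the TIF open from another process) is left to you:

```python
import ctypes
import sys

def try_delete_cache_entry(url):
    """Attempt to delete an IE cache entry via WinInet's DeleteUrlCacheEntry.

    Per Mike Murr's description, a LEAK record results when the URL entry
    is deleted but the associated cached file (TIF) cannot be, e.g. because
    another process holds it open with an exclusive lock.

    Returns True/False for the API result, or None when not on Windows.
    """
    if sys.platform != "win32":
        return None
    wininet = ctypes.windll.wininet
    return bool(wininet.DeleteUrlCacheEntryW(ctypes.c_wchar_p(url)))

result = try_delete_cache_entry("http://example.com/")  # placeholder URL
```

To actually provoke a LEAK you would first open the corresponding TIF with an exclusive lock from a second process, then call this against the cached URL.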

Again, if you would like access to my Firefox script you can grab it from the following


Wednesday, 10 October 2012

SANS Forensic Artifact 1: Open/Save MRU

As most of you will have seen by now, SANS posted a fantastic forensic poster for everybody to use which will "map a specific artifact to the analysis question that it will help to answer". Basically, that means SANS have 8 categories, each tied to an analysis question. For "Was the file opened?", for example, an analyst could review the "File Opening / Creation" category, which shows artifacts that assist in determining whether files were opened.

We already know there are a number of tools available that can easily rip this information out for us, such as RegRipper, and countless articles have been written about each of the artifacts listed. I find, however, that if you're purely reviewing the output of the tools to identify that a file was opened or executed in some way, you may be missing the context of how that file was opened. Security analysts must ensure they have the technical understanding of each of the tools they run so they can explain the presence of an artifact, or lack thereof, within the output of their tools.

So I thought, for my own benefit, I'd create a series of blog posts going through each of the forensic artifacts and hopefully providing some examples with screenshots of how each artifact is created. There is no better way to understand these tools than running them across your own workstation, where you know exactly what you did to complete each action. With that being said, let's take a look at the first artifact SANS lists within the File Download category: Open/Save MRU.

SANS lists the following information within the poster.

In simplest terms, this key tracks files that have been opened or saved within a Windows shell dialog box. This happens to be a big data set, not only including web browsers like Internet Explorer and Firefox, but also a majority of commonly used applications.
XP NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSaveMRU
Win7 NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\ComDlg32\OpenSavePIDlMRU
• The “*” key – This subkey tracks the most recent files of any extension input in an OpenSave dialog

• .??? (Three letter extension) – This subkey stores file info from the OpenSave dialog by specific extension
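The key layout described above can be explored with a short live-registry sketch. This is my own illustration (not RegRipper's code), assumes an XP-style OpenSaveMRU key, and returns None off Windows; on Vista/7 the key is OpenSavePIDlMRU and its values are binary PIDLs rather than plain strings:

```python
import sys

def list_opensave_mru(extension="txt"):
    """Return {value name: data} for one extension subkey of OpenSaveMRU
    in the current user's hive, or None when not running on Windows."""
    if sys.platform != "win32":
        return None
    import winreg
    path = (r"Software\Microsoft\Windows\CurrentVersion"
            r"\Explorer\ComDlg32\OpenSaveMRU" + "\\" + extension)
    values = {}
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
        value_count = winreg.QueryInfoKey(key)[1]  # (subkeys, values, mtime)
        for i in range(value_count):
            name, data, _type = winreg.EnumValue(key, i)
            values[name] = data
    return values

mru = list_opensave_mru("txt")
```

Passing "*" as the extension would enumerate the most-recent-regardless-of-extension subkey described in the first bullet.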

So how can we test the information we've been presented with above? The best and simplest way is to create two text files, SANS_ForensicArtifact1_MRU_1.txt and SANS_ForensicArtifact1_MRU_2.txt, then open one of them in Notepad through the Windows shell dialog box and open the other by double clicking and letting the associated application open the file. We can then note the differences in this key between before and after snapshots.

There were a number of ways to export the user hive from my current workstation; I decided to leverage one of my previous posts by using hobocopy to rip it from my live machine. You could also use tools such as the built-in regedit.exe, simply run a reg save query from the command line, or even use a tool such as FTK Imager to get this information.

I ran the following first so that I could have a comparison. Please note there are two separate commands in the box below.

FOR /F "tokens=*" %%G IN ('dir /b ^"C:\Documents and Settings\*^"') DO .\tools\hob\hobocopy.exe "c:\Documents and Settings\%%G" .\hives\%%G NTUSER.DAT

regripper command:
rip.exe -r ..\..\hives\username\NTUSER.DAT -f ntuser >> username.txt

I searched the output of RegRipper for OpenSaveMRU, and at this point it listed only documents that I'd opened previously.

It's important to note from the above example that the first key is for objects opened that do not have an extension. The SANS reference below states: "The values stored in the key itself are items that do not have file extensions associated with them. Since most files have extensions, what often ends up here is auto-complete information".

The '*' key lists the last 10 files opened through the Windows dialog box regardless of extension. This can assist us in determining the opening order of those 10 files. Finally, there is a key for each extension opened; in this case we are only concerned with txt so that we can use our example above. Let's open the file created above, and I'll also open a file with no extension from a second location, c:\blogmrulocation\test_noext.
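On Win7's OpenSavePIDlMRU keys, that opening order is stored in a binary MRUListEx value: a run of little-endian DWORD item indexes, most recent first, terminated by 0xFFFFFFFF. A minimal decoder (my own sketch, not RegRipper's code) looks like this:

```python
import struct

def mru_order(mru_list_ex):
    """Decode an MRUListEx registry value into a list of item indexes,
    most recently opened first. The value is a sequence of little-endian
    DWORDs terminated by 0xFFFFFFFF."""
    order = []
    for (index,) in struct.iter_unpack("<I", mru_list_ex):
        if index == 0xFFFFFFFF:  # terminator
            break
        order.append(index)
    return order

# Items 2, 0, 1 opened in that order (item 2 most recent).
print(mru_order(struct.pack("<4I", 2, 0, 1, 0xFFFFFFFF)))  # [2, 0, 1]
```

On XP the same ordering is kept more simply, as a string of letters in an MRUList value, with each letter naming one of the sibling values.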

As you can see from the screenshot, I've opened the file named SANS_ForensicArtifact1_MRU_1.txt. I also opened the file with no extension. For comparison I double clicked on SANS_ForensicArtifact1_MRU_2.txt (i.e. did not open it through the Windows dialog box) so that we have something to compare against.

Let's run the script again to dump my hive and process it with RegRipper.

As we can see, the only files listed are the ones we opened through the Windows dialog box; there is nothing currently listed for SANS_ForensicArtifact1_MRU_2.txt. As I ran RegRipper with the -f ntuser option, it parsed my NTUSER.DAT file against a number of different plugins and entered the output into the same text file. I decided to search for the second file, opened through double click, to see where references to it appeared.

Although the above keys have nothing to do with my current post (we'll hit on them later), it's important to note that our actions have modified another area of the system. We now not only understand how to review items opened within the Windows dialog; we also have some clues as to how to identify items not opened by the dialog.

None of the information above is rocket science by any means, and as listed in the SANS reference below, it has already been provided by Chad Tilbury, who has written an excellent article on these registry keys. What I do hope to provide in this post is some practical examples and comparisons between what we saw before and after the files were opened. The next time we run the tool, rather than accepting the output for what it is, we'll have a deeper understanding and can revert back to our example to gather context in an incident where we do not have the luxury of knowing how or why a file was opened and need to piece our view of the world back together.


Friday, 5 October 2012

The Carbon Black Test Drive

Many of the readers of this blog have most likely heard of Carbon Black by now. Carbon Black describes its product as "the world’s first ‘surveillance camera’ for computers". Carbon Black highlights five key elements that it can monitor:

1. A record of execution
2. A record of file system modifications
3. A record of registry modifications
4. A record of new outbound network connections
5. A copy of every unique binary executed

We're all aware that antivirus and signature-based detection methods are no longer keeping up with the huge number of samples produced every day. Carbon Black recently posted an article called Second AV Study Reveals Small Window For Catching New Malware which caught my eye. The article highlights that using multiple AV products provides a better ability to detect a malicious sample, which makes sense to me. Since running multiple AV products on one workstation is obviously a nightmare, they instead developed a plugin which uploads binaries to VirusTotal to leverage multiple AV engines.
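The idea behind the plugin, checking each binary against VirusTotal's many engines rather than running extra AV locally, can be sketched against VirusTotal's public v2 file-report API. This is my own illustration, not Carbon Black's plugin code; the API key is a placeholder and the hash shown is just an example value:

```python
from urllib.parse import urlencode

# VirusTotal v2 endpoint for retrieving multi-engine scan results.
VT_REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

def build_report_query(api_key, file_hash):
    """Build the query string for a VirusTotal v2 file-report lookup.

    The resource parameter accepts an MD5, SHA-1 or SHA-256 of the
    binary; POSTing this to VT_REPORT_URL returns detection results
    from every engine VirusTotal runs.
    """
    return urlencode({"apikey": api_key, "resource": file_hash})

query = build_report_query("YOUR_API_KEY",
                           "44d88612fea8a8f36de82e1278abb02f")  # example MD5
```

Looking up the hash first means the binary itself only needs uploading when VirusTotal has never seen it, which also keeps you within the public API's rate limits.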

Based on the above, I thought it might be time to download the trial and better understand what this tool could do. Although there are some amazing articles written on Carbon Black (see the links below), nothing beats getting hands-on with the tool. Signing up for the trial was quick and easy, and before I knew it I had downloaded and installed the CB server. Upon logon I was presented with the following screen.

My test lab is fairly unsophisticated, but it should be enough to get a solid understanding of what CB can do. The next step was to create the client package so that I could install it on my Win XP test machine. As you can see from the screenshot it's very simple: you hit the 'generate' button and before you know it you have your client. I installed this on my machine and within minutes I started to see results in my console.

Carbon Black offers a number of plugins, some of which I've mentioned above. I was most interested in the 'droppercheck', 'virustotal' and 'autoruns' plugins. See the screenshot below.

Droppercheck was as simple to turn on as selecting the checkbox; the other plugins, however, had a number of options to configure. Within minutes I had all the plugins I wanted activated successfully.



Once the client was installed, I thought the best way to test it was to start hitting sites listed on Malware Domain List and see what samples I could download to my test workstation. After spending a few minutes hitting random URLs, I managed to get a malicious binary to download.

Now that I had my malicious executable, I was keen to understand what the VirusTotal plugin had detected. First I checked the plugin's summary and could see some detections already.

I clicked on the infected binaries link and was presented with the following screen.

Clicking on any of the links, I could see the VirusTotal results. I also checked my VirusTotal account to see whether it listed the files that had been uploaded under API submissions, and sure enough I had some results there too.

So from a virus perspective it's safe to say that Carbon Black provides a significant benefit to organisations. AV is far from perfect and is struggling to keep up with samples, so having your files run against VirusTotal automatically, plus having access to all of the autorun-type registry keys, is a huge advantage. Too often these days I see a huge amount of faith put in a tool that is supposed to automatically detect malicious activity based on signatures or patterns. I like that Carbon Black gives me a means to access the information that is important to me or my employer. This also made me wonder what other information I could source from the tool. I knew I'd run some Sysinternals tools and wondered what information I could find regarding them.

I decided to search for *sysinternals* within the registry modifications search, and the first three entries showed the MUICache entries for each of these tools.

As the links below highlight, Carbon Black offers much more than just searching for malicious executables or indications of persistence. Information that we typically wouldn't have access to until a forensic investigation is now available to us in the form of an easy and fast search, confirming the state of our environment across the entire fleet.

I would be keen to understand the amount of bandwidth and storage required to maintain this solution over the long term for a large global organisation. I would expect that, in comparison to SIEM-type tools or full packet capture, the footprint would be relatively small. Do any of you have success stories in large environments?

Well, I hope you gained something from this post. As always, I'd be keen to hear from my readers about their successes or issues with this tool.

Some other articles written on Carbon Black: