
Monday, 12 November 2012

SANS Forensic Artifact 4: Index.dat / Places.sqlite

You may be wondering why we've jumped from artifact 1 to artifact 4. I spent some time thinking about what I wanted to cover for PST/OST files and Skype logs and felt I needed more time to make those posts beneficial to everyone. That doesn't mean I won't be completing them, just that I'll come back to them after we explore some of the other artifacts first. Today's artifact falls under the File Download category: Index.dat / Places.sqlite.

This should be a fairly easy artifact to post about, as I've already covered some of the tools that parse this data.

SANS lists the following information on the poster:

Description:
Not directly related to “File Download”. Details stored for each local user account. Records number of times visited (frequency).
Location: Internet Explorer
XP %userprofile%\Local Settings\History\History.IE5
Win7 %userprofile%\AppData\Local\Microsoft\Windows\History\History.IE5
Win7 %userprofile%\AppData\Local\Microsoft\Windows\History\Low\History.IE5
Location: Firefox
XP %userprofile%\Application Data\Mozilla\Firefox\Profiles\<random text>.default\places.sqlite
Win7 %userprofile%\AppData\Roaming\Mozilla\Firefox\Profiles\<random text>.default\places.sqlite


Today we can easily test our tools by browsing to certain sites and noting the time we viewed each one. When we run the tools, we should see those same times in their output. We can also look at what happens when we delete the history: do entries remain, or are they removed? Finally, I want to touch on the different record types you'll find within the index.dat file, namely URL, LEAK and REDR entries. I'll cover LEAK entries briefly and link to a practical example of how one is created. This should help us relate our own testing to future incidents involving these artifacts.

First, let's browse three websites in both Internet Explorer and Firefox, and I'll show how to parse the data using Harlan Carvey's open source Perl scripts and some of my own. We'll then convert the output into TLN format and confirm the times against those we noted when we visited each site.

Here are the times I noted for each site I visited with both IE and Firefox. I know this is an unrealistic example because nobody ever uses Bing or Yahoo, right?... My poor attempt at humour; stay with me, it can only get better.

Firefox
www.google.com - 10:12am
www.yahoo.com - 10:12am
www.bing.com - 10:13am

Internet Explorer
www.google.com - 10:13am
www.yahoo.com - 10:14am
www.bing.com - 10:15am

I started by parsing my index.dat file with the following command, which uses Harlan's urlcache script, to produce my initial events.txt file:


 urlcache.pl -f "C:/Documents and Settings/username/Local Settings/History/History.IE5/index.dat" -l >> events.txt  

Following that, I used my own firefox.pl script and ran the following command:

 firefox.pl -p "C:/Documents and Settings/username/Application Data/Mozilla/Firefox/Profiles/fd9zh9ag.default" -d >> events.txt  

Finally I ran Harlan's parse script to create my timeline file.

 parse.pl -f events.txt -c > timeline.csv  

Let's compare our timeline results with the times listed above. It's important to note that Harlan's timeline tools produce output in UTC. So depending on which timezone you are in, you may need to add a column and adjust the time to your local. I typically use a cell formula such as =A1+TIME(#,0,0), where '#' is replaced with the number of hours you wish to add.
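If you'd rather script the conversion instead, here's a minimal Perl sketch (the epoch value is invented for illustration) showing the UTC value from the timeline next to your local wall-clock time:

 use POSIX qw(strftime);
 my $epoch = 1352671920;  # an example epoch taken from a timeline row
 print strftime("%d/%m/%Y %H:%M:%S", gmtime($epoch)), " UTC\n";
 print strftime("%d/%m/%Y %H:%M:%S", localtime($epoch)), " local\n";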


First, I've already had some value out of checking my own tool: as you can see from the screenshot above, the Firefox history lands in the wrong column compared to Harlan's urlcache.pl output. This is pretty easy to fix and I'll resolve it as soon as I can. It shouldn't affect your timelines in any way, however.

If we compare our results with the times we noted above:

Firefox
www.google.com - 10:12am - tln: 10:12am
www.yahoo.com - 10:12am - tln: 10:12am
www.bing.com - 10:13am - tln: 10:13am

Internet Explorer
www.google.com - 10:13am - tln: 10:13am
www.yahoo.com - 10:14am - tln: 10:14am
www.bing.com - 10:15am - tln: 10:15am

The above makes it pretty clear that our tools are producing the output we expect. Again, this is all fairly basic, but it means we now understand what our tools' output looks like and we're confident they're producing the correct times. This can be critical during incident response: you do not want to miss an artifact because you thought your tool was reporting the correct time when it was in fact off by a number of minutes or hours.

So what happens when we delete the history? I presume we lose this information and our timeline should be empty. Let's test that theory by clearing all of the IE history and the last four hours of Firefox history.

I ran the same commands as above and, as expected, the last Firefox data I had was from the previous time I used the workstation, with nothing from the current day. There was no Internet Explorer information at all. This is worth keeping in mind during an investigation: if the user has deleted their history, you may need to go to other sources to identify the sites visited. The guys at Volatility have written an excellent article on how to scan memory for Internet history using one of their plugins.

http://volatility-labs.blogspot.com.au/2012/09/howto-scan-for-internet-cachehistory.html

Finally, some further discussion of LEAK entries within the IE history. What are LEAK entries? Mike Murr from SANS describes them as follows:

"Essentially, a LEAK record is created when a cached URL entry is deleted (by calling DeleteUrlCacheEntry) and the cached file associated with the entry (a.k.a. "temporary internet file" or TIF) can not be deleted."
- http://computer-forensics.sans.org/blog/2009/09/18/is-your-index-dat-file-leaking 

The above article describes an easy way to create a LEAK entry. Mike discusses creating LEAK entries using some Python scripts in more detail at the following link; the scripts themselves are unfortunately unavailable due to dead links, but you should be able to recreate them if required.
- http://www.forensicblog.org/the-meaning-of-leak-records/ 
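Since Mike's scripts are no longer available, here's a rough reconstruction of the core of the trick in Perl (my own sketch, not his code; it assumes the Win32::API module is installed and that the URL is already in the IE cache):

 use strict;
 use Win32::API;
 # resolve DeleteUrlCacheEntry (ANSI variant) from wininet.dll
 my $del = Win32::API->new('wininet', 'DeleteUrlCacheEntryA', 'P', 'N')
     or die "Couldn't load DeleteUrlCacheEntryA\n";
 # hypothetical URL already present in the cache; to force a LEAK record,
 # hold the associated TIF open in another process so the file delete fails
 my $url = "http://www.example.com/page.html";
 print $del->Call($url) ? "cache entry deleted\n" : "delete failed: $^E\n";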

Again, if you would like access to my firefox script, you can grab it from the following:
- http://code.google.com/p/sploited/


[1] http://volatility-labs.blogspot.com.au/2012/09/howto-scan-for-internet-cachehistory.html
[2] http://computer-forensics.sans.org/blog/2009/09/18/is-your-index-dat-file-leaking
[3] http://www.mcafee.com/au/resources/white-papers/foundstone/wp-pasco.pdf




Sunday, 16 September 2012

TLN tools updated - New features added

I've been continuing to play with and refine some of the tools I've recently posted. As mentioned, they were only beta and I still consider them to be just that. As always, usage of the tools is at your own risk and I provide no warranty for the results they provide. In saying that, as we continue to refine them, hopefully my readers will see some consistent and expected results. One of the other issues I've had with posting my tools is that when the code is copied into a text file you can have issues running it, due to spaces added at the end of the file, which cause Perl EOF errors that can be confusing if you're new to Perl. To resolve this I've created my own Google Code repository and uploaded both the Perl scripts and the executables. Hopefully this will resolve that issue.

You can find this repository and the tools at the following location.
http://code.google.com/p/sploited/

The reason for the following changes to the firefox and chrome scripts is that they weren't very useful from an automation perspective, because Firefox stores the user profile in a randomly named folder, e.g. "xxxxxxxx.default". By first creating a file listing with TSK's fls to produce a bodyfile, the output can then be passed to firefox.pl and chrome.pl to automatically find the required files for the timeline. The other benefit is that many users don't store their browser profiles in their user profile folder due to storage constraints; it's not uncommon to find browser history files in the root of the C drive because the user has moved them, and my tools still accommodate this scenario.
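As a rough illustration of this workflow (a sketch only; the exact argument form for -a is an assumption on my part, so check the script usage for the precise syntax):

 fls -r -m C: image.dd > bodyfile.txt
 firefox.pl -a bodyfile.txt -u username >> events.txt
 chrome.pl -a bodyfile.txt -u username >> events.txt

Here fls -r walks the filesystem recursively and -m prefixes each path with the mount point, producing the bodyfile the scripts then search for places.sqlite and History files.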

 Firefox.pl  
 - Added the -d option to allow parsing of the downloads.sqlite database to TLN format  
 - Added the -a option which uses the bodyfile output from tsk fls and parses each places.sqlite/downloads.sqlite database discovered within it.  
 - Added the -u option to include the username within the TLN format  


 Chrome.pl  
 - Added the -d option to allow parsing of the downloads table within the History sqlite database  
 - Added the -a option which uses the bodyfile output from tsk fls and parses each History sqlite database discovered within it  
 - Added the -u option to include the username within the TLN format  

 IDX.pl  
 - Resolved a bug with the script where IDX files that contained output on multiple lines were not parsed correctly  

As mentioned above, I've added each of the files to the code repository. Hopefully any Perl gurus out there might spot some issues with my code, or some more efficient ways of writing the tools. Please feel free to update them and let me know of any changes so we can all benefit. I'd also be really keen to hear whether anybody is finding the tools a benefit to their investigations, and perhaps see some examples. Feel free to add thanks or issues to the comments below; I look forward to the feedback.

I have a number of future scripts in mind for adding logs to the TLN format. If any of you require a script, feel free to let me know and I'll see if I can help out. In saying that, for anybody with some basic scripting skills, it's very easy to pick up Perl and create some basic regex queries. Before you know it, any file with a date and something useful in it can be added to your timelines to assist with your investigations.


Wednesday, 15 August 2012

Java Forensics using TLN Timelines

Based on my two previous posts, I thought it might be a good time to see how we can introduce some of the Java artifacts we've reviewed. I decided to create a Perl script to parse the .idx files within the Java cache into TLN format for import into our timelines. I hope this script will provide analysts with greater context for their investigations, along with a quick way to eyeball the URLs within the idx files for anything potentially malicious. It's important to note that this script is again in BETA, and further testing is required before you should trust the results in your own investigations.

I had a strong response to my last post on TLN and browser forensics; however, a number of users had issues when copying the code and attempting to run it, hitting errors such as "Can't find string terminator "EOT" anywhere before EOF at C:\idx.pl line 31". If you get this error, it's most likely you need to remove the two spaces after EOT and the one space before EOT at the very end of the file. I'm also in the process of organising a Google Code repository, which will hopefully resolve that issue.
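For those new to Perl, the error relates to the here-document that prints the usage text: the terminator must sit on a line entirely by itself, with no leading or trailing whitespace. A minimal sketch of a heredoc that parses cleanly:

print << "EOT";
usage text goes here
EOT

If that final line is copied as "EOT " (trailing spaces) or " EOT" (a leading space), Perl never finds the terminator and reports the error above.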

In saying that, let's take a look at the script.


 #! c:\perl\bin\perl.exe  
 #---------------------------------------------------------------------  
 # idx.pl   
 # Parse .idx files within the Java cache to TLN format  
 #   
 #   
 # Version: 0.1 (BETA)   
 # Examples:   
 # 1335156604|JAVA|WORKSTATIONNAME|USERNAME|http://malicious_site.com.br/js/jar//fap4.jar?r=1051139  
 # 1347043129|JAVA|WORKSTATIONNAME|USERNAME|http://www.malicious_site.pro/P4fLBitJ-PxK2/yUA83mE  
 # 1347043127|JAVA|WORKSTATIONNAME|USERNAME|http://www.malicious_site.pro/SfLBitJ-PxK2/yUA83mE  
 #---------------------------------------------------------------------  
 use strict;  
 use Getopt::Long;  
 use File::Find;  
 use Regexp::Common qw /URI/;  
 use Time::Local;  
 my %config = ();  
 Getopt::Long::Configure("prefix_pattern=(-|\/)");  
 GetOptions(\%config, qw(path|p=s system|s=s user|u=s help|?|h));  
 if ($config{help} || ! %config) {  
     _syntax();  
     exit 1;  
 }  
 die "You must enter a path.\n" unless ($config{path});  
 #die "File not found.\n" unless (-e $config{path} && -f $config{path});  
 my $path =$config{path};  
 my @files;  
 my $line;  
 my %months = ('Jan'=>'01','Feb'=>'02','Mar'=>'03','Apr'=>'04','May'=>'05','Jun'=>'06','Jul'=>'07','Aug'=>'08','Sep'=>'09','Oct'=>'10','Nov'=>'11','Dec'=>'12');  
 my $start_dir = $path;  
 find(  
   sub { push @files, $File::Find::name unless -d; },  
   $start_dir  
 );  
 for my $file (@files) {  
   my ($ext) = $file =~ /(\.[^.]+)$/;  
   if (defined $ext && $ext eq ".idx") {  
             $file =~ s/\\/\//g;  
             open( FILE, "< $file" ) or die "Can't open $file : $!";  
             $line=<FILE>;  
             if ($line){  
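                  # grab every "DD Mon YYYY HH:MM:SS" timestamp and the
                  # first http/https URL found in the record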
                 my @timestamps = $line =~ m/[0-3][0-9] [a-zA-Z][a-z][a-z] [0-9][0-9][0-9][0-9] [0-2][0-9]:[0-5][0-9]:[0-5][0-9]/g;   
                 my @url = $line =~ m/($RE{URI}{HTTP}{-scheme => qr(https?)})/g;   
                 $timestamps[1] = getEpoch($timestamps[1]);  
                 print $timestamps[1]."|JAVA|".$config{system}."|".$config{user}."|".$url[0]."\n";  
             }  
             close(FILE);  
     }  
 }  
 sub getEpoch {  
     my $time = substr ( $_[0],index($_[0], ' ', 10)+1,length($_[0])-1);  
     my $date = substr ( $_[0],0,index($_[0], ' ', 10));  
     my ($hr,$min,$sec) = split(/:/,$time,3);  
     my ($dd,$mm,$yyyy) = split(/ /,$date,3);  
     $mm = $months{$mm};  
     $mm =~ s/^0//;  
     my $epoch = timegm($sec,$min,$hr, $dd,($mm)-1,$yyyy);  
     return $epoch;  
 }  
 sub _syntax {  
 print<< "EOT";  
 idx.pl  
 [option]  
  Parse Java cache IDX files  
  -p Path..................path to java cache  
  -s Systemname............add systemname to appropriate field in tln file  
  -u user..................add user (or SID) to appropriate field in tln file  
  -h ......................Help (print this information)  
  Ex: C:\\> idx.pl -p "C:\\Documents and Settings\\userprofile\\Application Data\\Sun\\Java\\Deployment\\cache\\" -s %COMPUTERNAME% -u %USERNAME% > events.txt  
 **All times printed as GMT/UTC  
 copyright 2012 Sploit  
EOT
 }  

I'm not a programmer by any means, so I do my best with my coding, but if anybody has any views on improvements for performance, or finds bugs, let me know. I'm not sure whether it's possible to have an IDX file without the values I'm looking for, so if you have any idx files without a date listed in them, my script will most likely fail. I've added some examples of what the output looks like within the script, but I'll list them here as well to highlight them.


  # 1335156604|JAVA|WORKSTATIONNAME|USERNAME|http://malicious_site.com.br/js/jar//fap4.jar?r=1051139   
  # 1347043129|JAVA|WORKSTATIONNAME|USERNAME|http://www.malicious_site.pro/P4fLBitJ-PxK2/yUA83mE   
  # 1347043127|JAVA|WORKSTATIONNAME|USERNAME|http://www.malicious_site.pro/SfLBitJ-PxK2/yUA83mE   

Also note that there are typically two timestamps within an IDX file: one is listed as "date" and one as "last modified". I'm using the "date" value to produce the TLN entry, as from what I've seen this appears to be the time the incident occurred.
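For context, those two values come from the server's HTTP response headers, which are embedded in the readable section of the IDX file. Roughly, the relevant strings look like this (a hypothetical excerpt with invented values; in the raw file they're stored as length-prefixed strings rather than plain lines):

 last-modified: Sun, 22 Apr 2012 23:26:36 GMT
 date: Mon, 23 Apr 2012 05:30:04 GMT
 content-length: 7162

The script's regex simply pulls out every "DD Mon YYYY HH:MM:SS" substring it finds in that data.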

Let me know if you find this script of value and if you hit any bugs. As mentioned, I'll hopefully upload the script to my Google Code repository shortly, and I'll let you all know when it's available in case you're having any trouble getting it to work.




Wednesday, 4 July 2012

Browser Forensics using TLN timelines

I've posted previously about a number of methods for creating forensic timelines, and depending on the incident you are handling, understanding the browser history can be critical to the investigation. Log2Timeline already has this functionality: it can automatically search your forensic image and import any browser history files it finds. Unfortunately, an analyst may not always have a forensic image and may need to conduct analysis in a live response scenario, as I previously discussed in Timelines-for-live-response.

One of the reasons I lean towards Harlan Carvey's TLN timeline format is the flexibility it provides to my investigation. As shown in my previous blog, I can easily convert the required Perl scripts to executables and copy the timeline tools I need to the target machine, or simply run them from a USB drive. The TLN format lets me add the required sources to an investigation, or add information at a later stage should an artifact of interest be found. For these reasons I was keen to parse the Firefox places.sqlite history database, as well as the Chrome History sqlite database, into TLN format. Although there are simple executables and tools that can already parse these to CSV or various other formats, that means having multiple files and cross-referencing them. The main advantage of a timeline is having everything you need in one place, so you can see what occurs across different applications at points of interest.
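For anyone new to the format, a TLN event is simply a pipe-delimited line of up to five fields, time|source|host|user|description, with the time expressed as a Unix epoch in UTC. An example event (values invented for illustration):

 1341360720|FIREFOX|WORKSTATION01|jsmith|http://www.google.com/

Note that the beta scripts below don't yet populate the user field; they print time|source|system|url.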

So in saying that, I decided to take this task on and create two scripts that parse the Firefox and Chrome history files into TLN format. Please note these are in BETA and are provided with no warranty. I've yet to have the time to test them thoroughly, and while I'm able to put together a script, by no means would I consider myself an experienced programmer. Hopefully the code gurus out there can see if and where I might have failed and shoot a comment my way to help make this code useful to the community.

To create the script, copy the following code into a text file and save it as firefox.pl. Use perl2exe to convert it to an executable if required.

firefox.pl

 #! c:\perl\bin\perl.exe  
 #---------------------------------------------------------------------  
 # firefox.pl  
 # Parse the firefox places.sqlite database to TLN format  
 #  
 # Ref: http://davidkoepi.wordpress.com/2010/11/27/firefoxforensics/  
 #  
 # Version: 0.1 (BETA)  
 #---------------------------------------------------------------------  
 use DBI;  
 use strict;  
 use Getopt::Long;  
 my %config = ();  
 Getopt::Long::Configure("prefix_pattern=(-|\/)");  
 GetOptions(\%config, qw(path|p=s system|s=s help|?|h));  
 if ($config{help} || ! %config) {  
     _syntax();  
     exit 1;  
 }  
 die "You must enter a path.\n" unless ($config{path});  
 #die "File not found.\n" unless (-e $config{path} && -p $config{path});  
 my $path =$config{path};  
 my $db = DBI->connect("dbi:SQLite:dbname=$path\\places.sqlite","","") || die( "Unable to connect to database\n" );  
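 # last_visit_date is stored as microseconds since the Unix epoch, so
 # dividing by 1,000,000 gives the seconds value the TLN format expects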
 my $all = $db->selectall_arrayref("SELECT url,moz_places.last_visit_date/1000000 from moz_places order by moz_places.last_visit_date desc;");  
 foreach my $row (@$all) {  
     my ($url,$date) = @$row;  
     print $date."|FIREFOX|".$config{system}."|".$url."\n";  
 }  
 $db->disconnect;  
 sub _syntax {  
 print<< "EOT";  
 firefox.pl  
 [option]  
 Parse Firefox places.sqlite (Win2000, XP, 2003, Win7)  
  -p Path..................path to places.sqlite file to parse  
  -s Systemname............add systemname to appropriate field in tln file  
  -h Help..................Help (print this information)  
 Ex: C:\\> firefox.pl -p C:\\firefox\\ -s COMPUTERNAME > events.txt  
 **All times printed as GMT/UTC    
EOT
 }   

chrome.pl

 #! c:\perl\bin\perl.exe  
 #---------------------------------------------------------------------  
 # chrome.pl  
 # Parse the History sqlite database to TLN format  
 #  
 # Ref: http://www.forensicswiki.org/wiki/Google_Chrome  
 # Ref: http://computer-forensics.sans.org/blog/2010/01/21/google-chrome-forensics/  
 #  
 # Version: 0.1 (BETA)  
 #---------------------------------------------------------------------  
 use DBI;  
 use strict;  
 use Getopt::Long;  
 my %config = ();  
 Getopt::Long::Configure("prefix_pattern=(-|\/)");  
 GetOptions(\%config, qw(path|p=s system|s=s help|?|h));  
 if ($config{help} || ! %config) {  
     _syntax();  
     exit 1;  
 }  
 die "You must enter a path.\n" unless ($config{path});  
 #die "File not found.\n" unless (-e $config{path} && -p $config{path});  
 my $path=$config{path};  
 my $db = DBI->connect("dbi:SQLite:dbname=$path\\History","","") || die( "Unable to connect to database\n" );  
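 # visit_time is stored as microseconds since 1601-01-01 (the WebKit/Windows
 # epoch); dividing by 1,000,000 and subtracting 11,644,473,600 converts it
 # to seconds since the Unix epoch for the TLN time field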
 my $all = $db->selectall_arrayref("SELECT ((visits.visit_time/1000000)-11644473600), urls.url, urls.title FROM urls, visits WHERE urls.id = visits.url;");  
 foreach my $row (@$all) {  
     my ($date,$url,$title) = @$row;  
     print $date."|CHROME|".$config{system}."|".$url."\n";  
 }  
 $db->disconnect;  
 sub _syntax {  
 print<< "EOT";  
 chrome.pl  
 [option]  
 Parse Chrome History (Win2000, XP, 2003, Win7)  
  -p Path..................path to History file to parse  
  -s Systemname............add systemname to appropriate field in tln file  
  -h Help..................Help (print this information)  
 Ex: C:\\> chrome.pl -p "C:\\Users\\%username%\\AppData\\Local\\Google\\Chrome\\User Data\\Default\\" -s COMPUTERNAME > events.txt  
 **All times printed as GMT/UTC    
EOT
 }   

Once you have the output from these scripts, you can use Harlan's parse.pl or parse.exe to convert it into your CSV timeline format.
I hope you find this useful.