
Analyze your IIS logs with Hive

As you know, it's really easy to collect logs from an Apache-driven web farm into a Hive cluster and analyze them. But how does that work for IIS?


Okay, let's take a look inside. IIS lets us collect logs in W3C format via the administration console: select the website and open the "Active log format" settings. Here you can set up the path where the logs will be stored, the fields you want to log, and much more. After a restart you should see the logs in the desired path. A good idea is to split the logs by hour, so you can run the jobs every hour on a fresh dataset.
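For reference, the header of such a W3C logfile, covering the fields we'll map to a Hive table below, looks something like this (the exact list depends on which fields you enable):

#Software: Microsoft Internet Information Services 7.5
#Version: 1.0
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken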


For a small farm, a really easy way is to export the log path as a Windows share and mount it on your Hive server with the Samba/CIFS utilities:
mount -t cifs //Windows-Server/share -o user=name,password=passwd /mountpoint
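If the share should survive a reboot, an /etc/fstab entry along these lines does the same job (the credentials file is an assumption; it simply contains username= and password= lines):

//Windows-Server/share /mountpoint cifs credentials=/root/.smbcredentials,ro 0 0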

Copy the file into HDFS (as <hdfs-dir> we assume iislog):
hadoop dfs -copyFromLocal /mountpoint/filename <hdfs-dir>
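If the logs roll over hourly as suggested above, a small helper can pick up the last full hour automatically. A sketch, assuming IIS's default u_exYYMMDDHH.log naming and a hypothetical W3SVC2 site directory:

#!/bin/bash
# Copy the previous hour's IIS logfile from the mounted share into HDFS.
LOGFILE=u_ex`date -d '-1 hour' +"%y%m%d%H"`.log
hadoop dfs -copyFromLocal /mountpoint/W3SVC2/$LOGFILE iislog/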

Now you can proceed with the analysis; we use Hive here. Let's assume you want to know which IP causes the most traffic.


First you have to describe your tables in Hive (the field delimiters are given as ASCII codes: '32' is the space, '124' the pipe):
hive> CREATE TABLE iislog (sdate STRING, stime STRING, ssitename STRING, scomputername STRING, sip STRING, csmethod STRING, csuristem STRING, csuriquery STRING, sport INT, csusername STRING, cip STRING, csversion STRING, csuseragent STRING, cscookie STRING, csreferer STRING, scstatus INT, scsubstatus INT, scwin32status INT, scbytes INT, csbytes INT, timetaken INT) PARTITIONED BY (time STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '32' STORED AS TEXTFILE;
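Since iislog is partitioned, the file we copied into HDFS also has to be loaded into a matching partition before queries can see it. A minimal sketch (the filename is a placeholder; the partition value follows the per-hour scheme used below):
hive> LOAD DATA INPATH 'iislog/u_ex11092813.log' INTO TABLE iislog PARTITION (time='201109281300');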

hive> CREATE TABLE iptraffic (sdate STRING, cip STRING, traffic INT, hits INT, appid STRING, scsuseragent STRING) PARTITIONED BY (time STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '124' STORED AS TEXTFILE;

hive> describe iptraffic;
OK
sdate string
cip string
traffic int
hits int
appid string
scsuseragent string
time string
Time taken: 0.07 seconds


The first statement creates the table whose columns map to the fields of our logfiles. The second table we need to store the aggregated traffic per minute.
Remember, we partition the data so that queries on the latest results run much faster. For this test we take the current time minus 2 minutes to create the partition holding the data we're looking for. To do that, write a small script like:


#!/bin/bash
# Timestamp of two minutes ago, in all the formats we need.
TABLE=IPTRAFFIC
DATEPAR=`date -d '-2 min' +"%Y%m%d%H%M"`    # partition key, e.g. 201110071059
DATEPATH=`date -d '-2 min' +"%Y-%m-%d/%H00/%M"`
SDATE=`date -d '-2 min' +"%Y-%m-%d"`
STIME=`date -d '-2 min' +"%H:%M"`

# Create the partition we are going to write into.
hive -e "ALTER TABLE iptraffic ADD IF NOT EXISTS PARTITION (time='$DATEPAR')"
if [ $? -ne 0 ]; then
  echo "Couldn't create partition"
  exit 1
else
  echo -e "\n ==> PARTITION (time='$DATEPAR') created"
fi

hive -e "INSERT OVERWRITE TABLE iptraffic partition (time=$DATEPAR) \ select concat('$SDATE ','$STIME:00'), cip, sum(csbytes)/1024 counter, count(1) hits,ssitename,csuseragent \ from iislog where iislog.time=$DATEPAR and NOT(iislog.cip LIKE '192\.%')\ group by cip,concat('$SDATE ','$STIME:00'), csuseragent, ssitename"

if [ $? -ne 0 ]; then
  echo -e "\n ==> an error occurred in analysis \n"
  exit 1
else
  echo -e "\n ==> Insert analysis successful"
fi


What does this do?
First we take the date two minutes in the past and format it into the standard formats we need. Then we create the partition and use it in our Hive statement (the where clause restricts the query to exactly that partition), group by our main key (cip = source IP) together with the concatenated date and time, and write the result into our Hive warehouse directory.
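Since the script aggregates one minute at a time, you can simply schedule it with cron. A crontab line like this (the script path is an assumption) runs it every minute:

* * * * * /usr/local/bin/iptraffic.sh >> /var/log/iptraffic.log 2>&1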

Let's take a look at the table:
hive> select * from iislog limit 10; 
OK
20110928130000 2011-09-28 10:59:06 W3SVC2 IISTEST xxx.xxx.xxx.xxx GET /images/bluebox.gif - 80 - xxx.xxx.xxx.xxx HTTP/1.1 Mozilla/5.0+(compatible;+MSIE+9.0;+Windows+NT+6.1;+Trident/5.0) GUID=<deleted> 6313985 NULL 200 0 0 551 1689 201109281300

And the analyzed data shows up in the new partition:
hadoop dfs -cat /user/hive/warehouse/iptraffic/time=201110071059/* | less

2011-10-07 10:59:00|xxx.xxx.xxx.xxx|18|2|W3SVC5|Mozilla/5.0+(compatible;+MSIE+9.0;+Windows+NT+6.1;+Win64;+x64;+Trident/5.0;+MALC)
2011-10-07 10:59:00|xxx.xxx.xxx.xxx|1|2|W3SVC7|Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/535.1+(KHTML,+like+Gecko)+Chrome/14.0.835.202+Safari/535.1
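To finally answer the initial question of which IP causes the most traffic, a query along these lines over the aggregated table should do, here across all partitions:
hive> select cip, sum(traffic) total_kb, sum(hits) total_hits from iptraffic group by cip order by total_kb desc limit 10;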

You're done.
