DNS log rotation on Active Directory

Following on from my DNS logging post, here is a bit of PowerShell to help you get log rotation working properly on Active Directory. Put this PS1 script into Task Scheduler on the AD DNS servers to run every hour:-

$current_log = Get-ChildItem C:\DNS_Logs\queries*.log | Sort-Object LastWriteTime | Select-Object -Last 1
Copy-Item $current_log C:\DNS_Logs\archive\
$now = Get-Date -Format yyyyMMddHHmmss
Set-DnsServerDiagnostics -ComputerName localhost -EnableLoggingToFile $False
Set-DnsServerDiagnostics -ComputerName localhost -EnableLoggingToFile $True -LogFilePath C:\DNS_Logs\queries-$now.log

Note: this assumes logs are kept in C:\DNS_Logs and that the C:\DNS_Logs\archive directory already exists.

GOTO

Today I used a GOTO! And before you all step back aghast in horror, it was for a DOS batch script, to create a 'while' loop – 'nuff said.
Still, it's been quite a while since I've needed one of them!
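
For anyone who has not had the pleasure: batch has no while statement, so the loop is built from a label, an IF test and a GOTO. A minimal sketch (not the actual script, and the variable name is illustrative) looks like this:

```batch
@echo off
rem A 'while COUNT < 3' loop built from IF and GOTO
set COUNT=0
:loop
if %COUNT% GEQ 3 goto done
echo iteration %COUNT%
set /a COUNT=%COUNT%+1
goto loop
:done
```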

BIND DNS query log shipping into a MySQL database

Yay! I've been wanting to do this for a while! Here it goes:-

Documented herein is a method for shipping BIND DNS query logs into a MySQL database and then reporting upon them!

Note: SSH keys are used throughout so that log-ons are password-less and the scripts are never interrupted by prompts.

BIND logging configuration

The query logging directive in BIND's named.conf should be set up as a simple, non-versioned log file:-

logging{

  # Your other log directives here

  channel query_log {
    file "/var/log/query.log";
    severity info;
    print-time yes;
    print-severity yes;
    print-category yes;
  };

  category queries {
    query_log;
  };
};

The reason a simple log is needed is that BIND's built-in log rotation only offers a granularity of one day when rotating on time, hence an external log rotation method is required for granularity of under 24 hours.

BIND query log rotation

My external BIND log rotation script is scheduled from within cron and it looks like this:-

#!/bin/bash
QLOG=/var/named/chroot/var/log/query.log
LOCK_FILE=/var/run/${0##*/}.lock

if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

cat $QLOG > $QLOG.`date '+%Y%m%d%H%M%S'`
if [ $? -eq 0 ]; then
  > $QLOG
fi
service named reload

rm -f $LOCK_FILE

Place this in the crontab, running at an interval of between one and six hours; ensure it does not run exactly on the hour, nor at the same time as other instances of this job on associated servers.
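
For example, a crontab entry along these lines would fit – the minute offset and script path here are illustrative, not prescriptive:

```
# rotate BIND query logs at 17 minutes past every third hour
17 */3 * * * /usr/local/sbin/rotate_query_log.sh
```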

Make sure /var/named/chroot/var/log/old exists; it is used for file rotation by the data pump script later on.

From here, I create a MySQL table called dnslogs with the following structure:-

create table dnslogs (
  q_server    VARCHAR(255),
  q_timestamp DATETIME,
  q_client    VARCHAR(15),
  q_view      VARCHAR(64),
  q_text      VARCHAR(255),
  q_class     VARCHAR(8),
  q_type      VARCHAR(8),
  q_modifier  VARCHAR(8)
);

You can either define a database user with a password and configure it in the scripts as shown, or, better, configure a database user which can only connect and insert into the dnslogs table.
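
For the restricted variant, something like the following would do – the user name, host and password here are placeholders for illustration:

```sql
CREATE USER 'db_user'@'localhost' IDENTIFIED BY 'change_me';
GRANT INSERT ON your_db.dnslogs TO 'db_user'@'localhost';
```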

Then I use the following shell script to pump the rotated log data into the MySQL database:-

#!/bin/bash
PATH=/path/to/specific/mysql/bin:$PATH; export PATH
DB_NAME=your_db
DB_USER=db_user
DB_PASS=i_know_it_is_a_bad_idea_storing_the_pass_here
DB_SOCK=/var/lib/mysql/mysql.sock
SSH_USER=someone
LOG_DIR=/var/named/chroot/var/log
LOG_GLOB='query.log.*'
DEST_TABLE=dnslogs
NAME_SERVERS="your name server list here"

LOCK_FILE=/var/run/${0##*/}.lock
ERROR_LOG=/var/run/${0##*/}.err

if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

for host in $NAME_SERVERS; do
  REMOTE_LOGS=`ssh -l $SSH_USER $host "find $LOG_DIR -maxdepth 1 -name '$LOG_GLOB'" | sort -n`
  test -n "$REMOTE_LOGS" && for f in $REMOTE_LOGS ; do
    ssh -C -l $SSH_USER $host "cat $f" | \
      sed 's/\./ /; s/#[0-9]*://; s/: / /g; s/\///g; s/'\''//g;' | \
      awk -v h=$host -v t=$DEST_TABLE '{ printf("insert into %s values ('\''%s'\'', STR_TO_DATE('\''%s %s.%06d'\'','\''%s'\''), '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'');\n",
        t, h, $1, $2, $3 * 1000, "%d-%b-%Y %H:%i:%S.%f",
        $7, $9, $11, $12, $13, $14); }' | \
      mysql -A -S $DB_SOCK -u $DB_USER --password=$DB_PASS $DB_NAME 2> $ERROR_LOG
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
      echo "Import of $f returned non-zero return code $RETVAL"
      test -s $ERROR_LOG && cat $ERROR_LOG
      continue
    fi
    ssh -l $SSH_USER $host mv $f ${f%/*}/old/
  done
done
rm -f $LOCK_FILE $ERROR_LOG

Put this script into a file and schedule it from within crontab, running some time after the rotate job – long enough for the rotation to complete, but before the next rotate job starts.

Note that the last operation of the script is to move the processed log file into $LOG_DIR/old/.

This will take each file matching /var/named/chroot/var/log/query.log.* and ship it into the dnslogs table as frequently as is defined in the crontab.
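
To sanity-check the field mapping before pointing the pump at a live database, a sample log line can be fed through the same sed/awk stages by hand, printing the generated SQL instead of running it. The log line, host name and table name below are made up for illustration:

```shell
# A made-up BIND query log line, in the format produced by the channel configured above
SAMPLE='12-Feb-2024 10:11:12.345 queries: info: client 192.0.2.10#53214: view internal: query: www.example.com IN A +'

# The same sed/awk pipeline as the data pump script
echo "$SAMPLE" | \
  sed 's/\./ /; s/#[0-9]*://; s/: / /g; s/\///g; s/'\''//g;' | \
  awk -v h=ns1.example.com '{ printf("insert into dnslogs values ('\''%s'\'', STR_TO_DATE('\''%s %s.%06d'\'','\''%s'\''), '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'');\n", h, $1, $2, $3 * 1000, "%d-%b-%Y %H:%i:%S.%f", $7, $9, $11, $12, $13, $14); }'
# prints: insert into dnslogs values ('ns1.example.com', STR_TO_DATE('12-Feb-2024 10:11:12.345000','%d-%b-%Y %H:%i:%S.%f'), '192.0.2.10', 'internal', 'www.example.com', 'IN', 'A', '+');
```

The sed stage strips the millisecond separator, port number, colons and quote characters, leaving a purely whitespace-delimited line for awk to pick fields from.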

From here, it is possible to report from the db with a simple query method such as:-

#!/bin/bash
PATH=/path/to/specific/mysql/bin:$PATH; export PATH
DB_NAME=your_db
DB_USER=db_user
DB_PASS=i_know_it_is_a_bad_idea_storing_the_pass_here
DB_SOCK=/var/lib/mysql/mysql.sock
SQL_REGEX='%your-search-term-here%'

LOCK_FILE=/var/run/${0##*/}.lock

if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

echo "select * from dnslogs where q_text like '$SQL_REGEX';" | \
  mysql -A -S $DB_SOCK -u $DB_USER --password=$DB_PASS $DB_NAME

rm -f $LOCK_FILE

And there it is! SQL reporting from DNS query logs! You can turn this into whatever report you like.

From there, you may wish to script solutions to partition the database and age the data.

Database partitioning should be done on the q_timestamp value, dividing the table into periods which align with the depth of reporting expected. As a minimum, I would recommend keeping at least four days of data in partitions of between one hour and 24 hours, depending upon the reporting expectations. If reports cover only the previous day's data, then one partition per day will do, while reports which are only interested in the past hour or so will benefit from hourly partitions. In MySQL, sub-partitions are not worthwhile here: they give you nothing more than partitions do, while adding a layer of complexity to what is otherwise a linear data set.
Once partitioning is established, it should be possible to satisfy reports by querying only the partitions which cover the time span of interest.
Partitioning also has another benefit: data aging. Instead of deleting old records, entire partitions covering selected periods can be dropped without creating a huge temporary table to hold the difference, as a delete operation would require. This becomes extremely useful when the table is larger than the free space remaining on the disk.
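
As a sketch of what those scripts might drive (MySQL 5.5+ RANGE COLUMNS partitioning assumed; partition names and dates are purely illustrative):

```sql
-- initial daily partitions on q_timestamp, with a catch-all at the end
ALTER TABLE dnslogs PARTITION BY RANGE COLUMNS (q_timestamp) (
  PARTITION p20140210 VALUES LESS THAN ('2014-02-11 00:00:00'),
  PARTITION p20140211 VALUES LESS THAN ('2014-02-12 00:00:00'),
  PARTITION p_future  VALUES LESS THAN (MAXVALUE)
);

-- add the next day by splitting the catch-all partition
ALTER TABLE dnslogs REORGANIZE PARTITION p_future INTO (
  PARTITION p20140212 VALUES LESS THAN ('2014-02-13 00:00:00'),
  PARTITION p_future  VALUES LESS THAN (MAXVALUE)
);

-- age out the oldest day without a bulk delete
ALTER TABLE dnslogs DROP PARTITION p20140210;
```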

Script updates for add and drop partition to follow….

Akadns and things that annoy me

What do you love about akadns? <silence hovers in the sky> Akamai's dynamic global geo-content location system seems a great idea until you have to work with it in practice.

Fast-flux DNS is great for avoiding DDoSers and the like, but it's a b*st*rd to 'lock on' to for legit traffic.

It seems that it fails in two ways:-

1) Firewalls which only support IP ACLs can’t cope with the dynamic nature of the IP hopping employed by the Geo-CDNs.

2) Firewalls which support DNS ACLs cache the result (often incorrectly, and for too long), which ends up with the majority of connection attempts failing.

An epic fail on both counts

Interesting though – did anyone else notice that the akadns SOA serial number is always UNIX time (i.e. seconds since the epoch, 1/1/1970)? As such, in their scheme it is always the most up-to-date version available.

Solution?: 0-second TTL for the alias A/CNAME/MX record?
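
In zone-file terms that would look something like this, with the owner name and target purely illustrative:

```
www    0    IN    CNAME    service.example.akadns.net.
```

A zero TTL forces resolvers (and DNS-aware firewalls) to re-resolve on every lookup rather than holding on to a stale answer.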

Shell Tricks part 2 – Why having the current working directory in your PATH is a bad idea

Here's an interesting consequence of having the current directory in your path:-

$ PATH=$PATH:.
$ echo echo something benign > 0a.sh
$ chmod 0700 0a.sh
$ *
something benign
$

Let’s see that again

$ set -xv
set -xv
+ set -xv
$ PATH=$PATH:.
PATH=$PATH:.
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:.
$ echo echo something benign > 0a.sh
echo echo something benign > 0a.sh
+ echo echo something benign
$ chmod 0700 0a.sh
chmod 0700 0a.sh
+ chmod 0700 0a.sh
$ *
*
+ 0a.sh bin boot dev etc home lib lost+found mnt opt proc root sbin tmp usr var
something benign
$

Notice how 0a.sh was executed because it was the first file in the expanded list – and it could have been any executable in the directory. The shell expands * into an alphabetically sorted list of filenames, takes the first as the command and faithfully executes it, whatever it is. Herein lies the danger, alongside others such as unwittingly shadowing system commands.

Beware, an accidental * could launch all kinds of mischief!

Shell Tricks Part 1 – Substituting basename, dirname and ls commands

In Bourne shell, it is possible to use the following variable expansions as substitutes for the basename, dirname and ls commands:-

$ MYVAR=/path/to/basename
$ echo ${MYVAR##*/}
basename
$ MYVAR=/path/to/dirname
$ echo ${MYVAR%/*}
/path/to
$ echo *
bin boot dev etc home lib lost+found mnt opt proc root sbin tmp usr var
$

How's that?

Quick and dirty data wiping

How to wipe a disk with pre-determined bit patterns

Many of you who know me may have heard of my 'mythical' data wiping script, which I have long maintained can be done in just a few lines of shell script on pretty much *any* UNIX box. Well, here it is:-

for pattern in 85 170 ; do
  awk -v p=$pattern 'END { while (1) printf("%c", p); }' \
    < /dev/null > /path/to/device/or/file
done

The patterns 85 and 170 represent 01010101 and 10101010 in 8-bit binary. These patterns can be replaced with any sequence which can be generated or pre-defined prior to the wiping run.

This command converts the decimal value of $pattern into the corresponding byte pattern using awk; output is then generated until the write reaches the end of the target device or fills the containing filesystem.
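
Before aiming this at a real device, the byte generation can be verified against a scratch file; the bounded loop below is for illustration only. (LC_ALL=C is set so that awk emits single bytes rather than multi-byte characters for pattern values above 127.)

```shell
# Write 8 bytes of pattern 85 (binary 01010101) to a scratch file and read them back
TMP=`mktemp`
LC_ALL=C awk -v p=85 'END { for (i = 0; i < 8; i++) printf("%c", p); }' < /dev/null > "$TMP"
od -An -tu1 "$TMP"    # every byte should read back as 85
rm -f "$TMP"
```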

Please use with caution, my overly simplified version does not check what it is writing to – I accept no responsibility for damages arising as a result of using this script or any derivative works.