BIND DNS query log shipping into a MySQL database

Yay! I’ve been wanting to do this for a while! Here it goes:-

Documented herein is a method for shipping BIND DNS query logs into a MySQL database and then reporting upon them!

Note: SSH keys are used for all password-less log-ons to avoid prompt issues

BIND logging configuration

The BIND named.conf query logging directive should be set to simple (non-versioned) logging:-


  # Your other log directives here

  channel query_log {
    file "/var/log/query.log";
    severity info;
    print-time yes;
    print-severity yes;
    print-category yes;
  };

  category queries {
    query_log;
  };

A simple (non-versioned) log is needed because the built-in BIND log rotation only allows a rotation granularity of one day when rotating by time, hence an external log rotation method is required for granularity of under 24 hours.

BIND query log rotation

My external BIND log rotation script is scheduled from within cron and it looks like this:-


#!/bin/sh

# Paths are examples - adjust to suit your installation
QLOG=/var/named/chroot/var/log/query.log
LOCK_FILE=/var/run/rotate-query-log.pid

# Exit if a previous instance of this job is still running
if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

# Copy the live log aside with a timestamp, then truncate it in place
cat $QLOG > $QLOG.`date '+%Y%m%d%H%M%S'`
if [ $? -eq 0 ]; then
  > $QLOG
  service named reload
fi

rm -f $LOCK_FILE

Place this in the crontab, running at an interval of between one and six hours. Ensure it is not run on the hour, nor at the same time as other instances of this job on associated servers.
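As an illustration, a crontab entry such as this (the script path is an assumption) runs the rotation at 17 minutes past every fourth hour, keeping it off the hour; stagger the minute value across associated servers so that no two rotate together:

```
# m  h    dom mon dow  command
17   */4  *   *   *    /usr/local/sbin/rotate-query-log.sh
```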

Make sure /var/named/chroot/var/log/old exists for file rotation; it is used by the data pump script later on.

From here, I create a MySQL table called dnslogs with the following structure:-

create table dnslogs (
  q_timestamp DATETIME,     -- parsed from q_date/q_time; used for partitioning later
  q_server   VARCHAR(255),
  q_date     VARCHAR(11),
  q_time     VARCHAR(8),
  q_client   VARCHAR(15),
  q_view     VARCHAR(64),
  q_text     VARCHAR(255),
  q_class    VARCHAR(8),
  q_type     VARCHAR(8),
  q_modifier VARCHAR(8)
);

You can either define a database user with a password and configure it accordingly in the scripts, or you can configure a database user which can only connect to and insert into the dnslogs table.
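A minimal sketch of the latter, with hypothetical database, user and host names, might be:-

```sql
CREATE USER 'dnslog'@'localhost' IDENTIFIED BY 'change-me';
GRANT INSERT ON dnsdb.dnslogs TO 'dnslog'@'localhost';
```

A reporting user would additionally need SELECT on the same table.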

Then I use the following shell script to pump the rotated log data into the MySQL database:-

#!/bin/sh
# Ship rotated BIND query logs from each name server into MySQL.
# Paths, credentials and names below are examples - adjust to suit.
PATH=/path/to/specific/mysql/bin:$PATH export PATH
NAME_SERVERS="your name server list here"
SSH_USER=loguser
LOG_DIR=/var/named/chroot/var/log
LOG_REGEX='query.log.*'
DEST_TABLE=dnslogs
DB_SOCK=/var/lib/mysql/mysql.sock
DB_USER=dnslog DB_PASS=secret DB_NAME=dnsdb
ERROR_LOG=/tmp/dnslog-import.err
LOCK_FILE=/var/run/dnslog-import.pid

# Exit if a previous instance of this job is still running
if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

for host in $NAME_SERVERS; do
  REMOTE_LOGS="`ssh -l $SSH_USER $host find $LOG_DIR -maxdepth 1 -name $LOG_REGEX | sort -n`"
  test -n "$REMOTE_LOGS" && for f in $REMOTE_LOGS ; do
    # sed splits the fractional seconds from the time field and strips the
    # client port, trailing colons, slashes and quotes; awk then builds one
    # insert statement per query-log line
    ssh -C -l $SSH_USER $host "cat $f" | \
      sed 's/\./ /; s/#[0-9]*://; s/: / /g; s/\///g; s/'\''//g;' | \
        awk -v h=$host '{ printf("insert into '$DEST_TABLE' values ( STR_TO_DATE('\''%s %s.%06d'\'','\''%s'\''), '\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'' );\n", $1, $2, $3 * 1000, "%d-%b-%Y %H:%i:%S.%f", h, $1, $2, $7, $9, $11, $12, $13, $14); }' | \
          mysql -A -S $DB_SOCK -u $DB_USER --password=$DB_PASS $DB_NAME 2> $ERROR_LOG
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
      echo "Import of $f returned non-zero return code $RETVAL"
      test -s $ERROR_LOG && cat $ERROR_LOG
      continue  # leave the file in place so it is retried on the next run
    fi
    ssh -l $SSH_USER $host mv $f ${f%/*}/old/
  done
done

rm -f $LOCK_FILE

Put this script into a file and schedule it from within crontab, running some time after the rotate job, sufficient to allow it to complete, but before the next rotate job.

Note that the last operation of the script is to move the processed log file into $LOG_DIR/old/.

This will take each file in /var/named/chroot/var/log/query.* and ship it into the dnslogs table as frequently as is defined in the crontab.
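To sanity-check the sed/awk stage without touching a name server or a database, you can feed it one synthetic query-log line and inspect the insert statement it produces. The sample line, the field positions, and the column order (timestamp, server, date, time, client, view, text, class, type, modifier) are assumptions based on a typical BIND 9 query-log entry:

```shell
# One synthetic BIND query-log line (an assumption about the log format)
LINE="06-Jul-2011 12:34:56.789 queries: info: client 192.168.1.10#2299: view internal: query: www.example.com IN A +"

# The same normalise-and-rewrite stage as the pump script, with h set to a
# dummy server name; prints the generated insert statement to stdout
echo "$LINE" | \
  sed 's/\./ /; s/#[0-9]*://; s/: / /g; s/\///g; s/'\''//g;' | \
  awk -v h=ns1 '{ printf("insert into dnslogs values ( STR_TO_DATE('\''%s %s.%06d'\'','\''%s'\''), '\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'','\''%s'\'' );\n", $1, $2, $3 * 1000, "%d-%b-%Y %H:%i:%S.%f", h, $1, $2, $7, $9, $11, $12, $13, $14); }'
```

If your BIND version logs a different line format, adjust the field numbers ($7, $9, $11 and so on) to suit.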

From here, it is possible to report from the db with a simple query method such as:-

#!/bin/sh
PATH=/path/to/specific/mysql/bin:$PATH export PATH
# Example values - adjust to suit
DB_SOCK=/var/lib/mysql/mysql.sock
DB_USER=dnslog DB_PASS=secret DB_NAME=dnsdb
LOCK_FILE=/var/run/dnslog-report.pid
SQL_REGEX="${1:-%}"  # SQL LIKE pattern, passed as the first argument

# Exit if a previous instance of this job is still running
if [ -e $LOCK_FILE ]; then
  OLD_PID=`cat $LOCK_FILE`
  if ps -p $OLD_PID > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > $LOCK_FILE

echo "select * from dnslogs where q_text like '$SQL_REGEX';" | \
  mysql -A -S $DB_SOCK -u $DB_USER --password=$DB_PASS $DB_NAME

rm -f $LOCK_FILE

And there it is! SQL reporting from DNS query logs! You can turn this into whatever report you like.

From there, you may wish to script solutions to partition the database and age the data.

Database partitioning should be done upon the q_timestamp value, dividing the table into periods which align with the expected depth of reporting. As a minimum, I would recommend keeping at least 4 days of data in partitions of between 1 and 24 hours, depending upon the reporting expectations. If reports are upon the previous day’s data only, then one partition per day will do, while reports which are only interested in the past hour or so will benefit from having partitions of an hour. In MySQL, sub-partitions are not worthwhile because they give you nothing more than partitions do, but add a layer of complexity on what is otherwise a linear data set.
Once partitioning is established, it should be possible to fulfill reports by querying only the relevant partitions to cover the time span of interest.
Partitioning also has another benefit, which is data aging. Instead of deleting old records, it is possible to drop entire partitions which cover select periods of time without having to create a huge temporary table to hold the difference as would be required by a delete operation. This becomes an extremely useful feature if you have a disk with a table size which is greater than the amount of free space available.
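As a sketch of the add/drop mechanics (the partition names and date boundaries here are hypothetical), MySQL RANGE partitioning on TO_DAYS of the timestamp gives one partition per day:-

```sql
ALTER TABLE dnslogs
  PARTITION BY RANGE (TO_DAYS(q_timestamp)) (
    PARTITION p20110705 VALUES LESS THAN (TO_DAYS('2011-07-06')),
    PARTITION p20110706 VALUES LESS THAN (TO_DAYS('2011-07-07')),
    PARTITION pmax      VALUES LESS THAN MAXVALUE
  );

-- Aging out a day becomes a metadata operation rather than a long delete:
ALTER TABLE dnslogs DROP PARTITION p20110705;
```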

Script updates for add and drop partition to follow….


Shell Tricks Part 1 – Substituting basename, dirname and ls commands

In Bourne shell, it is possible to use the following variable expansions as substitutes for the basename, dirname and ls commands:-

$ MYVAR=/path/to/basename
$ echo ${MYVAR##*/}
basename
$ MYVAR=/path/to/dirname
$ echo ${MYVAR%/*}
/path/to
$ cd /
$ echo *
bin boot dev etc home lib lost+found mnt opt proc root sbin tmp usr var

How’s that?

sendmail relaying nightmare!

While I’m hot on the topic – I’ve just spent a whole afternoon/evening trying to figure out why my sendmail installation kept becoming an open relay every time I configured my desired domains – which I have now figured out!

Listing my desired domains in the access file, or in the relay-domains file, seemed to turn my sendmail host into an open relay.

It turns out that the access and relay-domains files support relay for all valid hosts and sub-domains within the DNS domains permitted for relay, hence all hosts with a valid DNS A record within the defined domains become valid sources of mail! As my testing point had a valid DNS record within the permitted domain (and I did check to see whether it was an open relay), the host allowed relay based on membership of the permitted domains.
This effectively made my sendmail box an open relay to all internal hosts with a DNS name.

This was fixed with a FEATURE:-

  FEATURE(`relay_hosts_only')dnl

This sanitised my security from internal abuse and made my access file work as intended, supporting explicitly listed hosts and domains only.


Update: I later realised that the domain names I was configuring also had ‘A’ records in DNS for the top-level domain itself. As these hosts were not valid mail sources for this relay, I had to explicitly configure a REJECT action within the access file for each of the IPs returned by an ‘A’ record lookup on the domain names given in the access or relay-domains file, in order to deny an implicit behaviour which is the consequence of permitting a given domain.


So….some things to remember for Sendmail:-


Any domain listed in the access file or relay-domains file will allow ‘open’ relay for all hosts:-


1) Within the visible DNS structure beneath the defined domain (unless you use “FEATURE(`relay_hosts_only’)dnl”)

2) Defined as an ‘A’ Record for the given domain name as returned by DNS.
Does your Sendmail MTA relay to the hosts you intend?


Why Ubuntu is both one of the best and one of the worst Linuxes ever

Well, after many years since I last used Linux as my main workstation OS, I have realised that I no longer need the Windows apps which caused me to stop using Linux at home (as I had done previously for many years), and have reverted back to what I know is better. Only, I’m a little disappointed, and I’ll tell you why…..

My last foray into Linux workstations led me (through laziness) to Ubuntu Studio – a far superior spin of Ubuntu which offers, pre-packaged, all of the multimedia gems that you could want for a Linux-based *multimedia* system. It was like SuSE but without all the bad vibes from their Novell and Microsoft dealings. I come from a Slackware background and have never been impressed by fancy GUIs and a shit-heap of patches on top of every app compiled from source, so I’ve never been a fan of Red Hat Linux, for many reasons outside of the scope of this post.

So, I thought ….. Hmm ….. Slackware …. or …… Ubuntu Studio, err, which boiled down to the act of thinking and doing (or desired lack thereof) …. configure Slackware the hard way ………… or ………… use auto-configuring Ubuntu Studio? Easy, I haven’t got time for the krypton factor, so Ubuntu Studio it is.

Ubuntu Studio installed fine, but I was massively disappointed that it didn’t support LVM on install (I’m yet to transplant it onto an LVM setup), so I decided to go with a 2Gb boot, 2Gb swap, and rest-of-disk root setup, with the on-disk ordering of swap, data, boot, through using fdisk from the command-line. I don’t know why, but I’ve never trusted the installers to partition the disks properly – I have seen some abhorrent installer set-ups, which quite frankly is going to be the subject of another (maybe next?) post. Here’s my partition table:-

    Device   Boot       Start         End      Blocks  Id  System
/dev/sda1       *   1460954864  1465149167     2097152  83  Linux
/dev/sda2                 2048     4196351     2097512  82  Linux swap
/dev/sda3              4196352  1469854863   728379256  83  Linux

With sda1 being /boot, sda2 being swap and sda3 being root (/)

The rationale for my partitioning was this:-

  • You shouldn’t need more than a few gig for a few kernels, and disk space is cheap
  • You shouldn’t ever need more than 2-4Gb of swap – if you’re using this much swap – you don’t have enough RAM – simple. The essence is this – if it’s important enough to want running – you don’t want it sitting in swap, and conversely, if it is sitting in swap then you have to ask yourself: is it important enough to keep running? Sometimes yes, most often no
  • For the actual ‘UNIX’ filesystem, you can partition and fragment your filesystem across your disks to your heart’s content and in a plethora of ways, but in essence, with one spindle only, it can only do so much, and regardless of the number of partitions per spindle (set), it will take the same amount of time per spindle (set) to recover, hence I opt for a single filesystem on the grounds that with a single disk, it’s all-or-nothing anyway
  • Logical partitioning should be in the order boot, swap, root – why? Because it makes sense and is easy to conceptualise
  • Physical partitioning should be in the order swap, root, boot (the 1024-cylinder limit doesn’t apply now!) – why this order? Because swap is wanted on the fastest part of the disk, and we don’t care how slow boot is so long as it works, so boot gets the slowest slice and swap gets the fastest slice, and in-between, root or LVM gets the rest

This was great, and the install, once partitioned to my satisfaction, was seamless, I had a working machine.

I then proceeded to install KDE through “apt-get install kubuntu-full”, and many will ask – why didn’t you just install kubuntu and then install the packages you need for everything else? well, the reason is that I wanted a multimedia system, and Ubuntu Studio offers the best multimedia foundation (IMHO), hence my starting point.

But… being a cradle-to-the-grave KDE user (I’ve never cared too much about the cathedral and the bazaar so long as everyone gets their expected cut), I just had to have KDE over XFCE and/or Gnome. I would use WindowMaker as my second choice, but I’ve just got to move with the times (although it’s an excellent light-weight window manager for recovery shells).

So… Now I have a working Ubuntu Studio + KDE system, and I’m rather happy.

I then go venture under-the-hood, crack open konsole, and install MySQL and Apache. “apt-get install xxx” works fine as expected, and then I go to bounce the services, to be notified with:-

  • “Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service mysql restart”
  • “Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and start(8) utilities, e.g. stop mysql ; start mysql. The restart(8) utility is also available.”

WTF? what’s wrong with calling init scripts from /etc/init.d/xxx <start|stop|restart>?

I refer to System V as setting the standard of using /etc/init.d for init scripts as a way of managing run-levels; even Solaris has migrated away from init as a runlevel manager. But my gripe is – if you’re going to use SysV instead of BSD init scripts then why change it? Why whinge? And why try and re-invent the wheel? SysV init works fine as it is, ffs – don’t change it! And don’t nag me about knowing about it either!

So, after explaining why Ubuntu Studio + KDE is (IMHO) one of the best Linuxes, I finish on a note of annoyance as the system tries to tell me my standards compliant ways are not appreciated here.

I guess this is where Linux fails in the Enterprise – its willingness to re-invent the wheel (and the lack thereof is what makes me now prefer AIX and FreeBSD). I’m not too old to move with the times, but I have matured enough to want some stability in things that work and are tried and tested.

Linux has deviated far too much from the core of what a UNIX system is, into some hybrid that is absolutely great for the desktop but still leaves a lot lacking for the Enterprise server estate. Red Hat has become the safe choice, not because it is superior, but because *everyone* and *anyone* who has heard of Linux has heard of Red Hat!

The problem with Ubuntu Studio is in fact the “Ubuntu” bit. The distro itself is an absolute merit to what a multimedia platform should be like, but alas, beyond the savvy collection of apps is a nannying, almost patronising Ubuntu system, goading me to adopt non-standard ways. Grrr. Give me “Slackware Studio” – now that’s an idea.

Now I have a primary Linux system, my next distro foray is going to be with Slackware (possibly on ARM – thanks Stuart! I owe you a biscuit or three when the talking clock strikes 😉)

Backups are ‘uncool’ in a security world

I got told the other day that “backups are ‘uncool’ in a security world”.

I disagree.

Backups are a vital part of any information assurance strategy, but are often overlooked because information security is too focused on keeping the baddies out using the vast array of tool-sets available, which drives waves of scanning, software patching, monitoring and security hardening, but in doing so takes an eye off the ball with regard to backups.

Backup is primarily concerned with making sure systems and data remain safe, available and consistent; these are also the primary goals of information security, and while operational teams are tasked with the day-to-day operation, information security is ultimately responsible for protecting that backup data.

The Foundations

Consider how backups apply in the context of the three pillars of information security:-

  • Confidentiality
  • Integrity
  • Availability


Confidentiality

Controls should be in place to manage and monitor access to backup devices, media, and data:-

  • Who has access to your backup data?
  • Who authorises access to backup data?
  • Is access to backup data and systems revoked if an authorised person leaves/moves?
  • Are data protection operation logs reviewed to identify and investigate ad-hoc restores and changes in backup policy?
  • To what degree can an authorised person examine the backup data?
  • To what degree can an unauthorised person affect the backup data?


Integrity

This is the classic focus of information security – keeping systems and their data protected from threats which cause inappropriate changes. The tools and techniques are plentiful in this aspect, but backups require some further observance in order to maintain assurance of integrity:-

  • How is the data handled from source to destination?
  • Who authorises changes in backup retention and frequency?
  • Are ad-hoc restores and changes in backup policy appropriate and authorised?
  • How many backup failures would it take for integrity to become an information assurance issue?


Availability

An unscheduled outage can prove just as fatal to a client as a ‘classical’ security breach, and in these situations the availability of backup data is key:-

  • Can you recover a system to a given point-in-time in order to perform a post-mortem or restore a system to a state prior to known compromise?
  • Can you prove how and where the data was moved? (it might be missing!)
  • Does the current RPO and RTO reflect real business availability needs? (would it be good enough to bail you and your customer out of a security incident?)
  • How is backup data destroyed/recycled/leaked?
  • Who defines backup retention policies, and do they comply with business and legal requirements?
  • Who is responsible for compliance such that data is kept for an appropriate amount of time, or more importantly, ensure that aged data is safely purged at an appropriate time?
  • How many backup failures would it take for backup availability to become an information assurance issue?

Ultimate Questions

  • Should information security care about backups?
  • Do backups register on your list for making data ‘secure’?


There is a clear role for information security within the context of backups in terms of managing access and monitoring events, the question is – does it go further?

As mundane as backups are, they provide the foundation for availability and integrity in the event of compromise, because they often provide the only available regression plan – one which could in many cases be used to undo what has been done and return the compromised system and its data to a pre-compromised state, within the agreed RTO and RPO.

The strength of the foundation means that information security should be jealously keen to ensure that this “get-out-of-jail-free” card does not slip out of their proverbial ‘back-pocket’.

Why I love this domain’s MX record!

It’s pretty fair to say that the domain in question is probably one of the most abused domains in the world.

I take my hat off to them in their attempt to combat spam.

They took the simple, elegant solution of setting their MX record to localhost.

This dear reader is pure genius.

It is genius because it means that any DNS-aware mail server carrying mail for the domain will burn up on repeated local delivery attempts, as this MX record of localhost forces the mail server into attempting delivery to itself.

The added bonus of this method is that the mail never hits the domain owner’s servers, thus ensuring that their servers do not have to serve a dross of spam.

While it is obvious that this method does not work if you actually want to receive mail, it is well suited to this uncommon situation, and, hmm, maybe some other situations too.

It may be a suitable remedy for making the decommissioning of a domain noticeable: the receiving SMTP servers catch no load, and the sending SMTP servers get to see all the errors.

This may also be a useful spoofing technique for DNS views within your control if you want to suppress mail to certain domains within a subscribed client-base.

Or maybe suppressing mail from a machine which it is not possible to disable applications from mailing out.

A quick ‘hack’ to test this on any given machine is to alias the given domain to localhost in the /etc/hosts or c:\windows\system32\drivers\etc\hosts file in order to elicit the same outcome.
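For example, an entry along these lines (example.com standing in for the domain to be suppressed) sends all delivery attempts for that domain back to the local machine:-

```       localhost localhost.localdomain example.com mail.example.com
```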

Caution is recommended – don’t lock yourself out of key hosts (such as the machine itself or the device’s default router) by aliasing critical network nodes. Your mileage may vary – don’t alias the name by which your machine is known (if known to the device) and which you are using to administer the given device.

Using Nessus for software patch management

Today’s blog is about using Nessus for software patch management.

While Nessus is a popular tool for network security scanning, it also has some less obvious uses too, such as patch management, or more specifically, reporting.

Through allowing Nessus access to a device via an authorised system account, it can audit the package inventory on the device.

As Nessus supports many different operating systems and distributions, it becomes possible to manage your patch reporting for all of your device types (such as AIX, Solaris, Linux, Windows, Cisco IOS, MacOS X) from a single point of reference.

As all package vulnerabilities known to Nessus are scored like any other vulnerability, it is possible to categorise and qualify the patches which you choose to apply.

This enables the patching policy to be driven by qualified security needs, and not “just because the vendor recommends it”.

Nessus can also plug in to tools such as WSUS and Red Hat Satellite, however I am yet to explore what functionality this brings (I guess it will audit only against authorised patches or something…).

So, in summary:-

  • Create a ‘nessus’ account on each host (non-root/non-Administrator of course) in order to list the package inventory
  • Create a ‘nessus’ account on the WSUS or Red Hat Satellite server
  • Configure a scan policy with local authentication, and configure WSUS/Satellite with the required credentials
  • Select only local scan checks, excluding operating systems and scan types which do not apply to software package releases
  • Save the scan policy
  • Schedule a scan using the policy you just saved against your targets

Configuring a policy can be time consuming – don’t worry about de-selecting *ALL* of the inapplicable checks – just get most of them. It only speeds up the scan anyway, as checks which don’t apply shouldn’t return a hit, so refine it over many iterations by removing more unwanted checks on the second and third pass and so on.

…and voilà! Once the scan is complete – you have a single cross-platform patch report for all of your machines!

Backups: Part 5 – Process Dependencies – Databases Example

It is a frequent occurrence for a backup of a database system to be de-coupled from the tape backup, such that an over-run or failure of the database backup schedule would not be detected by the subsequent schedule of the tape backup.

I recommend that the database and tape backups always be coupled such that they can be considered one composite job where the backup is dependent upon the application being in a suitable state for a backup.

If scheduled from the UNIX cron scheduler, or from the application scheduler itself, the scheduler should call a process which initiates the backup once the system is known to be in a suitable state, as defined within the backup script.

If scheduled from the backup software, the database dump, quiesce, or shutdown should be scheduled as a backup pre/post command.

  • Verify that the backup destination is available – in the case of disk, make sure it is mounted and writable; in the case of tape, make sure it is loaded and writable
  • The backup-pre command should not be run if the backup media cannot be verified as available
  • The backup-pre command should bring the system to a safe state
  • The backup-post command should bring the system to an open state
  • The backup job should not be initiated if the backup-pre command fails
  • The backup-post command should be run whether or not the backup-pre command or the backup command itself fails
  • The backup-post command should return success if the application is already running satisfactorily
  • Any media movement should be checked and performed prior to entering the backup-pre process
  • In the event of a media movement error, neither the backup-pre process nor the backup itself should be run

The pre and post commands should be attempted multiple times to mask over transient errors. Something like the following code fragment is sufficient to provide a three-strike attempt:-

try_thrice() {
  $@ || $@ || $@
}

backup_pre() {
  : # quiesce or dump the application here
}

do_backup() {
  tar cf - $SRC | bzip2 -9c - > $DST
}

backup_post() {
  : # return the application to an open state here
}

try_thrice backup_pre || exit $?
try_thrice do_backup
RETVAL=$?
try_thrice backup_post
exit `expr $? + $RETVAL`

So…in a nutshell:-

To improve data security and backup capacity management – a database backup should be linked to the tape backup such that the newly created database backup is copied to tape at the earliest opportunity, and that the tape backup should be configured so that it is not run if the database backup-pre command fails.

Backups: Part 4 – Dropped Files

In this little brain-splat I bring to you another backup woe – “Dropped Files”. These are an area of backup which is frequently overlooked. Many are just concerned with whether or not the backup job finished and succeeded. Many times I have seen backup reports where thousands of files are dropped daily as a matter of course due to a lack of content-exclusion base-lining.

All backups should be recursively bedded-in on initial configuration until they run for at least 99% of the time with 0 dropped files.

The danger of dropped files is that if you accept them as the norm – you will miss the critical files when they happen. Only through striving to maintain 0 dropped files, through appropriate exclusions, can it be possible to meet an absolute criterion of a good backup and enable you to see the real backup exceptions when they happen.

Dropped files are a critical part of the assessment of whether a backup is good, so it is a mandatory process to eliminate any hot files and directories which are not required for a DR, such as temp files and spool files. Elimination of these sources also reduces the backup payload, thus reducing not only your backup times but also your RTO, as there is less data to restore.


RPO is the “recovery point objective” – the acceptable amount of data loss from the point of failure.

RTO is the “recovery time objective” – the acceptable amount of time available to restore data, within the RPO, from the time of the point of failure.

The RTO is often dictated by an SLA or KPI or the like, and often this is unrealistic in the event of a real disaster scenario.

The RPO is often dictated by the backup policy; it should instead be dictated by the SLA as a data-loss acceptance level from the business. If a system is backed up once per day then the RPO is automatically 24 hours.