BIND DNS query log shipping into a MySQL database

Yay! I’ve been wanting to do this for a while. Here it goes:-

Documented herein is a method for shipping BIND DNS query logs into a MySQL database and then reporting upon them!

Note: SSH keys are used for all password-less log-ons to avoid prompt issues
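
As a sketch (the account and host names are placeholders), the key setup from the host which will run the data pump script looks like this:-

# generate a key once, then push it to the account used on each name server
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
ssh-copy-id someone@ns1.example.com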

BIND logging configuration

The BIND named.conf query logging directive should be set to simple logging (a single flat file, with no built-in versioning):-

logging {

  # Your other log directives here

  channel query_log {
    file "/var/log/query.log";
    severity info;
    print-time yes;
    print-severity yes;
    print-category yes;
  };

  category queries {
    query_log;
  };
};

The reason a simple log is needed is that BIND’s built-in log rotation only allows a rotation granularity of one day when based on time, hence an external log rotation method is required for granularity of under 24 hours.

BIND query log rotation

My external BIND log rotation script is scheduled from within cron and it looks like this:-

#!/bin/bash
QLOG=/var/named/chroot/var/log/query.log
LOCK_FILE=/var/run/${0##*/}.lock

# Bail out if a previous instance is still running
if [ -e "$LOCK_FILE" ]; then
  OLD_PID=$(cat "$LOCK_FILE")
  if ps -p "$OLD_PID" > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > "$LOCK_FILE"

# Copy the live log to a timestamped file, then truncate the original in place
cat "$QLOG" > "$QLOG.$(date '+%Y%m%d%H%M%S')"
if [ $? -eq 0 ]; then
  > "$QLOG"
fi
service named reload

rm -f "$LOCK_FILE"

Place this in the crontab to run at an interval of between one and six hours. Ensure it does not run on the hour, or at the same time as other instances of this job on associated servers.

Make sure /var/named/chroot/var/log/old exists; it is used for file rotation by the data pump script later on.
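
As a sketch (the cron schedule and script path here are purely illustrative), on each name server:-

# create the archive directory used later by the data pump script
mkdir -p /var/named/chroot/var/log/old

# /etc/crontab: rotate the query log every two hours, off the hour
17 */2 * * * root /usr/local/sbin/rotate-query-log.sh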

From here, I create a MySQL table called dnslogs with the following structure:-

create table dnslogs (
  q_server    VARCHAR(255),
  q_timestamp DATETIME(6),
  q_client    VARCHAR(15),
  q_view      VARCHAR(64),
  q_text      VARCHAR(255),
  q_class     VARCHAR(8),
  q_type      VARCHAR(8),
  q_modifier  VARCHAR(8)
);

You can either define a database user with a password and configure it as such in the scripts, or configure a database user which can do nothing more than connect and insert into the dnslogs table.
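
As a sketch of the restricted approach (the user name, host and password are placeholders; the data pump only needs INSERT, while the reporting script also needs SELECT):-

CREATE USER 'db_user'@'localhost' IDENTIFIED BY 'change_me';
GRANT INSERT ON your_db.dnslogs TO 'db_user'@'localhost';
GRANT SELECT ON your_db.dnslogs TO 'db_user'@'localhost';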

Then I use the following shell script to pump the rotated log data into the MySQL database:-

#!/bin/bash
PATH=/path/to/specific/mysql/bin:$PATH; export PATH
DB_NAME=your_db
DB_USER=db_user
DB_PASS=i_know_it_is_a_bad_idea_storing_the_pass_here
DB_SOCK=/var/lib/mysql/mysql.sock
DEST_TABLE=dnslogs
SSH_USER=someone
LOG_DIR=/var/named/chroot/var/log
LOG_REGEX='query.log.*'
NAME_SERVERS="your name server list here"
ERROR_LOG=/tmp/${0##*/}.err

LOCK_FILE=/var/run/${0##*/}.lock

# Bail out if a previous instance is still running
if [ -e "$LOCK_FILE" ]; then
  OLD_PID=$(cat "$LOCK_FILE")
  if ps -p "$OLD_PID" > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > "$LOCK_FILE"

for host in $NAME_SERVERS; do
  # List the rotated query logs on the remote name server, oldest first
  REMOTE_LOGS=$(ssh -l "$SSH_USER" "$host" "find $LOG_DIR -maxdepth 1 -name '$LOG_REGEX'" | sort -n)
  test -n "$REMOTE_LOGS" && for f in $REMOTE_LOGS; do
    # Strip the log decoration, then turn each line into an INSERT and pipe it into MySQL
    ssh -C -l "$SSH_USER" "$host" "cat $f" | \
      sed 's/\./ /; s/#[0-9]*://; s/: / /g; s/\///g; s/'\''//g;' | \
      awk -v h="$host" -v t="$DEST_TABLE" '{
        printf("insert into %s values ('\''%s'\'', STR_TO_DATE('\''%s %s.%06d'\'','\''%s'\''), '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'', '\''%s'\'');\n",
               t, h, $1, $2, $3 * 1000, "%d-%b-%Y %H:%i:%S.%f", $7, $9, $11, $12, $13, $14)
      }' | mysql -A -S "$DB_SOCK" -u "$DB_USER" --password="$DB_PASS" "$DB_NAME" 2> "$ERROR_LOG"
    RETVAL=$?
    if [ $RETVAL -ne 0 ]; then
      echo "Import of $f returned non-zero return code $RETVAL"
      test -s "$ERROR_LOG" && cat "$ERROR_LOG"
      continue
    fi
    # Archive the log on the name server only once it has been imported successfully
    ssh -l "$SSH_USER" "$host" "mv $f ${f%/*}/old/"
  done
done
rm -f "$LOCK_FILE" "$ERROR_LOG"

Put this script into a file and schedule it from within crontab, running some time after the rotate job (enough to allow it to complete), but before the next rotate job.
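
For example (times and path again purely illustrative), if the name servers rotate at 17 minutes past every second hour, the pump could run half an hour later on the database host:-

# /etc/crontab on the database host
47 */2 * * * root /usr/local/sbin/pump-query-logs.sh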

Note that the last operation of the script is to move the processed log file into $LOG_DIR/old/.

This will take each file matching /var/named/chroot/var/log/query.log.* and ship it into the dnslogs table as frequently as is defined in the crontab.

From here, it is possible to report from the db with a simple query method such as:-

#!/bin/bash
PATH=/path/to/specific/mysql/bin:$PATH; export PATH
DB_NAME=your_db
DB_USER=db_user
DB_PASS=i_know_it_is_a_bad_idea_storing_the_pass_here
DB_SOCK=/var/lib/mysql/mysql.sock
SQL_REGEX='%your-search-term-here%'

LOCK_FILE=/var/run/${0##*/}.lock

# Bail out if a previous instance is still running
if [ -e "$LOCK_FILE" ]; then
  OLD_PID=$(cat "$LOCK_FILE")
  if ps -p "$OLD_PID" > /dev/null 2>&1; then
    exit 0
  fi
fi
echo $$ > "$LOCK_FILE"

echo "select * from dnslogs where q_text like '$SQL_REGEX';" | \
  mysql -A -S "$DB_SOCK" -u "$DB_USER" --password="$DB_PASS" "$DB_NAME"

rm -f "$LOCK_FILE"

And there it is! SQL reporting from DNS query logs! You can turn this into whatever report you like.
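
For example (purely illustrative), the ten busiest clients across the stored data:-

select q_client, count(*) as queries
from dnslogs
group by q_client
order by queries desc
limit 10;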

From there, you may wish to script solutions to partition the database and age the data.

Database partitioning should be done on the q_timestamp value, dividing the table into periods which align with the expected depth of reporting. As a minimum, I would recommend keeping at least 4 days of data, in partitions of between 1 and 24 hours depending upon the reporting expectations. If reports cover the previous day’s data only, then one partition per day will do, while reports which are only interested in the past hour or so will benefit from hourly partitions. In MySQL, sub-partitions are not worthwhile here because they give you nothing more than partitions do, while adding a layer of complexity to what is otherwise a linear data set.

Once partitioning is established, it should be possible to fulfil reports by querying only the partitions which cover the time span of interest.

Partitioning also has another benefit: data aging. Instead of deleting old records, it is possible to drop entire partitions covering selected periods of time, without having to create a huge temporary table to hold the difference as a delete operation would require. This becomes an extremely useful feature if your table size is greater than the amount of free disk space available.
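
As a rough SQL sketch (daily partitions, illustrative dates, assuming MySQL 5.5 or later and the q_timestamp column defined above), the partition, drop and add operations look something like this:-

ALTER TABLE dnslogs
  PARTITION BY RANGE COLUMNS (q_timestamp) (
    PARTITION p20150101 VALUES LESS THAN ('2015-01-02 00:00:00'),
    PARTITION p20150102 VALUES LESS THAN ('2015-01-03 00:00:00'),
    PARTITION p20150103 VALUES LESS THAN ('2015-01-04 00:00:00'),
    PARTITION pmax      VALUES LESS THAN (MAXVALUE)
  );

-- age out the oldest day without a long-running delete
ALTER TABLE dnslogs DROP PARTITION p20150101;

-- split the next day's partition out of pmax ahead of time
ALTER TABLE dnslogs REORGANIZE PARTITION pmax INTO (
  PARTITION p20150104 VALUES LESS THAN ('2015-01-05 00:00:00'),
  PARTITION pmax VALUES LESS THAN (MAXVALUE)
);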

Script updates for add and drop partition to follow….

AV Comparatives

Today, let me introduce you to AV Comparatives, a trusty AV testing lab which will open your eyes to how good your anti-virus is. I have used these guys for many years to consider my options on AV.

Disclaimer: Don’t be fooled by the marketing of McAfee and Symantec – they are *NOT* the best AV products, not by a country mile.

The reports from AV Comparatives show the difference between “out-of-the-box” and “configured-for-security” effectiveness. This provides an interesting and sometimes scary revelation.

Please pay special attention to the historical reviews of the proactive tests. The teams that consistently score best on this test do better overall, because if they are on top of 0-day threats then historical virus detection is, as they say, “history”. You can also see developer drain happen: a product slips from its ranking when a developer leaves or the company generally lags.

For Windows, I normally use Avira Free with secure-start and detection of all categories including jokes and games. Just taking another look, I guess I might reconsider…..maybe Avast?

I’d like to try QiHoo, but I can’t read Chinese and I’m not sure I trust a ‘free’ product which is difficult to find on Google and intended for a single-country market (you can’t even find it easily on Baidu!). Chinese users – please leave a comment on this point and let me know what your experience with QiHoo AV is like!

Meanwhile, I’m on Linux, so ClamAV will do for now.

sendmail relaying nightmare!

While I’m hot on the topic – I’ve just spent a whole afternoon/evening trying to figure out why my sendmail installation kept becoming an open relay every time I configured my desired domains – which I have now figured out!

Listing my desired domains in the access file, or in the relay-domains file, seemed to turn my sendmail host into an open relay.

It turns out that access and relay-domains permit relay for all valid hosts and sub-domains within the DNS domains listed, hence every host with a valid DNS A record within the defined domains becomes a valid source of mail! As my testing point had a valid DNS record within a permitted domain (and I did check whether it was an open relay), the host allowed relay based on its membership of the permitted domains.
This effectively made my sendmail box an open relay for all internal hosts with a DNS name.

This was fixed with a FEATURE:-

FEATURE(`relay_hosts_only')dnl

This sanitised things against internal abuse and made my access file work as intended, supporting explicitly listed hosts and domains only.
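
For completeness (paths assume a typical Linux sendmail package layout), the FEATURE goes into sendmail.mc and the configuration then needs regenerating:-

# after adding FEATURE(`relay_hosts_only')dnl to /etc/mail/sendmail.mc
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
service sendmail restart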

 

Update: I later realised that the domain names I was configuring also had ‘A’ records in DNS for the top-level domain name itself. As these hosts were not valid mail sources for this relay, I had to explicitly configure a REJECT action within the access file for each of the IPs returned by an ‘A’ record lookup on the domain names listed in the access or relay-domains file, in order to deny the implicit behaviour which is the consequence of permitting a given domain.
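
As an illustrative /etc/mail/access sketch (the host names and addresses are placeholders; remember to rebuild the map with makemap afterwards):-

# relay only for explicitly listed hosts (with relay_hosts_only in effect)
mail.example.com        RELAY
# reject the IPs returned by a bare 'A' record lookup on the permitted domain
192.0.2.10              REJECT
192.0.2.11              REJECT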

 

So… some things to remember for Sendmail:-

Any domain listed in the access file or relay-domains file will allow ‘open’ relay for all hosts:-

1) within the visible DNS structure beneath the defined domain (unless you use “FEATURE(`relay_hosts_only')dnl”);
2) defined as an ‘A’ record for the given domain name as returned by DNS.

Does your Sendmail MTA relay to the hosts you intend?

 

False Positives in Nessus scans

It is a frequent occurrence for a vulnerability to be identified based upon the version string of a product or component alone. These kinds of alert often do not disappear after remediation. Instead, a mitigation log has to be established in order to manually track compliance.

Version-only vulnerability plugins are mostly useful for deep scans, and less appropriate for frequent scans, because once the mitigation is deployed and a manual compliance process is established they only generate ‘noise’ in the form of false positives.

Backups are ‘uncool’ in a security world

I got told the other day that “backups are ‘uncool’ in a security world”.

I disagree.

Backups are a vital part of any information assurance strategy, but they are often overlooked: information security tends to focus on keeping the baddies out with the vast array of tool-sets available, driving waves of scanning, software patching, monitoring and security hardening, and in doing so it takes its eye off the ball with regard to backups.

Backup is primarily concerned with making sure systems and data remain safe, available and consistent; these are also the primary goals of information security. While operational teams are tasked with the day-to-day operation, information security is ultimately responsible for protecting that backup data.

The Foundations

Consider how backups apply in the context of the three pillars of information security:-

  • Confidentiality
  • Integrity
  • Availability

Confidentiality

Controls should be in place to manage and monitor access to backup devices, media, and data

  • Who has access to your backup data?
  • Who authorises access to backup data?
  • Is access to backup data and systems revoked if an authorised person leaves/moves?
  • Are data protection operation logs reviewed to identify and investigate ad-hoc restores and changes in backup policy?
  • To what degree can an authorised person examine the backup data?
  • To what degree can an unauthorised person affect the backup data?

Integrity

This is the classic focus of information security – keeping the systems and their data protected from threats which cause inappropriate changes. The tools and techniques are plentiful in this aspect, but backups require some further observance in order to maintain assurance of integrity

  • How is the data handled from source to destination?
  • Who authorises changes in backup retention and frequency?
  • Are ad-hoc restores and changes in backup policy appropriate and authorised?
  • How many backup failures would it take for integrity to become an information assurance issue?

Availability

An unscheduled outage can prove just as fatal to a client as a ‘classical’ security breach, and in these situations, the availability of backup data is key

  • Can you recover a system to a given point-in-time in order to perform a post-mortem or restore a system to a state prior to known compromise?
  • Can you prove how and where the data was moved? (it might be missing!)
  • Does the current RPO and RTO reflect real business availability needs? (would it be good enough to bail you and your customer out of a security incident?)
  • How is backup data destroyed/recycled/leaked?
  • Who defines backup retention policies, and do they comply with business and legal requirements?
  • Who is responsible for compliance, ensuring that data is kept for an appropriate amount of time and, more importantly, that aged data is safely purged at an appropriate time?
  • How many backup failures would it take for backup availability to become an information assurance issue?

Ultimate Questions

  • Should information security care about backups?
  • Do backups register on your list for making data ‘secure’?

Conclusion

There is a clear role for information security within the context of backups in terms of managing access and monitoring events, the question is – does it go further?

As mundane as backups are, they provide the foundation for availability and integrity in the event of compromise, because they often provide the only available regression plan – one which can, in many cases, undo what has been done and return the compromised system and its data to a pre-compromise state, within the agreed RTO and RPO.

The strength of the foundation means that information security should be jealously keen to ensure that this “get-out-of-jail-free” card does not slip out of their proverbial ‘back-pocket’.

Why I love admin.com’s MX record!

It’s pretty fair to say that admin.com is probably one of the most abused domains in the world.

I take my hat off to them in their attempt to combat spam.

They took the simple, elegant solution of setting their MX record to localhost.

This dear reader is pure genius.

It is genius because any DNS-aware mail server carrying mail for admin.com will burn up in repeated local delivery attempts: the MX record pointing at localhost forces the sending mail server into attempting delivery to itself.
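
In zone-file terms (an illustrative zone, not admin.com’s actual records), it amounts to something like:-

example.com.   IN   MX   0   localhost.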

The added bonus of this method is that the mail never hits admin.com’s servers, thus ensuring that their servers do not have to handle a dross of spam.

Obviously this method does not allow delivery of mail if you actually want to receive it, so it is only suitable in this uncommon situation – and, hmm, maybe some other situations.

It may be a suitable way of noticeably decommissioning a domain, such that the receiving SMTP servers catch no load and the sending SMTP servers get to see all the errors.

This may also be a useful spoofing technique for DNS views within your control if you want to suppress mail to certain domains within a subscribed client-base.

Or maybe for suppressing mail from a machine on which it is not possible to stop applications from mailing out.

A quick ‘hack’ to test this on any given machine is to alias the given domain to localhost in the /etc/hosts or c:\windows\system32\drivers\etc\hosts file in order to elicit the same outcome.
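
For example (the domain is a placeholder, and this assumes the applications on the machine resolve the domain via the hosts file):-

# /etc/hosts – force the unwanted mail domain to resolve locally
127.0.0.1   unwanted-domain.example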

Caution is recommended – don’t lock out access to key hosts such as yourself or the device’s default router by aliasing critical network nodes. Your mileage may vary – in particular, don’t alias the name (if known to the device) of the machine you are using to administer the given device.

Using Nessus for software patch management

Today’s blog is about using Nessus for software patch management.

While Nessus is a popular tool for network security scanning, it also has some less obvious uses, such as patch management, or more specifically, patch reporting.

By allowing Nessus access to a device via an authorised system account, it can audit the package inventory on the device.

As Nessus supports many different operating systems and distributions, it becomes possible to manage your patch reporting for all of your device types (such as AIX, Solaris, Linux, Windows, Cisco IOS, MacOS X) from a single point of reference.

As all package vulnerabilities known to Nessus are scored like any other vulnerability, it is possible to categorise and qualify the patches which you choose to apply.

This enables the patching policy to be driven by qualified security needs, and not “just because the vendor recommends it”.

Nessus can also plug in to tools such as WSUS and Red Hat Satellite; however, I am yet to explore what functionality that brings (I guess it will audit only against authorised patches or something…).

So, the process goes something like this:-

  • Create a ‘nessus’ account on each target host (non-root/non-Administrator of course) which can list the package inventory (see the sketch after this list)
  • Create a ‘nessus’ account on the WSUS or Red Hat Satellite server
  • Configure a scan policy with local authentication, and configure the WSUS/Satellite credentials where required
  • Select only local scan checks, excluding operating systems and scan types which do not apply to software package releases
  • Configuring a policy can be time consuming – don’t worry about de-selecting *ALL* of the irrelevant checks, just get most of them; it only speeds up the scan anyway, as checks which don’t apply shouldn’t return a hit, so refine it over several iterations by removing more unwanted checks on the second and third pass and so on
  • Save the scan policy
  • Schedule a scan using the policy you just saved against your targets
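
A minimal sketch of the host-side account step (the key file name is a placeholder, and the exact rights the account needs depend on the local checks you enable):-

# on each Linux/Unix target: a dedicated, unprivileged account for credentialed scans
useradd -m -s /bin/bash nessus

# let the scanner log in with its SSH public key rather than a password
install -d -m 700 -o nessus -g nessus /home/nessus/.ssh
cat nessus_scanner_key.pub >> /home/nessus/.ssh/authorized_keys
chown nessus:nessus /home/nessus/.ssh/authorized_keys
chmod 600 /home/nessus/.ssh/authorized_keys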

…and voilà! Once the scan is complete, you have a single cross-platform patch report for all of your machines!