Hurricane Sandy causes visible disruption to Internet traffic

hurricane sandy internet stats 24 hours

Courtesy of, the effects of Hurricane Sandy are clearly visible in global internet statistics - but the interesting things happen afterwards!

The interesting thing here is that you can clearly see the impact on the ‘net. It starts with the netflow decreasing rapidly from about 10PM on the graph (it’s a shame it doesn’t show which timezone applies!), with packet loss growing at an equally alarming rate. It then plateaus until traffic is re-routed, and connectivity for everyone else unaffected is eventually restored by about 1:30AM. The Internet was largely ‘healed’ in about 3.5 hours - not bad for a sev1 response!

The final interesting thing is the relative stability of the Internet after reconfiguration. Performance is slightly degraded overall, given the loss of New York’s traffic and the data in transit, but afterwards it looks strangely uniform and consistent - it’s as if the event itself has caused the Internet to stabilise. Let’s see how long it lasts….

My thoughts go to those lost and those who have lost in the disaster.


hurricane sandy internet traffic stats 7 day stats

The 7-day stats show the drop starting from about 2PM, as infrastructure began to fail in the bad weather that preceded the flooding. This shows the loss from the affected sites, which by this measure accounted for approximately 3% of global internet traffic.


Backups: Part 5 – Process Dependencies – Databases Example

It is common for the backup of a database system to be de-coupled from the tape backup, with the risk that an over-run or failure of the database backup schedule goes undetected by the subsequently scheduled tape backup.

I recommend that the database and tape backups always be coupled, such that they can be considered one composite job in which the backup is dependent upon the application being in a suitable state for backup.

If scheduled from the UNIX cron scheduler or from the application's own scheduler, the job should call a wrapper process which invokes the backup once the system is known to be in a suitable state, as defined within the backup script.
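For example, a single crontab entry can drive the whole composite job (the wrapper path and log location here are hypothetical):

```shell
# hypothetical crontab entry: run the composite backup wrapper at 01:00 daily;
# the wrapper itself verifies the application state before any data is copied
0 1 * * * /usr/local/sbin/db_backup_wrapper.sh >> /var/log/db_backup.log 2>&1
```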

If scheduled from the backup software, the database dump, quiesce, or shutdown should be scheduled as a backup pre/post command.

Verify that the backup destination is available: in the case of disk, make sure it is mounted and writable; in the case of tape, make sure it is loaded and writable.
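As a sketch, a disk destination check might look like the following (the mount point in the example comment is hypothetical; a tape destination would instead need the drive or changer to be queried):

```shell
# Return success only if the destination directory exists, is writable,
# and a test file can actually be created and removed on it.
verify_dest() {
  dir=$1
  [ -d "$dir" ] && [ -w "$dir" ] \
    && touch "$dir/.write_test" 2>/dev/null \
    && rm -f "$dir/.write_test"
}
# e.g. in the wrapper:  verify_dest /mnt/backup || exit 1
```

Actually writing a test file catches read-only mounts and full filesystems that a simple permission check would miss.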

The backup-pre command should not be run if the backup media cannot be verified as available.

The backup-pre command should bring the system to a safe state.

The backup-post command should bring the system to an open state.

The backup job should not be initiated if the backup-pre command fails.

The backup-post command should be run whether or not the backup-pre command or the backup command itself fails.

The backup-post command should return success if the application is already running satisfactorily.

Any media movement should be checked and performed prior to entering the backup-pre process.

In the event of a media movement error, neither the backup-pre process nor the backup itself should be run.

The pre and post commands should be attempted multiple times to mask transient errors. Something like the following code fragment is sufficient to provide a three-strike attempt:-

try_thrice() {
  "$@" || "$@" || "$@"
}
backup_pre() {
  : # quiesce the database - bring the application to a safe state
}
do_backup() {
  tar cf - "$SRC" | bzip2 -9c > "$DST"
}
backup_post() {
  : # return the application to an open state
}
try_thrice backup_pre || exit $?
try_thrice do_backup
RETVAL=$?
try_thrice backup_post
exit `expr $? + $RETVAL`

So…in a nutshell:-

To improve data security and backup capacity management, a database backup should be linked to the tape backup such that the newly created database backup is copied to tape at the earliest opportunity, and the tape backup should be configured not to run if the database backup-pre command fails.

Backups: Part 4 – Dropped Files

In this little brain-splat I bring you another backup woe: “Dropped Files”. These are an area of backup which is frequently overlooked; many people are concerned only with whether or not the backup job finished and succeeded. Many times I have seen backup reports where thousands of files are dropped daily as a matter of course, due to a lack of content-exclusion base-lining.

All backups should be recursively bedded-in on initial configuration until they run at least 99% of the time with 0 dropped files.

The danger of dropped files is that if you accept them as the norm, you will miss the critical files when they are dropped. Only by striving to maintain 0 dropped files through appropriate exclusions can you meet an absolute criterion of a good backup, and see the real backup exceptions when they happen.

Dropped files are a critical part of the assessment of whether a backup is good, so it is a mandatory process to eliminate any hot files and directories which are not required for a DR, such as temp files and spool files. Eliminating these sources also reduces the backup payload, which shortens not only your backup times but also your RTO, as there is less data to restore.
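As a sketch, such exclusions can be expressed directly on a GNU tar command line like the one used earlier (the excluded patterns here are hypothetical examples - base-line them against your own dropped-file reports):

```shell
# backup the tree in $1 to the archive in $2, excluding hot files that
# are not needed for DR so they can never appear as dropped files
do_backup_excl() {
  tar cf - \
      --exclude='*/tmp/*' \
      --exclude='*/spool/*' \
      --exclude='*.lock' \
      -C "$1" . | bzip2 -9c > "$2"
}
```

Keeping the exclusions in the backup script itself means they are version-controlled alongside the job definition rather than scattered across per-client configuration.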


RPO is the “recovery point objective”: the acceptable amount of data loss from the point of failure.

RTO is the “recovery time objective”: the acceptable amount of time available to restore data, within the RPO, from the point of failure.

The RTO is often dictated by an SLA or KPI or the like, and is often unrealistic in the event of a real disaster scenario.

The RPO is often dictated by the backup policy; it should instead be dictated by the SLA, as a data-loss acceptance level agreed with the business. If a system is backed up once per day then the RPO is automatically 24 hours.
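As a quick sketch of putting the RPO to work, a monitoring script can compare the age of the newest backup against the agreed RPO (GNU date/stat are assumed; the path in the example comment is hypothetical):

```shell
# return success (0) if the backup file named in $1 is older than the
# RPO in hours given in $2 - i.e. the RPO has been breached
rpo_breached() {
  age_hours=$(( ( $(date +%s) - $(stat -c %Y "$1") ) / 3600 ))
  [ "$age_hours" -gt "$2" ]
}
# e.g.:  rpo_breached /backup/db/latest.dump.bz2 24 && alert_the_dba
```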