Multi-Cloud DNS Delegated Subdomains with Terraform Cloud

DNS in a multi-cloud world

As companies transition to multi-cloud deployments, a common way to deploy solutions becomes a requirement. This is easily achieved by using Terraform to create your immutable infrastructure.

The challenge you face is that the infrastructure in a cloud deployment is dynamic in nature: you cannot predict what the IP addresses of parts of your systems will be, and these IP addresses can change daily if your developers have a fast deployment cycle.

The most common way to solve this problem is to create DNS records for each piece of the infrastructure we deploy that needs to be addressed. When you deploy resources with external IP addresses, each cloud provider presents those IP addresses in a different way, and each provider's DNS offering has differing features.

By provisioning and configuring your own delegated DNS zones within each provider, you give yourself the flexibility you need when creating applications in these clouds.


This post explains how to use Terraform to deploy and manage a multi-cloud DNS solution from a domain hosted on Route53.


For this example you need an account on each of the cloud providers. If you only want to use two providers, you only need accounts on the two you would like to use.

Terraform 0.12.x CLI installed
AWS account
GCP account
Azure account
Azure Service Principal account
Terraform Cloud account
DNS domain hosted on Route53
Git repository hosted on GitHub, GitLab or Bitbucket
Text editor (I use VS Code)


You now need to configure some variables.

General Variables

created-by: terraform (A tag you use to show the resource was created by Terraform).
owner: dns-demo-team (Another label so that you know who the owner is).
namespace: dnsmc (The name of the delegated sub zone you want to create).
hosted-zone: (The hosted domain on Route53).

General Environment Variables

CONFIRM_DESTROY: 1 (This is so that you don’t accidentally delete your zones).

Creating the working environment

Open a terminal and navigate to the directory where you want to store this code, then set up the project directory:

Type in cd ~
Type in mkdir dns-multicloud
Type in cd dns-multicloud
Type in git init

There are many ways people split up their Terraform code to make it easier to find where resources are defined.
For this example I prefer to split my Terraform code into areas of concern.

Create the three files needed, using touch for each one: the general configuration file, the outputs file, and the variables file.

The current repository looks like this.

 => tree
 0 directories, 6 files

The general configuration file declares our overall Terraform configuration, e.g. our remote configuration to use Terraform Cloud as our backend, plus the cloud provider specific configuration. The outputs file will be used to output the information needed to use these delegated zones when deploying infrastructure to our cloud providers. The variables file will be used to declare the Terraform code that asks for the variables needed to run our code.

I also added a .gitignore for Terraform, a LICENSE and a README for the repository.

Commit these files to the git history.

Adding the backend

To start working with Terraform you need to configure a remote backend for the Terraform plan.

Open the general configuration file and add the following.

terraform {
  required_version = ">= 0.12.0"
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "dns-multicloud-org"
    workspaces {
      name = "dns-multicloud"
    }
  }
}
There are a number of fields here.
required_version (The minimum version of the Terraform CLI required).
backend "remote" (This tells Terraform you want to use the remote backend).
hostname (The hostname for Terraform Cloud, app.terraform.io).
organization (Our organization).
workspaces (The workspace you will be using).

You have one more step before you can test this configuration: you need to authenticate against Terraform Cloud. There are multiple ways to do this; you can find the documentation here:

I will be using a token that you can generate in the Tokens section of your account.

Navigate to your user settings section and generate a new token to use.

Token section, Terraform Cloud

Make sure you copy the token to a safe place, as it is only displayed once.

You need to add this token to a credentials file for use when you run your Terraform.
On a *nix based system this is a file called .terraformrc in your home directory.

The structure of the file looks like this.

credentials "app.terraform.io" {
  token = "xxxxxxxxxxxxxxxx" # replace with your generated token
}

Insert the token into this file and save it.
Now you are ready to test whether your local client can connect to Terraform Cloud.
Make sure you are in the dns-multicloud directory.


Initialising The Terraform Cloud Backend

To test that you have everything configured correctly run the following command.
terraform init
You should see the following output in the console.

Initializing the backend…
 Successfully configured the backend "remote"! Terraform will automatically
 use this backend unless the backend configuration changes.
 Terraform has been successfully initialized!
 You may now begin working with Terraform. Try running "terraform plan" to see
 any changes that are required for your infrastructure. All Terraform commands
 should now work.
 If you ever set or change modules or backend configuration for Terraform,
 rerun this command to reinitialize your working directory. If you forget, other
 commands will detect it and remind you to do so if necessary.

After that you want to initialise the state file in the backend.

terraform plan
 Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
 will stop streaming the logs, but will not stop the plan running remotely.
 Preparing the remote plan…
 To view this run in a browser, visit:
 Waiting for the plan to start…
 Terraform v0.12.9
 Configuring remote state backend…
 Initializing Terraform configuration…
 2019/10/07 11:57:37 [DEBUG] Using modified User-Agent: Terraform/0.12.9 TFC/3bcf15d045
 Refreshing Terraform state in-memory prior to plan…
 The refreshed state will be used to calculate this plan, but will not be
 persisted to local or remote state storage.
 No changes. Infrastructure is up-to-date.
 This means that Terraform did not detect any differences between your
 configuration and real physical resources that exist. As a result, no
 actions need to be performed.

When these two commands complete successfully, we are ready to start writing our Terraform configuration.

Open the variables file in your editor and create the following items.

# General
variable "owner" {
  description = "Person Deploying this Stack e.g. john-doe"
}

variable "namespace" {
  description = "Name of the zone e.g. demo"
}

variable "created-by" {
  description = "Tag used to identify resources created programmatically by Terraform"
  default     = "terraform"
}

variable "hosted-zone" {
  description = "The name of the DNS zone on Route 53 that will be used as the master zone"
}

The description in each variable explains what it does.

Commit the changes to git.

Creating the DNS zones

Now that we have bootstrapped the plan, we need to start configuring the zones.

As mentioned before, we have a hosted zone in Route53 that we will use as the master zone, and from it we will create three delegated sub zones, one in each cloud provider.
We will start by creating our delegated zone in AWS.

Amazon Web Services (AWS) Sub Zone

We now need to add the AWS configuration for our zone.
To connect to AWS we use a Terraform provider that gives us this ability.
In the general configuration file, add the following lines below the remote backend configuration.

# AWS General Configuration
provider "aws" {
  version = "~> 2.0"
  region  = var.aws_region
}

This block of code tells Terraform that we want to use the AWS provider from the registry in our plan.
We are pinning to the 2.x versions of the provider, and we have an extra configuration item called region: the region we want to use for our deployments. It is set from another variable, aws_region, which we will add to the Terraform Cloud configuration later in the post.
You can read more about the provider block here:

Now create a file in the project for the AWS zone resources. You can do this in the same way as before, using touch in the console.


The directory structure should look like this.

=> tree
 0 directories, 6 files

Open the new file in your editor and add the following block of code.

data "aws_route53_zone" "main" {
  name = var.hosted-zone
}

This is a data source block that queries AWS for the zone we have hosted there and returns the resource for us to use later in our code.
You can read more about the data sources here:

AWS Sub Zone

The next step is to create the DNS sub zone that we will be using. Add the code listed below to the same file, just after the data source.


resource "aws_route53_zone" "aws_sub_zone" {
  name    = "${var.namespace}.aws.${var.hosted-zone}"
  comment = "Managed by Terraform, Delegated Sub Zone for AWS for ${var.namespace}"

  tags = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

What we are doing here is using the aws_route53_zone resource from the provider, with a name we have chosen, aws_sub_zone, and providing a number of arguments so the zone can be created. The important one is the name argument, where two variables combine to form the zone name.

In this case that forms a domain of the form ${var.namespace}.aws.${var.hosted-zone}. We have also populated the tags from the other general variables we created.

The block above only creates the zone in AWS but does not give the master zone any information about it. We need to create DNS nameserver (NS) records in the master zone for the new delegated zone so that any records created in the new zone can be found.
The code block below creates the nameserver (NS) records for the zone.

resource "aws_route53_record" "aws_sub_zone_ns" {
  zone_id = "${data.aws_route53_zone.main.zone_id}"
  name    = "${var.namespace}.aws.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for awsns in aws_route53_zone.aws_sub_zone.name_servers :
    awsns
  ]
}

The important section here is records. We use a for expression to grab all the name servers that were created by the aws_route53_zone block earlier and populate them in this argument.
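As a side note, the for expression simply copies each element of the name server list unchanged, so an equivalent (and shorter) form would be to assign the list directly; a sketch of what that argument could look like:

```terraform
  # Equivalent shorthand: name_servers is already a list of strings,
  # so it can be assigned to records directly.
  records = aws_route53_zone.aws_sub_zone.name_servers
```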

We now need to create a variable block in the variables file for the aws_region argument we used in the provider block.
The code block looks like this.


variable "aws_region" {
  description = "The region to create resources."
  default     = "eu-west-2"
}

When you need to use this zone in AWS to create records for other resources, you will need the zone ID. To provide this, we create output blocks in the outputs file.
Add the blocks below to that file.

output "aws_sub_zone_id" {
  value = aws_route53_zone.aws_sub_zone.zone_id
}

output "aws_sub_zone_nameservers" {
  value = aws_route53_zone.aws_sub_zone.name_servers
}

This creates two outputs: the zone_id, to be referenced in other deployments, and the list of name servers assigned to the delegated domain.
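To show how these outputs could be consumed, here is a hedged sketch of reading them from a separate Terraform configuration via the terraform_remote_state data source. The organization and workspace names match the ones used in this post, while the record name and IP address are purely illustrative:

```terraform
# Read the outputs of the dns-multicloud workspace from Terraform Cloud.
data "terraform_remote_state" "dns" {
  backend = "remote"
  config = {
    organization = "dns-multicloud-org"
    workspaces = {
      name = "dns-multicloud"
    }
  }
}

# Create a record in the delegated AWS zone using the exported zone ID.
resource "aws_route53_record" "app" {
  zone_id = data.terraform_remote_state.dns.outputs.aws_sub_zone_id
  name    = "app"             # hypothetical record name
  type    = "A"
  ttl     = "300"
  records = ["203.0.113.10"]  # hypothetical address
}
```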

This completes the configuration needed to create the resources for the plan.
To apply it to our account, we need to add AWS authentication credentials to our Terraform Cloud workspace, along with the aws_region variable.
Below are the variables we need to create. Please note that the environment variables need to be marked as sensitive when you create them, so they are encrypted when stored.

AWS Variables

aws_region: eu-west-2 (Region to deploy the delegated domain).

AWS Environment Variables

Mark these as sensitive so that they are hidden. These are your AWS access keys (typically AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY), which you can get from your AWS account.

The variable screen will look like this when you are done.

We can now run our terraform plan to see what will be created.

=> terraform plan
 Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
 will stop streaming the logs, but will not stop the plan running remotely.
 Preparing the remote plan…
 To view this run in a browser, visit:
 Waiting for the plan to start…
 Terraform v0.12.9
 Configuring remote state backend…
 Initializing Terraform configuration…
 2019/10/07 14:46:51 [DEBUG] Using modified User-Agent: Terraform/0.12.9 TFC/3bcf15d045
 Refreshing Terraform state in-memory prior to plan…
 The refreshed state will be used to calculate this plan, but will not be
 persisted to local or remote state storage.
 data.aws_route53_zone.main: Refreshing state…
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
 Terraform will perform the following actions:
 # aws_route53_record.aws_sub_zone_ns will be created
 resource "aws_route53_record" "aws_sub_zone_ns" {
 allow_overwrite = (known after apply)
 fqdn            = (known after apply)
 id              = (known after apply)
 name            = ""
 records         = (known after apply)
 ttl             = 30
 type            = "NS"
 zone_id         = "Z2VGUC188F45PC"
 # aws_route53_zone.aws_sub_zone will be created
 resource "aws_route53_zone" "aws_sub_zone" {
 comment       = "Managed by Terraform, Delegated Sub Zone for AWS for dnsmc"
 force_destroy = false
 id            = (known after apply)
 name          = ""
 name_servers  = (known after apply)
 tags          = {
     "created-by" = "terraform"
     "name"       = "dnsmc"
     "owner"      = "dns-demo-team"
   }
 vpc_id        = (known after apply)
 vpc_region    = (known after apply)
 zone_id       = (known after apply)
 Plan: 2 to add, 0 to change, 0 to destroy.

Commit the changes to git.

You can now run terraform apply to create these resources in AWS.
You should see something similar to what is below.

Plan: 2 to add, 0 to change, 0 to destroy.
 Do you want to perform these actions in workspace "dns-multicloud"?
   Terraform will perform the actions described above.
   Only 'yes' will be accepted to approve.
 Enter a value: yes
 aws_route53_zone.aws_sub_zone: Creating…
 aws_route53_zone.aws_sub_zone: Still creating… [10s elapsed]
 aws_route53_zone.aws_sub_zone: Still creating… [20s elapsed]
 aws_route53_zone.aws_sub_zone: Still creating… [30s elapsed]
 aws_route53_zone.aws_sub_zone: Creation complete after 37s [id=Z2MPGT7J02JUKT]
 aws_route53_record.aws_sub_zone_ns: Creating…
 aws_route53_record.aws_sub_zone_ns: Still creating… [10s elapsed]
 aws_route53_record.aws_sub_zone_ns: Still creating… [20s elapsed]
 aws_route53_record.aws_sub_zone_ns: Still creating… [30s elapsed]
 aws_route53_record.aws_sub_zone_ns: Creation complete after 30s []
 Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [

Congratulations, the AWS zone is created.

Google Cloud Platform (GCP) Sub Zone

The process for the Google Cloud DNS zone is largely the same.

Open the general configuration file in your editor and add the code block below the AWS config block.

# Google Cloud Platform General Configuration
provider "google" {
  version = "~> 2.9"
  project = var.gcp_project
  region  = var.gcp_region
}

This block of code describes the GCP provider for Terraform to use when creating the delegated zone in GCP Cloud DNS.
It has similar arguments to the AWS one, and you can read more about it here:

Next you need to create the file where we will store the details for the GCP zone resource.
Create it in the same way as before with the touch command.


The directory structure should look like this.

=> tree
 0 directories, 7 files

Open the new file in your editor and add the code blocks below.


resource "google_dns_managed_zone" "gcp_sub_zone" {
  name        = "${var.namespace}-zone"
  dns_name    = "${var.namespace}.gcp.${var.hosted-zone}."
  project     = var.gcp_project
  description = "Managed by Terraform, Delegated Sub Zone for GCP for ${var.namespace}"

  labels = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

The code block above does much the same as the AWS one.
There is one difference, in the dns_name argument: GCP requires you to add a trailing "." to the end of the zone name when you create it.
The project argument is the name of the GCP project you are working in.

You now need to add the required variables.
Open the variables file in your editor and add the code block below.


variable "gcp_project" {
  description = "GCP project name"
}

variable "gcp_region" {
  description = "GCP region, e.g. us-east1"
  default     = "europe-west3"
}

Here you add the gcp_project and gcp_region variables needed to authenticate and apply to GCP.
Now you need to add the outputs needed for using this zone in other Terraform deployments. Add the code block below to the outputs file.

output "gcp_dns_zone_name" {
  value = google_dns_managed_zone.gcp_sub_zone.name
}

output "gcp_dns_zone_nameservers" {
  value = google_dns_managed_zone.gcp_sub_zone.name_servers
}

That is it for the resources. Now you need to configure Terraform Cloud with the variables needed for authentication and provisioning.

GCP Variables

gcp_project: dns-multicloud-demo (Name of the GCP project to deploy to).
gcp_region: europe-west3 (GCP Region for the deployment).

The environment variable below enables authentication to Google Cloud.

GCP Environment Variables (Sensitive)

GOOGLE_CREDENTIALS: The contents of the JSON key file that you can download from your GCP account. Terraform Cloud needs the credentials in a specific format.

Save your Google credentials file into the project directory (make sure it is covered by your .gitignore so it is never committed) and open it.

You need to convert the JSON into one single line. If you have access to vim you can use the steps below.

vim gcp-credentials.json

then press :

enter the following
%s;\n; ;g

Press enter.

Save the file by pressing :, then typing wq and pressing enter.

After these steps, if you open the file in your normal editor it should all be on one line. Copy the text from this file into the GOOGLE_CREDENTIALS environment variable and mark it as sensitive.

The variable screen will look like this when you are done.

You then run terraform plan and terraform apply, and the outputs should look something like this.

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [
 gcp_dns_zone_name = dnsmc-zone
 gcp_dns_zone_nameservers = [

Congratulations, the GCP zone is created.

Commit the changes to git.

Microsoft Azure (Azure) Sub Zone

The creation of this zone should be largely the same as the AWS and GCP ones.

Open the general configuration file in your editor and add the code block below the GCP config block.

# Azure General Configuration
provider "azurerm" {
  version = "~> 1.32.1"
}

This block of code describes the Azure provider for Terraform to use when creating the delegated zone on Azure DNS.
It has similar arguments to the AWS and GCP ones, and you can read more about it here:

Next you need to create the file where we will store the details for the Azure zone resource.
Create it in the same way as before with the touch command.


The directory structure should look like this.

=> tree
 0 directories, 8 files

Open the new file in your editor and add the code blocks below.

resource "azurerm_resource_group" "dns_resource_group" {
  name     = "${var.namespace}DNSrg"
  location = var.azure_location
}

resource "azurerm_dns_zone" "azure_sub_zone" {
  name                = "${var.namespace}.azure.${var.hosted-zone}"
  resource_group_name = "${azurerm_resource_group.dns_resource_group.name}"

  tags = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

The code block above does pretty much the same as the other two.
The azurerm_resource_group resource creates a resource group for the DNS zone.
The location argument determines where the resource group and zone will be deployed.
The azurerm_dns_zone resource then creates the zone in that resource group.

Next you need to add the variable to the variables file.
Open it in your editor and add the code block below.

# Azure

variable "azure_location" {
  description = "The azure location to deploy the DNS service"
  default     = "West Europe"
}

We only have one variable here, which determines the location where the zone will be deployed.

The last bit of code you need to create is the outputs.
Add the code block below to the outputs file, after the GCP ones.

output "azure_sub_zone_name" {
  value = azurerm_dns_zone.azure_sub_zone.id
}

output "azure_sub_zone_nameservers" {
  value = azurerm_dns_zone.azure_sub_zone.name_servers
}

output "azure_dns_resourcegroup" {
  value = azurerm_resource_group.dns_resource_group.name
}

As before we need to output the details that will be needed to deploy resources and create DNS records in the zone.

You now need to add the variables for the Azure deployment.

Azure Variables

azure_location: West Europe (The Azure region to deploy to).

Azure Variables (Sensitive)

These are the service principal credentials (typically the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID environment variables), which you can get from your Azure Service Principal account. Mark them as sensitive.

The variable screen will look like this when you are done.

Commit the changes to git.

The final piece of the puzzle is to make the AWS DNS servers aware of where the GCP and Azure DNS zones are hosted.
Open the file containing the Route53 resources in your editor and add the code blocks below, after the AWS blocks.

# GCP SubZone

resource "aws_route53_zone" "gcp_sub_zone" {
  name          = "${var.namespace}.gcp.${var.hosted-zone}"
  comment       = "Managed by Terraform, Delegated Sub Zone for GCP for ${var.namespace}"
  force_destroy = false

  tags = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

resource "aws_route53_record" "gcp_sub_zone" {
  zone_id = "${data.aws_route53_zone.main.zone_id}"
  name    = "${var.namespace}.gcp.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for gcpns in google_dns_managed_zone.gcp_sub_zone.name_servers :
    gcpns
  ]
}

# Azure SubZone

resource "aws_route53_zone" "azure_sub_zone" {
  name    = "${var.namespace}.azure.${var.hosted-zone}"
  comment = "Managed by Terraform, Delegated Sub Zone for Azure for ${var.namespace}"

  tags = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

resource "aws_route53_record" "azure_sub_zone_ns" {
  zone_id = "${data.aws_route53_zone.main.zone_id}"
  name    = "${var.namespace}.azure.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for azurens in azurerm_dns_zone.azure_sub_zone.name_servers :
    azurens
  ]
}

In these code blocks you create the delegated zone entries in Route53, providing the DNS name servers from each cloud zone as the record values.
Save the file and then run terraform plan and terraform apply.
You should get results similar to this.

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [
 azure_dns_resourcegroup = dnsmcDNSrg
 azure_sub_zone_name = /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/dnsmcdnsrg/providers/Microsoft.Network/dnszones/
 azure_sub_zone_nameservers = [
 gcp_dns_zone_name = dnsmc-zone
 gcp_dns_zone_nameservers = [

After this step your DNS zones will be available in each cloud provider for your services to use as needed.
If you ever need to add another cloud provider, just follow the same process.
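As an illustration of the zones being ready for use, a service deployed into GCP could now register itself in the delegated GCP zone; a minimal sketch (the record name and IP address are hypothetical):

```terraform
# Hypothetical A record for a service inside the delegated GCP zone.
resource "google_dns_record_set" "app" {
  managed_zone = google_dns_managed_zone.gcp_sub_zone.name
  name         = "app.${google_dns_managed_zone.gcp_sub_zone.dns_name}"
  type         = "A"
  ttl          = 300
  rrdatas      = ["203.0.113.10"] # hypothetical address
}
```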

The git repository for this blog post can be found here :

Setting up GOOGLE_CREDENTIALS for Terraform Cloud

The getting started guides for using Terraform with Google Cloud Platform (GCP) all suggest using code like this to provide credentials:

// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("CREDENTIALS_FILE.json")}"
  project     = "flask-app-211918"
  region      = "us-west1"
}

This works well when you are just learning Terraform. Once you start working with two or three other engineers, it becomes more of a challenge: you can keep the state file secure using a remote S3 backend or similar, but you still have the problem of a credentials file that needs to be shared. However, since the launch of Terraform Cloud at HashiConf, it is now possible to sign up for a free Terraform Cloud account and use it as a remote backend for your plans.

This secures your state file with the encryption provided as part of the service.

Your GCP credentials are, however, still stored locally on your laptop and could accidentally be committed to a git repository.

The way to solve this is to create an environment variable in Terraform Cloud, add the content of your JSON file to the variable, and mark it as sensitive. This protects the secret, and you can add the local JSON file to your favourite password manager to keep it encrypted.
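With GOOGLE_CREDENTIALS set as a sensitive environment variable in the workspace, the provider block from the getting started guides can drop the credentials argument entirely, since the Google provider reads that environment variable automatically; a sketch (the project and region values are just the earlier examples):

```terraform
// Configure the Google Cloud provider; credentials now come from the
// GOOGLE_CREDENTIALS environment variable set in Terraform Cloud.
provider "google" {
  project = "flask-app-211918"
  region  = "us-west1"
}
```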

To add the credentials, they need to be altered a bit before they can be stored in the variable: you need to remove all newline characters from the file.
Using your favourite editor, remove these and the JSON will shrink to a single line.

I use vim for this, with the following steps.

Open the file with vim:

vi gcp-credential.json

press :

Add the following
%s;\n; ;g

Press enter.

Press : again, type wq and press enter.
After the file is saved, add an environment variable called GOOGLE_CREDENTIALS to the Terraform Cloud workspace you will be running your plans in.
Paste the data from the file into the variable value and mark it as sensitive.
Then you are done.

All Terraform runs should now use these credentials for authenticating to GCP.

Meteor User Seed

I have been trying to build a site with Meteor and have slowly started getting stuff working.

One of the ways to populate your development data is database seeding.

I wanted to link a user account with a document in one of my collections so I worked this out from the documentation.

You add users to the users collection with Accounts.createUser, which returns the userId for the new user. You capture that in a variable and use it to insert the userId into your second collection.

if (Meteor.users.find().count() === 0) {
  seed1UserId = Accounts.createUser({
    username: 'lance',
    email: '',
    password: '123456'
  });
  seed2UserId = Accounts.createUser({
    username: 'david',
    email: '',
    password: '123456'
  });
  seed3UserId = Accounts.createUser({
    username: 'glenn',
    email: '',
    password: '123456'
  });
  seed4UserId = Accounts.createUser({
    username: 'martin',
    email: '',
    password: '123456'
  });
}

if (MasterList.find().count() === 0) {
  MasterList.insert({
    firstname: "Lance",
    lastname: "James",
    user_id: seed1UserId
  });
  MasterList.insert({
    firstname: "David",
    lastname: "Cope",
    user_id: seed2UserId
  });
  MasterList.insert({
    firstname: "Glenn",
    lastname: "Manner",
    user_id: seed3UserId
  });
  MasterList.insert({
    firstname: "Martin",
    lastname: "Drone",
    user_id: seed4UserId
  });
}

Great Script to tidy Up our Photos

I was looking for a way to tidy up our photos on the NAS at home, and had tried a number of things that just did not fit the bill.

They were either just too difficult or completely wrong.

I then stumbled upon this blog post.

What a gem. If you can install Ruby on a machine, this is the way to sort your photos.

The post is from 2009, so I had to update some of the gems it uses as well as change some of the code, but it was not much work.

Thanks to Falesafe for making it available. It had another bonus: I found out that we have 76,000 photos.

I feel some culling is needed.


I had to work with the script a bit, as the EXIF attribute it was using (date_time) was causing my photos to be sorted incorrectly.
I have updated the script to use the date_time_original attribute, and it now sorts my photos properly. Comments on the original post are closed, so I will upload the adjusted script here if you want to use it.

# == Synopsis
# This script examines a source directory for photos and movie files and moves them to
# a destination directory.  The destination directory will contain a date-hierarchy of folders.
# == Usage
# ruby photo_organizer.rb [ -h | --help ] source_dir destination_dir
# == Author
# Doug Fales, Falesafe Consulting, Inc.
# == Change Log
# LANCE HAIG = changed the EXIF attribute used to determine photo date taken to .date_time_original
# == Copyright
# Copyright (c) 2009 Doug Fales.
# Licensed under the same terms as Ruby.
require 'rubygems'
require 'exifr'
require 'find'
require 'logger'
require 'optparse'
require 'pathname3'
require 'digest/sha3'

STDOUT.sync = true

#$log = Logger.new("photo_organizer.log", 3, 20*1024*1024)  # Log files up to 20MB, keep at least three around
#$log.info("Photo organizer started...")

def log
	@log ||= Logger.new("photo_organizer.log", 3, 20*1024*1024)  # Log files up to 20MB, keep at least three around
end
log.info("Photo organizer started...")

def usage()
	puts "Usage: ruby photo_organizer.rb [ -h | --help ] source_dir destination_dir"
end

Find.find(source_dir) do |f|
	case File.ftype(f)
	when "file"
		begin
			time = EXIFR::JPEG.new(f).date_time_original
		rescue => e
			if(f =~ /.DS_Store/)
				log.info("Skipping .DS_Store")
			elsif (e.message =~ /malformed JPEG/)
				log.info("Malformed JPEG: #{f}")
			end
		end

		if(time.nil?)
			log.info("WARNING: No EXIF time for: #{f}.  Will skip it.")
			next
		end

		was_moved = move_image(f, time)
		increment_counter if was_moved
	when "directory"
		log.info("Processing directory: #{f}")
	else
		log.info("Non-dir, non-file: #{f}")
	end
end

puts "\nFinished."

Quest NDS Migrator LogFile Parser

I am currently helping a customer migrate from Novell to Microsoft, and they are using the Quest migrator product to move their data to new DFS servers.
They have a large amount of data stored on a number of volumes; the sheer number of volumes and amount of data required them to deploy a large number of copy engine servers.
The copy engine does not use a central logging facility; it stores its log files in a folder alongside the copy engine.
This unfortunately has a side effect: there are now quite a few log files, and some are over 1.5GB in size.
Trying to load these files into a text editor has proven impossible and unworkable, so another way was needed.

I decided that the best way to achieve this was to use a script that would parse the log files and extract the errors from the files into another file that would be smaller and easier to work with.

I decided to use Powershell as the scripting language as it would run on the new infrastructure and could be run on a copy engine server with enough disk space.

I undertook quite a bit of research and trial and error but eventually I have a working script.

This script is not signed, so you will either need to sign it or relax the execution policy with Set-ExecutionPolicy on the system you are going to use it on.

The script uses two files the main script file and a csv file with the volume names and copy engine server names.

Below you will find a copy of both. I have also created a git repository on GitHub if you would like to help make it better.

Original PowerShell Script

[codebox 1]

Original CSV File

[codebox 2]

Building the new Bongo Admin UI

A while ago Alex (so_solid_moo on the IRC channel) created a PHP binding for the Bongo API. He also created the start of the new UI that we are working towards.

We started with the admin UI for now, as we have already created a user interface with the Roundcube project, which Alex also integrated with.

I started porting the current Dragonfly assets into the project, and I tried to stick to the old design style as much as possible, as I really loved its look and feel.

After a bout a week I was done and submitted it to the git repo of the new project.

Although I was glad that we had stared the project and that I had done as well as I could on the pretty bits I was not quite happy with the quality of the work.

I was going through my git repo’s this weekend and found the twitter git repo where they have open sourced all their CSS (Cascading Style Sheets) this inspired me to see if I could use this as it was MUCH better quality CSS that what I could come up with.

So I started work on the migration as an experiment and from the word go it was so much easier. Their default styles just make sense and to alter or add my customisation took only a very few lines of CSS code.

I was extremely grateful for this, as it will enable us to improve our UI as we go along. You can find the new web UI in Alex's GitHub project here.

Thanks, Twitter!

My interesting way to end a week.

This is an account of Thursday the 29th of September 2011, when my cynical view that Londoners only think of themselves and don't want to get involved with other people's troubles was blown completely out of the water, thanks to some really amazing people whose names I don't have but whom I would really love to thank from the depths of my heart for everything they did for me.

The story begins as I am making my way from Hertfordshire down to Camden on my commute to get to the office.

At about 08:22 in the morning I was driving down Hawley Road on my motorcycle and stopped for a red traffic light at the junction with Jeffrey's and Camden Street. After a short while the light turned orange and then green for us. As this is normally when a load of cyclists jump the red light on the Kentish Town Road part of this junction, I made sure to check that none had done so and then started to travel across the junction. Once I had reached about ¾ of the way across, I felt the most unbelievable pain on my left side, felt myself hit the road surface on my right-hand side and heard myself screaming. (And no, it was not like a girl, but close though.)

The pain was indescribable; all I could see was my bike over my left shoulder, and I could smell the petrol that was leaking out of the tank around me. I looked up to check whether any cars were coming and unable to stop, but thankfully none were.

(This is where people who I have never known stepped into my life to my eternal thanks)

Almost immediately a gentleman in a dark suit was leaning over me, looking through my visor and asking if I was OK. All I could feel at the time was the screaming pain in my left foot, and I answered yes. I looked over my shoulder and saw another gentleman talking to a young lad in a grey helmet; all I could hear was "did you not see the red light?"

Out of the corner of my eye I saw another gentleman in a black helmet and a white shirt (I think) take off his helmet and say "Are you ok mate?", "Do you know your name?" I tried to answer him as best I could. I then realised that I had not hurt my head, hands or arms, so I decided to take off my helmet so I could see and hear better, as I use earplugs to protect my ears from the motorway sounds.

After taking off my helmet and taking out my earplugs, the intense loud sounds flooded into my brain and I realised that my helmet had stopped me noticing that so many more people had stopped on their way to work to help me.

I looked behind me and could see the young chap who had ridden into me sitting against the railings on the pavement, clearly in a great deal of shock, and I became worried for him as he was really pale. A bit further over I looked at what was left of his Vespa and thought: bloody hell, how am I still conscious while he's sitting there?

A wonderful vision in dayglo yellow cyclist's gear was a lady who promptly informed me that she had called the ambulance and the police and that they would soon be here.

My attention was grabbed by a tapping on my right shoulder. Another man in a motorcycle helmet leaned in and said, "I was right behind you and saw it all; here is my business card, I will be a witness for you", and with that he was off, climbed on his bike and rode off. I did not even have time to say thanks.

All this time the gentleman in the dark suit and the one in the white shirt kept talking to me, asking if I still felt OK. He asked me if I would like to get out of the road, to which I agreed, but then I realised that at 118 kg and over 6 feet tall I would make things difficult for them, so I crawled towards the pavement.

I remained calm, which I was pleasantly surprised about, and this, I think, helped everyone around me keep focussed on what they needed to do to help me.

For some reason I was worried about my laptop, which was in the storage on the back of my bike, so I asked the gentleman in the white shirt if he would mind getting it out for me. He duly did, and I can only thank him for that. I was after my mobile phone so I could call my wife, as I did not want anyone calling her first and scaring her. The gentleman in the white shirt said, "Just use mine, I don't mind."

It was about three minutes later that the first ambulance arrived on the scene, and two lady paramedics climbed out and started to do their work on me.

After a short while I was in the ambulance, and through the open door I could see the man in the suit and the one in the white shirt smile at each other. Then, as if to say "well, no one else will do this", they shrugged their shoulders, and these wonderful people physically righted my motorbike and wheeled it onto the pavement.

The ambulance staff (I wish I had taken their names) then closed the door and started going through their procedures: blood pressure, finger pricks, blood oxygen levels and so on, and they did this with a smile on their faces and in their voices. To anyone who ever says anything bad about the NHS ambulance staff: I think you are completely mistaken.

We then heard the police siren, and a Metropolitan Police officer, PC Barker, knocked on the door and entered. Again, these guys were the nicest people to talk to and a credit to their profession.

After a few more questions and answers we were free to go, and I was taken to the Royal Free Hampstead hospital, where I was put in the Minors 4 cubicle and some really great medical staff came to my aid and scanned, prodded and X-rayed me while some trainee doctors looked on. I am glad I could add one more case to their education: an RTC between two motorcycles.

The orthopaedic staff determined that I had a fractured ankle and would not need screws (whew); a cast would be all that was needed to mend the broken bones.

My wonderful wife then walked into the cubicle, and I relaxed quite a bit as I knew that my day could only get better from then on. After getting a back-slab plaster cast, with strict instructions not to walk on my leg and a booked appointment for the fracture clinic the next Thursday, we dutifully made our way out of the hospital. I had not used crutches in a very long time, so it was slow going.

The wonderful people at the company I work for helped arrange transport from the hospital to home, where I was happy to sit on the couch with my foot up. My motor-mechanic friend woke up at 4am on Friday and went to pick up my motorbike; I can't thank Duncan enough.

So to all you wonderful people

  • Gentleman in the dark suit
  • Gentleman in the white shirt
  • Lady cyclist dressed in dayglo yellow
  • Gentleman who gave me his business card
  • The people who helped roll my motorbike to the pavement
  • The two ladies who drove the ambulance
  • PC Barker and his partner
  • The medical staff at the Royal Free Hampstead hospital
  • Duncan from Bikers Realm

I wish I had had the forethought to get all your names so I could thank you in person. All I can do, unfortunately, is thank you from the bottom of my heart; on behalf of my mother, father, wife and kids, thank you for helping to get me safely home.

How to copy custom attributes when migrating VMware vCenter to a new database

I recently had to move hosts and guests to a new vCenter server, as the old server had become corrupt and full of issues.
The current vCenter has a few custom attributes and notes that would not be transferred as part of the move, so I wanted to use PowerCLI to read the attributes out and put them back.

To export the attributes I used the script below.
You will need to add as many Key/Value pairs as you have custom attributes.

# Load VMware module
Add-PSSnapin VMware.VimAutomation.Core

Connect-VIServer -User 'VMUSER' -Password 'USerPasswd221' -Server 'vcenter1'

$vmlist = Get-VM
$Report = @()
foreach ($vm in $vmlist) {
    $row = "" | Select-Object Name, Notes, Key, Value, Key1, Value1, Key2, Value2, Key3, Value3
    $row.Name = $vm.Name
    $row.Notes = $vm | Select-Object -ExpandProperty Notes
    $customattribs = $vm | Select-Object -ExpandProperty CustomFields
    $row.Key = $customattribs[0].Key
    $row.Value = $customattribs[0].Value
    $row.Key1 = $customattribs[1].Key
    $row.Value1 = $customattribs[1].Value
    $row.Key2 = $customattribs[2].Key
    $row.Value2 = $customattribs[2].Value
    $row.Key3 = $customattribs[3].Key
    $row.Value3 = $customattribs[3].Value
    $Report += $row
}

$Report | Export-Csv "c:\vms-with-notes-and-attributes.csv" -NoTypeInformation

It should produce a CSV file that looks something like this
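As an illustration of the expected layout (the VM names, notes and attribute keys below are made-up examples; your columns will match the Key/Value pairs in your script):

```text
"Name","Notes","Key","Value","Key1","Value1","Key2","Value2","Key3","Value3"
"vm-app01","App server","Owner","team-a","CostCentre","1234","Backup","daily","Env","prod"
"vm-db01","Database","Owner","team-b","CostCentre","5678","Backup","hourly","Env","prod"
```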


Once you have exported the file you need to import it into the new vCenter,
again adding Key/Value pairs as needed.

# Load VMware module
Add-PSSnapin VMware.VimAutomation.Core

Connect-VIServer -User 'VMUSER' -Password 'USerPasswd221' -Server 'vcenter2'

$NewAttribs = Import-Csv "C:\vms-with-notes-and-attributes.csv"

foreach ($line in $NewAttribs) {
    Set-VM -VM $line.Name -Description $line.Notes -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key -Value $line.Value -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key1 -Value $line.Value1 -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key2 -Value $line.Value2 -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key3 -Value $line.Value3 -Confirm:$false
}


Hope this helps someone.

Creating a Two Node MySQL Cluster On Ubuntu With DRBD Part 2

This blog post is a follow-on from one I wrote ages ago and have eventually got round to finishing off.

In this part of the process we will create the disks and set up the DRBD devices.
First we need to connect to the virtual machines from a terminal session, as it makes life much easier and quicker to work remotely.
You will need to make sure that your servers have static IP addresses.
For this document I will be using the following IP addresses for my servers.

drbdnode1 =
drbdnode2 =
drbdmstr = (clustered IP address)
Subnet Mask =
Gateway =
DNS Servers = and

So, to set a fixed IP address, you need to do the following.
Connect to the console of drbdnode1 and log in.
Now we need to edit the file that contains the network card's IP configuration.
Enter the following command and press Return:

sudo nano /etc/network/interfaces

enter the password for the user you are logged in as
You should see the following screen

Now use the arrow keys on your keyboard to move the cursor to the section that starts with iface eth0.
Press Ctrl+K to remove the line, then add the lines below with your IP address details.

auto eth0
iface eth0 inet static

It should end up looking like this

Now press Ctrl X to exit
Then Y
Then press Enter to save
Now type in the following

sudo /etc/init.d/networking restart

Do the same for drbdnode2
Now that we have given each server a static IP address, we can connect via SSH to do the admin remotely.
To do this you need a machine with an SSH client installed; most Linux and OS X machines have one already, and if you are on Windows look for PuTTY and use that.
So open a terminal on your machine and type in the following:

ssh cluster@ and press enter.

You need to substitute the username you created when setting up your server for the word cluster in the above command.
You will be prompted to accept a key for the server. Type yes and press enter.
Now enter the password for the user and press enter.
You should see a screen like this

Connect to both cluster nodes now to make sure you won't be stopped later by a connection problem.
You are now ready to work on your cluster.
First we need to create host records for the two servers
type the following into your terminal session

sudo nano /etc/hosts

and add a record for each server; it should look something like this.
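A sketch of what the finished hosts file might contain, using placeholder addresses (use the static IPs you configured earlier):

```text
127.0.0.1     localhost
192.168.0.11  drbdnode1   # placeholder address
192.168.0.12  drbdnode2   # placeholder address
```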

Save the file as before, then do the same on drbdnode2 but swap the names and IP addresses.
Now we need to install a few packages that will allow us to use drbd
in the terminal on drbdnode1 type

sudo apt-get install heartbeat drbd8-utils

and press enter you should have a screen like this

Press Y and then Enter to install the software. Do this on drbdnode2 as well
Now we need to create the partitions that we will use for the drbd cluster
to find out which disk we will be using run the command

sudo fdisk -l

to see which disks have not been partitioned your screen should look like this

As you can see at the end, the disk /dev/sdb does not have a partition table;
look for the line "Disk /dev/sdb doesn't contain a valid partition table".
to create a partition table we need to run the following commands

sudo fdisk /dev/sdb
n (to create a new partition)
p (to select a primary partition)
1 (for the first partition)
Enter (to select the start cylinder)
and enter (to select the end cylinder)
w (to write the changes)

the screen should look like this

Do this on both servers
Once this is complete we need to edit the DRBD configuration files to set up our clustered filesystem.
In your terminal on drbdnode1 enter the command

sudo nano /etc/drbd.d/clusterdisk.res

Enter the password for your user and edit the file
Copy and paste the following code into your terminal screen and then change the details to match your server names and IP addresses.

resource clusterdisk { # name of the resource
    protocol C;

    on drbdnode1 { # first server hostname
        device /dev/drbd0;   # name of the DRBD device
        disk /dev/sdb1;      # partition to use, which was created using fdisk
        address ;            # IP address and port number used by DRBD
        meta-disk internal;  # where to store the metadata
    }

    on drbdnode2 { # second server hostname
        device /dev/drbd0;
        disk /dev/sdb1;
        address ;            # IP address and port number used by DRBD
        meta-disk internal;
    }

    disk {
        on-io-error detach;
    }

    net {
        max-buffers 2048;
        ko-count 4;
    }

    syncer {
        rate 10M;
        al-extents 257;
    }

    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120; # 2 minutes
    }
}
The screen should look similar to this

ctrl x (to exit)
y (to save the changed file)
enter (to overwrite the file)
Now we need to create the DRBD metadata for the resource.
Enter the following command into your terminal session on both nodes:

sudo drbdadm create-md clusterdisk

After running this command you should see a screen similar to this

On drbdnode1 enter the following command

sudo drbdadm -- --overwrite-data-of-peer primary all

this will activate it as the primary drbd node
to see if this has worked you can run the following command

sudo drbdadm status

the result should look like this on drbdnode1

and like this on drbdnode2

you will see that drbdnode1 has a status of
and drbdnode2 has a status of
this tells you what role they are playing in the cluster
at the end of this line you will see a status resynced_percent="3.8"
this tells you how much the drbd disk has synced.
Once the sync is complete connect to drbdnode1 and run the following command

sudo mkfs.ext4 /dev/drbd0

this will create an ext4 filesystem on the DRBD device, which will sync across to drbdnode2.

Configuring heartbeat resource

Now we need to set up the MySQL resource in the heartbeat configuration.
First we need to create a file called authkeys in the /etc/ha.d directory. You can do this with the following command:

nano /etc/ha.d/authkeys

in this file you need to add the following text.

auth 3
3 md5 [SECRETWORD]

Replace [SECRETWORD] with a key you have generated.
This file needs to be on both servers in the /etc/ha.d directory.
After you have created the file you need to change the permissions on the file to make it more secure. This can be done with the following command

chmod 600 /etc/ha.d/authkeys

do this on both servers
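Creating the key file and locking it down can also be done in one short sequence. A sketch that generates a random md5 key (the file is written to the current directory here for illustration; on the nodes it must be /etc/ha.d/authkeys, typically created as root):

```shell
# Generate a 32-hex-char random key and write the authkeys file in the
# layout heartbeat expects: "auth <id>" then "<id> <method> <key>".
KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'auth 3\n3 md5 %s\n' "$KEY" > authkeys
# Restrict the file so only its owner can read it, as heartbeat requires.
chmod 600 authkeys
cat authkeys
```

The key id 3 matches the auth 3 line above; any id works as long as the two lines agree and both nodes have the same file.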
Now we need to create the /etc/ha.d/ file to store the cluster config.
You can do this with the following command

nano /etc/ha.d/

copy and paste this code into the file

logfile /var/log/ha-log

keepalive 2

deadtime 30

udpport 695

bcast eth0
auto_failback off
stonith_host drbdnode1 meatware drbdnode2
stonith_host drbdnode2 meatware drbdnode1
node drbdnode1 drbdnode2

do the same for both servers
next is the haresources file. Create the file here

nano /etc/ha.d/haresources

paste this code in there

drbdnode1 IPaddr:: /24/eth0 drbddisk::clusterdisk Filesystem::/dev/drbd0::/var/lib/mysql::ext4 mysql
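Reading that haresources line field by field (a sketch; the angle-bracket values stand in for the specifics in your own line):

```text
<node-name>                   # preferred node for the resource group
IPaddr::<cluster-ip>/24/eth0  # virtual IP heartbeat brings up on eth0
drbddisk::clusterdisk         # promote the clusterdisk DRBD resource to primary
Filesystem::/dev/drbd0::/var/lib/mysql::ext4  # mount the DRBD device on /var/lib/mysql
mysql                         # finally start the mysql service script
```

Heartbeat starts these resources left to right on the named node and stops them in reverse order on failover.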

Your cluster is now ready to roll.
All you now need to do is test the cluster, which I will tell you how to do in a future blog post.
Let me know how you get on.