Setting up GOOGLE_CREDENTIALS for Terraform Cloud

The getting started guides for using Terraform with Google Cloud Platform (GCP), such as
https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform

all suggest using code like this to provide credentials:

// Configure the Google Cloud provider
provider "google" {
 credentials = "${file("CREDENTIALS_FILE.json")}"
 project     = "flask-app-211918"
 region      = "us-west1"
}

This works well when you are just learning Terraform. Once you start working with two or three other engineers it becomes more of a challenge: you need to keep the state file secure using a remote backend such as S3, but you still have the problem of the credentials file that needs to be shared. However, since the launch of Terraform Cloud at HashiConf it is now possible to sign up for a free Terraform Cloud account and use it as a remote backend for your plans.

This secures your state file with the encryption provided as part of the service.
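
To point your configuration at Terraform Cloud you add a remote backend block. A minimal sketch is below; the organisation and workspace names are placeholders you would replace with your own.

terraform {
  backend "remote" {
    organization = "your-org-name"

    workspaces {
      name = "gcp-infrastructure"
    }
  }
}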

Your GCP credentials, however, are still stored locally on your laptop and could still accidentally be committed to a git repository.

The way to solve this is to create an environment variable in Terraform Cloud, add the content of your JSON credentials file to the variable, and mark it as sensitive. This protects the secret, and you can add the local JSON file to your favourite password manager to keep it encrypted.

To store the credentials in the variable they need to be altered a bit: you need to remove all newline characters from the file.
Using your favourite editor, remove these and the JSON will shrink to a single line.

I use vim for this, with the following steps.

Open the file with vim:

vi gcp-credential.json

Press : to enter command mode, then type the following substitution:

%s;\n; ;g

Press Enter.

Press : again, type wq and press Enter to save and quit.
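
If you prefer doing this on the command line, either of these should produce the same single-line file (the second assumes jq is installed):

# strip the newlines with tr
tr -d '\n' < gcp-credential.json > gcp-credential-oneline.json

# or let jq emit compact, single-line JSON
jq -c . gcp-credential.json > gcp-credential-oneline.json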

After the file is saved, add an environment variable called
GOOGLE_CREDENTIALS to the Terraform Cloud workspace you will be running your plans in.
Copy the data from the file, paste it into the variable value, and mark it as sensitive.
Then you are done.

All Terraform runs should now use these credentials to authenticate to GCP.
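
With GOOGLE_CREDENTIALS set on the workspace, the provider block itself no longer needs a credentials argument at all; the project and region below are just the values from the earlier example.

provider "google" {
  project = "flask-app-211918"
  region  = "us-west1"
}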

Meteor User Seed

I have been trying to build a site with Meteor and have slowly started getting stuff working.

One of the ways to populate your development data has been with database seeding.

I wanted to link a user account with a document in one of my collections so I worked this out from the documentation.

You add users to the users collection; Accounts.createUser returns the userId of the new user, which you capture in a variable and then use to insert the userId into your second collection.

// Capture the generated user ids so the documents seeded below can reference them
let seed1UserId, seed2UserId, seed3UserId, seed4UserId;

if (Meteor.users.find().count() === 0) {
  seed1UserId = Accounts.createUser({
    username: 'lance',
    email: 'l@oo.com',
    password: '123456'
  });
  seed2UserId = Accounts.createUser({
    username: 'david',
    email: 'd@oo.com',
    password: '123456'
  });
  seed3UserId = Accounts.createUser({
    username: 'glenn',
    email: 'g@oo.com',
    password: '123456'
  });
  seed4UserId = Accounts.createUser({
    username: 'martin',
    email: 'm@oo.com',
    password: '123456'
  });
}

if (MasterList.find().count() === 0) {
 
  MasterList.insert({
    firstname: "Lance",
    lastname: "James",
    user_id: seed1UserId
  });

  MasterList.insert({
    firstname: "David",
    lastname: "Cope"
    user_id: seed2UserId
  });

  MasterList.insert({
    firstname: "Glenn",
    lastname: "Manner",
    user_id: seed3UserId
  });
  
  MasterList.insert({
    firstname: "Martin",
    lastname: "Drone",
    user_id: seed4UserId
  });
}
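
Once the seed data is in, linking back the other way is a simple query; a minimal sketch, assuming it runs somewhere Meteor.userId() is available (a method or publication):

// Look up the MasterList document that belongs to the logged-in user
const entry = MasterList.findOne({ user_id: Meteor.userId() });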

Great Script to Tidy Up Our Photos

I was looking for a way to tidy up our photos on the NAS at home and have tried a number of things that just did not fit the bill.

They were either just too difficult or completely wrong.

I then stumbled upon this blog post http://falesafe.wordpress.com/2009/07/07/photo-management/

What a gem. If you can install Ruby on a machine, this is the tool to use to sort your photos.

The post is from 2009, so I had to update some of the gems it uses as well as change some of the code, but it was not much work.

Thanks to Falesafe for making it available. It had another bonus in that I found out we have 76,000 photos.

I feel some culling is needed.

EDIT:

I had to work with the script a bit, as the EXIF attribute it was using (date_time) was causing my photos to be sorted incorrectly.
So I have updated the script to use the date_time_original attribute, and this has now sorted my photos properly for me. The original post has its comments closed, so I will upload the adjusted script here if you want to use it.
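
For illustration, the change is just which EXIF field gets read; with the exifr gem it looks something like this (the file path is made up, and newer exifr versions may need require 'exifr/jpeg' instead):

require 'exifr'

photo = EXIFR::JPEG.new('/photos/IMG_0001.jpg')
photo.date_time           # EXIF DateTime - often the modification time, which sorted badly
photo.date_time_original  # EXIF DateTimeOriginal - when the photo was actually taken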

#!/usr/bin/ruby
# == Synopsis
#
# This script examines a source directory for photos and movie files and moves them to
# a destination directory.  The destination directory will contain a date-hierarchy of folders.
#
# == Usage
#
# ruby photo_organizer.rb [ -h | --help ] source_dir destination_dir
#
# == Author
# Doug Fales, Falesafe Consulting, Inc.
# 
# == Change Log
# LANCE HAIG = changed the EXIF attribute used to determine photo date taken to .date_time_original
#
# == Copyright
# Copyright (c) 2009 Doug Fales.
# Licensed under the same terms as Ruby.
require 'rubygems'
require 'exifr'
require 'find'
require 'logger'
require 'optparse'
require 'pathname3'
require 'digest/sha3'

STDOUT.sync = true


#$log = Logger.new("photo_organizer.log", 3, 20*1024*1024)  # Log files up to 20MB, keep at least three around
#$log.info("Photo organizer started...")

def log
	@log ||= Logger.new("photo_organizer.log", 3, 20*1024*1024)  # Log files up to 20MB, keep at least three around
	@log
end

log.info("Photo organizer started...")


def usage()
	puts "Usage: ruby photo_organizer.rb [ -h | --help ] source_dir destination_dir"
	exit
end

usage() if ARGV.include?("-h") || ARGV.include?("--help") || ARGV.size != 2
source_dir, destination_dir = ARGV

# Walk the source tree looking for photos; move_image and increment_counter
# are helper methods from the full script.
Find.find(source_dir) do |f|
	case
	when File.file?(f)
		time = nil
		begin
			time = EXIFR::JPEG.new(f).date_time_original
		rescue => e
			if(f =~ /.DS_Store/)
				log.info("Skipping .DS_Store")
				next
			elsif (e.message =~ /malformed JPEG/)
				log.info("Malformed JPEG: #{f}")
				next
			end
		end

		if(time.nil?) 
			log.info("WARNING: No EXIF time for: #{f}.  Will skip it.")
			next
		end

		was_moved = move_image(f, time)
		increment_counter if was_moved

	when File.directory?(f)
		log.info("Processing directory: #{f}")
	else "?"
		log.info("Non-dir, non-file: #{f}")
	end
end

puts "\nFinished."

Quest NDS Migrator LogFile Parser

I am currently helping a customer migrate from Novell to Microsoft and they are using the Quest migrator product to move their data to new DFS servers.
They currently have a large amount of data stored across a number of volumes. The sheer number of volumes and the amount of data have required them to deploy a large number of copy engine servers.
The copy engine does not use a central logging facility; it stores its log files in a folder alongside the copy engine.
This unfortunately has a side effect: there are now quite a few log files, and some are reaching over 1.5 GB in size.
Trying to load these files into a text editor has proven unworkable, so another way was needed.

I decided that the best way to achieve this was to use a script that would parse the log files and extract the errors from the files into another file that would be smaller and easier to work with.

I decided to use PowerShell as the scripting language, as it would run on the new infrastructure and could be run on a copy engine server with enough disk space.

It took quite a bit of research and trial and error, but eventually I have a working script.

This script is not signed, so you will either need to sign it or relax the execution policy with Set-ExecutionPolicy on the system you are going to run it on.
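
For example, to allow unsigned local scripts just for the current PowerShell session (pick whichever policy and scope fits your environment):

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass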

The script uses two files: the main script file and a CSV file with the volume names and copy engine server names.

Below you will find a copy of both. I have also created a git repository on GitHub if you would like to help make it better.
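
For illustration, a stripped-down sketch of the approach is below; the CSV column names (ServerName, VolumeName), the log file location, and the simple 'error' match are assumptions rather than what the full script does:

# Read the list of copy engine servers and volumes from the CSV file
$volumes = Import-Csv -Path .\volumes.csv

foreach ($entry in $volumes) {
    # Assumed layout: logs sit in a Logs share next to each copy engine
    $logPath = "\\$($entry.ServerName)\Logs\$($entry.VolumeName)*.log"

    # Select-String streams the file, so a 1.5 GB log never has to fit in memory
    Select-String -Path $logPath -Pattern 'error' -SimpleMatch |
        ForEach-Object { $_.Line } |
        Out-File -FilePath ".\$($entry.VolumeName)-errors.log" -Encoding utf8
}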

Original PowerShell Script

[codebox 1]

Original CSV File

[codebox 2]

Building the new Bongo Admin UI

A while ago Alex (so_solid_moo on the IRC channel) created a PHP binding for the Bongo API. He also created the start of the new UI that we are working towards.

We started with the admin UI for now, as we already have a user interface built with the Roundcube project, which Alex also integrated with.

I started porting the current Dragonfly assets into the project and tried to stick to the old design style as much as possible, as I really loved its look and feel.

After about a week I was done and submitted it to the git repo of the new project.

Although I was glad that we had started the project, and that I had done as well as I could on the pretty bits, I was not quite happy with the quality of the work.

I was going through my git repos this weekend and found the Twitter repo where they have open sourced all their CSS (Cascading Style Sheets), http://twitter.github.com/bootstrap/. This inspired me to see if I could use it, as it was MUCH better quality CSS than what I could come up with.

So I started work on the migration as an experiment, and from the word go it was so much easier. Their default styles just make sense, and altering or adding my customisation took only a very few lines of CSS code.

I was extremely grateful for this, as it will enable us to improve our UI as we go along. You can find the new web UI in Alex's GitHub project here.

Thanks, Twitter.

Creating a Two Node Mysql Cluster On Ubuntu With DRBD Part 2

This blog is a follow-on from a blog post I wrote ages ago that I have eventually got round to finishing off.

In this part of the process we will create the disks and set up the DRBD devices.
First we need to connect to the virtual machines from a terminal session, as it makes life much easier and quicker to work remotely.
You will need to make sure that your servers have static IP addresses.
For this document I will be using the following IP addresses for my servers.

drbdnode1 = 172.16.71.139
drbdnode2 = 172.16.71.140
drbdmstr = 172.16.71.141 (clustered IP address)
Subnet Mask = 255.255.255.0
Gateway = 172.16.71.1
DNS Servers = 8.8.8.8 and 4.4.4.4

So to set the IP address as fixed you need to do the following.
Connect to the console of drbdnode1 and log in.
Now we need to edit the file that contains the network card configuration.
Enter the following command and press Enter:

sudo nano /etc/network/interfaces

Enter the password for the user you are logged in as.
You should see the following screen.

Now use the arrow keys to move the cursor to the section that starts with iface eth0.
Press Ctrl+K to remove the line, then add the lines below with your IP address details:

auto eth0
iface eth0 inet static
address 172.16.71.139
netmask 255.255.255.0
network 172.16.71.0
broadcast 172.16.71.255
gateway 172.16.71.1

It should end up looking like this

Now press Ctrl X to exit
Then Y
Then press Enter to save
Now type in the following

sudo /etc/init.d/networking restart

Do the same for drbdnode2.
Now that we have given each server a static IP address we can connect via ssh to do the admin remotely.
To do this you need a machine with an ssh client installed; most Linux and OS X machines have one already, and if you are on Windows look for PuTTY and use that.
So open a terminal on your machine and type in the following:

ssh cluster@172.16.71.139

Substitute the username you created when setting up your server for the word cluster in the command above, then press Enter.
You will be prompted to accept a key for the server. Type yes and press Enter.
Now enter the password for the user and press Enter.
You should see a screen like this

Connect to both cluster nodes now, so that a connection problem does not stop you later down the line.
You are now ready to work on your cluster.
First we need to create host records for the two servers.
Type the following into your terminal session:

sudo nano /etc/hosts

and add a record for each server. It should look something like this:
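
Using the addresses from earlier in this post, the two records would be:

172.16.71.139   drbdnode1
172.16.71.140   drbdnode2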

Save the file as before and do the same on drbdnode2, but swap the names and IP addresses.
Now we need to install a few packages that will allow us to use DRBD.
In the terminal on drbdnode1 type

sudo apt-get install heartbeat drbd8-utils

and press Enter. You should have a screen like this

Press Y and then Enter to install the software. Do this on drbdnode2 as well.
Now we need to create the partitions that we will use for the DRBD cluster.
To find out which disk we will be using, run the command

sudo fdisk -l

to see which disks have not been partitioned. Your screen should look like this

As you can see at the end, the disk /dev/sdb does not have a partition table.
Look for the line “Disk /dev/sdb doesn’t contain a valid partition table”.
To create a partition table we need to run the following commands

sudo fdisk /dev/sdb
n (to create a new partition)
p (to select a primary partition)
1 (for the first partition)
Enter (to select the start cylinder)
and enter (to select the end cylinder)
w (to write the changes)

The screen should look like this

Do this on both servers.
Once this is complete we need to edit the DRBD configuration file to set up our clustered filesystem.
In your terminal on drbdnode1 enter the command

sudo nano /etc/drbd.d/clusterdisk.res

Enter the password for your user and edit the file.
Copy and paste the following code into your terminal, then change the details to match your server names and IP addresses.

resource clusterdisk {            # name of the resource
  protocol C;

  on drbdnode1 {                  # first server hostname
    device /dev/drbd0;            # name of the DRBD device
    disk /dev/sdb1;               # partition to use, created with fdisk
    address 172.16.71.139:7788;   # IP address and port number used by DRBD
    meta-disk internal;           # where to store the metadata
  }

  on drbdnode2 {                  # second server hostname
    device /dev/drbd0;
    disk /dev/sdb1;
    address 172.16.71.140:7788;
    meta-disk internal;
  }

  disk {
    on-io-error detach;
  }

  net {
    max-buffers 2048;
    ko-count 4;
  }

  syncer {
    rate 10M;
    al-extents 257;
  }

  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120;         # 2 minutes
  }
}

The screen should look similar to this

Ctrl+X (to exit)
Y (to save the changed file)
Enter (to overwrite the file)
Now we need to create the DRBD resource.
Enter the following command into your terminal session

sudo drbdadm create-md clusterdisk

After running this command you should see a screen similar to this
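
Depending on your drbd8-utils version, you may also need to bring the resource up on both nodes before promoting one of them; either of these should do it:

sudo drbdadm up clusterdisk
sudo service drbd start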

On drbdnode1 enter the following command

sudo drbdadm -- --overwrite-data-of-peer primary all

This will activate it as the primary DRBD node.
To see if this has worked you can run the following command

sudo drbdadm status

The result should look like this on drbdnode1

and like this on drbdnode2

You will see that drbdnode1 has a status of
cs="SyncSource"
and drbdnode2 has a status of
cs="SyncTarget"
This tells you what role each node is playing in the cluster.
At the end of the line you will see a status such as resynced_percent="3.8", which tells you how far the DRBD disk has synced.
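
If your version of the tools does not have a status sub-command, the same information is visible in /proc/drbd, for example:

watch -n1 cat /proc/drbd
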
Once the sync is complete, connect to drbdnode1 and run the following command

sudo mkfs.ext4 /dev/drbd0

This will create an ext4 filesystem on the DRBD device, which will sync across to drbdnode2.
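
If you want to check the new filesystem by hand before heartbeat takes over, you can mount it temporarily on the primary node (remember to unmount it again afterwards):

sudo mkdir -p /var/lib/mysql
sudo mount /dev/drbd0 /var/lib/mysql
df -h /var/lib/mysql
sudo umount /var/lib/mysql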

Configuring the heartbeat resource

Now we need to set up the MySQL resource in the heartbeat configuration.
First we need to create a file called authkeys in the /etc/ha.d directory. You can do this with the following command

sudo nano /etc/ha.d/authkeys

In this file you need to add the following text.

auth 3

3 md5 [SECRETWORD]

Replace [SECRETWORD] with a key you have generated.
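
One way to generate a random key for this is:

openssl rand -hex 16
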
This file needs to be on both servers in the /etc/ha.d directory.
After you have created the file you need to change the permissions on the file to make it more secure. This can be done with the following command

sudo chmod 600 /etc/ha.d/authkeys

Do this on both servers.
Now we need to create the /etc/ha.d/ha.cf file to store the cluster config.
You can do this with the following command

sudo nano /etc/ha.d/ha.cf

Copy and paste this code into the file

logfile /var/log/ha-log
keepalive 2
deadtime 30
udpport 695
bcast eth0
auto_failback off
stonith_host drbdnode1 meatware drbdnode2
stonith_host drbdnode2 meatware drbdnode1
node drbdnode1 drbdnode2

Do the same on both servers.
Next is the haresources file. Create it with the following command

sudo nano /etc/ha.d/haresources

Paste this code in there. It is a single line: the preferred node, the clustered IP address on eth0, the DRBD disk, the filesystem mount for MySQL's data directory, and the mysql service.

drbdnode1 IPaddr::172.16.71.141/24/eth0 drbddisk::clusterdisk Filesystem::/dev/drbd0::/var/lib/mysql::ext4 mysql

Your cluster is now ready to roll.
All you need to do now is test the cluster, which I will cover in a future blog post.
Let me know how you get on.

ESX Trunk VLAN config for Storage

I was struggling with an ESXi install connected to a Cisco 6509 switch, where the management and VM LAN connectivity worked just fine, but for some reason the SAN NFS VLAN just did not want to communicate with the Nexenta on that VLAN.

After some searching and trial and error I was able to use this post http://blogs.egroup-us.com/?p=2453 to get the port-channel and the ports configured correctly. I did not use all the settings from the post, but it did remind me to check my port-channel config.
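
For anyone hitting the same thing, a trunked port-channel carrying a storage VLAN on IOS generally looks something like this; the port-channel number and VLAN IDs here are placeholders, not the values from that install:

interface Port-channel10
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100,200
 switchport mode trunk
 spanning-tree portfast trunk

The member GigabitEthernet ports then need the same switchport settings plus a channel-group 10 statement, and the storage VLAN must be in the allowed list on every uplink in the path.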

Learning Ruby and rBongo with WeBongo

This post should have been written quite a while ago, as I wanted to start documenting my efforts to learn to program using Ruby. As most of you know, I have been trying for a while to teach myself to program.

I started my efforts with a course in C# at a company in London; the course was just great and the instructor was a fantastic guy. Unfortunately life meant I could not practice at all. I worked for a company for a year that gave me no opportunity to play during the day, and having a newborn baby left my wife and me looking more like zombies than real people.

Then things changed the year before last when I joined Forward as a contractor to help them with their virtual infrastructure. This company is completely different to anywhere I have ever worked before. Normally as a contractor you are shoved in a corner and beaten with a whip so that they get the most out of you; here at Forward that is definitely not the case.

Forward has some really great, intelligent developers who are a pleasure to work with and be part of, and that is where I was going with this post.

A short while ago Fred George held one of his famous OO Bootcamp training sessions and I was lucky enough to be invited to join. True to Fred's statement, he tries to keep the course at a level where you always feel stupid, and believe me I felt REALLY stupid. The good thing about feeling stupid was that I actually learnt something. You kind of learn to program by accident (quoting Tom Hall).

This is where the Ruby, rBongo and WeBongo bits come in. Fred uses Ruby to illustrate OO programming best practice and to help you understand OO in general; a side effect of this is that you start learning Ruby syntax and the programming vocabulary needed to accomplish the tasks. Having gained some experience with Ruby during the course, I thought it best to use Ruby to continue learning to program.

I have been working on Bongo since it was founded, and the other day I was speaking to Alex, the project lead, and we realised it is almost 10 years now. I have always wanted to contribute more than just packaging the app on the OBS and being available to do testing and such. Alex wrote the PHP binding for the Bongo Store, and having seen what some of the guys at Forward can do with Ruby and jQuery, I wanted to create a binding for the store in Ruby and package it as a gem, thus allowing anyone to create the best REST interface ever.

This is forcing me to learn TCP sockets with Ruby and other nice things. I will try to document as often as I can what I am up to on this.

My efforts will be on two Ruby projects. Initially I will work on rBongo, the Ruby binding for the Bongo store; I am sure most of the developers at Forward could probably write this in a day or so, and hopefully I can convince one or two of them to help out.

My second effort will be on WeBongo, the REST web UI for the Bongo mail store. I have other ideas for the web UI once we have a working solution (using it as a sync destination for Tomboy desktop notes).

Please keep that in mind as I try to get this working.

Bongo-Project.org Needs To Be Renamed

The Bongo-Project needs to be renamed

Now, before you fall off your chair or swallow your phone while reading this, let me explain why.

I have been wanting to write this post for some time now and have been holding off as we are almost ready for our 1.0 release.

The problem I have is that even if we released a new version of the software and it was our 1.0 release, I am not sure anyone would be able to find us. I have been doing some digging over the last few months about what we actually are, and to be honest I am not convinced we are “individual” enough. We as a project get lost in the bongo mayhem that is people who play bongos as instruments.

Right from day one when we started this fork we had a problem with the name; we chose “Bongo” as an interim measure so that we had something to call ourselves while we thought of a new name. This has obviously not happened, and I would like to kick this discussion off before we are ready for 1.0, so we have time to get it all done and can release both to the outside world at the same time.

Now I know that this is not something to take on lightly, and it would mean quite a bit of work on the part of the limited number of devs we have to change the code, as well as then changing the website and our images and other assets.

I am now thinking I will spend £30.00 on a prize for the winning name. So now I need to find out what the community thinks and whether there is appetite for this.

I have created a poll on my site for some active feedback

Let me know, I really want to know.

UPDATE

This will be put on hold until further notice.