Multi-Cloud DNS Delegated Subdomains with Terraform Cloud

DNS in a multi-cloud world

As companies transition to multi-cloud deployments, creating a common way to deploy solutions has become a requirement. This is easily achieved by using Terraform to create your immutable infrastructure.

The challenge you face is that infrastructure in a cloud deployment is dynamic in nature: you cannot predict what the IP addresses of parts of your systems will be, and these IP addresses can change daily if your developers have a fast deployment cycle.

The most common way to solve this problem is to create DNS records for each piece of the infrastructure that needs to be addressable. Each cloud provider presents external IP addresses differently, and each offers its own DNS solution with differing features.

By creating and configuring your own delegated DNS zones within each provider, you give yourself the flexibility you need when building applications in these clouds.

Overview

This post explains how to use Terraform to deploy and manage a multi-cloud DNS solution delegated from a domain hosted on Route53.

Prerequisites

For this example you need an account on each of the cloud providers. If you only want to use two providers, you only need accounts on the two you would like to use.

Terraform 0.12.x CLI installed
AWS account
GCP account
Azure account
Azure Service Principal account
Terraform Cloud account
DNS domain hosted on Route53
Git repository hosted on GitHub, GitLab, or Bitbucket
Text editor (I use VS Code)

Steps

You now need to configure some variables.

General Variables

created-by: terraform (A tag you use to show the resource was created by Terraform).
owner: dns-demo-team (Another tag so that you know who the owner is).
namespace: dnsmc (The name of the delegated subzone you want to create).
hosted-zone: hashidemos.io (The hosted domain on Route53).

General Environment Variables

CONFIRM_DESTROY: 1 (This is so that you don’t accidentally delete your zones).

Basic Setup

Type in cd ~
Type in mkdir dns-multicloud
Type in cd dns-multicloud
Type in git init

Creating the working environment

Open a terminal and navigate to the directory where you want to store this code.

There are many ways people split up their Terraform code to make it easier to know where resources are.
For this example I prefer to split my Terraform code into areas of concern.

Create the files needed.
Type in touch general.tf
Type in touch variables.tf
Type in touch outputs.tf

The current repository looks like this.

 => tree
 .
 ├── LICENSE
 ├── README.md
 ├── general.tf
 ├── outputs.tf
 └── variables.tf
 0 directories, 5 files

general.tf: declares our general Terraform configuration, e.g. the remote backend configuration for Terraform Cloud and the cloud-provider-specific configuration.

outputs.tf: outputs the information needed to use these delegated zones when deploying infrastructure to our cloud providers.

variables.tf: declares the variables needed to run our code.

I also added a .gitignore for Terraform, a LICENSE, and a README.md for the repository.

Commit these files to the git history.

Adding the backend

To start working with Terraform you need to configure a remote backend for the Terraform plan.

You can open the general.tf file and add the following.

terraform {
  required_version = ">= 0.12.0"
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "dns-multicloud-org"
    workspaces {
      name = "dns-multicloud"
    }
  }
}

There are a number of fields here.
required_version (The minimum Terraform CLI version required).
backend "remote" (This tells Terraform you want to use the remote backend).
hostname (The hostname for Terraform Cloud).
organization (Your Terraform Cloud organization).
workspaces (The workspace you will be using).

You have one more step to do before you can test this configuration.
You need to authenticate against Terraform Cloud. There are multiple ways to do this. You can find the documentation here:
https://www.terraform.io/docs/commands/cli-config.html#credentials

I will be using a token that you can generate in the Tokens section of your account.

Navigate to your user settings section and generate a new Token to use.

Make sure you copy the token to a safe place, as it is only displayed once.

You need to add this token to a credentials file to use when you run your Terraform.
On a *nix based system you will use a file called .terraformrc in your home directory.

The structure of the file looks like this.

credentials "app.terraform.io" {
  token = "INSERTTOKENHERE"
}

Insert the token into this file and save it.
Now you are ready to test whether your local client can connect to Terraform Cloud.
Make sure you are in the dns-multicloud directory. You can check by typing the following command and pressing enter.

pwd
/Users/lance/dns-multicloud

Initialising The Terraform Cloud Backend

To test that you have everything configured correctly run the following command.
terraform init
You should see the following output in the console.

Initializing the backend…
 Successfully configured the backend "remote"! Terraform will automatically
 use this backend unless the backend configuration changes.
 Terraform has been successfully initialized!
 You may now begin working with Terraform. Try running "terraform plan" to see
 any changes that are required for your infrastructure. All Terraform commands
 should now work.
 If you ever set or change modules or backend configuration for Terraform,
 rerun this command to reinitialize your working directory. If you forget, other
 commands will detect it and remind you to do so if necessary.

After that you want to initialise the state file in the backend.

terraform plan
 Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
 will stop streaming the logs, but will not stop the plan running remotely.
 Preparing the remote plan…
 To view this run in a browser, visit:
 https://app.terraform.io/app/dns-multicloud-org/dns-multicloud/runs/run-N4hkB4m81kVaP1QG
 Waiting for the plan to start…
 Terraform v0.12.9
 Configuring remote state backend…
 Initializing Terraform configuration…
 2019/10/07 11:57:37 [DEBUG] Using modified User-Agent: Terraform/0.12.9 TFC/3bcf15d045
 Refreshing Terraform state in-memory prior to plan…
 The refreshed state will be used to calculate this plan, but will not be
 persisted to local or remote state storage.
 
 No changes. Infrastructure is up-to-date.
 This means that Terraform did not detect any differences between your
 configuration and real physical resources that exist. As a result, no
 actions need to be performed.

When these two commands complete successfully, you are ready to start writing your Terraform configuration.

Open the variables.tf file in your editor and create the following items.

# General
variable "owner" {
  description = "Person Deploying this Stack e.g. john-doe"
}

variable "namespace" {
  description = "Name of the zone e.g. demo"
}

variable "created-by" {
  description = "Tag used to identify resources created programmatically by Terraform"
  default     = "terraform"
}

variable "hosted-zone" {
  description = "The name of the dns zone on Route 53 that will be used as the master zone "
}

The description in the variables explains what each one does.

Commit the changes to git.

Creating the DNS zones

Now that we have bootstrapped the plan, we need to start configuring the zones.

As mentioned before, we have a hosted zone in Route53 that we will use as the master zone, and from it we will create a delegated sub zone in each of the three cloud providers.
We will start by creating our delegated zone in AWS.

Amazon Web Services (AWS) Sub Zone

We now need to add the AWS configuration to our zone.
To connect to AWS we need to use a Terraform provider that will give us this ability.
In the general.tf file add the following lines below the Remote backend configuration.

# AWS General Configuration
provider "aws" {
  version = "~> 2.0"
  region  = var.aws_region
}

This block of code tells Terraform that we want to use the AWS provider from the registry in our plan.
We are telling it that we want any 2.x version of the provider, and we have an extra configuration item called region. This is the region we want to use for our deployments. We have referenced another variable here called aws_region, which we will add to the Terraform Cloud configuration later in the post.
You can read more about the provider block here:
https://www.terraform.io/docs/providers/aws/index.html

Now create a file in the project called aws.tf. You can do this in the same way we did before by using touch in the console.

touch aws.tf

The directory structure should look like this.

=> tree
 .
 ├── LICENSE
 ├── README.md
 ├── aws.tf
 ├── general.tf
 ├── outputs.tf
 └── variables.tf
 0 directories, 6 files

Open the aws.tf file in your editor and add the following block of code.

data "aws_route53_zone" "main" {
  name = var.hosted-zone
}

This is a data source block that will query AWS for the zone we have hosted there and return the resource for us to use in our code later.
You can read more about the data sources here:
https://www.terraform.io/docs/configuration/data-sources.html

AWS Sub Zone

The next step is to create the DNS subzone that we will be using. Add the code listed below to the aws.tf file, just after the data source.

# AWS SUBZONE 

resource "aws_route53_zone" "aws_sub_zone" {
  name = "${var.namespace}.aws.${var.hosted-zone}"
  comment = "Managed by Terraform, Delegated Sub Zone for AWS for ${var.namespace}"

  tags = {
    name        = var.namespace
    owner       = var.owner
    created-by  = var.created-by
  }
}

What we are doing here is using the aws_route53_zone resource from the provider, with a name we have chosen: aws_sub_zone. We have then provided a number of arguments to the resource so that it can be created. The most important is the name argument, where we have combined two variables to create our zone name.

In this case it forms a domain of dnsmc.aws.hashidemos.io. We have also populated the tags from the other general variables we created.

The block above only creates the zone in AWS but does not give the master zone any information about it. We need to create DNS nameserver (NS) records for the new delegated zone so that any records created in the new zone can be found.
The code block below creates the nameserver (NS) records for the zone.

resource "aws_route53_record" "aws_sub_zone_ns" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "${var.namespace}.aws.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for awsns in aws_route53_zone.aws_sub_zone.name_servers:
    awsns
  ]
}

The important part here is the records argument. You can see that we use a for expression to grab all the name servers that were created by the aws_route53_zone resource we defined earlier and populate them in this argument.
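Since name_servers is already a list of strings, the for expression is really just copying the list; a shorter, equivalent form of the argument would be:

```hcl
# Equivalent shorthand: name_servers is already a list of strings,
# so it can be assigned to records directly.
records = aws_route53_zone.aws_sub_zone.name_servers
```

Either form produces the same plan; the for expression simply makes the iteration explicit.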

We now need to create the variable block in our variables.tf file for the aws_region variable we used in our provider block.
The code block looks like this.

# AWS

variable "aws_region" {
  description = "The region to create resources."
  default     = "eu-west-2"
}

When you need to use this zone in AWS for creating records for other resources you will need the zone ID. To provide this we need to create a block in the outputs.tf file that provides this detail.
Add the blocks below to the outputs.tf file.

output "aws_sub_zone_id" {
  value = aws_route53_zone.aws_sub_zone.zone_id
}

output "aws_sub_zone_nameservers" {
  value = aws_route53_zone.aws_sub_zone.name_servers
}

This creates two outputs: the zone_id, to be referenced in other deployments, and a list of the name servers that were assigned to the delegated domain.
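When another Terraform configuration needs to create records in the delegated zone, it can read these outputs through a terraform_remote_state data source. A minimal sketch, reusing the organization and workspace names configured earlier (the record name and IP address are illustrative assumptions):

```hcl
# Sketch: consuming the delegated zone's outputs from another
# Terraform 0.12 configuration. The "app" record and its address
# are hypothetical examples.
data "terraform_remote_state" "dns" {
  backend = "remote"

  config = {
    organization = "dns-multicloud-org"
    workspaces = {
      name = "dns-multicloud"
    }
  }
}

resource "aws_route53_record" "app" {
  zone_id = data.terraform_remote_state.dns.outputs.aws_sub_zone_id
  name    = "app" # would resolve as app.dnsmc.aws.hashidemos.io
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"] # example address
}
```

This is the pattern the outputs are designed for: downstream workspaces never need to know how the zone was created, only its ID.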

This completes the configuration we need to create the resources for the plan.
To apply these to our account we will need to add some authentication credentials for AWS to our Terraform Cloud workspace. We will also need to add the aws_region variable.
Below are the variables we need to create. Please note that the environment variables need to be marked as sensitive when you create them, so that they are encrypted when stored.

AWS Variables

aws_region: eu-west-2 (Region to deploy the delegated domain).

AWS Environment Variables

Mark these as sensitive so that they are hidden.
You can get them from your AWS account.
AWS_ACCESS_KEY_ID: [AWSACCESSKEY]
AWS_SECRET_ACCESS_KEY: [AWSSECRETACCESSKEY]

The variable screen will look like this when you are done.

We can now run our terraform plan to see what will be created.

=> terraform plan
 Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
 will stop streaming the logs, but will not stop the plan running remotely.
 Preparing the remote plan…
 To view this run in a browser, visit:
 https://app.terraform.io/app/dns-multicloud-org/dns-multicloud/runs/run-Ek1ZvYEfAkMHyTqb
 Waiting for the plan to start…
 Terraform v0.12.9
 Configuring remote state backend…
 Initializing Terraform configuration…
 2019/10/07 14:46:51 [DEBUG] Using modified User-Agent: Terraform/0.12.9 TFC/3bcf15d045
 Refreshing Terraform state in-memory prior to plan…
 The refreshed state will be used to calculate this plan, but will not be
 persisted to local or remote state storage.
 data.aws_route53_zone.main: Refreshing state…
 
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
   + create

 Terraform will perform the following actions:

   # aws_route53_record.aws_sub_zone_ns will be created
   + resource "aws_route53_record" "aws_sub_zone_ns" {
       + allow_overwrite = (known after apply)
       + fqdn            = (known after apply)
       + id              = (known after apply)
       + name            = "dnsmc.aws.hashidemos.io"
       + records         = (known after apply)
       + ttl             = 30
       + type            = "NS"
       + zone_id         = "Z2VGUC188F45PC"
     }

   # aws_route53_zone.aws_sub_zone will be created
   + resource "aws_route53_zone" "aws_sub_zone" {
       + comment       = "Managed by Terraform, Delegated Sub Zone for AWS for dnsmc"
       + force_destroy = false
       + id            = (known after apply)
       + name          = "dnsmc.aws.hashidemos.io"
       + name_servers  = (known after apply)
       + tags          = {
           + "created-by" = "terraform"
           + "name"       = "dnsmc"
           + "owner"      = "dns-demo-team"
         }
       + vpc_id        = (known after apply)
       + vpc_region    = (known after apply)
       + zone_id       = (known after apply)
     }

 Plan: 2 to add, 0 to change, 0 to destroy.

Commit the changes to git.

You can now run terraform apply to create these resources in AWS.
You should see something similar to the output below.

Plan: 2 to add, 0 to change, 0 to destroy.
 Do you want to perform these actions in workspace "dns-multicloud"?
   Terraform will perform the actions described above.
   Only 'yes' will be accepted to approve.
 Enter a value: yes
 aws_route53_zone.aws_sub_zone: Creating…
 aws_route53_zone.aws_sub_zone: Still creating… [10s elapsed]
 aws_route53_zone.aws_sub_zone: Still creating… [20s elapsed]
 aws_route53_zone.aws_sub_zone: Still creating… [30s elapsed]
 aws_route53_zone.aws_sub_zone: Creation complete after 37s [id=Z2MPGT7J02JUKT]
 aws_route53_record.aws_sub_zone_ns: Creating…
 aws_route53_record.aws_sub_zone_ns: Still creating… [10s elapsed]
 aws_route53_record.aws_sub_zone_ns: Still creating… [20s elapsed]
 aws_route53_record.aws_sub_zone_ns: Still creating… [30s elapsed]
 aws_route53_record.aws_sub_zone_ns: Creation complete after 30s [id=Z2VGUC188F45PC_dnsmc.aws.hashidemos.io_NS]
 Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 Outputs:
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [
   "ns-1441.awsdns-52.org",
   "ns-1595.awsdns-07.co.uk",
   "ns-365.awsdns-45.com",
   "ns-852.awsdns-42.net",
 ]

Congratulations, the AWS zone is created.

Google Cloud Platform (GCP) Sub Zone

The process for the Google Cloud zone is largely the same.

Open the general.tf file in your editor and add the code block below the AWS config block.

# Google Cloud Platform General Configuration
provider "google" {
  version = "~> 2.9"
  project = var.gcp_project
  region  = var.gcp_region
}

This block of code describes the GCP provider for Terraform to use when creating the delegated zone on GCP DNS.
It has similar arguments to the AWS one, and you can read more about it here:
https://www.terraform.io/docs/providers/google/index.html

Next you need to create the gcp.tf file, where we will store the details for the GCP zone resource.
Create the file in the same way as before with the touch command.

touch gcp.tf

The directory structure should look like this.

=> tree
 .
 ├── LICENSE
 ├── README.md
 ├── aws.tf
 ├── gcp.tf
 ├── general.tf
 ├── outputs.tf
 └── variables.tf
 0 directories, 7 files

Open the gcp.tf file in your editor and add the code blocks below.

# GCP SUBZONE 

resource "google_dns_managed_zone" "gcp_sub_zone" {
  name              = "${var.namespace}-zone"
  dns_name          = "${var.namespace}.gcp.${var.hosted-zone}."
  project           = var.gcp_project
  description       = "Managed by Terraform, Delegated Sub Zone for GCP for ${var.namespace}"
  labels = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

The code block above does much the same as the AWS one.
There is one difference with the dns_name argument for the zone: GCP requires you to add a trailing "." to the zone name when you create it.
The project argument is the name of the GCP project you are working in.

You now need to add the variables needed to the variables.tf file.
Open the variables.tf file in your editor and add the code block below.

# GCP

variable "gcp_project" {
  description = "GCP project name"
}

variable "gcp_region" {
  description = "GCP region, e.g. us-east1"
  default     = "europe-west3"
}

Here you add variables for the gcp_project and gcp_region that are needed to authenticate and deploy to GCP.
Now you need to add the outputs that are needed for using this zone in other Terraform deployments. Add the code block below to the outputs.tf file.

output "gcp_dns_zone_name" {
  value = google_dns_managed_zone.gcp_sub_zone.name
}

output "gcp_dns_zone_nameservers" {
  value = google_dns_managed_zone.gcp_sub_zone.name_servers
}

That is it for the resources. Now you need to configure Terraform Cloud with the variables needed for authentication and provisioning.

GCP Variables

gcp_project: dns-multicloud-demo (Name of the GCP project to deploy to).
gcp_region: europe-west3 (GCP Region for the deployment).

The Environment variable below enables authentication to google cloud.

GCP Environment variables (Sensitive)

GOOGLE_CREDENTIALS: [json]

What needs to be added here is the contents of the json file that you can download from your GCP account.
Terraform Cloud needs the credentials in a specific format.

Save your google credentials into the project directory.
Using your editor edit the file you downloaded.

You need to convert the json into one single line. If you have access to vim you can use the steps below.

vim gcp-credentials.json

Then press : and enter the following command:

%s;\n; ;g

Press enter. Then save the file by pressing :, typing wq, and pressing enter.

After doing these steps, if you open the file in your normal editor it should all be on one line. Copy the text from this file into the GOOGLE_CREDENTIALS environment variable and mark it as sensitive.
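If you prefer the command line over vim, a one-line alternative is tr, which replaces every newline with a space. This is a sketch using a throwaway demo file so it can be run as-is; substitute your real gcp-credentials.json:

```shell
# Demo: flatten multi-line JSON onto a single line with tr.
# The demo file stands in for your downloaded service-account key.
printf '{\n  "type": "service_account",\n  "project_id": "demo"\n}\n' > credentials-demo.json
tr '\n' ' ' < credentials-demo.json
```

The output is the whole file on one line, ready to paste into the variable.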

The variable screen will look like this when you are done.

You then run terraform plan and terraform apply and the outputs should look something like this.

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
 Outputs:
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [
   "ns-1441.awsdns-52.org",
   "ns-1595.awsdns-07.co.uk",
   "ns-365.awsdns-45.com",
   "ns-852.awsdns-42.net",
 ]
 gcp_dns_zone_name = dnsmc-zone
 gcp_dns_zone_nameservers = [
   "ns-cloud-b1.googledomains.com.",
   "ns-cloud-b2.googledomains.com.",
   "ns-cloud-b3.googledomains.com.",
   "ns-cloud-b4.googledomains.com.",
 ]

Congratulations, the GCP zone is created.

Commit the changes to git.

Microsoft Azure (Azure) Sub Zone

The creation of this zone should be largely the same as the AWS and GCP ones.

Open the general.tf file in your editor and add the code block below the GCP config block.

# Azure General Configuration
provider "azurerm" {
  version = "~> 1.32.1"
}

This block of code describes the Azure provider for Terraform to use when creating the delegated zone on Azure DNS.
It has similar arguments to the AWS and GCP ones, and you can read more about it here:
https://www.terraform.io/docs/providers/azurerm/index.html

Next you need to create the azure.tf file, where we will store the details for the Azure zone resource.
Create the file in the same way as before with the touch command.

touch azure.tf

The directory structure should look like this.

=> tree
 .
 ├── LICENSE
 ├── README.md
 ├── aws.tf
 ├── azure.tf
 ├── gcp.tf
 ├── general.tf
 ├── outputs.tf
 └── variables.tf
 0 directories, 8 files

Open the azure.tf file in your editor and add the code blocks below.

resource "azurerm_resource_group" "dns_resource_group" {
  name     = "${var.namespace}DNSrg"
  location = var.azure_location
}

resource "azurerm_dns_zone" "azure_sub_zone" {
  name                = "${var.namespace}.azure.${var.hosted-zone}"
  resource_group_name = azurerm_resource_group.dns_resource_group.name
  tags = {
    name        = var.namespace
    owner       = var.owner
    created-by  = var.created-by
  }
}

The code block above does pretty much the same as the other two.
The resource azurerm_resource_group creates a resource group for the DNS zone.
The location argument determines where the resource group and zone will be deployed.
The resource azurerm_dns_zone creates the zone in the resource group.

Next you need to add the variable in the variables.tf file.
Open the variables.tf file in your editor and add the code block below.

# Azure

variable "azure_location" {
  description = "The azure location to deploy the DNS service"
  default     = "West Europe"
}

We only have one variable here, which determines the location the zone will be deployed in.

The last bit of code you need to create is the outputs in the outputs.tf file.
Add the code block below to the outputs.tf file after the GCP ones.

output "azure_sub_zone_name" {
  value = azurerm_dns_zone.azure_sub_zone.id
}

output "azure_sub_zone_nameservers" {
  value = azurerm_dns_zone.azure_sub_zone.name_servers
}

output "azure_dns_resourcegroup" {
  value = azurerm_resource_group.dns_resource_group.name
}

As before we need to output the details that will be needed to deploy resources and create DNS records in the zone.

You now need to add the Variables for the Azure deployment.

Azure Variables

azure_location: West Europe (The Azure location to deploy to).

Azure Environment Variables (Sensitive)

ARM_SUBSCRIPTION_ID: [ARMSUBID]
ARM_CLIENT_ID: [ARMCLIENTID]
ARM_CLIENT_SECRET: [ARMCLIENTSECRET]
ARM_TENANT_ID: [ARMTENANTID]

You can get these credentials from your Azure Service Principal account.

The variable screen will look like this when you are done.

Commit the changes to git.

The final piece of the puzzle is to make the master zone in Route53 aware of where the GCP and Azure DNS zones are hosted.
Open the aws.tf file with your editor and add the code blocks below after the AWS blocks.

# GCP SUBZONE

resource "aws_route53_zone" "gcp_sub_zone" {
  name          = "${var.namespace}.gcp.${var.hosted-zone}"
  comment       = "Managed by Terraform, Delegated Sub Zone for GCP for ${var.namespace}"
  force_destroy = false

  tags = {
    name       = var.namespace
    owner      = var.owner
    created-by = var.created-by
  }
}

resource "aws_route53_record" "gcp_sub_zone" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "${var.namespace}.gcp.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for gcpns in google_dns_managed_zone.gcp_sub_zone.name_servers :
    gcpns
  ]
}

# Azure SUBZONE 

resource "aws_route53_zone" "azure_sub_zone" {
  name = "${var.namespace}.azure.${var.hosted-zone}"
  comment = "Managed by Terraform, Delegated Sub Zone for Azure for ${var.namespace}"

  tags = {
    name        = var.namespace
    owner       = var.owner
    created-by  = var.created-by
  }
}

resource "aws_route53_record" "azure_sub_zone_ns" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "${var.namespace}.azure.${var.hosted-zone}"
  type    = "NS"
  ttl     = "30"

  records = [
    for azurens in azurerm_dns_zone.azure_sub_zone.name_servers:
    azurens
  ]
}

In these code blocks you are creating the delegated zones in Route53 and providing the DNS name servers from each cloud zone as records.
Save the file and then run terraform plan and terraform apply.
You should get results similar to this.

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
 Outputs:
 aws_sub_zone_id = Z2MPGT7J02JUKT
 aws_sub_zone_nameservers = [
   "ns-1441.awsdns-52.org",
   "ns-1595.awsdns-07.co.uk",
   "ns-365.awsdns-45.com",
   "ns-852.awsdns-42.net",
 ]
 azure_dns_resourcegroup = dnsmcDNSrg
 azure_sub_zone_name = /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/dnsmcdnsrg/providers/Microsoft.Network/dnszones/dnsmc.azure.hashidemos.io
 azure_sub_zone_nameservers = [
   "ns1-02.azure-dns.com.",
   "ns2-02.azure-dns.net.",
   "ns3-02.azure-dns.org.",
   "ns4-02.azure-dns.info.",
 ]
 gcp_dns_zone_name = dnsmc-zone
 gcp_dns_zone_nameservers = [
   "ns-cloud-b1.googledomains.com.",
   "ns-cloud-b2.googledomains.com.",
   "ns-cloud-b3.googledomains.com.",
   "ns-cloud-b4.googledomains.com.",
 ]

After this step your DNS zones will be available in each cloud provider for your services to use as needed.
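Once the NS records have propagated, you can check each delegation from your terminal. A quick sketch using dig with the zone names from this example (the output will depend on your own zones, so none is shown here):

```shell
# Verify the delegations resolve; run after terraform apply completes.
dig NS dnsmc.aws.hashidemos.io +short
dig NS dnsmc.gcp.hashidemos.io +short
dig NS dnsmc.azure.hashidemos.io +short
```

Each query should return the name servers listed in the corresponding Terraform output.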
If you ever need to add another cloud provider just follow the same process as with these.

The git repository for this blog post can be found here :
https://github.com/lhaig/dns-multicloud

Setting up GOOGLE_CREDENTIALS for Terraform Cloud

The getting started guides for using Terraform with Google Cloud Platform (GCP), such as
https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform

all suggest using code like this to provide credentials:

// Configure the Google Cloud provider
provider "google" {
  credentials = "${file("CREDENTIALS_FILE.json")}"
  project     = "flask-app-211918"
  region      = "us-west1"
}

This works well when you are just learning Terraform. Once you start working with two or three other engineers it becomes more of a challenge: you need to keep the state file secure using a remote S3 backend or similar, but you still have the problem of the credentials file that needs to be shared. However, since the launch of Terraform Cloud at HashiConf, it is now possible to sign up for a free Terraform Cloud account and use it as a remote backend for your plans.

This secures your state file with the encryption provided as part of the service.

Your current GCP credentials are still stored locally on your laptop and could still accidentally be committed to a git repository.

The way to solve this is to create an environment variable in Terraform Cloud, add the content of your json file to it, and mark it as sensitive. This protects the secret, and you can add the local json file to your favourite password manager to keep it encrypted.

To be stored in the variable, the credentials need to be altered a bit: you need to remove all newline characters from the file.
Using your favourite editor, remove these and the json will shrink to a single line.

I use vim for this with the following steps

Open the file with vim:

vim gcp-credentials.json

Press : and add the following command:

%s;\n; ;g

Press enter. Then press : again, type wq, and press enter to save.
After the file is saved, add an environment variable called
GOOGLE_CREDENTIALS to the Terraform Cloud workspace you will be running your plans in.
Copy the data from the file, paste it into the variable value, and mark it as sensitive.
Then you are done.

All Terraform runs will now use these credentials to authenticate to GCP.

Building the new Bongo Admin UI

A while ago Alex (so_solid_moo on the IRC channel) created a PHP binding for the Bongo API. He also created the start of the new UI that we are working towards.

We started with the admin UI for now, as we already have a user interface through the Roundcube project that Alex also integrated with.

I started porting the current Dragonfly assets into the project, and I tried to stick to the old design style as much as possible as I really loved its look and feel.

After about a week I was done and submitted it to the git repo of the new project.

Although I was glad that we had started the project and that I had done as well as I could on the pretty bits, I was not quite happy with the quality of the work.

I was going through my git repos this weekend and found the Twitter repo where they have open sourced all their CSS (Cascading Style Sheets): http://twitter.github.com/bootstrap/. This inspired me to see if I could use it, as it was MUCH better quality CSS than what I could come up with.

So I started work on the migration as an experiment, and from the word go it was so much easier. Their default styles just make sense, and altering or adding my customisation took only a few lines of CSS code.

I was extremely grateful for this, as it will enable us to improve our UI as we go along. You can find the new web UI in Alex's GitHub project here.

Thanks, Twitter.

Learning Ruby and rBongo with WeBongo

This post should have been written quite a while ago, as I wanted to start documenting my efforts to learn to program using Ruby. As most of you know, I have been trying for a while to teach myself to program.

I started my efforts with a course in C# at a company in London; the course was just great and the instructor was a fantastic guy. Unfortunately life meant I could not practice at all. I worked for a company for a year that gave me no opportunity to play during the day, and having a newborn baby left my wife and I looking more like zombies than real people.

Then things changed the year before last when I joined Forward as a contractor to help them with their virtual infrastructure. This company is completely different to anywhere I have ever worked before. Normally as a contractor you are shoved in a corner and beaten with a whip so that they get the most out of you; here at Forward this is definitely not the case.

Forward has some really great, intelligent developers who are a pleasure to work with and be part of, and that is where I was going with this post.

A short while ago Fred George held one of his famous OO Bootcamp training sessions and I was lucky enough to be invited to join. True to Fred's statement, he tries to keep the course at a level where you always feel stupid, and believe me, I felt REALLY stupid. The good thing about feeling stupid was that I actually learnt something. You kind of learn to program by accident (quoting Tom Hall).

This is where the Ruby, rBongo, and WeBongo bits come in. Fred uses Ruby to illustrate OO programming best practice and to help you understand OO in general. A side effect of this is that you start learning Ruby syntax and the programming vocabulary needed to accomplish the tasks. Having gained some experience with Ruby during the course, I thought it best to use Ruby to continue learning to program.

I have been working on Bongo since it was founded, and the other day I was speaking to Alex, the project lead, and we realised it is almost 10 years now. I have always wanted to contribute more than just packaging the app on the OBS and being available for testing and such. Alex wrote the PHP binding for the Bongo store, and having seen what some of the guys at Forward can do with Ruby and jQuery, I wanted to create a binding for the store in Ruby and create a gem from it, thus allowing anyone to create the best REST interface ever.

This is forcing me to learn TCP sockets with Ruby and other nice things. I will try to document as often as I can what I am up to on this.
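To give a flavour of the TCP socket basics involved, here is a minimal sketch in Ruby of connecting to a server and reading its greeting line. The host, port and greeting format are purely illustrative assumptions, not the actual Bongo store protocol.

```ruby
require 'socket'

# Open a TCP connection, read the server's one-line greeting and
# return it. The greeting format here is an assumption for
# illustration; this only shows the basic TCPSocket mechanics.
def read_greeting(host, port)
  socket = TCPSocket.new(host, port)
  greeting = socket.gets # read up to the first newline
  socket.close
  greeting
end
```

From there, sending commands is just a matter of writing lines to the same socket and reading the responses back.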

My efforts will be on two Ruby projects. Initially I will need to work on rBongo, which is the Ruby binding for the Bongo store. I am sure most of the developers at Forward could probably write this in a day or so; hopefully I can convince one or two of them to help out.

My second effort will be on WeBongo, the REST web UI for the Bongo mail store. I have other ideas for the web UI once we have a working solution (using it as a sync destination for Tomboy desktop notes).

Please keep that in mind as I try to get this working.

Bongo-Project.org Needs To Be Renamed

The Bongo-Project needs to be renamed

Now, before you fall off your chair or swallow your phone while reading this, let me explain why.

I have been wanting to create this post for some time now and have been holding off as we are almost ready for our 1.0 release.

The problem I have is that even if we released a new version of the software and it was our 1.0 release, I am not sure anyone would be able to find us. I have been doing some digging over the last few months about what we actually are, and to be honest I am not convinced we are “individual” enough. We as a project get lost in the Bongo mayhem that is people who play bongos as instruments.

Right from day one when we started this fork, we had a problem with the name. We chose “Bongo” as an interim measure so that we had something to call ourselves while we thought of a new name. This has obviously not happened, and I would like to kick this discussion off before we are ready for 1.0, so we have time to get it all done and can release both to the outside world at the same time.

Now, I know that this is not something to take on lightly, and it would mean quite a bit of work on the part of the limited devs we have to change the code, as well as changing the website, our images and so on.

I am now thinking I will spend £30.00 on a prize for the winning name. So now I need to find out what the community thinks and whether there is appetite for this.

I have created a poll on my site for some active feedback

Let me know; I really want to know.

UPDATE

This will be put on hold until further notice.

Standing up and being counted.

I have for some time now been wondering how many people actually use Bongo.

The reason for this is that we have had images available for a while, and I am still none the wiser as to how many people actually use them.

I faithfully spend hours and hours building packages and getting them out the door, but have no markers to see if they are being used.

While reading the docs for the ESVA appliance (http://www.global-domination.org/esva), I noticed that they have a cronjob that downloads a file and immediately deletes it. This gives them a rough idea of who is using their appliance.

They have documentation that tells people how to remove the cronjob, which effectively turns this tool off.

I propose that the Bongo project perhaps use something similar to let us know how many people use the products we produce. It would be nice to know how many people are using Bongo while the web UI is not working, and then, once we release something, whether that number increases and at what rate.

I am really interested in ideas as to how we can achieve this, with or without having some kind of phone-home tool.

Please leave a comment on this post if you like, or send an e-mail to the user or devel list or even come and have your say on the IRC channel.

I have also added a simple poll on the left

Thanks in advance

Great News About rPath Images

I have great news about the images on the rPath system.

I have been able to get Bongo 0.6.1 to build on the rPath system with a JSON patch from Alex.

I have promoted it to the QA and RELEASE repos, so it should be available for you guys.

Please test and let me know if you have any issues.

You can find the images here

Bongo rPath Images

I have been trying to get the latest 0.6.1 release of Bongo onto the rPath images. This unfortunately has not been possible due to our reliance on a newer version of Python than is available on the rPath system.

This is a sad moment as I have been doing that for quite some time now and will not be able to continue.

So, for those who have images containing Bongo on a rPath system, please use the Fedora, SUSE or Gentoo packages that we have created.

Thanks to everyone who has helped me over the years and especially Stu Gott who was instrumental in moving the images forward.

If rPath eventually supports Python 2.6, I will revisit the images on their platform.

Images for Bongo 0.6.0

I have been working on the images in the SUSE Studio environment, as I mentioned in my blog post here.

I started the investigation, only to find that our RPM repository was in need of a bit of work. I wanted to create a new repository in my own name that would allow me to build these RPMs for the project.

Thinking about the delivery of the RPMs, I thought it best to open a repository with a generic name, namely “bongo-project”. This would allow more than one person to work on the repository, while the repository keeps its identity.

Once the repository was set up, I realized that I had to learn RPM packaging, as I had been so used to the Conary way of packaging that it was almost second nature.

After quite a long time I have been able to get consistent builds from the OBS, which have produced RPMs for a number of OSes.

Now came the part that I really wanted to do from the start: create images.

All in all it has been a painless effort, as the interface is easy to use and intuitive. I only had to ask for help a few times, only to find that the error was mine and not the studio's. I have created images for the following:

  1. ISO Live CD
  2. VMware Image
  3. Xen Image
  4. USB/HDD Image

These images are all x86 (32-bit) and do not have any web interface (well, we ripped it out, remember).

The only downside at the moment is that the marketplace has not been created yet, so any images created will be deleted after a while. This left me with a dilemma: how do I publish the images?

My solution…..

To create a subdomain on my own website for the Bongo downloads.

The link is http://bongo.haigmail.com

Here you will find a crude yet cute website with links to the Bongo tar.gz files. I was impressed with the size of the images, about 150MB each, which I think is quite good.

For those of you on the rPath images, as promised I will be creating one more update to that image set, unless there are enough of you that want more. I have a problem in that Bongo does not work on Python 2.4, which is the deployed version on the rPath system. rPath is not the only OS affected; CentOS 5 and RHEL 5 are also affected by this. I have asked the guys to look at why it is failing and to see if they could get it working. As soon as I can build it, I will.

I would really like to know which of you are using the rPath images, as I have no idea how many of you there are. Please post a response here if you do.