How to copy custom attributes when migrating VMware vCenter to a new database

I recently had to move hosts and guests to a new vCenter server as the old server had become corrupt and full of issues.
The existing vCenter has a few custom attributes and notes that would not be transferred as part of the move.
So I wanted to use PowerCLI to read the attributes out and put them back.

To export the attributes I used the script below.
You will need to add as many Key/Value pairs as you have custom attributes (see the sketch after the script for a variation that avoids this).

# Load the VMware PowerCLI snap-in
Add-PSSnapin VMware.VimAutomation.Core

Connect-VIServer -User 'VMUSER' -Password 'USerPasswd221' -Server 'vcenter1'

$vmlist = Get-VM
$Report = @()
foreach ($vm in $vmlist) {
    $row = "" | Select-Object Name, Notes, Key, Value, Key1, Value1, Key2, Value2, Key3, Value3
    $row.Name = $vm.Name
    $row.Notes = $vm | Select-Object -ExpandProperty Notes
    $customattribs = $vm | Select-Object -ExpandProperty CustomFields
    $row.Key = $customattribs[0].Key
    $row.Value = $customattribs[0].Value
    $row.Key1 = $customattribs[1].Key
    $row.Value1 = $customattribs[1].Value
    $row.Key2 = $customattribs[2].Key
    $row.Value2 = $customattribs[2].Value
    $row.Key3 = $customattribs[3].Key
    $row.Value3 = $customattribs[3].Value
    $Report += $row
}

$Report | Export-Csv "c:\vms-with-notes-and-attributes.csv" -NoTypeInformation
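
If you have a lot of custom attributes, a variation like the sketch below (untested, assuming the same PowerCLI session; the output file name is just an example) writes one row per VM/attribute pair instead of a fixed number of Key/Value columns, so nothing has to be added by hand:

# Hypothetical alternative export: one row per VM/attribute pair
$rows = Get-VM | ForEach-Object {
    $vm = $_
    $vm | Select-Object -ExpandProperty CustomFields | ForEach-Object {
        New-Object PSObject -Property @{
            Name  = $vm.Name
            Notes = $vm.Notes
            Key   = $_.Key
            Value = $_.Value
        }
    }
}
$rows | Export-Csv "c:\vm-attributes-long.csv" -NoTypeInformation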

It should produce a CSV file containing the VM name, notes and the key/value pair for each attribute. The data it captures looks something like this (the actual file will have the Key/Value column pairs from the script rather than the attribute names as headings):

VMNAME,NOTES,CREATEDATE,CREATOR,DEPLOYDATE,TEAM
vmguest1,note1,12/29/2011,Bob,12/30/2011,Web
vmguest2,note2,12/29/2011,John,12/30/2011,Accounts
vmguest3,note3,12/29/2011,Paul,12/30/2011,Database

Once you have exported the file you need to import it into the new vCenter,
again adding Key/Value pairs as needed.

# Load the VMware PowerCLI snap-in
Add-PSSnapin VMware.VimAutomation.Core

Connect-VIServer -User 'VMUSER' -Password 'USerPasswd221' -Server 'vcenter2'

$NewAttribs = Import-Csv "C:\vms-with-notes-and-attributes.csv"

foreach ($line in $NewAttribs) {
    Set-VM -VM $line.Name -Description $line.Notes -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key -Value $line.Value -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key1 -Value $line.Value1 -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key2 -Value $line.Value2 -Confirm:$false
    Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key3 -Value $line.Value3 -Confirm:$false
}
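
If you used the one-row-per-attribute export sketched above, the import can simply loop over the rows and skip any blank keys (again a sketch, not a tested script, and the file name is the hypothetical one from the export sketch):

# Hypothetical import for the one-row-per-attribute CSV
foreach ($line in (Import-Csv "C:\vm-attributes-long.csv")) {
    if ($line.Key) {
        Set-CustomField -Entity (Get-VM $line.Name) -Name $line.Key -Value $line.Value -Confirm:$false
    }
}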

Hope this helps someone.

Creating a Two Node MySQL Cluster On Ubuntu With DRBD Part 2

This post is a follow-on from a blog post I wrote ages ago and have eventually got round to finishing off.

In this part of the process we will create the disks and set up the DRBD devices.
First we need to connect to the virtual machines from a terminal session, as it makes life much easier and quicker when you connect remotely.
You will need to make sure that your servers have static IP addresses.
For this document I will be using the following IP addresses for my servers.

drbdnode1 = 172.16.71.139
drbdnode2 = 172.16.71.140
drbdmstr = 172.16.71.141 (clustered IP address)
Subnet Mask = 255.255.255.0
Gateway = 172.16.71.1
DNS Servers = 8.8.8.8 and 4.4.4.4

So to set the IP address as fixed you need to do the following.
Connect to the console of drbdnode1 and log in.
Now we need to edit the file that contains the IP address of the network card.
Enter the following command and press Return:

sudo nano /etc/network/interfaces

Enter the password for the user you are logged in as.
You should see the following screen.

Now use the arrow keys on your keyboard to move the cursor to the section that starts with iface eth0.
Press Ctrl+K to remove the line, then add the lines below with your own IP address details.

auto eth0
iface eth0 inet static
address 172.16.71.139
netmask 255.255.255.0
network 172.16.71.0
broadcast 172.16.71.255
gateway 172.16.71.1

It should end up looking like this

Now press Ctrl+X to exit, then Y, then Enter to save the file.
Now type in the following:

sudo /etc/init.d/networking restart
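
The DNS servers listed at the start of this post are usually configured separately on this release of Ubuntu; if you need to set them by hand they normally go in /etc/resolv.conf, something like this (using the addresses above):

nameserver 8.8.8.8
nameserver 4.4.4.4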

Do the same for drbdnode2.
Now that we have given each server a static IP address we can connect via SSH to the server to do the admin remotely.
To do this you need a machine with an SSH client installed. Most Linux and OS X machines have one already; if you are on Windows, look for PuTTY and use that.
So open a terminal on your machine and type in the following:

ssh cluster@172.16.71.139

You need to substitute the username you created on your server when setting it up for the word cluster in the above command, then press Enter.
You will be prompted to accept a key for the server. Type yes and press Enter.
Now enter the password for the user and press Enter.
You should see a screen like this

Connect to both cluster nodes now to make sure SSH works, so you are not stopped further down the line having to fix it.
You are now ready to work on your cluster.
First we need to create host records for the two servers.
Type the following into your terminal session:

sudo nano /etc/hosts

and add a record for each server.
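
It should look something like this; a minimal sketch using the hostnames and IP addresses from earlier in this post:

172.16.71.139   drbdnode1
172.16.71.140   drbdnode2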

Save the file as before and add the same two records on drbdnode2.
Now we need to install a few packages that will allow us to use DRBD.
In the terminal on drbdnode1 type:

sudo apt-get install heartbeat drbd8-utils

and press Enter. You should see a screen like this.

Press Y and then Enter to install the software. Do this on drbdnode2 as well.
Now we need to create the partitions that we will use for the DRBD cluster.
To find out which disk we will be using run the command:

sudo fdisk -l

to see which disks have not been partitioned. Your screen should look like this.

As you can see at the end, the disk /dev/sdb does not have a partition table;
look for the line "Disk /dev/sdb doesn't contain a valid partition table".
To create a partition table we need to run the following commands:

sudo fdisk /dev/sdb
n (to create a new partition)
p (to select a primary partition)
1 (for the first partition)
Enter (to accept the default start cylinder)
Enter (to accept the default end cylinder)
w (to write the changes)

the screen should look like this

Do this on both servers.
Once this is complete we need to edit the DRBD configuration files to set up our clustered filesystem.
In your terminal on drbdnode1 enter the command:

sudo nano /etc/drbd.d/clusterdisk.res

Enter the password for your user and edit the file.
Copy and paste the following code into your terminal screen and then change the details to match your own server names and IP addresses.

resource clusterdisk {  # name of the resource

  protocol C;

  on drbdnode1 {                   # first server hostname
    device    /dev/drbd0;          # name of the DRBD device
    disk      /dev/sdb1;           # partition to use, which was created using fdisk
    address   172.16.71.139:7788;  # IP address and port number used by DRBD
    meta-disk internal;            # where to store the metadata
  }

  on drbdnode2 {                   # second server hostname
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   172.16.71.140:7788;
    meta-disk internal;
  }

  disk {
    on-io-error detach;
  }

  net {
    max-buffers 2048;
    ko-count 4;
  }

  syncer {
    rate 10M;
    al-extents 257;
  }

  startup {
    wfc-timeout 0;
    degr-wfc-timeout 120;  # 2 minutes
  }
}

The screen should look similar to this.

Ctrl+X (to exit)
Y (to save the changed file)
Enter (to overwrite the file)

The same clusterdisk.res file needs to exist on drbdnode2 as well.
Now we need to create the DRBD resource metadata.
Enter the following command into your terminal session on both servers:

sudo drbdadm create-md clusterdisk

After running this command you should see a screen similar to this
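
Depending on how the packages start DRBD, you may also need to bring the resource up on both nodes before promoting one of them; a sketch of the command (the init script may already have done this for you):

sudo drbdadm up clusterdisk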

On drbdnode1 enter the following command:

sudo drbdadm -- --overwrite-data-of-peer primary all

This will activate it as the primary DRBD node.
To see if this has worked you can run the following command:

sudo drbdadm status

the result should look like this on drbdnode1

and like this on drbdnode2

You will see that drbdnode1 has a status of
cs="SyncSource"
and drbdnode2 has a status of
cs="SyncTarget"
This tells you what role each node is playing in the cluster.
At the end of this line you will see a status such as resynced_percent="3.8";
this tells you how far the DRBD disk has synced.
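
You can also keep an eye on the sync progress directly from the kernel's status file (this should work with the drbd8 tools used here):

cat /proc/drbd
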
Once the sync is complete connect to drbdnode1 and run the following command

sudo mkfs.ext4 /dev/drbd0

this will create an ext4 filesystem on the DRBD device, which will sync across to drbdnode2.

Configuring heartbeat resource

Now we need to set up the MySQL resource in the heartbeat configuration.
Firstly we need to create a file called authkeys in the /etc/ha.d directory. You can do this with the following command:

sudo nano /etc/ha.d/authkeys

In this file you need to add the following text:

auth 3

3 md5 [SECRETWORD]

Replace [SECRETWORD] with a key you have generated; one possible way to generate a random key is shown below.
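
This is just a sketch; any sufficiently random string will do:

dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}'
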
This file needs to be on both servers in the /etc/ha.d directory.
After you have created the file you need to change the permissions on the file to make it more secure. This can be done with the following command:

sudo chmod 600 /etc/ha.d/authkeys

Do this on both servers.
Now we need to create the /etc/ha.d/ha.cf file to store the cluster config.
You can do this with the following command:

sudo nano /etc/ha.d/ha.cf

Copy and paste this code into the file:

logfile /var/log/ha-log
keepalive 2
deadtime 30
udpport 695
bcast eth0
auto_failback off
stonith_host drbdnode1 meatware drbdnode2
stonith_host drbdnode2 meatware drbdnode1
node drbdnode1 drbdnode2

Do the same on both servers.
Next is the haresources file. Create the file with:

sudo nano /etc/ha.d/haresources

and paste this line in there:

drbdnode1 IPaddr::172.16.71.141/24/eth0 drbddisk::clusterdisk Filesystem::/dev/drbd0::/var/lib/mysql::ext4 mysql

This single line tells heartbeat the preferred node, the clustered IP address to bring up, the DRBD resource to promote, the filesystem to mount on /var/lib/mysql and finally the mysql service to start.
Your cluster is now ready to roll.
All you now need to do is test the cluster, which I will cover in a future blog post.
Let me know how you get on.

ESX Trunk VLAN config for Storage

I was struggling with an ESXi install connected to a Cisco 6509 switch, where the management and VM LAN connectivity worked just fine but for some reason the SAN NFS VLAN just did not want to communicate with the Nexenta on that VLAN.

After some searching and trial and error I was able to utilise this post http://blogs.egroup-us.com/?p=2453 to get the port-channel and the ports configured correctly. I did not use all the settings from this post, but it did remind me to check my port-channel config.

Connecting ESXi 4.1 to Extreme X650 10G switches

I am busy working on a contract with a company that is implementing a 10G network for their ESX cluster.

We have been using Extreme X650 10G copper switches for the core.

Migrating the current servers to the 10G environment was relatively easy and went with minimal issues.

Over the last two weeks I have been trying to get a new server to connect to the cluster and have been frustrated. I have been unable to get the server communicating on the management VLAN. The server was configured correctly, and so was the switch, or so I thought.

It seems that to make this work you need to add the network cards to the VLAN in ExtremeOS as untagged, which then allows you to connect the server to vCenter. Then once you add your NICs to the dvSwitch you lose connectivity again and have to re-add the ports to the management VLAN as tagged; something along the lines of the commands sketched below.
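
A rough sketch of the ExtremeOS commands involved; the VLAN name and port numbers here are made up, so substitute your own:

# before the host is added to vCenter: present the ports untagged
configure vlan Mgmt add ports 1:1,1:2 untagged
# after the NICs are moved to the dvSwitch: re-add the ports tagged
configure vlan Mgmt delete ports 1:1,1:2
configure vlan Mgmt add ports 1:1,1:2 tagged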

This post is mostly for me to be able to look it up in the future.

Creating a Two Node MySQL Cluster On Ubuntu With DRBD

To create this cluster you will need two Ubuntu node servers installed as follows.

I am creating this cluster inside VMware, so I created two VMs with:

1 GB RAM

1 x 10 GB HDD for the root (/) partition.

1 x 20 GB HDD for the database store.

I downloaded the Ubuntu 9.10 server ISO, presented this to the VMs and started the install.

This is by no means an in-depth install instruction.

I have just captured screenshots to show what I did.

So here we go with the install of the machines.

 

You should see this screen on booting the VM. Select the language you want to use.


Then push the F3 key to select your keyboard.

 

Then use the up and down arrows to select the install ubuntu server option and push the Enter key.

Then select your install language using the up and down keys.


Then select your Country

 

Then Enter the hostname for the system


We then need to partition the disks.

 

I selected the guided – use entire disk option as these are new servers.

Then select the first disk, which is 10 GB here. We are only formatting the system disk, as we need to make some changes to the other disk before we format it later on in this series.

Selecting this option lets Ubuntu manage the system partition in the default manner.

 

You will then be asked if you really want to write the changes to the disk. As these are new servers, select Yes.


Now we create a new User that will be used to access the server to administrate it. Enter the Full name e.g. “John Doe”


After you select continue you will be asked to select or type a username.

 

Select continue after you are happy with the username, and you will be prompted for a password.


Enter a new password and select the continue option. You will then be asked to enter the password again.

 

After entering the password again select the continue option. You will then be asked if you want to encrypt your home drive. As this will be a server I selected No.

 

You will then be asked if you need to add a proxy address to access the internet. In some businesses this is required; for me it was not necessary. Select Continue when finished.

You will be asked to enable updates for the server. The choice is yours. I prefer to automatically install security updates.


The next screen is related to the software patterns that you would like to install.

 

I moved the red bar down and used the space bar to select the OpenSSH server. The reason for this is to allow us access to the console of the server via an SSH terminal.


Once you select continue the install will take place. After a while the following screen will come up that shows the install is complete and that the server will now reboot.


This is the last task for the install process. We will move on to the configuration of the server once it has rebooted.

Part 2 can be found here

 

Standing up and being counted.

I have for some time now been wondering how many people actually use Bongo.

The reason for this is that we have had images available for a while and I am still none the wiser as to how many people actually use them.

I faithfully spend hours and hours building packages and getting them out the door but have no markers to see if they are being used.

While reading the docs for the ESVA appliance (http://www.global-domination.org/esva) I noticed that they have a cron job that downloads a file and immediately deletes it. This gives them a rough idea of who is using their appliance.

They have documentation that tells people how to remove the cron job, which effectively turns off this tool.
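
For illustration, a minimal sketch of what such a cron job might look like; the URL and paths here are hypothetical, not anything Bongo actually ships:

# /etc/cron.d/bongo-census (hypothetical)
30 3 * * * root wget -q -O /tmp/bongo-census http://example.org/bongo/census.txt && rm -f /tmp/bongo-census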

I propose that the Bongo project perhaps use something similar to allow us to know how many people use the products we produce. It would be nice to know how many people are using Bongo while the Web-UI is not working, and then, once we release something, whether that number increases and at what rate.

I am really interested in ideas as to how we can achieve this, with or without having some kind of phone-home tool.

Please leave a comment on this post if you like, or send an e-mail to the user or devel list or even come and have your say on the IRC channel.

I have also added a simple poll on the left

Thanks in advance

VMware Workstation 6.5.3 on openSUSE 11.2

I have just moved to openSUSE 11.2 from Ubuntu 9.10 on my IBM/Lenovo T61 laptop. (I will post a better blog about that later.)

What I wanted to mention here is that the current VMware Workstation 6.5.3 does not want to run on my system.

I have found this post http://k—–k.blogspot.com/2009/09/install-vmware-workstation-653-on.html

This has steps for compiling the modules yourself, but by the look of things it is based on 32-bit openSUSE.

I will see if I can get it working here on my 64-bit machine.

Bongo Images and the 0.5 release

I have been working on the images for the 0.5 release of Bongo and I have now completed them.

They can be found here http://www.rpath.org/rbuilder/project/bongo/latestRelease

There is a change, however, in how we will be implementing the images from now on.

To make it easier on you guys, and also to limit the number of hours I have to work on them, I have decided to create a version 1.0 of the images which I will not change for the foreseeable future.

What will now happen is that we will just update the packages that apply to Bongo and release them for consumption.

I have also decided to start using the built-in features of the rPath rBuilder system to develop changes or updates to the Bongo packages in the -devel branch. When I am happy that they are ready for general consumption I will promote them to -qa, which is an image I will be running permanently, and I will make this image available to anyone who wants to help test the bleeding-edge stuff from Bongo.

Once the -qa testing has been completed I will promote to the release branch, which will be bongo.rpath.org@rpl:bongo-1.0.

This will automatically make the changes available to you guys, and so we will become a rolling-update project. You will never have to re-install your server again unless you have to upgrade your hardware.

I hope you find it easier to use. We are now well on our way to making Bongo a great mail server.

Bongo Images for 0.4.0 Broken

Due to an error on the server where the images get built, a fault has crept in that makes the images useless.

This error has been present for some time, and it has caused the images to break.

I have therefore pulled the images and will try to fix the problem as soon as I can.

I will post again when things get better.