Provisioning OpenStack on VMware infrastructure

As I didn’t find extensive docs about provisioning Red Hat OpenStack on a VMware infrastructure, I browsed the Python code.

Python is a very expressive and clear language and you can get to the point in a moment!

I was then able to create the following instack.json to handle power management for a set of VMware machines.

Despite the many ways to pass ssh_* variables via Ironic, the right way to set them via instack.json is to:

– use `pm_virt_type` instead of `ssh_virt_type`;
– put the ssh_key_content in the pm_password parameter, as shown in the docs;
– set capabilities like profile and boot_option directly.

The key should be JSON-serialized on one line, with newlines replaced by ‘\n’.

            "capabilities": "profile:control,boot_option:local",
            "pm_virt_type": "vmware",
            "pm_password": "-----BEGIN RSA PRIVATE KEY-----\nMY\nRSA\nKEY\n-----END RSA PRIVATE KEY-----"
{..other nodes..}
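A quick way to produce that one-line form is to let a JSON serializer do the escaping. A minimal Python sketch (the key material below is an obvious placeholder, not a real key):

```python
import json

# Placeholder key material standing in for a real private key
key = """-----BEGIN RSA PRIVATE KEY-----
MY
RSA
KEY
-----END RSA PRIVATE KEY-----"""

# json.dumps escapes every newline as \n and adds the surrounding quotes,
# so the result can be pasted directly as the "pm_password" value
print(json.dumps(key))
```

In practice you would read the key from a file instead of embedding it in the script.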

Importing a virtual machine from VMware format into RHEV

First of all, this cannot be done directly: you have to convert the virtual machine for KVM first, and then migrate it to RHEV.

From VMware to KVM

What you need:

  • yum install fuse-devel
  • yum install qemu-img
  • install VMware-vix-disklib-5.0.0-614080.x86_64.tar.gz, downloaded from the VMware site

Before migrating the machine, it is best to uninstall the VMware guest additions.

The VMware image is made up of multiple .vmdk files. They must be consolidated with the vmware-vdiskmanager tool:

export LD_LIBRARY_PATH=/usr/lib/vmware-vix-disklib/lib64:$LD_LIBRARY_PATH
vmware-vdiskmanager -r Ubuntu.vmdk -t 0 Ubuntu-2.vmdk

It is preferable to use the tool from the same VMware installation the virtual machine was running on; otherwise it may throw errors.

Now the image has to be converted to the kvm/qemu format:

qemu-img convert Ubuntu-2.vmdk -O qcow2 Ubuntu-2.qemu

With the command:

vmware2libvirt -f Ubuntu.vmx > Ubuntu-2.xml

an XML file defining the virtual machine is generated. If necessary, adjust the paths inside the file.
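If the paths need changing, that touch-up can be scripted too. A sketch with Python’s standard xml module; the XML fragment and both paths are hypothetical stand-ins for what vmware2libvirt actually generates:

```python
import xml.etree.ElementTree as ET

# Abridged, hypothetical libvirt XML of the kind vmware2libvirt emits
xml = """<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <source file='/vmware/Ubuntu/Ubuntu-2.vmdk'/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(xml)
# Point the disk source at the converted qcow2 image (hypothetical path)
root.find(".//disk/source").set("file", "/var/lib/libvirt/images/Ubuntu-2.qemu")
print(ET.tostring(root, encoding="unicode"))
```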

We are done for now. The migration to KVM is not actually complete, since the virtual machine still has to be imported (virsh -c qemu:///system define Ubuntu-2.xml), but that step is not necessary for the migration to RHEV.


For the migration from KVM to RHEV, use the same procedure as for any other virtual machine. The Ubuntu-2.xml file is the one generated earlier. Edit the XML file and set the disk to point at the qemu image.

virt-v2v -f /etc/virt-v2v.conf -i libvirtxml -o rhev -os <export_storage_domain> Ubuntu-2.xml

For more information, consult the Red_Hat_Enterprise_Virtualization-3.0-V2V_Guide manual.

vSphere: VM with RDM migration across Virtual Datacenters

Usually, migrating a virtual machine from one VMware datacenter to another is a piece of cake. You just have to present the LUNs containing the VM data to the new ESX servers, and you can cold migrate the VM in a matter of minutes, or even hot migrate it using a few tricks (with no downtime at all).

There’s only one thing you may not consider in the equation: the pesky Raw Device Mapping disk attached to the VM.
What is an RDM disk? It’s a LUN mapped directly to the VM, without a VMFS on it; its pointers are stored alongside the VM in a special VMDK file.

So you’ve just powered off your VM, browsed the datastore in the destination datacenter, and added the VM to the inventory. As you try to power it on, an error comes up:

“Virtual disk ‘Hard Disk X’ is a mapped direct-access LUN that is not accessible.”

The RDM strikes with nonsense. You may have already checked the storage for the correct presentation, and vSphere for visibility of the LUN: it’s all there. Why should it not be accessible?

Easy to state, but not so easy to discover at first: the problem is a different LUN ID in the destination datacenter. LUN presentation, as a matter of fact, follows a numerical order, and vSphere uses the specific LUN ID to map the disk to the VM.

In my case, the RDM LUN ID in the source datacenter was 23, while it was 49 on the destination.

You can check the source ID in the “Physical LUN and Datastore Mapping File” area of the RDM disk properties in the VM settings. There are many ways to check the correspondence in the destination datacenter, via both vSphere and the command line. In vSphere, the easiest way is to look under Configuration > Storage > View: Devices and sort by disk size: mine was 1.7TB, easy to spot. If you have a trickier, more common size, you have to identify and compare the UID of the disk with commands such as “esxcli storage core path list”.
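Picking the right entry out of that output can be tedious, so here is a small Python sketch that pairs each device UID with its LUN ID; the sample text is an abridged, hypothetical excerpt of what “esxcli storage core path list” prints:

```python
import re

# Abridged, hypothetical excerpt of "esxcli storage core path list" output;
# the real output carries many more fields per path entry
sample = """\
   Device: naa.60060160a0b1000000000000000017
   LUN: 23
   Device: naa.60060160a0b1000000000000000049
   LUN: 49
"""

# Pair each Device line with the LUN line that follows it
pairs = re.findall(r"Device: (\S+)\s+LUN: (\d+)", sample)
for device, lun in pairs:
    print(device, "-> LUN", lun)
```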

So, there are ways to solve this problem, but VMware’s proposed solution is actually the least favorable, as it asks you to unpresent the LUN and present it again with the correct ID. In my case, though, ID 23 was already in use on the destination.
It would be better to just map the RDM to the VM with the new ID, but vSphere won’t even let you see the RDM in the Add Disk wizard: LUNs that are already initialized are filtered out by default.

What we’re going to do is disable the LUN filtering so we can attach the same LUN with the destination ID.
In the vSphere Client, select Administration > vCenter Server Settings > Advanced Settings.
Then add the following key and value: config.vpxd.filter.rdmFilter; false
Click Add, and click OK.
Now, in the VM properties, first of all detach the source ID LUN from the VM, then click Add > Hard Disk > Raw Device Mappings and select the correct LUN.

The RDM is now properly attached, and the VM will finally boot on the destination datacenter.

Unable to login as a user on a 4.1 ESX server

By default, a 4.1 ESX server denies logins from standard users, while root access via SSH works without problems. This has changed since 4.0 and has caused many headaches on systems upgraded to 4.1.

Obviously, this is a security problem and something we do not want.

To protect your ESX server and restore standard user access, you have to replace the system-auth config file. In this case, an older 4.0 version of the file will do the job. Always remember to make a backup in case something goes wrong (if it does and you don’t have one, you’re screwed, so pay attention).

#vi /etc/pam.d/system-auth

paste this content inside the file:

# Autogenerated by esxcfg-auth

account    required    /lib/security/$ISA/

auth          required    /lib/security/$ISA/
auth          sufficient           /lib/security/$ISA/        likeauth nullok
auth          required    /lib/security/$ISA/

password    requisite try_first_pass retry=3 dcredit=-1 ucredit=0  ocredit=-1 lcredit=-1 minlen=8
password           required    /lib/security/$ISA/            retry=3
password           sufficient           /lib/security/$ISA/        nullok use_authtok md5 shadow
password           required    /lib/security/$ISA/

session      required    /lib/security/$ISA/
session      required    /lib/security/$ISA/

You can now login to your 4.1 ESX server using standard login. Now go and harden your server!

Easy vmdk file system extension under Red Hat (LVM)

As a VMware sysadmin, you may be asked to extend a file system not by adding another virtual disk, but by extending the vmdk itself.

Such an operation is relatively risk-free, but it has to be done with the VM powered down, so the first thing to do is shut down the VM.

Log in to any ESX in your cluster, go to the VM datastore path and run:

# vmkfstools -X nnG vmname.vmdk

In this command, nn represents the new size of the disk in GB, and vmname is obviously the name of your VM.

Now, you have to mount a Red Hat CD/DVD on your VM, power it on, and boot from the CD/DVD.

Run the installer with the command: linux rescue

Get to the rescue shell by skipping the network and disk mounting options, and we’re ready to go.

Assuming you will be extending the root disk, you will have to do the following with fdisk:

# fdisk /dev/sda

remove the sda2 partition (d, 2)
create a new, larger sda2 partition starting at the same sector as before (n, p, 2)
change the partition type to Linux LVM (t, 2, 8e)
write the changes and exit (w)

Now, let’s resize the physical volume to use all the new space:

# lvm pvresize /dev/sda2

If you want to check if everything is ok, run:

# lvm pvdisplay

Now, activate the Logical Volumes:

# lvm vgchange -a y

If you want to check the configured Logical Volumes, run:

# lvm lvdisplay

It is mandatory to run a file system check on the Logical Volume to be extended:

# e2fsck -f /dev/VolGroup00/LogVol00

Now, we can extend the Logical Volume:

# lvm lvextend -L+10G /dev/VolGroup00/LogVol00

And then  the file system:

# resize2fs /dev/VolGroup00/LogVol00

Finished, reboot the system and that’s it!

Releasing the lock from a hung VM on VMware

After an HA event or a network/storage outage with VMware ESX servers (3.5 and 4.1 alike), you may face a situation in which the VM is down and cannot be powered on, even if you try to migrate it or to deregister/register it again on the Virtual Center.
On closer inspection, you might notice that the vswp file is still in the VM folder (a sign the VM might still be active somewhere), yet you cannot delete the file because it is “locked”. Actually, one of the ESX hosts in the cluster owns the lock, even though the VM is not running.
So, how do you figure out which one, with several hosts in the cluster? Let’s find out.
First of all, we have to know which ESX is preventing the power-on.
Log in to any ESX and run:

tail -f /var/log/vmkernel &

Now go to the locked VM datastore, and try to run:

cat vmname.vmdk

You should get some errors referring to the lock, but, more importantly, some vmkernel logs, such as:

Apr 5 09:45:26 Hostname vmkernel: 17:00:38:46.977 cpu1:1033)Lock [type 10c00001 offset 13058048 v 20, hb offset 3499520
Apr 5 09:45:26 Hostname vmkernel: gen 532, mode 1, owner 45feb537-9c52009b-e812-00137266e200 mtime 1174669462]
Apr 5 09:45:26 Hostname vmkernel: 17:00:38:46.977 cpu1:1033)Addr <4, 136, 2>, gen 19, links 1, type reg, flags 0x0, uid 0, gid 0, mode 600
Apr 5 09:45:26 Hostname vmkernel: 17:00:38:46.977 cpu1:1033)len 297795584, nb 142 tbz 0, zla 1, bs 2097152
Apr 5 09:45:26 Hostname vmkernel: 17:00:38:46.977 cpu1:1033)FS3: 132:

Now, look at the owner field in the second line of that output: its last segment is nothing but the MAC address of the ESX holding the lock!
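If you prefer not to eyeball it, the owner field can be turned into a colon-separated MAC with a few lines of Python (using the log line above):

```python
import re

# vmkernel lock message from the example above
line = ("Apr 5 09:45:26 Hostname vmkernel: gen 532, mode 1, "
        "owner 45feb537-9c52009b-e812-00137266e200 mtime 1174669462]")

# The last dash-separated field of the owner UUID is the host's MAC
uuid = re.search(r"owner ([0-9a-f-]+) mtime", line).group(1)
mac_hex = uuid.split("-")[-1]
mac = ":".join(mac_hex[i:i + 2] for i in range(0, 12, 2))
print(mac)  # → 00:13:72:66:e2:00
```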
Now, to the boring part: you have to log in to every ESX in the cluster and check whether any network card matches this MAC:

/sbin/ifconfig -a |grep -i 00:13:72:66:e2:00

As soon as it is identified, the host should be placed in maintenance mode from the Virtual Center (DRS should do all the work of migrating the virtual machines) and then rebooted. This will release the lock and allow the VM to finally be powered on.