r(h)e(v)set password

RHEV uses FreeIPA services to authenticate users to its portal. If a password has expired, the administrator can only set a one-time password via the IPA portal, e.g. https://freeipa.rhevlab.local/ipa/ui/, and the user must change it *before* logging in.

The user – e.g. u01@rhevlab.local – only needs to run:
# kinit u01

The kinit executable reads the Kerberos configuration from /etc/krb5.conf and sets the default realm (e.g. RHEVLAB.LOCAL), thus associating the user with the LDAP entry:

[libdefaults]
default_realm = RHEVLAB.LOCAL
dns_lookup_realm = false
dns_lookup_kdc = false
rdns = false
ticket_lifetime = 24h
forwardable = yes

Alternatively, you can specify the UPPERCASE realm explicitly in your kinit request:
# kinit u01@RHEVLAB.LOCAL

All your *lowercase* requests, on the other hand, are doomed to fail:
#kinit u01@rhevlab.local
kinit: Cannot find KDC for requested realm while getting initial credentials
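
For reference, when the one-time password has expired, a successful kinit run walks the user through the password change on the spot. This is only a sketch of the usual MIT Kerberos prompts; the exact wording may differ between versions:

# kinit u01
Password for u01@RHEVLAB.LOCAL:
Password expired.  You must change it now.
Enter new password:
Enter it again: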

How to manage duplicated UUID on RHEV 3.1

Do you remember the duplicated UUID issue on RHEV 3.0? With v3.0 you could register multiple hosts with the same UUID, but you couldn't perform a live migration.

On RHEV 3.1 you cannot register more than one host with a specific UUID.

This means that you need to change it on each host before registering it to your RHEV Manager.
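
To spot duplicates before registration you can compare the UUIDs reported by each host (if /etc/vdsm/vdsm.id does not exist yet on a freshly installed host, just compare the hardware UUIDs from dmidecode):

# dmidecode -s system-uuid
# cat /etc/vdsm/vdsm.id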

How to solve this situation? You can simply generate a new UUID by executing this command:

# uuidgen > /etc/vdsm/vdsm.id

then restart the vdsmd service…

# service vdsmd restart

…or reboot your hypervisor.

As you can see, it is no longer necessary to edit the libvirtd.conf file on each hypervisor.


Importing a virtual machine from VMware format into RHEV

First of all, this cannot be done directly: the virtual machine must first be converted for KVM and then migrated into RHEV.

From VMware to KVM

What you need:

  • yum install fuse-devel
  • yum install qemu-img
  • install VMware-vix-disklib-5.0.0-614080.x86_64.tar.gz, downloadable from the VMware website (see the sketch below)
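
Installing the disklib package is roughly a matter of unpacking the tarball and running the bundled installer. A minimal sketch, assuming the archive ships the usual vmware-install.pl script (names and paths may differ between releases):

# tar xzf VMware-vix-disklib-5.0.0-614080.x86_64.tar.gz
# cd vmware-vix-disklib-distrib
# ./vmware-install.pl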

Before migrating the machine, it is a good idea to uninstall the VMware guest additions (VMware Tools).

The VMware image is made up of several .vmdk files. They have to be converted with the vmware-vdiskmanager tool:

export LD_LIBRARY_PATH=/usr/lib/vmware-vix-disklib/lib64:$LD_LIBRARY_PATH
vmware-vdiskmanager -r Ubuntu.vmdk -t 0 Ubuntu-2.vmdk

It is preferable to use the tool from the same VMware installation the virtual machine was running on; otherwise it may produce errors.

Now the image has to be converted to the kvm/qemu format:

qemu-img convert Ubuntu-2.vmdk -O qcow2 Ubuntu-2.qemu

With the command:

vmware2libvirt -f Ubuntu.vmx > Ubuntu-2.xml

an XML file defining the virtual machine is generated. If necessary, adjust the paths inside the file.

That's it for now. The migration to KVM is not actually complete (the virtual machine still has to be defined with virsh -c qemu:///system define Ubuntu-2.xml), but that step is not necessary for the migration to RHEV.

From KVM to RHEV

For the migration from KVM to RHEV the same procedure used for the other virtual machines applies. The Ubuntu-2.xml file is the one generated previously. Edit the XML file and point the disk at the qemu image, as in the sketch below.
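
A minimal sketch of the <disk> element to adjust, assuming the converted image was copied to /var/lib/libvirt/images/Ubuntu-2.qemu (the path and device names here are just examples):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/Ubuntu-2.qemu'/>
  <target dev='vda' bus='virtio'/>
</disk>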

virt-v2v -f /etc/virt-v2v.conf -o rhev -i libvirtxml -os whale.babel.it:/mnt2/EXPORT_Domain Ubuntu-2.xml

For more information, see the Red_Hat_Enterprise_Virtualization-3.0-V2V_Guide manual.

RHEV 3.0 GUI doesn’t show non-empty LUNs

RHEV stores virtual machines on a specific type of storage domain, the so-called “data domain”.

While creating a new iSCSI data domain you need to perform these steps:

  1. Set the host used as iSCSI initiator and the IP address/port of the iSCSI target.
  2. Optionally add the credentials to perform CHAP authentication.
  3. Discover and log in to a particular IQN.
  4. Expand the LUN list shown by the GUI and select the LUN(s) that you want to add to the data domain.

Note that the GUI will show you only the uninitialized LUNs, discarding pre-formatted or too-small LUNs. This is done intentionally, but without any explicit warning, in order to avoid the risk of overwriting existing data.
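
Before deciding that a hidden LUN is really safe to reuse, it can help to check from the host what is actually on it (the multipath device name below is just an example):

# blkid /dev/mapper/lun_name
# pvs | grep lun_name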

If you want to force RHEV to use a LUN with existing data, you can wipe at least the first 512 bytes of the LUN to convince RHEV that it is empty, by executing this simple command:

# dd if=/dev/zero of=/dev/mapper/lun_name bs=1M count=10

The value of 10 MB is arbitrary; feel free to choose another one.

Now perform a new discovery+login cycle and the GUI will show you the LUNs!

And now a question for Red Hat developers: hey guys, why don’t you show a simple alert???

P.S. On #rhev we found out that this behaviour will change with the 3.1 release: administrators will also see the unselectable LUNs ;-)

RHEV: extending a storage domain

RHEV – Red Hat Enterprise Virtualization 3.0 – stores data on special areas named Storage Domains (aka SDs). An iSCSI SD has one or more LUNs attached; on the target side each LUN is usually backed by a Logical Volume (aka LV).

With current releases, you can't grow an SD by modifying the underlying LVs. This is probably due to the complex structure of the Storage Pool Manager, which coordinates storage access from the various hypervisors.

Let’s grow our “my_iscsi”. If we’re lucky we can do:
1- edit it in the rhev-manager interface;
2- add another LUN/Target to it.

If the target LUN exists but rhev-manager can't discover it, we may need to rediscover it while pretending to create a new storage domain. So:
a- try to create a new iSCSI storage domain (don't save it!);
b- run a discovery for the missing target/LUN so that rhev-manager becomes aware of it;
c- close the create menu WITHOUT saving.

Now let's re-select “my_iscsi” and edit it: we should see the new target! Once it has been added to the SD, click on “save” and we're done.
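
If the new target/LUN still does not show up, it may be worth confirming from a hypervisor that the portal actually exposes it. A quick check with iscsiadm (the portal address is just an example):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
# iscsiadm -m session -P 3 | grep -i lun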

RHEV guest migration: how to troubleshoot generic errors

As you know, Red Hat Enterprise Virtualization (RHEV) supports the so-called “Live Migration” between two hypervisors belonging to the same cluster. This feature allows you to migrate a virtual machine from one hypervisor to another without having to power down the VMs running on that host.

After the first installation it can happen that the RHEV Manager interface shows a generic alert: “Migration failed due to Error: Fatal error during migration”.

The first thing to do is to log in to your management server and analyze /var/log/rhevm/rhevm.log in order to gather a little more information:

2012-11-13 14:46:27,030 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-11-thread-50) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration mMessage Fatal error during migration

Damn, this isn't very useful. It's better to dig deeper and analyze more specific logs on the source hypervisor.

I suggest tailing /var/log/libvirt.log and then launching the migration. You will see log entries like these:

2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigrateToURI:5594 : Using peer2peer migration
2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigratePeer2Peer:4969 : dom=0x7f97fc0e34b0, (VM: name=vm_name, uuid=2548e682-7a9a-4520-bbb4-3e79923d03a9), xmlin=(null), flags=3, dname=(null), dconnuri=qemu+tls://destination_hypervisor/system, uri=(null), bandwidth=0
2012-11-13 14:42:17.915+0000: 24249: debug : virDomainMigratePeer2Peer:4997 : Using migration protocol 3
2012-11-13 14:42:17.964+0000: 24249: error : virNetClientProgramDispatchError:174 : internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

The same error appears in /var/log/vdsm/vdsm.log:

Thread-358::ERROR::2012-11-13 14:45:09,869::vm::177::vm.Vm::(_recover) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009
Thread-358::ERROR::2012-11-13 14:45:10,833::vm::232::vm.Vm::(run) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 224, in run
File "/usr/share/vdsm/libvirtvm.py", line 423, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 445, in f
File "/usr/share/vdsm/libvirtconnection.py", line 63, in wrapper
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1039, in migrateToURI
libvirtError: internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

What does it mean? I've found that libvirt identifies hosts by UUID (universally unique identifier) instead of by IP address or hostname, but... what is a UUID? Take a look at the Wikipedia page.
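
You can quickly verify whether the two hypervisors really report the same UUID: run these commands on both hosts and compare the output (the second one reads the UUID libvirt itself advertises):

# dmidecode -s system-uuid
# virsh capabilities | grep -i uuid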

For some strange reason my two hypervisors had the same UUID. How to solve this situation? You can simply generate a new UUID by executing this command:

# uuidgen

Then you have to edit the libvirtd configuration file…

# vi /etc/libvirt/libvirtd.conf

…edit the commented option host_uuid (uncomment and substitute the example uuid)…

#host_uuid = "00000000-0000-0000-0000-000000000000"

…and lastly restart the vdsmd service

# service vdsmd restart

If you’re using oVirt instead of RHEV you have to restart the libvirtd service.

If the service restart doesn’t solve the issue try to reboot the hypervisor.
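
If you prefer a one-shot change, here is a sketch of a sed one-liner that writes a fresh UUID into libvirtd.conf (back up the file first; it assumes the host_uuid line is still the commented-out example):

# sed -i "s/^#\?host_uuid.*/host_uuid = \"$(uuidgen)\"/" /etc/libvirt/libvirtd.conf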

Don't forget that hypervisors need to reach each other in several ways. Check your iptables configuration for these rules:

# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
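
To double-check that the rules are actually loaded on both hypervisors, take a quick look at the live firewall:

# iptables -L -n | grep -E '54321|16514|49152'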

If you forget them you will see an error like this in vdsm.log:

Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 223, in run
File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 491, in f
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ipaddress/system
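
A quick way to confirm that the source hypervisor can actually reach the destination on the vdsm and libvirt TLS ports (a rough check, assuming nc is installed; replace the hostname with your own):

# nc -zv destination_hypervisor 54321
# nc -zv destination_hypervisor 16514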

Now enjoy your RHEV Live Migration! ;-)