$ mv vaunaspada labs

Were you searching for vaunaspada.babel.it and ended up on labs.par-tec.it? You are not the victim of DNS spoofing: we just gave our blog a makeover, matching it with our new brand and look & feel.

What stays the same is the technical nature of the content and the enthusiasm of our team in sharing their experiences with their peers.

Enjoy ;-)

How to manage duplicated UUIDs on RHEV 3.1

Do you remember the duplicated UUID issue on RHEV 3.0? With v3.0 you could register multiple hosts with the same UUID, but you couldn’t perform a live migration between them.

On RHEV 3.1 you can no longer register more than one host with the same UUID.

This means that you need to give each host a unique UUID before registering it to your RHEV Manager.

How do you fix it? You can simply generate a new UUID by executing this command:

# uuidgen > /etc/vdsm/vdsm.id

then restart the vdsmd service…

# service vdsmd restart

…or reboot your hypervisor.
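If you want to double-check the result, you can compare the identifier vdsm will use with the hardware UUID reported by the BIOS (assuming dmidecode is installed, as it usually is on RHEL):

# cat /etc/vdsm/vdsm.id
# dmidecode -s system-uuid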

As you can see, it is no longer necessary to edit libvirtd.conf on each hypervisor.


RHEV 3.0 GUI doesn’t show non-empty LUNs

RHEV stores virtual machines on a specific type of storage domain, the so-called “data domain”.

When creating a new iSCSI data domain you need to perform these steps:

  1. Select the host to be used as iSCSI initiator and enter the IP address/port of the iSCSI target.
  2. Optionally, add the credentials for CHAP authentication.
  3. Discover and log in to a specific IQN.
  4. Expand the LUN list shown by the GUI and select the LUN(s) you want to add to the data domain.

Note that the GUI will only show you the uninitialized LUNs, hiding those that are pre-formatted or too small. This is done intentionally, although not explicitly, to avoid the risk of overwriting existing data.

If you want to force RHEV to use a LUN that already contains data, you can wipe at least the first 512 bytes of the LUN to convince RHEV that it is empty, by executing this simple command:

# dd if=/dev/zero of=/dev/mapper/lun_name bs=1M count=10

The count of 10 MB is arbitrary; anything that covers at least the first 512 bytes will do.
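If you want to see what is actually on the LUN before (or after) wiping it, blkid is a quick way to check whether a filesystem or LVM signature is still present (lun_name is the same multipath device used above; no output means no known signature was found):

# blkid /dev/mapper/lun_name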

Now perform a new discovery+login cycle and the GUI will show you the LUNs!

And now a question for Red Hat developers: hey guys, why don’t you show a simple alert???

P.S. On #rhev we found out that this behaviour will change with the 3.1 release: administrators will also see the unselectable LUNs ;-)

RHEV guest migration: how to troubleshoot generic errors

As you know, Red Hat Enterprise Virtualization (RHEV) supports the so-called “Live Migration” between two hypervisors belonging to the same cluster. This feature allows you to migrate a virtual machine from one hypervisor to another without having to power down the VMs running on that host.

After the first installation it may happen that the RHEV Manager interface shows a generic alert: “Migration failed due to Error: Fatal error during migration”.

The first thing to do is to log in to your management server and analyze your /etc/rhevm/rhevm.log to get a little more information:

2012-11-13 14:46:27,030 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-11-thread-50) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration mMessage Fatal error during migration

Damn, this isn’t very useful. It’s better to dig deeper into the more specific logs on the source hypervisor.

I suggest tailing /var/log/libvirt.log and then launching the migration. You will see entries like these:

2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigrateToURI:5594 : Using peer2peer migration
2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigratePeer2Peer:4969 : dom=0x7f97fc0e34b0, (VM: name=vm_name, uuid=2548e682-7a9a-4520-bbb4-3e79923d03a9), xmlin=(null), flags=3, dname=(null), dconnuri=qemu+tls://destination_hypervisor/system, uri=(null), bandwidth=0
2012-11-13 14:42:17.915+0000: 24249: debug : virDomainMigratePeer2Peer:4997 : Using migration protocol 3
2012-11-13 14:42:17.964+0000: 24249: error : virNetClientProgramDispatchError:174 : internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

The same error appears in vdsm.log:

Thread-358::ERROR::2012-11-13 14:45:09,869::vm::177::vm.Vm::(_recover) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009
Thread-358::ERROR::2012-11-13 14:45:10,833::vm::232::vm.Vm::(run) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 224, in run
File "/usr/share/vdsm/libvirtvm.py", line 423, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 445, in f
File "/usr/share/vdsm/libvirtconnection.py", line 63, in wrapper
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1039, in migrateToURI
libvirtError: internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

What does it mean? I found out that libvirt identifies hosts by UUID (universally unique identifier) rather than by IP address or hostname, but… what is a UUID? Take a look at the Wikipedia page.
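To see which UUID each hypervisor is actually advertising, you can query the BIOS and the libvirt configuration directly and compare the output on the two hosts (assuming dmidecode is available, as it usually is on RHEL):

# dmidecode -s system-uuid
# grep host_uuid /etc/libvirt/libvirtd.conf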

For some strange reason my two hypervisors had the same UUID. How do you fix it? You can simply generate a new UUID by executing this command:

# uuidgen

Then you have to edit the libvirtd configuration file…

# vi /etc/libvirt/libvirtd.conf

…find the commented host_uuid option, uncomment it and replace the example UUID with the one you just generated…

#host_uuid = "00000000-0000-0000-0000-000000000000"

…and lastly restart the vdsmd service

# service vdsmd restart

If you’re using oVirt instead of RHEV you have to restart the libvirtd service.

If the service restart doesn’t solve the issue try to reboot the hypervisor.

Don’t forget that the hypervisors need to be able to reach each other on several ports. Check your iptables configuration for these rules:

# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
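If any of these rules are missing, you can add them on the fly and make them persistent, for example (assuming the stock iptables service on RHEL; repeat for each missing rule):

# iptables -I INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
# service iptables save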

If you forget them, you will see this error in vdsm.log:

Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 223, in run
File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 491, in f
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ipaddress/system
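When you hit this error, a quick sanity check from the source hypervisor is to verify that the libvirt TLS port on the destination is actually reachable (assuming the telnet client is installed):

# telnet destination_hypervisor 16514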

Now enjoy your RHEV Live Migration! ;-)


Improve your iSCSI performance using jumbo frames

What are jumbo frames? Jumbo frames are Ethernet frames with more than 1500 bytes of payload. Conventionally, jumbo frames can carry up to 9000 bytes of payload, but variations exist and some care must be taken when using the term.

Why use jumbo frames? Enabling them on your network equipment and on your NICs gives you a performance boost, especially with the iSCSI protocol, which runs over a standard Ethernet network.

Implementing jumbo frames requires a few rules to be respected:

  • The same MTU for all the servers present in the network
  • The network cards must support an MTU over 1500
  • The switch must support an MTU over 1500
  • The switch must support an MTU over 1500 on a VLAN

How do you enable jumbo frames on RHEL/CentOS? Enabling jumbo frames on Linux is really simple: edit the NIC configuration and append MTU=9000.

Don’t forget to enable them also on your switch/router!

# vi /etc/sysconfig/network-scripts/ifcfg-<your_nic>   # e.g. eth0

MTU=9000

Then restart just that interface…

# ifdown eth0; ifup eth0

…or the entire network service

# service network restart
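Alternatively, if you just want to test jumbo frames before making the change permanent, you can change the MTU at runtime without touching the config file (the change will not survive a reboot):

# ip link set dev eth0 mtu 9000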

Afterwards, verify that the new configuration has been applied correctly:

# ifconfig eth0

If the configuration is ok you will see a response like this:

eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
inet addr:x.x.x.x  Bcast:x.x.x.x  Mask:x.x.x.x
UP BROADCAST RUNNING MULTICAST  MTU:9000 Metric:1

If you’re using bonding, you only need to enable jumbo frames in the bond device configuration:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

MTU=9000
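To verify that jumbo frames actually work end to end (switch included), you can send a non-fragmentable ping just under the 9000-byte MTU: 8972 bytes of payload plus the 28 bytes of IP/ICMP headers fill the frame exactly (replace the address with your iSCSI target):

# ping -M do -s 8972 -c 4 iscsi_target_ipaddress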

Do you know iSCSI?

Internet SCSI (iSCSI) is a network protocol that allows you to use the SCSI protocol over TCP/IP networks. It’s a good alternative to Fibre Channel-based SANs. You can easily manage, mount and format iSCSI volumes under Linux.

Definitions: the iSCSI target is the server that hosts and exports volumes to the clients; the iSCSI initiator is the client that uses the exported volumes.

On the server you need to install this package, start the related service and ensure that it starts on boot:

# yum -y install scsi-target-utils

# service tgtd start

# chkconfig tgtd on

On the client side you need to install this package, start the related service and ensure that it starts on boot:

# yum -y install iscsi-initiator-utils

# service iscsid start

# chkconfig iscsid on

Now configure your LUNs on the target:

# vim /etc/tgt/targets.conf

This is a basic target configuration:

<target iqn.yyyy-mm.reverse-hostname:label>

# use backing-store to export a specific volume…

backing-store /dev/vol_group_name/logical_volume_name

# …or use direct-store to export the entire device

# direct-store /dev/sdb

</target>
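If the target is reachable from more than your storage network, tgt also lets you restrict who can log in, for example by initiator IP address and/or CHAP credentials (the values below are placeholders):

<target iqn.yyyy-mm.reverse-hostname:label>
backing-store /dev/vol_group_name/logical_volume_name
# only this initiator may log in
initiator-address 192.168.1.10
# optional CHAP credentials
incominguser chap_username chap_password
</target>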

Don’t forget to restart the tgtd service after a configuration update:

# service tgtd restart

Now it’s time to check whether your LUNs are being exported correctly. The next command will show two LUNs for each target: the first one (LUN 0) is the controller, the second one (LUN 1) is your volume. Run this command on the target:

# tgtadm --lld iscsi --op show --mode target

Remember to open the iSCSI port on iptables in order to accept connections on port 3260 for both the TCP and UDP protocols!
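On RHEL with the stock iptables service, that means something like this:

# iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
# iptables -I INPUT -p udp --dport 3260 -j ACCEPT
# service iptables save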

Ok, your target is now fully configured. You can log on to your client and start using the remote storage. On the client, run these commands to show the exported volumes and log in to them:

# iscsiadm -m discovery -t sendtargets -p target_ipaddress

# iscsiadm -m node -T target_name_iqn -p target_ipaddress --login

Now restart the iscsid service, use fdisk to find the new device under /dev and create partitions on it.
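For example, assuming the new LUN shows up as /dev/sdb (a hypothetical name; check the output of fdisk -l), a minimal partition/format/mount cycle looks like this:

# fdisk -l                        # find the new disk, e.g. /dev/sdb
# fdisk /dev/sdb                  # create a partition, e.g. /dev/sdb1
# mkfs.ext4 /dev/sdb1             # format it
# mkdir /mnt/iscsi_volume
# mount /dev/sdb1 /mnt/iscsi_volume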

If you need to detach from the target, you have to log out from it:

# iscsiadm -m node -T target_name_iqn -p target_ipaddress --logout

How to rsync with ftp

As you know, it is not possible to rsync directly against an ftp site. Here you can find a simple workaround to perform a remote backup.

First install rsync and curlftpfs…

sudo apt-get install rsync curlftpfs

…then create the mountpoint and allow access to your user…

sudo mkdir /mnt/yourftp

sudo chown youruser /mnt/yourftp

…enable the fuse mount for non-root users…

sudo vi /etc/fuse.conf

uncomment the user_allow_other parameter on the last line

…and then mount your ftp site

curlftpfs -o user=username:password,allow_other ftp.yoursite.com /mnt/yourftp

Now you can navigate your ftp like a classic filesystem folder!

Finally enjoy your rsync (example):

rsync -r -t -v --progress --delete /home/folder_to_backup/ /mnt/yourftp

Remember that if you need to sync folders with different names, you have to keep the trailing slash on the source dir: it makes rsync copy the contents of the folder rather than the folder itself!

P.S. Don’t forget to unmount your ftp site after the rsync:

sudo umount /mnt/yourftp
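To tie everything together, here is a minimal sketch of a backup script built from the commands above (hostname, credentials and paths are placeholders; a real script should not keep the password in clear text):

#!/bin/bash
# mount the ftp site, sync the local folder to it, then unmount
curlftpfs -o user=username:password,allow_other ftp.yoursite.com /mnt/yourftp || exit 1
rsync -r -t -v --progress --delete /home/folder_to_backup/ /mnt/yourftp
umount /mnt/yourftp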