vconfig off-by-one?

VLAN tagging (IEEE 802.1Q) is a Layer 2 protocol enabling multiple VLANs on a single Ethernet connection. It works by tagging Ethernet frames so that each port accepts only the frames carrying its configured tags.

The VLAN ID is a 12-bit value (from 0 to 4095), and network devices usually use the value 4095 (0xFFF) for the management network.
But the following command gave me an error:
# vconfig add eth1 4095

And I discovered that the last valid VLAN ID is actually 4094. Here's a brief discussion on the subject.
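A quick way to check the boundary yourself (assuming the 8021q module is loaded and eth1 exists):

# vconfig add eth1 4094   # works: 4094 is the highest usable ID
# vconfig add eth1 4095   # fails: 0xFFF is reserved by IEEE 802.1Q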

How to manage duplicated UUID on RHEV 3.1

Do you remember the duplicated UUID issue on RHEV 3.0? With v3.0 you could register multiple hosts with the same UUID, but you couldn't perform a live migration between them.

On RHEV 3.1 you cannot register more than one host with a specific UUID.

This means that you need to change it on each host before registering it to your RHEV Manager.

How to solve this situation? You can simply generate a new UUID by executing this command:

# uuidgen > /etc/vdsm/vdsm.id

then restart the vdsmd service…

# service vdsmd restart

…or reboot your hypervisor.

As you can see it is no longer necessary to edit the file libvirtd.conf on each hypervisor.
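To double-check that every host now reports a unique ID, just compare the contents of that file across your hypervisors:

# cat /etc/vdsm/vdsm.id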


bashing ipython

IPython is a wonderful tool that avoids continuously switching from bash to other utilities like bc, perl & co.

One of its limitations is I/O redirection, at which bash is really good. As the IPython py-shell profile uses /bin/sh by default (through os.system), I implemented a quick and dirty os.system replacement that diverts it to bash.

I added the following lines to .ipython/profile_pysh/ipython_config.py:

import os

def system2(cmd):
    """A quick and dirty os.system() replacement that runs cmd through bash."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process with bash running the command.
        try:
            os.execvp('/bin/bash', ['/bin/bash', '-c', cmd])
        finally:
            os._exit(127)  # reached only if exec fails
    # Parent: wait for the child and return its exit status, like os.system().
    c_pid, status = os.waitpid(pid, 0)
    return status

print("Overriding os.system with bash")
os.system = system2

# or, if the subprocess module is available, you can simply use:
import subprocess
os.system = lambda cmd: subprocess.call(cmd, shell=True, executable='/bin/bash')
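With either override in place, bash-only constructs work straight from the py-shell prompt. A quick, hypothetical check using process substitution, which plain /bin/sh rejects:

In [1]: diff <(echo foo) <(echo bar)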

While the py-shell profile runs every call that doesn't resolve in Python globals() as a shell command, other profiles use the bang syntax, e.g. !ls -l

To make the bang syntax use bash too, I just added the executable argument to the subprocess.Popen call in
/usr/lib/python2.7/dist-packages/IPython/utils/_process_common.py:

p = subprocess.Popen(cmd, shell=True,
                     executable='/bin/bash',
                     stdin=subprocess.PIPE,

Enterprisely sending your hostname to DHCP

The DHCP protocol supports sending the client host name to the server via the DHCP Host Name option (option 12). See RFC 2132 and the original DHCP Vendor Extensions in RFC 1497.

Further info on the option structure is in the bootp documentation.

You can configure it as follows.

On Debian:
uncomment the 'send host-name' directive in /etc/dhcp3/dhclient.conf
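Once uncommented, the directive looks like this (the hostname is just an example):

send host-name "your-nice-hostname";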

On RHEL:
add the following line to the configuration of the interface whose IP you want the hostname associated with:

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DHCP_HOSTNAME=your-nice-hostname
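Then bring the interface down and up again so the client requests a new lease with the hostname option set:

# ifdown eth0; ifup eth0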

Importing a virtual machine from VMware format into RHEV

First of all, this can't be done directly: you first have to convert the virtual machine for KVM, and then migrate it into RHEV.

From VMware to KVM

What you need:

  • yum install fuse-devel
  • yum install qemu-img
  • install VMware-vix-disklib-5.0.0-614080.x86_64.tar.gz, to be downloaded from the VMware site

Before migrating the machine, it's a good idea to uninstall the VMware guest additions.

The VMware image is composed of multiple .vmdk files. They must be converted with the vmware-vdiskmanager tool:

export LD_LIBRARY_PATH=/usr/lib/vmware-vix-disklib/lib64:$LD_LIBRARY_PATH
vmware-vdiskmanager -r Ubuntu.vmdk -t 0 Ubuntu-2.vmdk

It's preferable to use the tool from the same VMware installation the virtual machine was running on, otherwise it may produce errors.

Now the image has to be converted to the kvm/qemu format:

qemu-img convert Ubuntu-2.vmdk -O qcow2 Ubuntu-2.qemu

With the command:

vmware2libvirt -f Ubuntu.vmx > Ubuntu-2.xml

an XML file defining the virtual machine is generated. If necessary, fix the paths inside the file.

We're done for now. The migration to KVM isn't actually complete, since the virtual machine still has to be imported (virsh -c qemu:///system define Ubuntu-2.xml), but that step isn't needed for the migration to RHEV.

From KVM to RHEV

For the migration from KVM to RHEV, use the same procedure as for any other virtual machine. The Ubuntu-2.xml file is the one generated earlier. Edit the XML file and set the disk to point at the converted qemu image.
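The disk element should end up pointing at the converted image; a minimal sketch of what it might look like (paths and device names here are examples, not taken from the original setup):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/path/to/Ubuntu-2.qemu'/>
  <target dev='vda' bus='virtio'/>
</disk>

Then run the conversion: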

virt-v2v -f /etc/virt-v2v.conf -o rhev -i libvirtxml -os whale.babel.it:/mnt2/EXPORT_Domain Ubuntu-2.xml

For more information, see the Red_Hat_Enterprise_Virtualization-3.0-V2V_Guide manual.

News for juniors, Stuff that matters

I've been asked where a junior sysadmin should start when working with Red Hat stuff. The first thing that comes to my mind is this nice book.

Red Hat System Administration Primer: it explains what the sysadmin job is, principles of security and social engineering, how an operating system works, and how to monitor processes, I/O and memory. I would skip the printer part ;)

An experienced admin knows where and how to find information. An apprentice should learn that quickly too.

While the man pages are a great source, I would recommend a glimpse at the Red Hat Deployment Guide, mainly to be used as a reference. If you don't know how to use Yum and RPM, configure Network Interfaces, start Services and Daemons at boot, configure Web Servers and use Monitoring Tools, that's the right place to go.

This book is divided into several independent chapters. Unless you need to prepare for a certification, you can skip the web interface way ;)

No more shortcut troubles with bash!

As you already know, bash heavily uses the readline library to provide keyboard shortcuts, and this library is configured via /etc/inputrc.

While playing on a remote machine, I found that the CTRL+{left,right} arrow combinations for moving between words weren't working, and were instead printing "5C" and "5D". It was necessary to tell bash to associate those combos with the left/right word movements!

The first thing to do is to print the raw characters associated with CTRL+left. To do this type:
CTRL+V and then CTRL+left
You’ll see
^[[1;5D

OK: that's the sequence to map, where "^[" corresponds to the escape key, aka \e.

Now let’s tell bash to bind that sequence with backward-word, and everything will work as expected!
# bind '"\e[1;5D": backward-word';
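And the companion binding for CTRL+right:
# bind '"\e[1;5C": forward-word'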

Now we can edit /etc/inputrc, where I added two more associations to the existing ones:
# These two were already there
"\e[5C": forward-word
"\e[5D": backward-word
# And now two more
"\e[1;5C": forward-word
"\e[1;5D": backward-word

You can even list all configured bindings with
# bind -p

RHEV: extending a storage domain

RHEV (Red Hat Enterprise Virtualization 3.0) stores data in special areas named Storage Domains (aka SD). An iSCSI SD has LUNs attached, each of which is usually backed by a Logical Volume (aka LV).

With current releases, you can't grow an SD by modifying the underlying LVs. This is probably due to the complex structure of the Storage Pool Manager, which coordinates storage access from the various hypervisors.

Let's grow our "my_iscsi" SD. If we're lucky we can simply:
1- edit it in the rhev-manager interface;
2- add another LUN/Target to it.

If the target LUN exists but rhev-manager can't discover it, we may need to rediscover it by pretending to create a new storage domain. So:
a- start creating a new iSCSI storage domain (don't save it!);
b- run a discovery for the missing target/LUN so that rhev-manager becomes aware of it;
c- close the create menu WITHOUT saving.

Now let's re-select "my_iscsi" and edit it: we should see the new target! Once it's added to the SD, click "save" and we're done.
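If you want to verify from a hypervisor that the target actually exposes the LUN before touching the GUI, a manual iSCSI discovery helps (the portal address here is an example):

# iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260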

RHEV guest migration: how to troubleshoot generic errors

As you know, Red Hat Enterprise Virtualization (RHEV) supports so-called "Live Migration" between two hypervisors belonging to the same cluster. This feature allows you to move a virtual machine from one hypervisor to another without powering it down.

After the first installation, it can happen that the RHEV Manager interface shows a generic alert: "Migration failed due to Error: Fatal error during migration".

The first thing to do is to log in to your management server and analyze /var/log/rhevm/rhevm.log to get a little more information:

2012-11-13 14:46:27,030 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-11-thread-50) Error code migrateErr and error message VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during migration mMessage Fatal error during migration

Damn, this isn't useful. It's better to dig into more specific logs on the source hypervisor.

I suggest tailing /var/log/libvirt.log and launching the migration. You will see log lines like these:

2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigrateToURI:5594 : Using peer2peer migration
2012-11-13 14:42:17.914+0000: 24249: debug : virDomainMigratePeer2Peer:4969 : dom=0x7f97fc0e34b0, (VM: name=vm_name, uuid=2548e682-7a9a-4520-bbb4-3e79923d03a9), xmlin=(null), flags=3, dname=(null), dconnuri=qemu+tls://destination_hypervisor/system, uri=(null), bandwidth=0
2012-11-13 14:42:17.915+0000: 24249: debug : virDomainMigratePeer2Peer:4997 : Using migration protocol 3
2012-11-13 14:42:17.964+0000: 24249: error : virNetClientProgramDispatchError:174 : internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

The same error appears in vdsm.log:

Thread-358::ERROR::2012-11-13 14:45:09,869::vm::177::vm.Vm::(_recover) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009
Thread-358::ERROR::2012-11-13 14:45:10,833::vm::232::vm.Vm::(run) vmId=`2548e682-7a9a-4520-bbb4-3e79923d03a9`::Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 224, in run
File "/usr/share/vdsm/libvirtvm.py", line 423, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 445, in f
File "/usr/share/vdsm/libvirtconnection.py", line 63, in wrapper
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1039, in migrateToURI
libvirtError: internal error Attempt to migrate guest to the same host 00020003-0004-0005-0006-000700080009

What does it mean? I found that libvirt identifies hosts by UUID (universally unique identifier) instead of IP address or hostname, but… what is a UUID? Take a look at the Wikipedia page.

For some strange reason my two hypervisors had the same UUID. How to solve this situation? You can simply generate a new UUID by executing this command:

# uuidgen

Then you have to edit the libvirtd configuration file…

# vi /etc/libvirt/libvirtd.conf

…uncomment the host_uuid option and substitute the example UUID…

#host_uuid = "00000000-0000-0000-0000-000000000000"
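After editing, the line should look like this, with your freshly generated value (the UUID below is just an example):

host_uuid = "f47ac10b-58cc-4372-a567-0e02b2c3d479"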

…and lastly restart the vdsmd service

# service vdsmd restart

If you’re using oVirt instead of RHEV you have to restart the libvirtd service.

If the service restart doesn’t solve the issue try to reboot the hypervisor.

Don't forget that hypervisors need to reach each other over several ports. Check your iptables configuration for these rules:

# vdsm
-A INPUT -p tcp --dport 54321 -j ACCEPT
# libvirt tls
-A INPUT -p tcp --dport 16514 -j ACCEPT
# SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# guest consoles
-A INPUT -p tcp -m multiport --dports 5634:6166 -j ACCEPT
# migration
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
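These rules typically live in /etc/sysconfig/iptables; after adding them, reload the firewall so they take effect (RHEL 6 syntax):

# service iptables restart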

If you forget them, you will see this error in libvirt.log:

Thread-3798::ERROR::2012-09-20 09:42:56,977::vm::240::vm.Vm::(run) vmId=`2bf3e6eb-49e4-42c7-8188-fc2aeeae2e86`::Failed to migrate
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 223, in run
File "/usr/share/vdsm/libvirtvm.py", line 451, in _startUnderlyingMigration
File "/usr/share/vdsm/libvirtvm.py", line 491, in f
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 82, in wrapper
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1034, in migrateToURI2
libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ipaddress/system

Now enjoy your RHEV Live Migration! ;-)


Improve your iSCSI performance using jumbo frames

What are jumbo frames? Jumbo frames are ethernet frames with more than 1500 bytes of payload. Conventionally, jumbo frames can carry up to 9000 bytes of payload, but variations exist and some care must be taken when using the term.

Why use jumbo frames? By enabling them on your network equipment and on your NICs, you will experience a performance boost, especially with the iSCSI protocol, which works over a standard Ethernet network.

Implementing jumbo frames requires following a few rules:

  • same MTU for all servers present in the network
  • network cards must support an MTU over 1500
  • switches must support an MTU over 1500
  • switches must support an MTU over 1500 on a VLAN

How to enable jumbo frames on RHEL/CentOS? Enabling jumbo frames on Linux is really simple: edit the NIC configuration and append MTU=9000.

Don’t forget to enable them also on your switch/router!

# vi /etc/sysconfig/network-scripts/ifcfg-<your_nic> # ex. eth0

MTU=9000

Then restart the single interface…

ifdown eth0; ifup eth0

…or the entire network service

service network restart

Afterwards, verify that the new configuration has been applied correctly:

# ifconfig eth0

If the configuration is ok you will see a response like this:

eth0      Link encap:Ethernet  HWaddr xx:xx:xx:xx:xx:xx
inet addr:x.x.x.x  Bcast:x.x.x.x  Mask:x.x.x.x
UP BROADCAST RUNNING MULTICAST  MTU:9000 Metric:1
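To verify jumbo frames end to end, send a large, non-fragmentable ping towards your iSCSI target (the address is an example; 8972 bytes = 9000 minus 28 bytes of IP and ICMP headers):

# ping -M do -s 8972 192.168.1.10

If any device along the path doesn't support the larger MTU, the ping will fail with a "message too long" error.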

If you're using bonding, you need to enable jumbo frames only in the bond device configuration:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

MTU=9000