Fixing the Oracle agent for RHCS

I just found an issue in the Oracle agent for RHCS. If you’re curious, check it out on GitHub.

Essentially, existing processes were searched for via ps | grep | awk instead of ps | grep.

While grep returns nonzero when nothing matches, awk always returns zero – and a pipeline’s exit status is that of its last command – so the agent always waited out the full timeout before stopping the resource.
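
You can see the difference from the shell. Assuming no matching process is running (ora_pmon here is just a placeholder for whatever the agent actually greps for):

$ ps ax | grep '[o]ra_pmon' ; echo $?
1
$ ps ax | grep '[o]ra_pmon' | awk '{print $1}' ; echo $?
0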

After fixing the script in /usr/share/cluster/, reloading the configuration caused the following error:

# cman_tool -r version
/usr/share/cluster/cluster.rng:2027: element define: Relax-NG parser error : Some defines for ORACLEDB needs the combine attribute
Relax-NG schema /usr/share/cluster/cluster.rng failed to compile
cman_tool: Not reloading, configuration is not valid

After reading https://access.redhat.com/site/solutions/549963 I realized that the backup copy of the oracle agent was disturbing RHCS: the backup was being picked up as a second agent, producing a duplicate set of ORACLEDB defines in the schema.

The following chmod seemed to do the trick.

# chmod -x /usr/share/cluster/oracledb.sh-2014-05-26
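
With the backup no longer executable it should not be picked up anymore, and re-running the reload is a quick way to confirm the schema now compiles:

# cman_tool -r version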

Enterprisely sending your hostname to DHCP

The DHCP protocol supports sending the client host name to the server via the Host Name option (option 12). See RFC 2132 and the original BOOTP Vendor Information Extensions in RFC 1497.

Further info on the option structure is in the BOOTP documentation.

You can configure it as follows.

On Debian:
uncomment the ‘send host-name’ directive in /etc/dhcp3/dhclient.conf
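
Once uncommented, the directive looks something like this (the string is whatever name you want to send):

send host-name "your-nice-hostname";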

On RHEL:
add the following line to the configuration of the interface whose IP you want the hostname associated with:

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DHCP_HOSTNAME=your-nice-hostname
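
Then bounce the interface so that dhclient sends a fresh request carrying the hostname, e.g.:

# ifdown eth0 && ifup eth0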

Importing a virtual machine from the VMware format into RHEV

First of all, this can’t be done directly: you have to convert the virtual machine for KVM first and then migrate it into RHEV.

From VMware to KVM

What you need:

  • yum install fuse-devel
  • yum install qemu-img
  • install VMware-vix-disklib-5.0.0-614080.x86_64.tar.gz, to be downloaded from the VMware site

Before migrating the machine, it is a good idea to uninstall the VMware guest additions.

The VMware image is made up of several .vmdk files. They have to be converted with the vmware-vdiskmanager tool (-r converts the source disk, -t 0 writes a single growable disk):

export LD_LIBRARY_PATH=/usr/lib/vmware-vix-disklib/lib64:$LD_LIBRARY_PATH
vmware-vdiskmanager -r Ubuntu.vmdk -t 0 Ubuntu-2.vmdk

It is preferable to use the tool from the same VMware installation the virtual machine was running on, otherwise it may throw errors.

Now the image has to be converted to the kvm/qemu format:

qemu-img convert Ubuntu-2.vmdk -O qcow2 Ubuntu-2.qemu
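
A quick sanity check on the result can be done with qemu-img itself:

qemu-img info Ubuntu-2.qemu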

With the command:

vmware2libvirt -f Ubuntu.vmx > Ubuntu-2.xml

an XML file defining the virtual machine is generated. If necessary, fix the paths inside the file.
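
The part to check is the disk stanza; in libvirt domain XML it looks roughly like this (the file path is just an example – point it at wherever you placed the converted image):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/Ubuntu-2.qemu'/>
  <target dev='hda' bus='ide'/>
</disk>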

We’re done for now. The migration to KVM is not actually complete – the virtual machine still has to be imported (virsh -c qemu:///system define Ubuntu-2.xml) – but that step is not necessary for the migration to RHEV.

From KVM to RHEV

For the migration from KVM to RHEV the same procedure as for the other virtual machines applies. The Ubuntu-2.xml file is the one generated earlier. Edit the XML file and set the disk to the qemu image.

virt-v2v -f /etc/virt-v2v.conf -o rhev -i libvirtxml -os whale.babel.it:/mnt2/EXPORT_Domain Ubuntu-2.xml

For more information, consult the Red_Hat_Enterprise_Virtualization-3.0-V2V_Guide manual.

News for juniors, Stuff that matters

I’ve been asked where a junior sysadmin should start when working with Red Hat stuff. The first thing that comes to mind is this nice book.

Red Hat System Administration Primer: explains what the sysadmin job is, the principles of security and social engineering, how an operating system works, and how to monitor processes, I/O and memory. I would skip the printer part ;)

An experienced admin knows where and how to find information. An apprentice should learn that quickly too.

While the man pages are a great source, I would recommend a glance at the Red Hat Deployment Guide – mainly to be used as a reference. If you don’t know how to use Yum and RPM, configure Network Interfaces, start Services and Daemons at boot, configure Web Servers and use Monitoring Tools, that’s the right place to go.

This book is divided into several independent chapters. Unless you need to prepare for a certification, you can skip the web interface way ;).

RHEV: extending a storage domain

RHEV – Red Hat Enterprise Virtualization 3.0 – stores data in special areas named Storage Domains (aka SD). An iSCSI SD has a LUN attached – which is usually a Logical Volume (aka LV).

With current releases, you can’t grow an SD by modifying the underlying LVs. This is probably due to the complex structure of the Storage Pool Manager, which coordinates storage access from the various hypervisors.

Let’s grow our “my_iscsi”. If we’re lucky it is just a matter of:
1- editing it in the rhev-manager interface;
2- adding another LUN/Target to it.

If the target LUN exists but rhev-manager can’t discover it, we may need to rediscover it by pretending to create a new storage domain. So:
a- start creating a new iscsi storage domain (don’t save it!);
b- run a discovery for the missing target/LUN so that rhev-manager becomes aware of it;
c- close the create menu WITHOUT saving.

Now let’s re-select “my_iscsi”, hit edit, and we should see the new target! Once it’s added to the SD, click “save” and we’re done.

Finding real MAC addresses for bonded NICs on RedHat

I spent some time trying to find out the real MAC addresses of all the NICs on RedHat AS3 and RedHat 5.3, since the HWADDR entry had been deleted from all the original ifcfg-ethX files.

The ifconfig tool displays the real MAC only for the ACTIVE NIC of each bond – the inactive slaves show the same address:

# ifconfig -a|grep HW
bond0     Link encap:Ethernet  HWaddr 00:50:8B:FB:5E:DA
bond1     Link encap:Ethernet  HWaddr 00:02:A5:4E:1F:E2
eth0      Link encap:Ethernet  HWaddr 00:50:8B:FB:5E:DA
eth1      Link encap:Ethernet  HWaddr 00:50:8B:FB:5E:DA
eth2      Link encap:Ethernet  HWaddr 00:02:A5:4E:1F:E2
eth3      Link encap:Ethernet  HWaddr 00:02:A5:4E:1F:E2

ethtool, dmidecode, etc. don’t report the real MACs either.

So the solution is in these files:

/proc/net/bonding/bond0
/proc/net/bonding/bond1

Indeed:

$ cat /proc/net/bonding/bond0

bonding.c:v2.4.1 (September 15, 2003)
Bonding Mode: fault-tolerance (active-backup)

Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Multicast Mode: active slave only

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:8b:fb:5e:db

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:50:8b:fb:5e:da
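
To pull out just the permanent addresses of every slave in one go, a grep over the same files is enough:

# grep -E 'Slave Interface|Permanent HW addr' /proc/net/bonding/bond*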