Provisioning OpenStack on VMware infrastructure

As I couldn't find extensive docs about provisioning Red Hat OpenStack on a VMware infrastructure, I browsed the Python code.

Python is a very expressive and clear language, and you can get to the point in a moment!

I was then able to create the following instack.json to power-manage a set of VMware machines.

Despite the many ways to pass ssh_* variables to Ironic, the right way to do it via instack.json is to:

– use `pm_virt_type` instead of `ssh_virt_type`;
– put the ssh_key_content in the `pm_password` parameter, as shown in the docs;
– set capabilities like profile and boot_option directly.

The key should be JSON-serialized on one line, replacing line breaks with '\n'.

{
    "nodes":[
        {
            "mac":[
                "00:0c:29:00:00:01"
            ],
            "capabilities": "profile:control,boot_option:local"
            "cpu":"8",
            "memory":"16384",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ssh",
            "pm_virt_type": "vmware",
            "pm_addr":"172.18.0.1",
            "pm_user":"vmadmin",
            "pm_password":"-----BEGIN RSA PRIVATE KEY-----\nMY\nRSA\nKEY\n-----END RSA PRIVATE KEY-----"
        },
        {..other nodes..}
    ]
}
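
To produce the one-line, \n-escaped pm_password value, a quick trick is letting Python do the JSON escaping of the key file (the key path below is just an example; the output already includes the surrounding double quotes, ready to paste):

python -c 'import json,sys; print(json.dumps(open(sys.argv[1]).read()))' ~/.ssh/undercloud_rsa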

NetworkManager, please stay away from my docker0

To set a list of unmanaged devices you can just do the following:

cat >> /etc/NetworkManager/NetworkManager.conf <<EOF

[keyfile]
unmanaged-devices=interface-name:vboxnet0;interface-name:virbr0;interface-name:docker0

EOF

and

sudo nmcli connection reload
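
You can then check that the interfaces are really left alone (their STATE should show as unmanaged):

nmcli dev status | grep -E 'vboxnet0|virbr0|docker0'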

Strangely, I had to put this in NetworkManager.conf: using
/etc/NetworkManager/conf.d/20-unmanaged-bridges.conf didn't work.

Linux@Dell XPS/Inspiron

My new Fedora 21 is running on the nice Dell Inspiron with a touchscreen.

KDE works smoothly with both the touchpad and the touch display; I just had to calibrate the touch display with

xinput_calibrator

following this nice tutorial.

Today I tweaked the screen brightness. KDE uses steps of 10%, which makes adjustments feel coarse.

Following this post https://askubuntu.com/a/588016/401397 I just

sudo yum -y install xbacklight

and remapped the brightness up/down keys to

xbacklight -inc 10
xbacklight -dec 5

In this way I can fine-tune in steps as small as 5%.
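
One desktop-agnostic way to bind those two commands to the brightness keys is xbindkeys; this is just a sketch, assuming the keys emit the standard XF86MonBrightness* keysyms:

sudo yum -y install xbindkeys
cat > ~/.xbindkeysrc <<EOF
"xbacklight -inc 10"
    XF86MonBrightnessUp
"xbacklight -dec 5"
    XF86MonBrightnessDown
EOF
xbindkeys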

Get the keycode for the “win” button with:

# xinput --list
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen Pen                      id=11   [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                          id=12   [slave  pointer  (2)]
⎜   ↳ DLL0674:00 06CB:75DB UNKNOWN              id=13   [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    ↳ Power Button                              id=6    [slave  keyboard (3)]
    ↳ Video Bus                                 id=7    [slave  keyboard (3)]
    ↳ Power Button                              id=8    [slave  keyboard (3)]
    ↳ Sleep Button                              id=9    [slave  keyboard (3)]
    ↳ Integrated_Webcam_HD                      id=10   [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard              id=14   [slave  keyboard (3)] <-------this device!
    ↳ Dell WMI hotkeys                          id=16   [slave  keyboard (3)]

Then listen for key events with

#xinput --test 14
key press   134 
key release 134 
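
Once you know the keycode, you can rebind it with xmodmap; Menu here is just an example keysym, pick whatever you need:

xmodmap -e 'keycode 134 = Menu'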

Bridge management with iproute2

You can do simple management tasks on Linux virtual bridges using iproute2.

While you can't set STP options or list learned MACs (brctl showmacs), you can create/delete bridges and add/remove interfaces.

The following pairs of commands are equivalent.

* add bridge

#brctl addbr ipbr0
#ip l a ipbr0 type bridge

* add interface to bridge

#brctl addif ipbr0 eth0
#ip l s eth0 master ipbr0

* remove interface from bridge

#brctl delif ipbr0 eth0
#ip l s eth0 nomaster

* remove bridge

#brctl delbr ipbr0
#ip l d ipbr0 
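
If the abbreviated forms look cryptic, they expand to the full subcommands:

#ip link add ipbr0 type bridge
#ip link set eth0 master ipbr0
#ip link set eth0 nomaster
#ip link delete ipbr0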

VIP loves privacy…with arptables!

If you want to hide your cluster VIP for a while, you can play with

#ip link set eth3 arp off

But if your VIP is on a virtual interface or a secondary IP, ip link can't help you.

You can just

#sudo yum -y install arptables_jf
#arptables  -A IN -d $YOURVIP -j DROP

The syntax mimics iptables, so

#arptables-save ; # list rules
#arptables -F ; # flush rules
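
When the VIP has to answer ARP again, delete the rule with the same spec (and re-enable ARP on the interface if you turned it off):

#arptables -D IN -d $YOURVIP -j DROP
#ip link set eth3 arp on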

Clustered Volumes? That’s Logical

With LVM you can create clustered volume groups, allowing different nodes to mount different LVs on the same VG.

The standard workflow for creating the partition, PV and VG is:

parted /dev/mapper/datadisk -- mklabel gpt mkpart 1p 1 -1
partprobe /dev/mapper/datadisk
pvcreate /dev/mapper/datadiskp1
vgcreate vg_xml /dev/mapper/datadiskp1
vgchange -c y vg_xml

If you get this error creating the LV

# lvcreate -n lv_xml vg_xml -l +100%FREE 
  Error locking on node bar-coll-mta-02: Volume group for uuid not found: gFyARW80mUikvaZafzYz773pLBWqc8etOgVrimSit4OmC98c1cIT0qfZfY3tZRxQ
  Failed to activate new LV.

That's because one of the cluster nodes can't see the newly created volume group.
The following, run on the node reporting the error, will probably solve it:

#partprobe /dev/mapper/datadisk
#vgscan
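
If you have more than a couple of nodes, you can push the rescan to all of them in one go and then retry the lvcreate; the node names here are just placeholders, and passwordless ssh as root is assumed:

#for n in node-01 node-02; do ssh $n 'partprobe /dev/mapper/datadisk; vgscan'; done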

Oracle grid…unattended ;)

You can set up a reproducible installation of Oracle Grid in this way:
1- run ./runInstaller in graphical mode;
2- take care to set up ssh public-key authentication for both the oracle and grid users;
3- save the configured setup in a response file;
4- if needed, edit the response file to set the passwords.

Now run:

 grid$ ./runInstaller  -silent -ignorePrereq -responseFile /home/grid/11.2.0.3-v3-grid.rsp 

This will set up the whole thing without your interaction. Using -ignorePrereq forces the installation even if some requirement is not met, so double-check whether the missing prerequisites are actually needed ;)

At the end, you’ll be told to

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/11.2.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: 
[db-01]
Execute /u01/app/11.2.0/grid/root.sh on the following nodes: 
[db-01, db-02]

As install user, execute the following script to complete the configuration.
        1. /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands

        Note:
        1. This script must be run on the same system from where installer was run. 
        2. This script needs a small password properties file for configuration 
            assistants that require passwords (refer to install guide documentation).
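
The "small password properties file" is just a list of key=value pairs passed via RESPONSE_FILE; the exact keys depend on your release (check the install guide), but a minimal sketch for the ASM assistant could look like:

grid$ cat > /home/grid/cfgrsp.properties <<EOF
oracle.assistants.asm|S_ASMPASSWORD=MyAsmPassword
oracle.assistants.asm|S_ASMMONITORPASSWORD=MyAsmMonPassword
EOF
grid$ chmod 600 /home/grid/cfgrsp.properties
grid$ /u01/app/11.2.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/grid/cfgrsp.properties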

To check whether your grid infrastructure correctly set up Automatic Storage Management, get the ASM SID (e.g. +ASM1, +ASM2, ...):

#pgrep -fl pmon.*ASM
24490 asm_pmon_+ASM1
                          ^^^^

Then list the diskgroup(s) with

#export ORACLE_SID=+ASM1
#export ORACLE_HOME=/u01/app/11.2.0/grid #!!!BEWARE!!! REMOVE THE TRAILING SLASH!!!
#asmcmd  lsdsk # all the devices
Path
/dev/oraqdisk1
/dev/oraqdisk2
/dev/oraqdisk3
#asmcmd ls  # my diskgroup(s)
OCRVOTE/

#asmcmd lsdg # with infos
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576      3072     2146             1024             561              0             Y  OCRVOTE/

Consistent naming with iscsi + udev

If you need to export the same iSCSI disks to different machines, you may want to name them consistently across the various hosts.
When you set up iSCSI drives, Linux names them in the usual, unreliable /dev/sdX way (to get started with iSCSI see Michelangelo's post http://vaunaspada.babel.it/blog/?p=596).

Standard SCSI disks have a serial identifier: you can get it by querying udev with the /sys/block path:

 #udevadm info --path=/sys/block/sdb --query=all | grep ID_SERIAL=

or by querying them via their device name:

 #udevadm info --name=/dev/mapper/oraqdisk1 --query=all | grep DM_UUID=

And use it to identify the disk with a udev rule:

#cat > /etc/udev/rules.d/99-disk.rules <<EOF
KERNEL=="sd*", ENV{ID_SERIAL}=="the_disc_unique_id", NAME="disk0", OWNER="storage", GROUP="storage", MODE="0660"
EOF

To make sure the iSCSI disks you export via tgtd have a unique serial id, you have to set it in /etc/tgt/targets.conf:

<target iqn.2013-10.1.it.babel:be1>
    <backing-store /dev/mapper/VolGroup-lv_storage_0>
            scsi_id babel_testplant_s0
    </backing-store>
    <backing-store /dev/mapper/VolGroup-lv_storage_1>
        scsi_id babel_testplant_s1
    </backing-store>
    vendor_id iSCSI ACME Inc.
</target>

At this point you just have to create the following:

#cat > /etc/udev/rules.d/99-iscsi.rules <<EOF
KERNEL=="sd*", ENV{ID_SERIAL}=="babel_testplant_s0", NAME="iscsi0", OWNER="storage", GROUP="storage", MODE="0660"
KERNEL=="sd*", ENV{ID_SERIAL}=="babel_testplant_s1", NAME="iscsi1", OWNER="storage", GROUP="storage", MODE="0660"
EOF

And trigger udev to apply the new rules

#udevadm trigger
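
To check which rules would match a given disk (and catch typos before rebooting), udevadm test is handy; /dev/sdb here is just an example:

#udevadm test $(udevadm info --query=path --name=/dev/sdb) 2>&1 | grep 99-iscsi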

You can even do bulk naming using globbing (i.e. "*") and environment variables (the %E substitution).

#
# Discover multipath devices named "oraq*" and create the corresponding device in /dev (e.g. /dev/oraqdisk1)
#    for DM_NAME check /etc/multipath.conf
SUBSYSTEM=="block", ENV{DM_NAME}=="oraq*", NAME="%E{DM_NAME}", OWNER="grid", GROUP="oinstall", MODE="0600"

ip route cheatsheet – link, address, tunnel

iproute2 is the new Linux IP and routing management suite.

ip l # list devices
ip l l eth0 # list only one
ip l s eth0 [down|up] # set link status
ip l s eth0 multicast [on|off] # set multicast status

ip a l # list addresses
ip -4 -o a # list just ipv4 addresses
ip a a 192.168.0.1/24 dev eth0 # set an ip
ip a d 192.168.0.1/32 dev eth0 # remove an ip
ip a f dev eth0 # remove all ips from eth0

ip r # list routes
ip r l m 172.23.0.4 # show the route for the given ip
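
Adding and removing routes follows the same pattern (the network and gateway here are just examples):

ip r a 10.99.0.0/16 via 192.168.0.254 # add a static route
ip r d 10.99.0.0/16 # delete it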

ip ne # list arp table (ipv4 neighbour table)

# create an ipip tunnel between two hosts
host1: ip tun add tunnel0 mode ipip remote 192.168.0.2
host1: ip a a 10.0.0.1/24 dev tunnel0
host1: ip l s tunnel0 up
host2: ip tun add tunnel0 mode ipip remote 192.168.0.1
host2: ip a a 10.0.0.2/24 dev tunnel0
host2: ip l s tunnel0 up
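
A quick check that the tunnel is up is pinging the other end over the 10.0.0.0/24 addresses:

host1: ping -c1 10.0.0.2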