Consistent naming with iSCSI + udev

If you need to export the same iSCSI disks to different machines, you may want to name them consistently across the various hosts.
When you set up iSCSI drives, Linux names them in the usual, unreliable /dev/sdX way (to get started with iSCSI, see Michelangelo’s post).

Standard SCSI disks have a serial identifier: you can get it by querying the /sys/block filesystem:

 #udevadm info --path=/sys/block/sdb --query=all | grep ID_SERIAL=

or by checking them via their device name:

 #udevadm info --name=/dev/mapper/oraqdisk1 --query=all | grep DM_UUID=

And use it to identify the disk with a udev rule:

#cat > /etc/udev/rules.d/99-disk.rule <<EOF
KERNEL=="sd*", ENV{ID_SERIAL}=="the_disc_unique_id", NAME="disk0", OWNER="storage", GROUP="storage", MODE="0660"
EOF

To make sure the iSCSI disks you export via tgtd have a unique serial ID, you have to set it in /etc/tgt/targets.conf:

    <backing-store /dev/mapper/VolGroup-lv_storage_0>
        scsi_id babel_testplant_s0
    </backing-store>
    <backing-store /dev/mapper/VolGroup-lv_storage_1>
        scsi_id babel_testplant_s1
    </backing-store>
    vendor_id iSCSI ACME Inc.

At this point you just have to create the following rule file:

#cat > /etc/udev/rules.d/99-iscsi.rule <<EOF
KERNEL=="sd*", ENV{ID_SERIAL}=="babel_testplant_s0", NAME="iscsi0", OWNER="storage", GROUP="storage", MODE="0660"
KERNEL=="sd*", ENV{ID_SERIAL}=="babel_testplant_s1", NAME="iscsi1", OWNER="storage", GROUP="storage", MODE="0660"
EOF

And reload udev

#udevadm trigger

You can even do bulk naming using globbing (aka “*”) and environment variables (aka %E).

# Discover multipath devices named "oraq*" and create the corresponding devices in /dev (e.g. /dev/oraqdisk1)
#    for DM_NAME check /etc/multipath.conf
SUBSYSTEM=="block", ENV{DM_NAME}=="oraq*", NAME="%E{DM_NAME}", OWNER="grid", GROUP="oinstall", MODE="0600"

One git to bring them all, and in a repo bind them.

I had to reunite various git repos under a new one. To do this without losing the history, I found a Stack Overflow hint that worked for me.

# add and fetch the old repo data
git remote add old_repo <url-of-old-repo>
git fetch old_repo

# merge into my master without committing…
git merge -s ours --no-commit old_repo/master

# …we need to relocate everything into the foo/ subdirectory first
git read-tree --prefix=foo/ -u old_repo/master

# now… commit!
git commit -m "Imported foo as a subtree."

At this point git log still presents the imported files in their old place, so git log foo/ doesn’t work. We can instead
diff the subdirectory between various releases simply with

git diff rev1 rev2 -- foo/
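Here is a minimal end-to-end sketch of the flow above, using two throwaway local repos (all the paths, file names and commit identities are made up for illustration). One thing that has changed since then: recent git versions refuse to merge unrelated histories unless you pass --allow-unrelated-histories.

```shell
set -e
cd /tmp && rm -rf old_repo main_repo
# a throwaway "old" repo with one commit
git init -q -b master old_repo
cd old_repo
echo "old stuff" > bar.txt
git add bar.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "old history"
# the repo that will absorb it
cd /tmp
git init -q -b master main_repo
cd main_repo
echo "new stuff" > main.txt
git add main.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "main history"
# add, fetch, merge without committing, relocate under foo/, then commit
git remote add old_repo /tmp/old_repo
git fetch -q old_repo
# recent git refuses unrelated histories without this flag
git merge -s ours --no-commit --allow-unrelated-histories old_repo/master
git read-tree --prefix=foo/ -u old_repo/master
git -c user.name=demo -c user.email=demo@example.com commit -qm "Imported old_repo as a subtree."
ls foo/   # bar.txt now lives under foo/
```

Afterwards both histories are reachable from HEAD, and git diff limited to foo/ works as described.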

Statistics 101 with ipython

Today I needed to process some log files looking for relations between the data. After parsing the log file I got the following table.

data = [
    ('timestamp', 'elapsed', 'error', 'retry', 'size', 'hosts'),
    (1379603191, 0.12, 2, 1, 123, 2313),
    (1379603192, 12.43, 0, 1, 3223, 2303),
    (1379609000, 0.43, 0, 1, 3223, 2303)
]

I easily converted this into a columned dict:

table = dict(zip(data[0], zip(*data[1:])))
# table now looks like:
# 'timestamp': (1379603191, 1379603192, 1379609000),
# 'elapsed': (0.12, 12.43, 0.43),
# ...

In this way it was very easy to run basic stats:

from statistics import mean, stdev  # stdlib replacement for the old scipy stats helpers
print([[k, max(v), min(v), mean(v), stdev(v)] for k, v in table.items()])

Check data distributions:

from matplotlib import pyplot
pyplot.hist(table['elapsed'])

And even look for basic correlation between columns:

from itertools import combinations
from scipy.stats import pearsonr
for f1, f2 in combinations(table.keys(), 2):
    r, p_value = pearsonr(table[f1], table[f2])
    print("the correlation between %s and %s is: %s" % (f1, f2, r))
    print("the probability of a given distribution (see manual) is: %s" % p_value)

Or draw scatter plots

from matplotlib import pyplot
for f1, f2 in combinations(table.keys(), 2):
    pyplot.scatter(table[f1], table[f2], label="%s_%s" % (f1, f2))
    r, p = pearsonr(table[f1], table[f2])
    pyplot.title("Correlation: %s v %s, %s" % (f1, f2, r))
    pyplot.legend(loc='upper left') # show the legend in a suitable corner
    pyplot.savefig(f1 + "_" + f2 + ".png")
    pyplot.clf() # start each iteration from a clean figure

autotools and latest gcc version

Running an old autotools build, I just found it no longer worked. The failing command was:
# gcc -lpthread -o foo foo.o

Manually moving -lpthread to the end worked, but as I was using autotools I couldn’t just change the order; and, above all, gcc is right to expect the library *after* the object file!

The solution – suggested on #gcc by ngladitz – was to use the right autotools variable for adding libraries.

After RTFM I just changed:
- foo_LDFLAGS = -lpthread # autotools puts flags before the objects…
+ foo_LDADD = -lpthread   # …and libraries after them

Cleaning up and rebuilding fixed everything!

News for juniors, Stuff that matters

I’ve been asked where a junior sysadmin should start for working with Red Hat stuff. The first thing that comes to my mind is this nice book.

Red Hat System Administration Primer: it explains what the sysadmin job is, the principles of security and social engineering, how an operating system works, and how to monitor processes, I/O and memory. I would skip the printer part ;)

An experienced admin knows where and how to find information. An apprentice should quickly learn that too.

While the man pages are a great source, I would recommend a glimpse at the Red Hat Deployment Guide – mainly to be used as a reference. If you don’t know how to use Yum and RPM, configure Network Interfaces, start Services and Daemons at boot, configure Web Servers and use Monitoring Tools, that’s the right place to go.

This book is divided into several independent chapters. Unless you need to prepare for a certification, you can skip the web-interface way ;).

No more shortcut troubles with bash!

As you already know, bash heavily relies on the readline library for its keyboard shortcuts, and this library is configured via /etc/inputrc.

While playing on a remote machine, I found that the CTRL+{left,right} arrow combinations to move between words weren’t working – they printed “5C” and “5D” instead. I had to tell bash to associate those combos with the left/right word movements!

The first thing to do is to print the raw characters associated with CTRL+left. To do this type:
CTRL+V and then CTRL+left
You’ll see

 ^[[1;5D

Ok: that’s the sequence to map – where “^[” corresponds to the escape character, aka \e.
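If you want to inspect such a sequence without the keyboard trick, you can pipe the raw bytes through cat -v, which renders the escape character as ^[ (here I emit the sequence with printf just for illustration):

```shell
# cat -v prints the ESC byte as ^[, so CTRL+left shows up as ^[[1;5D
printf '\033[1;5D' | cat -v   # prints ^[[1;5D
```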

Now let’s tell bash to bind that sequence with backward-word, and everything will work as expected!
# bind '"\e[1;5D": backward-word';

Now we can edit /etc/inputrc, where I added two more associations to the existing ones:
# These two were already there
"\e[5C": forward-word
"\e[5D": backward-word
# And now two more
"\e[1;5C": forward-word
"\e[1;5D": backward-word

You can even list all configured bindings with
#bind -p

How to rsync with ftp

As you know, it is not possible to rsync directly against an FTP site. Here is a simple workaround to perform a remote backup anyway.

First install rsync and curlftpfs…

sudo apt-get install rsync curlftpfs

…then create the mountpoint and allow access to your user…

sudo mkdir /mnt/yourftp

sudo chown youruser /mnt/yourftp

…enable the fuse mount for non-root users…

sudo vi /etc/fuse.conf

uncomment the parameter user_allow_other on the last line

…and then mount your ftp site

curlftpfs -o user=username:password,allow_other ftp://your.ftp.host /mnt/yourftp

Now you can navigate your ftp like a classic filesystem folder!

Finally enjoy your rsync (example):

rsync -r -t -v --progress --delete /home/folder_to_backup/ /mnt/yourftp

Remember that if you need to sync folders with different names you have to add the trailing slash on the source dir!

P.S. Don’t forget to unmount your ftp site after the rsync:

sudo umount /mnt/yourftp

Gnuplot for postfix

After wasting time with spreadsheets, I decided to go back to my university days and use Gnuplot.

The result was this nice script, which monitors postfix queues and uses Little’s Law to print the queues’ throughput. Running it with -g immediately plots the graph on your X display.
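As a refresher, Little’s Law says L = λW: the average number of items in a queue equals the arrival rate times the average time each item spends in the system. So from a queue length and a residence time you can derive the throughput λ = L / W; a quick sketch with awk (the numbers are made up):

```shell
# L = 120 queued mails, W = 4 seconds average time in queue
# => throughput lambda = L / W = 30 mail/s
awk 'BEGIN { L = 120; W = 4; printf "%.1f mail/s\n", L / W }'
```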

Now for our gnuplot fast track. Run #gnuplot and type:

# we don't have to retype the file name: gnuplot supports variables ;)
f = "/tmp/data.csv"

# format graph, show grid and titles
set xlabel "time"
set key outside bottom
set ylabel "items"
set grid
set title "Postfix Queue Stats"

# Use a logarithmic scale on y axis, so that
# we can plot graphs based on different
# units (eg. mail/sec and kB/s)
set autoscale
set log y

# Our csv has a human-readable timestamp for
# x axis, so we tell gnuplot how to read the data:
# parse a time using a given format
set xdata time
set timefmt "%d-%m-%Y %H:%M:%S"

# ...and set the x label output to be
# for our graph
set format x "%H:%M"

# the boxes in the plot should be filled
# with a 0.5 transparency factor
set style fill solid 0.5 border

# now let's plot our csv (we assigned it to the "f" variable, remember?)
# first the 3rd column (using 1:3), then the 4th and 5th
# We started at 3 because 1:1 and 1:2 are used for the x axis.
# Gnuplot columns are space-separated, and our date format contains a space,
# so it spans two columns (1:1 is the date, 1:2 the hour)
# For each column, we set a title
# and a style (eg boxes aka histograms)
# with a color 1 (lc 1)
plot f using 1:3 title "tot" with boxes lc 1, \
f using 1:4 title "active" with boxes, \
f using 1:5 title "kB" with lines

tar: strip that path

There are a couple of GNU tar options that can save you some time:

  • -C lets you change directory before adding|extracting files from an archive
  • --strip-components=X lets you extract an archive stripping the leading path components up to the Xth level


# tar cf /tmp/opt_postfix.tar -C /opt etc/postfix # backup /opt/etc/postfix without prepending /opt to the archive

# tar tvf  /tmp/opt_postfix.tar # check the archive

# tar xf  /tmp/opt_postfix.tar -C /opt2/etc2/ --strip-components=1 # unpack in another directory
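A quick runnable check of both options, replayed on throwaway paths under /tmp (the directory layout mirrors the /opt example above):

```shell
set -e
rm -rf /tmp/tar_demo
mkdir -p /tmp/tar_demo/opt/etc/postfix /tmp/tar_demo/opt2
echo "cfg" > /tmp/tar_demo/opt/etc/postfix/main.cf
# -C: archive etc/postfix without the leading /tmp/tar_demo/opt
tar cf /tmp/tar_demo/opt_postfix.tar -C /tmp/tar_demo/opt etc/postfix
tar tf /tmp/tar_demo/opt_postfix.tar   # lists etc/postfix/...
# --strip-components=1: drop the leading "etc/" while extracting
tar xf /tmp/tar_demo/opt_postfix.tar -C /tmp/tar_demo/opt2 --strip-components=1
ls /tmp/tar_demo/opt2                  # -> postfix
```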

Add a plus to sqlplus

Oracle’s default CLI, sqlplus, doesn’t support readline, so you can’t navigate through the command history or search it.
Using one of these two readline wrappers you can really speed up your job:
* rlfe
* rlwrap (apt-get install rlwrap)

You’ll enable – among other things:

  • the four arrows
  • ^R
  • ESC .
  • CTRL+U and CTRL+Y

After installing it, run
# rlwrap sqlplus