Oracle’s mysql.connector for python

Oracle released a pure-Python MySQL connector with connection pooling support.

Creating a connection pool is really easy. You can try the following snippets in IPython:

import mysql.connector

auth = {
    "database": "test",
    "user": "user",
    "password": "secret"
}

# the first call instantiates the pool
mypool = mysql.connector.connect(
    pool_name="mypool",
    pool_size=3,
    **auth)

All the subsequent calls to connect(pool_name="mypool") will be managed by the pool.

# this won't create another connection,
# but lends one from the pool
conn = mysql.connector.connect(
    pool_name="mypool",
    pool_size=3,
    **auth)

# now get a cursor and play
c = conn.cursor()
c.execute("show databases")
print(c.fetchall())  # consume the result before closing the cursor
c.close()

Closing the connection will just release the connection to the pool: we’re not closing the socket!

conn.close()
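
Since the pool lends at most pool_size connections, asking for more fails until one is released. Here is a small sketch of that behaviour, assuming mysql.connector signals exhaustion with errors.PoolError (as recent versions do):

import mysql.connector
from mysql.connector import errors

# keep borrowing until the pool runs dry
conns = []
try:
    while True:
        conns.append(mysql.connector.connect(pool_name="mypool", pool_size=3, **auth))
except errors.PoolError as e:
    print("pool exhausted after %d connections: %s" % (len(conns), e))

# close() returns the connections to the pool, so we can borrow again
for c in conns:
    c.close()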

Generating svg from xml with python etree

Python rocks at managing XML files. Having to convert some vss shapes to odt, I just ran:

# vss2xhtml file.vss > file.xhtml

Then I parsed the xhtml, which contains multiple images, with Python:

from xml.etree import ElementTree as ET

# standard XML declaration to prepend to every extracted image
xml_header = '<?xml version="1.0" encoding="UTF-8"?>\n'

tree = ET.parse("file.xhtml")
# get all the svg images (note the leading "." required by ElementTree paths)
images = tree.findall('.//{http://www.w3.org/2000/svg}svg')

# enumerate avoids i=0, i+=1
for i, x in enumerate(images):
    destination_file = "image_%s.svg" % i
    with open(destination_file, 'w') as fd:
        fd.write(xml_header)
        fd.write(ET.tostring(x))
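
As a quick sanity check you can re-parse the generated files; a trivial sketch (ElementTree raises ParseError on broken output):

# re-parse every generated svg to make sure it is well-formed
for i, _ in enumerate(images):
    ET.parse("image_%s.svg" % i)
print("all images are well-formed")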

routes made easy

The legacy static-route configuration on RH-like systems was ugly and error-prone. You had to write files like the following:

# route-eth0
ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.0.253
ADDRESS1=172.16.1.0
NETMASK1=255.255.255.0
GATEWAY1=192.168.0.254

You had to preserve the enumeration and work out the netmasks yourself. This was probably due to the usage of the route command, whose synopsis is

route add -net $ADDRESS0 netmask $NETMASK0 gw $GATEWAY0

The “new” iproute2 suite allows a new route-file format, compatible with the output of ip route.

#route-eth0
10.10.10.0/24 via 192.168.0.253 dev eth0
172.16.1.0/24 via 192.168.0.254 dev eth0
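
If you have a pile of legacy files to migrate, the netmask arithmetic is easy to script. Below is a minimal Python sketch, assuming the legacy files only contain ADDRESSn/NETMASKn/GATEWAYn entries (the legacy_to_iproute2 name is just illustrative):

import re

def legacy_to_iproute2(path, dev):
    """Print the iproute2 equivalent of an old-style route-ethX file."""
    entries = {}
    for line in open(path):
        m = re.match(r'(ADDRESS|NETMASK|GATEWAY)(\d+)=(\S+)', line.strip())
        if m:
            key, idx, value = m.groups()
            entries.setdefault(idx, {})[key] = value
    for idx in sorted(entries, key=int):
        e = entries[idx]
        # prefix length = number of bits set in the netmask
        prefix = sum(bin(int(octet)).count("1")
                     for octet in e["NETMASK"].split("."))
        print("%s/%d via %s dev %s" % (e["ADDRESS"], prefix, e["GATEWAY"], dev))

legacy_to_iproute2("route-eth0", dev="eth0")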

At this point it’s easy to create our route-ethX files starting from the ip route output.

#ip route list scope global | grep -- eth0 | grep -v 'default' > route-eth0

In this case we filtered out two kinds of entries:
* the default gateway, which could be managed via DHCP or by other means like GATEWAY in /etc/sysconfig/network;
* non-global-scope routes, like the ones automatically set by ip when assigning addresses.
Check:

#man ip | less +/rt_scope

E.g.

#ip -4 -o a list eth2; # show the ip
8: eth2    inet 192.168.0.40/26 brd 192.168.0.63 scope global eth2

#ip route | grep eth2 # show all eth2-related routes
192.168.0.0/26 dev eth2  proto kernel  scope link  src 192.168.0.40    #scope link!
10.0.10.0/24 via 192.168.0.1 dev eth2 

Per-class heap plots with py-jstack

Having to profile a webapp that filled the Java heap, I wrote a simple Python module that plots the heap size consumed by each class.

It uses the files generated by jmap -histo $(pidof java), which tracks the per-class memory consumption.

Once you generate your files with something like:

while sleep 5; do
    jmap -histo $(pidof java) > /tmp/histo.$(date +%s)
done
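
For reference, each snapshot is just a text table with one row per class (rank, #instances, #bytes, class name). A minimal parser in the same spirit as jplot could look like the sketch below (the real module may differ):

def parse_histo(path):
    """Return {class_name: (instances, bytes)} for one jmap -histo snapshot."""
    table = {}
    for line in open(path):
        fields = line.split()
        # data rows look like: "  2:   186198   7447920  java.lang.String"
        if len(fields) == 4 and fields[0].endswith(':'):
            table[fields[3]] = (float(fields[1]), float(fields[2]))
    return table

# usage: snapshot = parse_histo(filename)  # any of the /tmp/histo.* files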

You can load the jplot module included in https://github.com/ioggstream/py-jstack/.
Using IPython makes things even easier!

#git clone https://github.com/ioggstream/py-jstack/ 
#cd py-jstack;
#ipython;
ipython$ import jplot

Once you have loaded the module, list the files to parse
and generate a table containing the classes and their memory occupation over time,
for the 30 greediest classes.

ipython$ files = ! ls /tmp/histo.*
ipython$ table = jplot.jhisto(files, limit=30, delta=False)

What does the `table` dictionary contain? For each class, a list of (#instances, memory size) samples over time:

ipython$ cls = 'java.lang.String'
ipython$ print(table[cls][:10]) 
[(452588.0, 18103520.0), # values in 1st file
 (186198.0, 7447920.0), # values in 2nd file
 (229789.0, 9191560.0), # values in 3rd file
...]
ipython$ memory_for_string = zip(*table[cls])[1]
ipython$ max_memory_for_string = max(memory_for_string)

Using matplotlib we can plot too, and have a glimpse of which class is misbehaving…

ipython$ jplot.plot_classes(table, limit=10)
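
If you prefer plain matplotlib to the helper, the table built above is already plottable. A sketch, assuming matplotlib is installed (jplot.plot_classes may do something fancier):

import matplotlib.pyplot as plt

cls = 'java.lang.String'
instances, memory = zip(*table[cls])
plt.plot(memory, label=cls)   # heap bytes per snapshot
plt.xlabel("snapshot #")
plt.ylabel("heap bytes")
plt.legend()
plt.show()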

One git to bring them all, and in a repo bind them.

I had to reunite various git repos under a new one. To do this without losing history, I found a Stack Overflow hint that worked for me.

# add and fetch the old repo data
git remote add old_repo git@git.example.com:/foo/
git fetch old_repo

# merge into my master without committing…
git merge -s ours --no-commit old_repo/master

# …we need to relocate into the foo/ subdirectory first
git read-tree --prefix=foo/ -u old_repo/master

# now… commit!
git commit -m "Imported foo as a subtree."

git log still presents the imported files in their old place, so git log foo/ doesn’t work. We can instead
diff between releases simply with

git diff rev1 rev2 --

autotools and latest gcc version

Running an old autotools build, I found it no longer worked. The failing command was:
# gcc -lpthread -o foo foo.o

Manually moving -lpthread to the end worked, but since I was using autotools I couldn’t just change the order; and, above all, it is legitimate for gcc to expect the library *after* the object file!

The solution, suggested on #gcc by ngladitz, was to use the right autotools variable for adding libraries.

After RTFM I just changed:
- foo_LDFLAGS=-lpthread  # autotools puts flags before the objects…
+ foo_LDADD=-lpthread    # … and libraries after them

Cleaning up and rebuilding fixed everything!

Launch your patch (and Decode right)!

Ubuntu contributions are managed via the Launchpad platform. Once you subscribe, you can contribute by checking out projects with bzr.

Ubuntu comes with a bzr “launchpad” plugin. You can register your ssh public key here:

https://launchpad.net/~user/+editsshkeys

then log in with:

#bzr launchpad-login user

Check out your source project, e.g. u1ftp, the Ubuntu One FTP gateway.

#bzr branch lp:u1ftp u1ftp-origin

Do your patch, commit and then push it!

#bzr push lp:~rpolli/u1ftp/unicode_support

Apps in 10 minutes: Flask cheatsheet

Flask is a micro framework for Python webapps. It’s really useful for creating prototypes and small applications, letting you focus on the logic.

Here’s a cheatsheet!

#!/usr/bin/python
# Import Flask
# and the md5 algorithm
from flask import Flask
from flask import request, abort
from hashlib import md5

# Create a new application
app = Flask(__name__)

# Bind methods and functions
@app.route('/')
def index():
    return 'Index Page'

@app.route('/hello', methods=['GET', 'POST'])
def hello():
    """Greets the user."""
    # Those are GET parameters
    user, client = map(request.args.get, ['user', 'client'])
    return 'Hello %s!' % user

# do this BEFORE each request
@app.before_request
def authorizer():
    """Authenticate user and password passed via GET or POST.

    In this sample the password is md5(user).
    Use the abort function to raise an http error!
    """
    params = ['user', 'client', 'passwd', 'algorithm']
    user, client, passwd, algorithm = map(request.values.get, params)
    if not user or passwd != md5(user).hexdigest():
        abort(401)

@app.errorhandler(401)
def unauthorized(e):
    """Use error handlers to present error pages."""
    return "Unauthorized user", 401

# Run the app
if __name__ == '__main__':
    app.debug = True
    app.run(host='0.0.0.0', port=9000)
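
A quick way to exercise the authorizer without a browser is Flask’s built-in test client. A sketch, assuming the snippet above is saved as hello.py (the user “pippo” is just an example):

from hashlib import md5
from hello import app   # the cheatsheet above, saved as hello.py

client = app.test_client()
passwd = md5('pippo').hexdigest()   # in this sample the password is md5(user)

r = client.get('/hello?user=pippo&client=test&passwd=%s&algorithm=md5' % passwd)
print("%s %s" % (r.status_code, r.data))   # 200 Hello pippo!

r = client.get('/hello?user=pippo&client=test&passwd=wrong&algorithm=md5')
print(r.status_code)   # 401, rendered by unauthorized()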

A jar of Perl: PAR – II : repacking

After reading the previous post on PAR, you may want to unpack, modify and repack an existing PAR application.

Let’s play the game with a fictitious my-perl-webapp.bin.

= First: unpack and analyze =

#mkdir /tmp/tmpdir/;
#unzip my-perl-webapp.bin -d /tmp/tmpdir/;
# cd /tmp/tmpdir

In tmpdir we’ll find:
* the usual ./script/ directory with all the perl files run by the application;
* the main.pl auto-generated by PAR;
* the MANIFEST and META.yml files;
* the ./lib/ directory with all the dependencies.

= Second: modify whatever =

Now you have all the perl files and you can modify and fix whatever you want.
Check and test before rebuilding!

= Third: repack (simple) =

The first time we’ll try to repack with a simple:

# pp -P -o /tmp/my-perl-webapp-1.bin script/{all .pl files but main.pl } ;

Remember that main.pl is auto-generated by PAR.
When everything is done, check that all the dependencies have been added (e.g. comparing the package contents with unzip -t).

= Fourth: repack (working) =

Repacking may require some more work: the pp command may not detect all the dependencies you need.
You can check the MANIFEST for the list of files to add to your package.

Add to the package all the files present in the original one. You can do it with

# pp -a lib/file1.pm -a lib/file2.pm … -P -o …

To speed things up we’ll use find:

# find lib -type f -printf " -a %p " | \
xargs pp -P \
    -o /tmp/my-perl-webapp-1.bin \
    script/{all .pl files but main.pl } \
    -a MANIFEST \
    -a META.yml

If you don’t remember how find -printf works, you can check man find.

Enjoy,
R.