Best filesystem for backups, temp or logs

If you have a habit of storing all your temporary files on your desktop (or in any other folder) and forget to remove them later, or if you use a device to store your backups, you will find that your computer fills up easily with tons of files you have no use for. Once that happens, cleaning up your computer becomes a tedious and troublesome chore. Here’s a quick and easy way to watch a folder for old files and delete them automatically.

Use a filesystem, such as limit-fs, that automatically checks the used space and removes the oldest files when the space is about to fill up.

Using limit-fs is really simple: if the directory where you save backups or temporary files is ~/backups, you mount the filesystem over it with:


$ limit-fs ~/backups

Don’t worry about what was already in the folder: all contents present before mounting will remain in place.

You can specify some options to control the behavior; check the limit-fs GitHub page for more information.
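
Since limit-fs is a FUSE filesystem, when you no longer need the automatic cleanup you can detach it like any other FUSE mount:

$ fusermount -u ~/backups    # the contents stay on the underlying filesystem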

Set command output as facts with ansible

Having to check the ntp configuration on a distributed cluster, I had to parse the `timedatectl` output into a dict and apply various checks.

I did this via the (infamous) ;) jinja templates|pipelines.

# This is the check_time.yml playbook.

- name: Register the timedatectl output even in check mode. This command doesn't modify server configuration.
  shell: "timedatectl | grep ': '"
  check_mode: no
  register: timedatectl_output

# Note that:
#  - to use timedatectl_output in with_items we need to QUOTE-AND-BRACE it
#  - we can default the previously undefined timedatectl_status dictionary via
#       variable | default(VALUE)
- name: Process timedatectl_output lines one at a time and update repeatedly the timedatectl_status variable using combine().
  set_fact:
    timedatectl_status: >
      {{
        timedatectl_status | default({}) |
        combine(
          dict([ item.partition(': ')[::2]|map('trim') ])
        )
      }}
  with_items: "{{timedatectl_output.stdout_lines}}"
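
To see what the dict([ item.partition(': ')[::2]|map('trim') ]) expression does to a single line, here is the same trick in plain Python (jinja’s trim filter corresponds to str.strip):

>>> line = "       NTP synchronized: yes"
>>> line.partition(': ')[::2]                       # (head, sep, tail) -> keep head and tail
('       NTP synchronized', 'yes')
>>> dict([[s.strip() for s in line.partition(': ')[::2]]])
{'NTP synchronized': 'yes'}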

Now we can check ;)

- name: Clock synchronized
  fail: msg="Clock unsynchronized {{timedatectl_status}}"
  when: timedatectl_status['NTP synchronized'] == 'no'

- name: All hw clocks are utc
  fail: msg="hwclock not utc {{timedatectl_status}}"
  when: timedatectl_status['RTC in local TZ'] == 'yes'


Terraforming the clouds

Terraform is an infrastructure configuration manager by HashiCorp (the makers of Vagrant), like CloudFormation or Heat, supporting
various infrastructure providers including Amazon, VirtualBox, …

Terraform reads *.tf files and creates an execution plan containing all the resources:

– instances
– volumes
– networks
– ..

You can check an example configuration here on github:

Unfortunately, it uses a custom but readable format instead of yaml.

# Create a 75GB volume on openstack
resource "openstack_blockstorage_volume_v1" "master-docker-vol" {
  name = "mastervol"
  size = 75
}

# Create a nova vm with the given volume attached
resource "openstack_compute_instance_v2" "machine" {
  name = "test"
  region = "${var.openstack_region}"
  image_id = "${var.master_image_id}"
  flavor_name = "${var.master_instance_size}"
  availability_zone = "${var.openstack_availability_zone}"
  key_pair = "${var.openstack_keypair}"
  security_groups = ["default"]
  metadata {
    ssh_user = "cloud-user"
  }
  volume {
    volume_id = "${openstack_blockstorage_volume_v1.master-docker-vol.id}"
  }
}


Further resources (eg. openstack volumes|floating_ip, digitalocean droplets, docker containers, ..)
can be defined via plugins.

At the end of every deployment cycle, terraform updates the `terraform.tfstate` state file (which may
be stored on s3 or on shared storage) describing the actual infrastructure.

Upon configuration changes, terraform creates and shows a new execution plan,
that you can eventually apply.
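
In practice this boils down to the usual two commands:

# preview the execution plan against the current state
terraform plan
# apply it and update the state file
terraform apply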

As there’s no ansible provisioner, a terraform.py script can be used to extract an inventory file from a `terraform.tfstate`.

MySQL 8.0 Innodb Cluster looks at MongoDB

MySQL turns 8.0 and the technical preview integrates the new “InnoDB Cluster”. The overall architecture is reminiscent of MongoDB:

– group replication with a single master, similar to replica sets;
– a mysqlsh shell able to create replication groups and local instances, supporting both js and python;
– a MySQL Router acting as a gateway for appservers, to be deployed on each client machine like the mongos.

Once installed, you can create a replication group with a few commands:

su - rpolli
mysqlsh

\py  # enable python mode. Create 5 instances in ~/sandbox-dir/{3310..3350}

ports = (3310, 3320, 3330, 3340, 3350)
for port in ports:
    dba.deploy_local_instance(port, {"password": "secret"});

Now we have 5 mysql instances listening on various ports. Create a cluster and check the newly created mysql_innodb_cluster_metadata schema.

\connect root:secret@localhost:3310

cluster = dba.create_cluster('polli', 'pollikey');

\sql  # switch to sql mode

SHOW DATABASES;

+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+

Go back to the python mode and add the remaining instances to the cluster.

\py  # return to python mode again

# Eventually re-get the cluster.
cluster = dba.get_cluster('polli',{'masterKey':'pollikey'})  # masterKey is a shared secret between nodes.

# Add the other nodes
for port in ports[1:]:
    cluster.add_instance('root@localhost:' + str(port),'secret');

# Check status
cluster.status()  # BEWARE! The output is a str :( not a dict
{
    "clusterName": "polli",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 2 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3320": {
                        "address": "localhost:3320",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3330": {
                        "address": "localhost:3330",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                    ....
                }
            }
        }
    }
}

Now check the failover feature.

dba.kill_local_instance(3310)  # Successfully killed

# Parse the output with...
import json
json.loads(cluster.status())["defaultReplicaSet"]["topology"].keys()  # localhost:3320 WOW!


Once the cluster is set up, newly created users span the whole group.

\sql
CREATE USER 'admin'@'%' IDENTIFIED BY 'secret';
GRANT ALL ON *.* TO 'admin'@'%'  WITH GRANT OPTION;

Now let’s connect to different cluster nodes.

mysql -uadmin -psecret -h127.0.0.1 -P3310 -e 'create database this_works_on_master;'  # OK
mysql -uadmin -psecret -h127.0.0.1 -P3320 -e 'create database wont_work_on_slave_even_if_admin;'
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

The default setup allows writes only on the master, *even for admin|super users*; this can be overridden as usual.

mysql> SHOW VARIABLES LIKE '%only';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
...
| read_only                     | ON    |
| super_read_only               | ON    |
...
+-------------------------------+-------+

mysql> SET GLOBAL super_read_only = OFF;  -- just for root and other SUPER users
mysql> SET GLOBAL super_read_only = ON;

mysql> SET GLOBAL read_only = OFF;  -- for all allowed users

The MongoDB python driver is topology-aware. MySQL connectors instead rely on mysql-router for connecting to the right primary.
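
Just as a sketch, assuming a MySQL Router bootstrapped against the cluster with its default ports (6446 for read-write, 6447 for read-only traffic; the database name below is only an example), the application side simply points at the router:

# writes go through the router read-write port, which always follows the current primary
mysql -uadmin -psecret -h127.0.0.1 -P6446 -e 'create database via_router;'
# reads can use the read-only port, balanced across the secondaries
mysql -uadmin -psecret -h127.0.0.1 -P6447 -e 'select @@hostname, @@port;'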

Provisioning openstack on vmware infrastructure.

As I couldn’t find extensive docs about provisioning Red Hat OpenStack on a vmware infrastructure, I browsed the python code.

Python is a very expressive and clear language and you can get to the point in a moment!

I was then able to create the following instack.json to drive power management for a set of vmware machines.

Despite the many ways to pass ssh_* variables via ironic, the right way to do it via instack.json is to:

– use `pm_virt_type` instead of `ssh_virt_type`;
– express the ssh_key_content in the pm_password parameter, as shown in the docs;
– set capabilities like profile and boot_option directly.

The key should be json-serialized on one line, replacing newlines with ‘\n’.
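
A quick way to produce that one-line string from an existing key file is to let python do the json escaping (the key path below is just an example):

# prints the key as a JSON string, with newlines already escaped as \n
$ python -c 'import json, sys; print(json.dumps(open(sys.argv[1]).read()))' ~/.ssh/vmware_key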

{
    "nodes":[
        {
            "mac":[
                "00:0c:29:00:00:01"
            ],
            "capabilities": "profile:control,boot_option:local"
            "cpu":"8",
            "memory":"16384",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ssh",
            "pm_virt_type": "vmware",
            "pm_addr":"172.18.0.1",
            "pm_user":"vmadmin",
            "pm_password":"-----BEGIN RSA PRIVATE KEY-----\nMY\nRSA\nKEY\n-----END RSA PRIVATE KEY-----"
        },
        {..other nodes..}
    ]
}

FullText Indexing IPv6 addresses with MySQL 5.7

MySQL 5.7 supports generated columns. This is particularly useful for indexing and searching the string representation of numerically stored IP addresses:

CREATE TABLE catalog(
ip varbinary(16) not null,
hostname varchar(64) not null,
label varchar(64),
ip_ntoa varchar(64) generated always as (inet6_ntoa(ip)) STORED, -- generate and store fields with the address representation
fulltext key (hostname, ip_ntoa, label)
);

When inserting values

INSERT INTO catalog(ip,hostname,label) VALUES
(inet6_aton('127.0.0.1'), 'localhost', 'lo'),
(inet6_aton('192.168.0.1'), 'gimli', 'stage,ipv4'),
(inet6_aton('fdfe::5a55:caff:fefa:9089'), 'legolas', 'router,ipv6'),
(inet6_aton('fdfe::5a55:caff:fefa:9090'), 'boromir', 'router,ipv6');

you can search in OR mode with

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('9089 router');
-- returns every entry matching ANY needle
*************************** 1. row ***************************
hostname: legolas
*************************** 2. row ***************************
hostname: boromir

Or require all the needles to match, using boolean mode:

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('+9089 +router' in boolean mode);
-- returns ONE entry matching ALL needles
*************************** 1. row ***************************
hostname: legolas
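
And since the address is still stored in its numeric varbinary form, exact lookups keep working on the original column too:

SELECT hostname FROM catalog WHERE ip = inet6_aton('fdfe::5a55:caff:fefa:9089');
-- hostname: legolas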

Adding docker images to openshift 3.1

Openshift 3.1 is based on Kubernetes and Docker, and provides a small set of images including jboss EAP 6.4.

You can add new images in two steps:

1- create an ImageStream, that’s a docker image + a set of labels
2- create a Template using that ImageStream

To create the ImageStream, read carefully the following definition.

# Create the ImageStream
oc create -f - <<EOF
apiVersion: v1
kind: ImageStream
metadata:
  name: wildfly9-openshift
  namespace: openshift        # Set this to "openshift" if you want to make this image globally visible
spec:
  dockerImageRepository: docker.io/openshift/wildfly-90-centos7:latest  # The original docker hub repo
  tags:
  - annotations:
      description: Wildfly 9.0 S2I images.
      iconClass: icon-jboss
      sampleRef: 9.0.x 
      supports: wildfly:9,javaee:7,java:8,
      tags: builder,javaee,java,jboss
      version: "1.0"
    name: "1.0"
status:
  dockerImageRepository: ""


Roundcube: fixing the “Net_LDAP2_RootDSE::__construct() must be public” error

To fix the following error in roundcube

PHP Fatal error: Access level to Net_LDAP2_RootDSE::__construct() must be public (as in class PEAR) in roundcubemail/vendor/pear-pear.php.net/Net_LDAP2/Net/LDAP2/RootDSE.php on line 238

Follow these steps:

  • cd <roundcube-root-folder>
  • install composer.phar: curl -s https://getcomposer.org/installer | php
  • copy the composer.json-dist template to composer.json
  • edit composer.json and add the line "pear-pear.php.net/net_ldap2": "~2.2.0" to the “require” section (see the snippet below)
  • run: php composer.phar update
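
After the edit, the “require” section of composer.json should look roughly like this (the other entries come from the -dist template):

"require": {
    ...
    "pear-pear.php.net/net_ldap2": "~2.2.0"
},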

docker multihost network: an epiphany of namespaces.

Playing with docker multihost networking this weekend.

With multihost networking you can run communicating containers on different docker nodes.
The magic relies on:
– a shared kv store (e.g. consul) for IP addresses;
– a network namespace hosting the vxlan and a bridge used for communication, with no processes attached.

Every network created using the overlay driver has its own network namespace,
and for every network (and subnet combination) a linux bridge is created inside that dedicated namespace.
The host end of each veth pair is moved into this namespace and attached to the bridge.
Hence, if you look for the veth pairs in the host namespace, you won’t find any :-).

If you look for the vxlan setup on the boot2docker distro you have to dig deep ;).
1- docker netns are stored in /var/run/docker/netns. To make them visible to ip netns you need to

# ln -s /var/run/docker/netns /var/run

2- Now you can look for the vxlan netns, which has the same id on every machine:

# ip netns ls | while read a; do
    ip netns exec $a ip l | grep -q vxlan && echo $a; done

The vxlan references the UDP port for communication (eg. dstport 46354).

87: vxlan1:  mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default
    link/ether da:69:8d:4d:b9:39 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 256 srcport 0 0 dstport 46354 proxy l2miss l3miss ageing 300
    bridge_slave

3- Every container with EXPOSEd ports has a veth paired with a veth in the vxlan netns;

4- the veths in the vxlan netns are slaves of br0;

5- br0 has an ip, and is the default gw for containers.
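
Putting points 3-5 together, you can inspect the overlay namespace found at step 2 (here $a still holds its id):

# the bridge plus the vxlan and veth interfaces enslaved to it
ip netns exec $a ip link show
# the bridge address, i.e. the containers' default gateway
ip netns exec $a ip addr show br0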
