Terraforming the clouds

Terraform is an infrastructure configuration manager by HashiCorp (the makers of Vagrant), similar to CloudFormation or Heat, supporting
various infrastructure providers including Amazon, VirtualBox, …

Terraform reads *.tf files and creates an execution plan containing all the resources:

– instances
– volumes
– networks
– ..

You can check an example configuration here on github:

Unfortunately, it uses a custom but readable format instead of YAML.

# Create a 75GB volume on openstack
resource "openstack_blockstorage_volume_v1" "master-docker-vol" {
  name = "mastervol"
  size = 75
}

# Create a Nova VM with the given volume attached
resource "openstack_compute_instance_v2" "machine" {
  name = "test"
  region = "${var.openstack_region}"
  image_id = "${var.master_image_id}"
  flavor_name = "${var.master_instance_size}"
  availability_zone = "${var.openstack_availability_zone}"
  key_pair = "${var.openstack_keypair}"
  security_groups = ["default"]
  metadata {
    ssh_user = "cloud-user"
  }
  volume {
    volume_id = "${openstack_blockstorage_volume_v1.master-docker-vol.id}"
  }
}
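
The `${var.*}` references above assume the corresponding variables are declared in your configuration; a minimal sketch of those declarations (values to be filled in via a tfvars file or on the command line):

# variables.tf – declarations assumed by the example above
variable "openstack_region" {}
variable "master_image_id" {}
variable "master_instance_size" {}
variable "openstack_availability_zone" {}
variable "openstack_keypair" {}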


Further resources (e.g. OpenStack volumes and floating IPs, DigitalOcean droplets, Docker containers, …)
can be defined via plugins.

At the end of every deployment cycle, Terraform updates the `terraform.tfstate` state file (which may
be stored on S3 or on shared storage) describing the actual infrastructure.

Upon configuration changes, Terraform creates and shows a new execution plan,
which you can then apply.
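
A typical cycle looks like this (a minimal sketch of the standard commands):

# Preview the changes Terraform would make
terraform plan

# Apply them and update the state file
terraform apply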

As there’s no Ansible provisioner, a terraform.py script can be used to extract an inventory from the `terraform.tfstate` file.
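
For example, assuming the terraform.py script sits next to your playbooks (site.yml is just a placeholder name here), you could use it as a dynamic inventory:

# Use the Terraform state as a dynamic Ansible inventory (illustrative)
ansible-playbook -i terraform.py site.yml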

MySQL 8.0 InnoDB Cluster looks at MongoDB

MySQL turns 8.0 and the technical preview integrates a new “InnoDB Cluster”. The overall architecture is reminiscent of MongoDB:

– group replication with a single master, similar to replica sets;
– a mysqlsh shell, supporting JavaScript and Python, able to create replication groups and local instances;
– a MySQL Router acting as a gateway for application servers, to be deployed on each client machine like the mongos.

Once installed, you can create a replication group (RG) with a few commands:

su - rpolli
mysqlsh

\py  # enable python mode. Create 5 instances in ~/sandbox-dir/{3310,3320,3330,3340,3350}

ports = (3310, 3320, 3330, 3340, 3350)
for port in ports:
    dba.deploy_local_instance(port, {"password": "secret"});

Now we have 5 MySQL instances listening on various ports. Create a cluster and check the newly created mysql_innodb_cluster_metadata schema.

\connect root:secret@localhost:3310

cluster = dba.create_cluster('polli', 'pollikey');

\sql  # switch to sql mode

SHOW DATABASES;

+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+

Go back to Python mode and add the remaining instances to the cluster.

\py  # return to python mode again

# If needed, fetch the cluster object again.
cluster = dba.get_cluster('polli',{'masterKey':'pollikey'})  # masterKey is a shared secret between nodes.

# Add the other nodes
for port in ports[1:]:
    cluster.add_instance('root@localhost:' + str(port),'secret');

# Check status
cluster.status()  # BEWARE! The output is a str :( not a dict
{
    "clusterName": "polli",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 2 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3320": {
                        "address": "localhost:3320",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3330": {
                        "address": "localhost:3330",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                    ....
                }
            }
        }
    }
}

Now check the failover feature.

dba.kill_local_instance(3310)  # Successfully killed

# Parse the output with...
import json
json.loads(cluster.status())["defaultReplicaSet"]["topology"].keys()  # localhost:3320 WOW!


Once the cluster is set up, newly created users span the whole group.

\sql
CREATE USER 'admin'@'%' IDENTIFIED BY 'secret';
GRANT ALL ON *.* TO 'admin'@'%'  WITH GRANT OPTION;

Now let’s connect to different cluster nodes.

mysql -uadmin -psecret -h127.0.0.1 -P3310 -e 'create database this_works_on_master;'  # OK
mysql -uadmin -psecret -h127.0.0.1 -P3320 -e 'create database wont_work_on_slave_even_if_admin;'
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

The default setup allows writes only on the master, *even for admin/super users*; this can be overridden as usual.

mysql> SHOW VARIABLES LIKE '%only';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
...
| read_only                     | ON    |
| super_read_only               | ON    |
...
+-------------------------------+-------+
mysql> set global super_read_only = OFF;  -- just for root
mysql> set global super_read_only = ON;  

mysql> set global read_only = OFF;  -- for all allowed users

The MongoDB Python driver is topology-aware; MySQL connectors instead rely on mysql-router to reach the current primary.
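
As an illustration (a sketch only: 6446/6447 are the Router’s default read-write/read-only ports, and the exact bootstrap options may vary between Router versions), applications always connect to the Router, which forwards them to the current primary:

# Bootstrap a local router against the cluster, then connect through it
mysqlrouter --bootstrap root@localhost:3310 --directory ~/myrouter
~/myrouter/start.sh
mysql -uadmin -psecret -h127.0.0.1 -P6446   # routed to the R/W primary
mysql -uadmin -psecret -h127.0.0.1 -P6447   # routed to a R/O secondary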

RHEV: recovering a VM in Unknown state

If an operation implying a state change on a VM fails, RHEV sometimes sets the VM status to ‘Unknown’.
This morning, after a failed ‘Power off’ operation on a VM in panic – due to a bug ([vdsm] AttributeError: GuestAgent instance has no attribute ‘_sock’) – the VM state was set to ‘Unknown’.
In this state you basically cannot do anything from the UI…
If you know the real state of your VM, you can manually change it in the engine database and restart the VM. So I set the state of my VM to 0 (stopped) and restarted it.

[root@rhevm ~]# psql -U engine
psql (8.4.20)
Type "help" for help.

engine=> select vm_guid from vm_static where vm_name='server.example.com';
vm_guid
--------------------------------------
2d1e72a1-16c4-4f38-a21e-78113669dd98
(1 row)

engine=> select status from vm_dynamic where vm_guid='2d1e72a1-16c4-4f38-a21e-78113669dd98';
status
--------
7
(1 row)

engine=> update vm_dynamic set status=0 where vm_guid='2d1e72a1-16c4-4f38-a21e-78113669dd98';
UPDATE 1

Then start the VM again from the oVirt shell:

[oVirt shell (connected)]# action vm server.example.com start

job-id : 7f1ac179-047c-4d50-932f-3ae7970c96e2
status-state: complete
vm-id : 2d1e72a1-16c4-4f38-a21e-78113669dd98

Provisioning OpenStack on VMware infrastructure

As I didn’t find extensive docs about provisioning Red Hat OpenStack on a VMware infrastructure, I browsed the Python code.

Python is a very expressive and clear language and you can get to the point in a moment!

I was then able to create the following instack.json to manage power for a set of VMware machines.

Despite the many ways to pass ssh_* variables to Ironic, the right way to do it via instack.json is to:

– use `pm_virt_type` instead of `ssh_virt_type`;
– put the ssh key content in the pm_password parameter, as shown in the docs;
– set capabilities like profile and boot_option directly.

The key should be JSON-serialized on a single line, replacing newlines with ‘\n’.
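
For example, you can produce the escaped single-line value with a quick one-liner (~/.ssh/id_rsa is just an example path; paste the output, without the surrounding quotes, into pm_password):

python -c 'import json,sys; print(json.dumps(sys.stdin.read().strip()))' < ~/.ssh/id_rsa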

{
    "nodes":[
        {
            "mac":[
                "00:0c:29:00:00:01"
            ],
            "capabilities": "profile:control,boot_option:local"
            "cpu":"8",
            "memory":"16384",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ssh",
            "pm_virt_type": "vmware",
            "pm_addr":"172.18.0.1",
            "pm_user":"vmadmin",
            "pm_password":"-----BEGIN RSA PRIVATE KEY-----\nMY\nRSA\nKEY\n-----END RSA PRIVATE KEY-----"
        },
        {..other nodes..}
    ]
}

$ mv vaunaspada labs



Were you searching for vaunaspada.babel.it and ended up on labs.par-tec.it? You are not the victim of DNS spoofing; we just gave our blog a makeover, aligning it with our new brand and look & feel.

What stays the same is the technical nature of the contents and the enthusiasm of our team in sharing their experiences with their peers.

Enjoy ;-)

FullText Indexing IPv6 addresses with MySQL 5.7

MySQL 5.7 supports generated columns. This is particularly useful for searching the string representation of IP addresses stored in binary form:

CREATE TABLE catalog(
ip varbinary(16) not null,
hostname varchar(64) not null,
label varchar(64),
ip_ntoa varchar(64) generated always as (inet6_ntoa(ip)) STORED, -- generate and store fields with the address representation
fulltext key (hostname, ip_ntoa, label)
);

After inserting some values

INSERT INTO catalog(ip,hostname,label) VALUES
(inet6_aton('127.0.0.1'), 'localhost', 'lo'),
(inet6_aton('192.168.0.1'), 'gimli', 'stage,ipv4'),
(inet6_aton('fdfe::5a55:caff:fefa:9089'), 'legolas', 'router,ipv6'),
(inet6_aton('fdfe::5a55:caff:fefa:9090'), 'boromir', 'router,ipv6');

you can search in OR mode with

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('9089 router');
-- returns every entry matching ANY needle
***1***
hostname: legolas
***2***
hostname: boromir

Or require all the terms, using boolean mode:

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('+9089 +router' in boolean mode);
-- returns ONE entry matching ALL needles
***1***
hostname: legolas

Adding docker images to openshift 3.1

OpenShift 3.1 is based on Kubernetes and Docker, and provides a small set of images, including JBoss EAP 6.4.

You can add new images in two steps:

1- create an ImageStream, that is a docker image plus a set of labels;
2- create a Template using that ImageStream.

To create the ImageStream, read the following descriptor carefully.

# Create the ImageStream
oc create -f - <<EOF
apiVersion: v1
kind: ImageStream
metadata:
  name: wildfly9-openshift
  namespace: openshift        # Set this to "openshift" if you want to make this image globally visible
spec:
  dockerImageRepository: docker.io/openshift/wildfly-90-centos7  # The original docker hub repo
  tags:
  - annotations:
      description: Wildfly 9.0 S2I images.
      iconClass: icon-jboss
      sampleRef: 9.0.x 
      supports: wildfly:9,javaee:7,java:8,
      tags: builder,javaee,java,jboss
      version: "1.0"
    name: "1.0"
status:
  dockerImageRepository: ""


Roundcube: fixing the “Net_LDAP2_RootDSE::__construct() must be public” error

To fix the following error in Roundcube:

PHP Fatal error: Access level to Net_LDAP2_RootDSE::__construct() must be public (as in class PEAR) in roundcubemail/vendor/pear-pear.php.net/Net_LDAP2/Net/LDAP2/RootDSE.php on line 238

Follow these steps:

  • cd <roundcube-root-folder>
  • install composer.phar: curl -s https://getcomposer.org/installer | php
  • copy the composer.json-dist template to composer.json
  • edit composer.json and, in the “require” section, add the line "pear-pear.php.net/net_ldap2": "~2.2.0", as shown below
  • run: php composer.phar update
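
The relevant part of composer.json should then look roughly like this (other entries omitted):

"require": {
    ...
    "pear-pear.php.net/net_ldap2": "~2.2.0"
},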