MySQL 8.0 InnoDB Cluster looks at MongoDB

MySQL turns 8.0, and the technical preview integrates a new “InnoDB Cluster”. The overall architecture is reminiscent of MongoDB:

– group replication with a single master, similar to replica sets;
– mysqlsh, a shell that can create replication groups and local instances, scriptable in JavaScript and Python;
– MySQL Router as a gateway for app servers, to be deployed on each client machine like mongos.

Once installed, you can create a replication group with a few commands:

su - rpolli
mysqlsh

\py  # enable python mode. Deploy 5 sandbox instances under ~/sandbox-dir/{3310,3320,3330,3340,3350}

ports = (3310, 3320, 3330, 3340, 3350)
for port in ports:
    dba.deploy_local_instance(port)  # preview-era API; later shells call this deploy_sandbox_instance()

Now we have 5 MySQL instances listening on the ports above. Create a cluster and check the newly created mysql_innodb_cluster_metadata schema.

\connect root:root@localhost:3310

cluster = dba.create_cluster('polli', 'pollikey')

\sql  # switch to sql mode
SHOW DATABASES;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+

Go back to python mode and add the remaining instances to the cluster.

\py  # return to python mode again

# If needed, fetch the cluster handle again.
cluster = dba.get_cluster('polli', {'masterKey': 'pollikey'})  # masterKey is a shared secret between nodes.

# Add the other nodes
for port in ports[1:]:
    cluster.add_instance('root@localhost:' + str(port), 'secret')

# Check status
cluster.status()  # BEWARE! The output is a str :( not a dict
    "clusterName": "polli",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 2 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3320": {
                        "address": "localhost:3320",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    "localhost:3330": {
                        "address": "localhost:3330",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}

Now check the failover feature.

dba.kill_local_instance(3310)  # Successfully killed

# Parse the output with...
import json
json.loads(cluster.status())["defaultReplicaSet"]["topology"].keys()  # localhost:3320 WOW!
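
Since status() returns a string, a small sketch building on the json parsing above can pick out the current primary (the field names are the ones shown in the status output earlier):

# Sketch: find the R/W node in a status string shaped like the output above.
status = json.loads(cluster.status())
topology = status["defaultReplicaSet"]["topology"]
primary = [host for host, info in topology.items() if info.get("mode") == "R/W"]
print(primary)  # e.g. ['localhost:3320'] once the old master has been killed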

Once the cluster is set up, newly created users span the whole group.

CREATE USER 'admin'@'%' IDENTIFIED BY 'secret';

Now let’s connect to different cluster nodes (note -h127.0.0.1: without it, the client would use the local socket and ignore -P).

mysql -h127.0.0.1 -uadmin -P3310 -psecret -e 'create database this_works_on_master;'  # OK
mysql -h127.0.0.1 -uadmin -P3320 -psecret -e 'create database wont_work_on_slave_even_if_admin;'
ERROR 1290 (HY000): The MySQL server is running with the --super-read-only option so it cannot execute this statement

By default, writes are allowed only on the master, *even for admin/super users*; this can be overridden as usual.

mysql> SHOW VARIABLES LIKE '%only';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| read_only                     | ON    |
| super_read_only               | ON    |
+-------------------------------+-------+

mysql> set global super_read_only = OFF;  -- re-enable writes for super users (e.g. root) only
mysql> set global super_read_only = ON;   -- restore the default

mysql> set global read_only = OFF;  -- re-enable writes for all users with write privileges

The MongoDB python driver is topology-aware; MySQL connectors instead rely on MySQL Router to reach the current primary.
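
As a rough sketch of the difference (assuming pymongo and mysql-connector-python; hostnames, ports and credentials here are made up, with 6446 being the router's usual R/W port):

# MongoDB: the driver gets a seed list and the replica set name, discovers
# the current primary by itself and follows failovers.
from pymongo import MongoClient
mongo = MongoClient("mongodb://node1:27017,node2:27017/?replicaSet=rs0")

# MySQL: the connector just points at the local MySQL Router; the router,
# not the driver, forwards the connection to the current primary.
import mysql.connector
mysql_conn = mysql.connector.connect(
    host="127.0.0.1", port=6446, user="admin", password="secret"
)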

RHEV: recovering a VM in Unknown state

If an operation that implies a state change on a VM fails, RHEV sometimes sets the VM status to ‘Unknown’.
This morning, after a failed ‘Power off’ operation on a VM in panic – due to a bug ([vdsm] AttributeError: GuestAgent instance has no attribute ‘_sock’) – the VM state was set to ‘Unknown’.
In this state you basically can’t do anything…
If you know the real state of your VM, you can manually change it in the engine database and restart the VM. So I set the state of my VM to 0 (stopped) and restarted it.

[root@rhevm ~]# psql -U engine
psql (8.4.20)
Type "help" for help.

engine=> select vm_guid from vm_static where vm_name='';
(1 row)

engine=> select status from vm_dynamic where vm_guid='2d1e72a1-16c4-4f38-a21e-78113669dd98';
(1 row)

engine=> update vm_dynamic set status=0 where vm_guid='2d1e72a1-16c4-4f38-a21e-78113669dd98';
[oVirt shell (connected)]# action vm start

job-id : 7f1ac179-047c-4d50-932f-3ae7970c96e2
status-state: complete
vm-id : 2d1e72a1-16c4-4f38-a21e-78113669dd98

Provisioning OpenStack on a VMware infrastructure

As I didn’t find extensive docs about provisioning Red Hat OpenStack on a VMware infrastructure, I browsed the python code.

Python is a very expressive and clear language and you can get to the point in a moment!

I was then able to create the following instack.json to manage the power state of a set of VMware machines.

Despite the many ways to pass ssh_* variables via ironic, the right way to do it via instack.json is to:

– use `pm_virt_type` instead of `ssh_virt_type`;
– put the ssh_key_content in the pm_password parameter, as shown in the docs;
– set capabilities like profile and boot_option directly.

The key must be JSON-serialized on one line, replacing newlines with ‘\n’ (see the sketch after the snippet below).

            "capabilities": "profile:control,boot_option:local"
            "pm_virt_type": "vmware",
            "pm_password":"-----BEGIN RSA PRIVATE KEY-----\nMY\nRSA\nKEY\n-----END RSA PRIVATE KEY-----"
{..other nodes..} 
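
To get the key into that single-line, escaped form, something like this helps (a minimal sketch; the key file name is made up):

# Sketch: JSON-serialize a multi-line private key so it can be pasted
# into the "pm_password" field of instack.json as a single line.
import json

with open("vmware_ssh_key") as f:  # hypothetical path to the private key
    key = f.read().strip()

# json.dumps escapes newlines as literal \n and adds the surrounding quotes
print(json.dumps(key))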


$ mv vaunaspada labs

Were you looking for vaunaspada and ended up on labs? You are not the victim of DNS spoofing: we have just given our blog a makeover, aligning it with our new brand and look & feel.

What stays the same is the technical nature of the contents and the enthusiasm of our team in sharing their experiences with their peers.

Enjoy 😉

FullText Indexing IPv6 addresses with MySQL 5.7

MySQL 5.7 supports generated columns. This is particularly useful for full-text searching the string representation of IP addresses stored in binary form:

CREATE TABLE catalog (
  ip varbinary(16) not null,
  hostname varchar(64) not null,
  label varchar(64),
  ip_ntoa varchar(64) generated always as (inet6_ntoa(ip)) STORED, -- generate and store the string representation of the address
  fulltext key (hostname, ip_ntoa, label)
);

After inserting some values

INSERT INTO catalog(ip,hostname,label) VALUES
(inet6_aton(''), 'localhost', 'lo'),
(inet6_aton(''), 'gimli', 'stage,ipv4'),
(inet6_aton('fdfe::5a55:caff:fefa:9089'), 'legolas', 'router,ipv6'),
(inet6_aton('fdfe::5a55:caff:fefa:9090'), 'boromir', 'router,ipv6');

you can search in OR mode (natural language mode, matching any term) with

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('9089 router');
-- returns every entry matching ANY needle
hostname: legolas
hostname: boromir

Or require all terms, using boolean mode:

SELECT hostname FROM catalog WHERE
  MATCH(ip_ntoa, hostname, label)
  AGAINST('+9089 +router' in boolean mode);
-- returns ONE entry matching ALL needles
hostname: legolas

Adding docker images to OpenShift 3.1

OpenShift 3.1 is based on Kubernetes and Docker, and provides a small set of images, including JBoss EAP 6.4.

You can add new images in two steps:

1- create an ImageStream, that is, a reference to a docker image plus a set of tags and annotations
2- create a Template using that ImageStream

To create the ImageStream, adapt the following definition (mind the inline comments).

# Create the ImageStream
oc create -f - <<EOF
apiVersion: v1
kind: ImageStream
metadata:
  name: wildfly9-openshift
  namespace: openshift        # Set this to "openshift" if you want to make this image globally visible
spec:
  dockerImageRepository:      # The original docker hub repo
  tags:
  - annotations:
      description: Wildfly 9.0 S2I images.
      iconClass: icon-jboss
      sampleRef: 9.0.x
      supports: wildfly:9,javaee:7,java:8
      tags: builder,javaee,java,jboss
      version: "1.0"
    name: "1.0"
EOF

Roundcube: fixing the “Net_LDAP2_RootDSE::__construct() must be public” error

To fix the following error in roundcube:

PHP Fatal error: Access level to Net_LDAP2_RootDSE::__construct() must be public (as in class PEAR) in roundcubemail/vendor/ on line 238

follow these steps:

  • cd <roundcube-root-folder>
  • install composer.phar: curl -s | php
  • copy the composer.json-dist template to composer.json
  • edit composer.json: in the “require” section add the line "": "~2.2.0",
  • run: php composer.phar update

Back from MongoDB Essentials Training

This week I joined the MongoDB Essentials training in Rome.

MongoDB is a fast, document-oriented database supporting consistency, replication for HA, and sharding for scaling reads or writes.

Transactions are at the document level, so there are no joins and no multi-document isolation levels.

A nice training, covering many database design techniques and even giving a theoretical overview of performance and scalability issues.

Concepts like Working Set, Replication types and issues, Index side-effects, Sharding and Hashing were introduced both theoretically and practically.

Being a class of 10+ people with mixed backgrounds (MS, Linux, Oracle), it was hard to squeeze all this theory and practice into 3 days. The instructor asked us which parts we’d like to cover in more depth: we picked Schema Design, Replication and Sharding.
Besides, such a large class gave us a lot of discussion and networking opportunities: we even created a freenode chatroom!

People interested in the subject can drop me a line and have a look at this github repo.

Enjoy! R.