Remote dump of mysqlbinlog

In MySQL 5.6 the mysqlbinlog command allows you to:
– connect to a remote server with "-R"
– fetch all binlogs by passing "--to-last-log"

To continuously dump the logs:
– stay connected with "--stop-never", which implies "--to-last-log"

You can practice with this MySQL Community Docker image, which provides all the binaries and an easy master-slave setup. Try it:

# mkdir /tmp/test; cd /tmp/test
# wget https://raw.githubusercontent.com/ioggstream/mysql-community/master/ga/docker-compose.yml
# docker-compose -p test up # run master and slave

# docker run --rm -ti --link=test_master_1:master --entrypoint=/bin/bash ioggstream/mysql-community -l

$ mysqlbinlog -R -h master -u root -proot --stop-never master-bin.000001

Now just send some data to the master (e.g. the sakila sample db) and watch the binlog stream in.
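
If you don't have data at hand, loading the sakila sample database generates plenty of binlog traffic (a sketch, assuming you downloaded and unpacked the sakila archive in the current directory):

$ mysql -h master -u root -proot < sakila-schema.sql
$ mysql -h master -u root -proot < sakila-data.sql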

For further info, check here http://dev.mysql.com/doc/refman/5.6/en/mysqlbinlog-backup.html

Oracle: DB_RECOVERY_FILE_DEST_SIZE

Today I ran into this nice Oracle error at startup:

ORA-19815: WARNING: db_recovery_file_dest_size of 4G bytes is 99.76% used, and has 9895936 remaining bytes available.

You can check your RECOVERY size with

SELECT name,
   space_limit/(1024*1024) AS limit_mb,
   space_used/(1024*1024) AS used_mb,
   space_reclaimable/(1024*1024) AS reclaimable_mb
FROM
   V$RECOVERY_FILE_DEST;
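
To see which file types are filling the recovery area (archived logs, flashback logs, backup pieces, ...), there is also a per-type breakdown view; a quick sketch:

SELECT file_type, percent_space_used, percent_space_reclaimable
FROM V$FLASH_RECOVERY_AREA_USAGE;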

*In my case* the fastest fix was to start the instance in NOMOUNT state and grow the DB_RECOVERY_FILE_DEST_SIZE:

# sqlplus / as sysdba
SQL> startup nomount;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH SID='*';

There are plenty of other approaches: check what fits your case!
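
For example, if the recovery area is filled by archived logs that are already backed up elsewhere, you could reclaim space with RMAN instead (a sketch: tune the retention window to your backup policy):

$ rman target /
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';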

How to rsync with ftp

As you know, it is not possible to rsync directly to an FTP site. Here is a simple workaround to perform a remote backup.

First install rsync and curlftpfs…

sudo apt-get install rsync curlftpfs

…then create the mountpoint and allow access to your user…

sudo mkdir /mnt/yourftp

sudo chown youruser /mnt/yourftp

…enable the fuse mount for non-root users…

sudo vi /etc/fuse.conf

uncomment the parameter user_allow_other on the last line

…and then mount your ftp site

curlftpfs -o user=username:password,allow_other ftp.yoursite.com /mnt/yourftp

Now you can navigate your ftp like a classic filesystem folder!

Finally enjoy your rsync (example):

rsync -r -t -v --progress --delete /home/folder_to_backup/ /mnt/yourftp

Remember the trailing slash on the source dir: it makes rsync copy the folder's contents instead of creating the folder itself inside the destination, which matters when source and destination folders have different names!

P.S. Don’t forget to unmount your ftp site after the rsync:

sudo umount /mnt/yourftp
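
Putting it all together, a minimal backup script (a sketch reusing the example paths and credentials above):

#!/bin/sh
# mount the FTP site, sync, then unmount even if rsync fails
set -e
curlftpfs -o user=username:password,allow_other ftp.yoursite.com /mnt/yourftp
trap 'umount /mnt/yourftp' EXIT
rsync -r -t -v --progress --delete /home/folder_to_backup/ /mnt/yourftp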

MySQL Backup with ZManda

The Zmanda suite has some scripts to simplify backups. The open release just uses mysqldump, but you can download a zrm-xtrabackup plugin here.

Once you have installed xtrabackup and Zmanda Recovery Manager for MySQL, just put the zrm-xtrabackup plugin in /usr/share/mysql-zrm/plugins/.

With xtrabackup you can:
1. create a raw backup into /var/lib/mysql-zrm/BackupSet1/20120716154427

# full=0; incremental=1; export full incremental
# mysql-zrm --action backup --backup-level $full --mysql-binlog-path /var/log/mysql/

2. create an incremental backup

# mysql-zrm --action backup --backup-level $incremental --mysql-binlog-path /var/log/mysql/

The incremental backup dumps the binlogs created since the last full backup.

3. check backup status with

# mysql-zrm-reporter

What's missing is a way to effectively verify the backup files: while zrm provides mysql-zrm-verify, it only checks the md5 checksums of the files, not whether the backup is actually usable under mysql.
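
A more convincing check is the trick shown below for xtrabackup: point a throwaway mysqld at the restored files and run some queries. A minimal sketch, assuming you restored a raw backup into /tmp/zrm-verify (a hypothetical path):

$ mysqld --datadir=/tmp/zrm-verify --socket=/tmp/zrm-verify.sock --skip-networking &
$ mysql --socket=/tmp/zrm-verify.sock -e 'SHOW DATABASES;'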

MySQL: Quick&Dirty Point in Time Recovery using binlog

MySQL can log all write statements in binary format, creating the famous binlog files. Binlogs are posthumous files, written by mysqld after the queries have been handed to the engine for commit.

While it's not compulsory to use binlogs, they are required if you want to:

  • replicate queries to another server;
  • perform point-in-time recovery.

Point in Time Recovery

A Point in Time Recovery is a sort of Time Machine, returning a snapshot of the db at a given time. You can simply aggregate all your binlogs with:

# mysqlbinlog binlog.000001 binlog.000002 > my-pitr.sql

or:

# for bl in binlog*; do
mysqlbinlog $bl
done >> my-pitr.sql

Once you've aggregated *all* your binlogs, you can import my-pitr.sql:

# mysql -e "source /path/to/my-pitr.sql"

Note that:

  1. you cannot import each binlog file with a separate mysql invocation: temporary tables only live within a single client session, so they would be lost between runs;
  2. binlogs archive all your databases, not a single one.
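
If you need to stop right before a disastrous statement (say, a DROP TABLE), mysqlbinlog can also cut the stream at a given date or position. A sketch, with an example timestamp:

# replay only the events logged before the given time
mysqlbinlog --stop-datetime="2012-07-16 15:44:00" binlog.000001 binlog.000002 | mysql -u root -p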

Purging Binlogs

As binlogs tend to grow and occupy a lot of space, you should set up an expiration policy aligned with your backup policy, and know some basic operations to be done on your binlogs:

  • SHOW MASTER STATUS; — see the last transaction position in log file
  • FLUSH LOGS; — close the latest binlog file and start a new one (so we won’t delete a file in use)
  • PURGE BINARY LOGS BEFORE 'yyyy-mm-dd hh:mm:ss'; — remove logs up to the given date

Let's look at the possible issues in deleting binlogs:

  1. deleting a file while mysql is using it;
  2. deleting a file containing transactions not stored on backups.

The first issue is solved in two steps:
a) find out which file is currently in use, with SHOW MASTER STATUS;

eg.
mysql> show master status \G
File: mysql-bin.000016
Position: 106

b) close this log file and open a new one (mysql-bin.000017) with FLUSH LOGS;

If we take a backup now, every transaction the backup could have missed is contained in binlogs from mysql-bin.000017 onwards:

T0 -
| flush binlogs
T1 -
| take a backup
T2 - (backup successful)
| purge old binlogs
T3 -
|
v
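
The second issue is avoided by purging only files older than your last full backup. You can also let mysqld expire binlogs automatically (the value below is an example: keep it longer than your full-backup interval):

# /etc/my.cnf
[mysqld]
expire_logs_days = 7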

Nobody backups 389 Directory Server!

389 Directory Server (like Sun/Oracle DS) has a nice tool for taking logical backups: db2ldif. Logical backups are – simply – backups taken in an exportable format, like LDIF or SQL. They differ from raw backups, which are taken by copying or snapshotting the data files.

db2ldif works by triggering a backup in 389: it simply tells slapd – the ldap daemon – to export a backup to a given file.

As 389DS usually runs as user "nobody", there's a common error you'll see when running it:


# db2ldif -s "dc=babel,dc=it" -U -a /backup-ldap/myds.$(date -I).ldif
...
db2ldif: can't open /backup-ldap/myds.2012-06-14.ldif: 13 (Permission denied)
...

Actually it’s not the command you’re running that lacks permissions, but the user “nobody”.

Just:

# chown -R nobody /backup-ldap/

And everything will work fine!
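
From there you can schedule a nightly logical backup; a sketch (cron's PATH is minimal, so the full db2ldif path below is an assumption: adjust it to your system):

# /etc/cron.d/389ds-backup: nightly LDIF export at 02:00
0 2 * * * root /usr/sbin/db2ldif -s "dc=babel,dc=it" -U -a /backup-ldap/myds.$(date -I).ldif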

MySQL Backup: that’s hot

Just downloaded, built and tested a tool for Hot Backup of MySQL InnoDB databases: xtrabackup from Percona.

You can try the apt/rpm repository too.

You get two backup tools that are quite easy to use:

  • xtrabackup – a C program that merges redo logs and InnoDB files to create Hot Backups;
  • innobackupex – a perl wrapper around xtrabackup.

To back up your databases, prepare a /opt/backup_mysql/ directory, here created as the home of a dedicated "backup" user:

# useradd backup -m -d /opt/backup_mysql

Then just run:

# innobackupex --user=root --password=password /opt/backup_mysql/

Once the backup is finished (it took about 17 seconds for a 300 MB DB entirely fitting in memory), you have a timestamped directory in /opt/backup_mysql/. You can use it as a new datadir to test your backup by simply starting a new mysql instance.

UPDATE: the latest innobackupex releases require you to APPLY the redo logs to the datafiles. This is done in two steps.
Apply the redo logs:

# innobackupex --apply-log /opt/backup_mysql/2012-06-12_19-22-54/

Then check the innobackupex output:

120612 19:22:57  innobackupex: completed OK!

REMEMBER: ALWAYS CHECK THAT YOUR BACKUP IS CONSISTENT! Unless you are sure you can recover the data, your backup is not done ;)

To improve isolation you could even start this backup instance as another user, using e.g. --user=backup. So, check the backup permissions:

# chown -R backup.backup ~backup

Change user:

# su - backup

Start our server

$ /usr/bin/mysqld --datadir=/opt/backup_mysql/2012-06-12_19-22-54/ \
                  --socket=/opt/backup_mysql/backup_mysql.sock \
                  --skip-networking

This new instance won't listen on any TCP port (--skip-networking) but only on your backup_mysql.sock, so it won't interfere with existing instances.

Now we can check if our backup is fine, running mysql and issuing some queries:

# mysql --socket=/opt/backup_mysql/backup_mysql.sock -uroot -p
mysql> show databases \G
*************************** 1. row ***************************
Database: employees
mysql> use employees;
mysql> show table status \G
... a lot of table information ...
mysql> select count(salary) from salaries;
+---------------+
| count(salary) |
+---------------+
|       2844047 |
+---------------+
1 row in set (4.55 sec)
mysql> select sum(salary) as HR_Budget from salaries;
+--------------+
| HR_Budget    |
+--------------+
| 181480757419 |
+--------------+
1 row in set (0.83 sec)
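
To be really paranoid, run the same aggregates on the production instance and compare the results (a sketch: adjust credentials; the numbers match only if nothing wrote to the table in the meantime):

# same query against the live server: the results should match the backup's
mysql -uroot -p -e 'SELECT COUNT(salary), SUM(salary) FROM employees.salaries;'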

tar: strip that path

There are a couple of GNU tar options that can save you some time:

  • -C lets you change directory before adding|extracting files from an archive
  • --strip-components=X lets you extract an archive stripping the leading path components up to the Xth level

Ex.

# tar cf /tmp/opt_postfix.tar -C /opt etc/postfix # backup /opt/etc/postfix without prepending /opt to the archive

# tar tvf  /tmp/opt_postfix.tar # check the archive
etc/postfix
etc/postfix/main.cf
etc/postfix/master.cf

# tar xf /tmp/opt_postfix.tar -C /opt2/etc2/ --strip-components=1 # unpack in another directory

Funambol moves to SourceForge

Funambol forge will move to SourceForge, so it's time to back up the ldap-connector repository. Just:

  1. set up a local repo;
  2. point it to https://ldap-connector.forge.funambol.org/svn/ldap-connector/;
  3. use svnsync to mirror the original one.

In bash:

svnadmin create /tmp/ldap-connector; cd /tmp/ldap-connector/
head -n 1 hooks/pre-revprop-change.tmpl > hooks/pre-revprop-change # keep only the shebang: a no-op hook that allows revprop changes
chmod +x hooks/pre-revprop-change

svnsync init .   https://ldap-connector.forge.funambol.org/svn/ldap-connector/
svnsync synchronize file:///tmp/ldap-connector/

This will cycle through every upstream revision, committing each one to your local (backup) repo.
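
Since svnsync is incremental, you can re-run the same command later (even from cron) to pick up new upstream revisions:

# fetch and commit only the revisions added upstream since the last run
svnsync synchronize file:///tmp/ldap-connector/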

If your Nokia mobile is a test device…

then you need to back up your data frequently. Testing the funambol ldap and caldav connectors with my E65 makes me happy, but I always want to be sure I never lose my data during tests.

So I saved my addressbook to the memory card (on linux there's no Nokia PC Suite) this way:

1- go to addressbook, then Options > Select All

2- then Options > Copy > To Memory Card

Now you've got your backup: let's safely start our tests!

Peace, R.