Best filesystem for backups, temp files or logs

If you have a habit of storing all your temporary files on your desktop (or any other folder) and forget to remove them later, or if you use a device to store your backups, you will find that your computer easily fills up with tons of files you have no use for. Once that happens, cleaning up your computer becomes a tedious and troublesome chore. Here’s a quick and easy way to watch a folder for old files and delete them automatically.

Use a filesystem, such as limit-fs, that automatically checks the used space and removes the oldest files when the space is about to fill up.

Using limit-fs is really simple. If the directory where you save backups or temporary files is ~/backups, you mount the filesystem over it with:


$ limit-fs ~/backups

Don’t worry about what was already in the folder: all contents present before mounting will remain in place.

You can specify some options to control the behavior; check the limit-fs GitHub page for more information.
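
A minimal sketch of the full lifecycle (the file name is just an example; limit-fs is a FUSE filesystem, so the standard fusermount tool unmounts it):

$ limit-fs ~/backups
$ cp huge-dump.sql ~/backups   # files are stored as usual; the oldest go away as space runs low
$ fusermount -u ~/backups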

Using HTTP proxies in OpenShift Java projects

To use HTTP proxies with Java in OpenShift you should know:

– that tools like maven don’t honor the http_proxy & co. environment variables;
– that each container image has its own build script (assemble) that may or may not take http_proxy into account.

Always check the image documentation if you need proxies:

- https://docs.openshift.com/online/using_images/s2i_images/java.html
- https://access.redhat.com/solutions/1758313
- https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/red_hat_jboss_enterprise_application_platform_for_openshift/configuring_eap_openshift_image#configuring_eap_env_vars

A general and flexible solution is:

– to provide a configuration/settings.xml in your project, eg.

github.com/ioggstream/java-project.git
- pom.xml
- src/
- configuration/settings.xml

– to add the proxy entries in settings.xml.

OpenShift interpolates every *PROXY* environment variable, stripping parts of the value, so you may not always be able to reference them directly like this:

 
  <proxies>
    <proxy>
      <id>default</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>${env.HTTP_PROXY_HOST}</host>
      <port>${env.HTTP_PROXY_PORT}</port>
    </proxy>
  </proxies>
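
If the build invokes Maven directly, you can point it at the custom file with the standard -s flag, eg.

 mvn -s configuration/settings.xml package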

JBoss images support the following variables via the `assemble` script:

– HTTP*_PROXY_HOST
– HTTP*_PROXY_PORT

Another solution is to:

– get the assemble script from the image you’re using (different images, different assemble scripts)
– customize it so that it uses the environment variables to build a custom settings.xml to be used within the build
– add it to .s2i/bin/assemble

Here’s an example assemble script supporting proxies: https://github.com/ivanthelad/openshift-jee-sample/blob/jws/.sti/bin/assemble
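
For reference, here is a minimal sketch of such a script. It assumes the base image’s original assemble lives in /usr/local/s2i/assemble and that the image honors MAVEN_ARGS_APPEND; both vary per image, so check yours:

#!/bin/bash
# .s2i/bin/assemble - sketch only, adapt the paths to your base image.
# Build a settings.xml from the proxy environment variables, then
# delegate to the image's original assemble script.
if [ -n "${HTTP_PROXY_HOST:-}" ]; then
  cat > /tmp/settings.xml <<EOF
<settings>
  <proxies>
    <proxy>
      <id>default</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>${HTTP_PROXY_HOST}</host>
      <port>${HTTP_PROXY_PORT:-3128}</port>
    </proxy>
  </proxies>
</settings>
EOF
  # Make maven pick up the generated settings (variable support is image-specific).
  export MAVEN_ARGS_APPEND="${MAVEN_ARGS_APPEND:-} -s /tmp/settings.xml"
fi
exec /usr/local/s2i/assemble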

Smoke testing OpenShift with ansible-galaxy

The ansible-galaxy ioggstream.ocp_health role can run a smoke test on OpenShift in minutes, checking:

– etcd consistency
– rhn subscriptions
– master status
– registry, ipfailover and router instances

NOTE: it’s not a replacement for oadm diagnostics ;)


ansible-galaxy install ioggstream.ocp_health
# optionally tweak parameters
# vi /root/.ansible/roles/ioggstream.ocp_health/tests/ocp_health.yml
ansible-playbook --check /root/.ansible/roles/ioggstream.ocp_health/tests/ocp_health.yml

If you want to create a test project with two apps, one with a PVC and one with ephemeral storage, set create_test_project:


ansible-playbook -v -e create_test_project=yes /root/.ansible/roles/ioggstream.ocp_health/tests/ocp_health.yml

Customizing OpenShift deployment configuration files

You may need to customize a configuration file for, eg. an openshift-router or the registry.
If the dc supports the TEMPLATE_FILE environment variable, you can do it in three steps; otherwise you should find
a hook to mount the file in an expected location.

First get the original configuration file and modify it as desired. In this example, we are increasing the maximum allowed connections.

 # oc rsh router-xxx cat /var/lib/haproxy/conf/haproxy-config.template > haproxy-config.template
 # vim haproxy-config.template  # modify as desired, eg.

--- /var/lib/haproxy/conf/haproxy-config.template       
+++ /var/lib/haproxy/conf/custom/haproxy-config.template       
@@ -7,6 +7,7 @@
 {{ $workingDir := .WorkingDir }}
 global
   # maxconn 4096
+  maxconn {{env "ROUTER_MAX_CONNECTIONS" "20000"}}
   daemon
 {{ with (env "ROUTER_SYSLOG_ADDRESS" "") }}
   log {{.}} local1 {{env "ROUTER_LOG_LEVEL" "warning"}}
@@ -39,6 +40,7 @@

 defaults
   # maxconn 4096
+  maxconn {{env "ROUTER_MAX_CONNECTIONS" "20000"}}
   # Add x-forwarded-for header.
 {{ if ne (env "ROUTER_SYSLOG_ADDRESS" "") ""}}
   option httplog

1- create a configmap from your new template file;
2- reference the new file via the TEMPLATE_FILE environment variable, if supported;
3- use the volume feature to mount the configmap as a file.

 
 # oc create configmap router-haproxy-34 --from-file=haproxy-config.template
 # oc set env dc/router TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
 # oc volume dc/router --add --overwrite     \
      --name=config-volume     \
      --mount-path=/var/lib/haproxy/conf/custom     \
      --source='{"configMap": { "name": "router-haproxy-34"}}'

Now verify and roll out the new config:

 oc describe dc router
 oc rollout latest router
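
After the rollout you can double-check that the pod picked up the custom template (the pod name is just an example):

 # oc set env dc/router --list | grep TEMPLATE_FILE
 # oc rsh router-2-xxxxx grep maxconn /var/lib/haproxy/conf/custom/haproxy-config.template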

Brief OpenShift troubleshooting

If you have issues after an automagic openshift-on-openstack deployment:

1. Remember: every buildconfig created *before* the registry is not authorized to push images

2. Remember: hawkular is a Java application. Startup is slow. Just browse there and wait for it to come up

3. Ansible is your friend. To get container logs, just


ansible all -m shell -a 'ls /var/log/containers/CONTAINER_NAME*'

ansible all -m shell -a 'cat /var/log/containers/CONTAINER_NAME*' > CONTAINER_NAME.log

4. If a container doesn’t start up during the deployment, a broken image may have been downloaded:

Jun 1 23:30:36 dev-7-infra-0 atomic-openshift-node: I0601 23:30:36.234103 32913 server.go:608] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"router-1-deploy", UID:"033670a9-470e-11e7-878f-fa163eac2bf7", APIVersion:"v1", ResourceVersion:"936", FieldPath:""}): type: 'Warning' reason: 'FailedSync' Error syncing pod, skipping: failed to "StartContainer" for "POD" with RunContainerError: "runContainer: Error response from daemon: {\"message\":\"invalid header field value \\\"oci runtime error: container_linux.go:247: starting container process caused \\\\\\\"exec: \\\\\\\\\\\\\\\"/pod\\\\\\\\\\\\\\\": stat /pod: no such file or directory\\\\\\\"\\\\n\\\"\"}"

Clean up the docker repo:


docker ps -aq | xargs docker rm
docker rmi 90e9207f44f0 --force
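
Then retry the deployment so a fresh image is pulled (router is just an example):

oc rollout retry dc/router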

5. Run oadm diagnostics on the master ;)

6. Check `oc get hostsubnet`

OpenShift cockpit quickstart

Enabling openshift cockpit with the latest releases is quite simple, but requires using a local system account.

1- Install cockpit


yum install cockpit cockpit-kubernetes
iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT
systemctl enable cockpit.service --now

2- Create a custom user to be used for cockpit administration


useradd -m -k /home/cloud-user cockpit
passwd cockpit

3- Access cockpit via an SSH tunnel from the management network, using the cockpit user credentials.


ssh -D11111 cloud-user@bastion
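# configure the browser to use localhost:11111 as a SOCKS proxy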
firefox http://master-ip:9090

OpenShift 3.4: broken ansible dependencies

The new ansible openshift 3.4 installation playbook is very nice.

Just set the deploy variables in the inventory and everything will rise from the ground magically…

Well, not immediately though. Due to this bug you need to:

– downgrade ansible to 2.2.0.0 (the latest is 2.2.1.0)

Otherwise the playbook will try to serialize python objects which are actually strings.

Eg. if your configuration contains:

- name: "MyServer"

Ansible looks for a MyServer() class instead of using str("MyServer").
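
The downgrade itself, assuming an RPM-based installation (adapt if ansible was installed via pip):

yum downgrade ansible-2.2.0.0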

Trace HTTP calls with python-requests

Today python-requests is the de-facto standard library for REST calls.

As everything goes over TLS, you can no longer just sniff the wire; instead, trace API calls from the client with the following:


import logging

import httplib as http_client  # Python 3: from http import client as http_client

logging.basicConfig()  # ensure the log records are actually printed
http_client.HTTPConnection.debuglevel = 1
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True
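
With that in place, every call made through requests dumps the HTTP headers, eg. (httpbin.org is just a convenient test endpoint):

import requests

requests.get("https://httpbin.org/get")  # the request/response headers are now logged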

Set command output as facts with ansible

Having to check the NTP configuration on a distributed cluster, I had to parse the `timedatectl` output into a dict and apply various checks.

I did this via the (infamous ;)) jinja templates|pipelines.

# This is the check_time.yml playbook.

- name: Register the timedatectl output even in check mode. This command doesn't modify server configuration.
  shell: "timedatectl | grep ': '"
  check_mode: no
  register: timedatectl_output

# Note that:
#  - to use timedatectl_output in with_items we need to QUOTE-AND-BRACE it
#  - we can default the previously undefined timedatectl_status dictionary via
#       variable | default(VALUE)
- name: Process timedatectl_output lines one at a time and update repeatedly the timedatectl_status variable using combine().
  set_fact:
    timedatectl_status: >
      {{
        timedatectl_status | default({}) |
        combine(
          dict([ item.partition(': ')[::2]|map('trim') ])
        )
      }}
  with_items: "{{timedatectl_output.stdout_lines}}"
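
Each output line like `NTP synchronized: yes` is split on ': ', trimmed and merged into the dict, so after the loop the fact looks roughly like this (values are illustrative):

timedatectl_status:
  Local time: "Wed 2017-06-07 10:00:00 UTC"
  NTP synchronized: "yes"
  RTC in local TZ: "no"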

Now we can check ;)

- name: Clock synchronized
  fail: msg="Clock unsynchronized {{timedatectl_status}}"
  when: timedatectl_status['NTP synchronized'] == 'no'

- name: All hw clocks are utc
  fail: msg="hwclock not utc {{timedatectl_status}}"
  when: timedatectl_status['RTC in local TZ'] == 'yes'
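
Run the playbook in check mode so nothing is changed on the target hosts (the inventory name is an example):

ansible-playbook -i hosts --check check_time.yml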


MySQL JSON fields on the ground!

Having to add a series of custom fields to a quite relational application, I decided to try the new JSON fields.

As of now you can:

– create json fields
– manipulate them with json_extract, json_unquote
– create generated fields from json entries

You can not:

– index json fields directly: to index them, create a generated field and index that;
– retain the original json datatype (eg. string, int), as json_extract always returns strings.

Let’s start with the requirements:

# requirements.txt
mysql-connector-python
Flask-SQLAlchemy==2.0
SQLAlchemy>=1.1.3

Let’s create a simple flask app connected to a db.

import flask
import flask_sqlalchemy
from sqlalchemy.dialects.mysql import JSON

# A simple flask app connected to a db
app = flask.Flask('app')
app.config['SQLALCHEMY_DATABASE_URI']='mysql+mysqlconnector://root:secret@localhost:3306/test'
db = flask_sqlalchemy.SQLAlchemy(app)

Add a class to the playground and create it on the db. We need sqlalchemy>=1.1 to support the JSON type!

# The model
class MyJson(db.Model):
    name = db.Column(db.String(16), primary_key=True)
    json = db.Column(JSON, nullable=True)

    def __init__(self, name, json=None):
        self.name = name
        self.json = json

# Create table
db.create_all()

Thanks to flask-sqlalchemy we can just use db.session ;)

# Add an entry
entry = MyJson('jon', {'do': 'it', 'now': 1})
db.session.add(entry)
db.session.commit()

We can now verify with a raw select that the entry is serialized on the db as a string:

# Get the entry in standard SQL
entries = db.engine.execute(db.select(columns=['*'], from_obj=MyJson)).fetchall()
(name, json_as_string), = entries  # unpack the result (it's just one row!)
assert isinstance(json_as_string, basestring)

Now a raw select to extract the json fields:

entries = db.engine.execute(db.select(columns=['name', 'json_extract(json, "$.now")'], from_obj=MyJson)).fetchall()

(name, json_now), = entries  # unpack the result (it's just one row!)
assert isinstance(json_now, basestring)
assert json_now != entry.json['now']  # '1' != 1
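
Finally, to index a JSON entry you create a generated column and index that, as noted above. A sketch in MySQL 5.7 syntax (the table name my_json follows the model above; the column and index names are made up):

# Add a generated column extracting $.now, then index it.
db.engine.execute(
    'ALTER TABLE my_json ADD COLUMN json_now INT '
    'AS (JSON_EXTRACT(json, "$.now")) STORED'
)
db.engine.execute('CREATE INDEX idx_json_now ON my_json (json_now)')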