2018-04-18

Creating a Singularity container

For some project I had to create a Singularity container (because of the environment where it needs to run). Singularity is a container technology used by scientists. It turned out to be very simple. Although there is no recent enough Singularity package in the normal Fedora repos, they have a nice guide on how to build your own packages and that worked for me on the first try. First, the "Singularity" build file (similar to a "Dockerfile") - I have based it on a recent Fedora docker image:
$ cat Singularity 
Bootstrap: docker
From: fedora:latest

%help
    This is a container for ... project
    See https://gitlab.com/...
    Email ...

%labels
    Homepage https://gitlab.com/...
    Author ...
    Maintainer ...
    Version 0.1

%files
    /home/where/is/your/project /projectX

%post
    dnf -y install python2-biopython python2-numpy python2-tabulate python2-scikit-learn pymol mono-core python2-unittest2 python2-svgwrite python2-requests
    chown -R 1000:1000 /projectX   # probably not important

%test
    cd /projectX
    python -m unittest discover

%environment
    export LC_ALL=C

%runscript
    exec /projectX/worker.sh
The majority of the above is not needed: e.g. "%help" has a completely free form, keys in "%labels" do not seem to be codified, and "%test", which is run as the last step of the build process, is also optional. To build it:
$ sudo singularity build --writable projectX.simg Singularity   # *.simg is the native format of singularity-2.4.6
$ sudo singularity build --writable projectX.img projectX.simg   # where the project is supposed to run there is only 2.3.2, which needs the older *.img format, so convert the *.simg into it
My original idea was to have the project in a writable container (hence the option "--writable" above), but that would require me to run it as root again (or I'm missing something), so I have ended up with running the container in read-only mode and mounting my project into it, to have a read-write-able directory where I can generate the data:
$ echo "cd /projectX; ./worker.sh" \
      | singularity exec --bind projectX/:/projectX projectX.img bash
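For completeness, the %runscript can be started directly as well; a sketch, assuming the same bind mount trick works with "run" too:
$ singularity run --bind projectX/:/projectX projectX.img   # executes %runscript, i.e. /projectX/worker.sh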
So far it looks like it just works.

2018-02-08

TaskWarrior and listing tasks completed in a specified time range

Besides a mountain of inefficient papers (the good thing about paper is that you can lose it easily), I'm using TaskWarrior to manage my TODOs. Today I'm creating a list of things I have finished in the last (fiscal) year, so its reporting and filtering capabilities come in handy. But as a not-too-advanced user, it took me some time to discover how to list tasks completed in a specified time range:

$ task end.after:2017-03-01 end.before:2018-03-01 completed

ID UUID     Created    Completed  Age  P Project         Tags R Due        Description                                                                                                                                                        
[...]                                                                                                                   
 - 5248839d 2017-01-24 2017-03-20 1.0y H                                   Prepare 'Investigating differences in code coverage reports' presentation
[...]
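
If a machine-readable version of the same list is needed (e.g. to post-process it into a report), the export command with the same filter should do the trick - it prints the matching tasks as JSON; a sketch:

$ task status:completed end.after:2017-03-01 end.before:2018-03-01 export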

2018-02-01

Running a WSGI application in OpenShift v.3 for the first time

For some time I have been running a publicly available web application the Brno People team uses to determine technical interests of employee candidates. The app was running on OpenShift v.2, but that was discontinued and I had to port it to OpenShift v.3. I was postponing the task for multiple months and got to the state when the v.2 instances were finally shut down. It turned out that porting is not that hard. This is what I have done.

Note that I'm using a Red Hat employee account, so some paths might be different when OpenShift is being used "normally" (you will see something like ....starter-us-east-1.openshift.com instead of my ....rh-us-east-1.openshift.com).

Because I need to put some private data into the image, I want the image to be accessible only from my OpenShift Online account. Anyway, I have created a Dockerfile based on the Fedora Dockerfile template for Python (is it official?) like this:

FROM fedora
MAINTAINER Jan Hutar <jhutar@redhat.com>
RUN dnf -y update && dnf clean all
RUN dnf -y install subversion python mod_wsgi && dnf clean all
RUN ...
VOLUME ["/xyz/data/"]
WORKDIR /xyz
EXPOSE 8080
USER root
CMD ["python", "/xyz/application"]

TODO for the container is: move to Python 3 (so I do not need to install python2 and its dependencies), figure out how to have the private data available to the container without baking them into the image (it is a quite big directory structure), go through these nice General Container Image Guidelines and explore this Image Metadata thingy.

Once I had that, I needed to log in to the OpenShift registry, build my image locally, test it and push it:

sudo docker build --tag selftest .
sudo docker run -ti --publish 8080:8080 --volume $( pwd )/data/:/xyz/data/ selftest   # now I can check if all looks sane with `firefox http://localhost:8080`
oc whoami -t   # this shows token I can use below
sudo docker login -u <username> -p <token> registry.rh-us-east-1.openshift.com
sudo docker tag selftest registry.rh-us-east-1.openshift.com/selftest/selftest
sudo docker push registry.rh-us-east-1.openshift.com/selftest/selftest
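
Pushing into the registry should create (or update) an image stream in the matching project; a quick sanity check, assuming the oc client is logged into that project:

oc get is   # lists image streams in the current project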

Now I have used the Console on https://console.rh-us-east-1.openshift.com/ to create a new application, then added a deployment to that application with Add to Project -> Deploy Image and selected the following (well, I could have used the cli tool oc for that - see the sketch after the list):

  • surprisingly you do not choose "Image Name" here
  • but you choose "Image Stream Tag" with:
    • Namespace: selftest
    • Image Stream: selftest
    • Tag: latest
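
As mentioned, the same should be doable with the cli; a rough, untested sketch (assuming the image stream created by the push above):

oc new-app --image-stream=selftest:latest --name=selftest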

The next step looks logical, but I got stuck on it for some time; OpenShift folks helped me out (thanks Jiří!). I just needed to be aware of the OpenShift Online Restrictions.

So, because I wanted persistent storage and my account uses Amazon EC2, I can not use the "Shared Access (RWX)" storage type (useful when a new pod is starting while the old pod is still running), and I had to change the way new pods start to "first stop the old one, then start the new one": Applications -> Deployments -> my deployment -> Actions -> Edit -> Strategy Type: Recreate. I have created a storage with the "RWO (Read-Write-Once)" access mode, added it to the deployment (... -> Actions -> Add Storage) and made sure that this storage is the only one attached to the deployment (... -> Actions -> Edit YAML and check that the keys spec.template.spec.containers.volumeMounts and spec.template.spec.volumes only contain the one volume you have just attached). In my case, there is this in the YAML definition:

[...]
    spec:
      containers:
        - env:
          [...]
          volumeMounts:
            - mountPath: /xyz/data
              name: volume-jt8t6
      [...]
      volumes:
        - name: volume-jt8t6
          persistentVolumeClaim:
            claimName: xyz-storage-claim
[...]
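
For the record, the storage attachment can presumably be done from the command line as well; a rough sketch (dc/<deployment> is a placeholder, the volume and claim names are taken from the YAML above):

oc set volume dc/<deployment> --add --name=volume-jt8t6 --type=persistentVolumeClaim --claim-name=xyz-storage-claim --mount-path=/xyz/data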

When working with this, I have also used ... -> Actions -> Pause Rollouts. It is also possible to configure environment variables for a deployment in ... -> Actions -> Edit -> Environment Variables, which is useful to pass passwords and stuff into your app (so I do not need to store them in the image). In the app I use something like import os; SMTP_SERVER_PASSWORD = os.getenv('XYZ_SMTP_SERVER_PASSWORD', default='') to read that.
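
The same variable should be settable from the cli too; a sketch (again with a placeholder deployment name):

oc set env dc/<deployment> XYZ_SMTP_SERVER_PASSWORD='...'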

To make the app available from the outside world, I have created a route in Applications -> Routes -> Create Route. It created a domain like http://xyz-route-xyz.6923.rh-us-east-1.openshiftapps.com for me.
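
The cli way to get a route should be something like this (the service name is a placeholder):

oc expose service <xyz-service>   # creates a route with a generated hostname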

Now it looks like everything works for me and I'm kinda surprised how easy it was. I plan to get a nicer domain and configure its CNAME DNS record, and to explore the monitoring possibilities OpenShift has. I'll see how it goes.

2017-12-25

Monitoring Satellite 5 with PCP (Performance Co-Pilot)

During some performance testing we have done, I have used PCP to monitor basic stats about Red Hat Satellite 5 (it could be applied to Spacewalk as well). I was unable to make it fully sufficient, but maybe somebody could fix and enhance it. I have taken lots from lzap. First of all, install PCP (the PostgreSQL and Apache PMDAs live in the RHEL Optional repo as of now; in CentOS 7 they seem to be directly in the base repo):
subscription-manager repos --enable rhel-6-server-optional-rpms
yum -y install pcp pcp-pmda-postgresql pcp-pmda-apache
subscription-manager repos --disable rhel-6-server-optional-rpms
Now start services:
chkconfig pmcd on
chkconfig pmlogger on
service pmcd restart
service pmlogger restart
Install the PostgreSQL and Apache monitoring plugins:
cd /var/lib/pcp/pmdas/postgresql
./Install   # select "c(ollector)" when it asks
cd /var/lib/pcp/pmdas/apache
echo -e "<Location /server-status>\n  SetHandler server-status\n  Allow from all\n</Location>\nExtendedStatus On" >>/etc/httpd/conf/httpd.conf
service httpd restart
./Install
Configure hot proc (watch only the java and httpd processes):
cat >/var/lib/pcp/pmdas/proc/hotproc.conf <<EOF
> #pmdahotproc
> Version 1.0
> fname == "java" || fname == "httpd"
> EOF
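To check the new metrics are actually being collected, something like this should work (metric names are the same ones pumped to Graphite below):
pminfo -f apache.requests_per_sec
pminfo -f postgresql.stat.database.tup_fetched
pmval -s 3 hotproc.memory.rss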
And because I have a Graphite/Grafana setup available, I was pumping selected metrics there (from RHEL 6, which uses SysV init):
# tail -n 1 /etc/rc.local
pcp2graphite --graphite-host carbon.example.com --prefix "pcp-jhutar." --host localhost - kernel.all.load mem.util.used mem.util.swapCached filesys.full network.interface.out.bytes network.interface.in.bytes disk.dm.read disk.dm.write apache.requests_per_sec apache.bytes_per_sec apache.busy_servers apache.idle_servers postgresql.stat.all_tables.idx_scan postgresql.stat.all_tables.seq_scan postgresql.stat.database.tup_inserted postgresql.stat.database.tup_returned postgresql.stat.database.tup_deleted postgresql.stat.database.tup_fetched postgresql.stat.database.tup_updated filesys.full hotproc.memory.rss &

Problems I had with this

For some reason I have not investigated closely, after some time the PostgreSQL data stopped being visible in Grafana. I was also unable to get the hotproc data into Grafana. And I was experimenting with PCP's own emulation of Graphite and its Grafana, but PCP's Graphite lacks filters, which makes it hard to use and not practical for anything beyond simple stats.

2017-12-22

"Error: Too many open files" when inside Docker container

Does not work: various ulimit settings for daemon

We have containers built from this Dockerfile, running on RHEL 7 with an oldish docker-1.10.3-59.el7.x86_64. Containers are started with:

# for i in $( seq 500 ); do
      docker run -h "$( hostname -s )container$i.example.com" -d --tmpfs /tmp --tmpfs /run -v /sys/fs/cgroup:/sys/fs/cgroup:ro --ulimit nofile=10000:10000 r7perfsat
  done

and we have set limits for the docker service on the docker host:

# cat /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=10485760
LimitNPROC=10485760

but we have still seen "Too many open files" issues inside the containers. It could happen when installing a package with yum (resulting in a corrupted rpm database; rm -rf /var/lib/rpm/__db.00*; rpm --rebuilddb saved it though) and when enabling a service (our containers have systemd in them on purpose):

# systemctl restart osad
Error: Too many open files
# echo $?
0

Because I was stupid, I did not check the journal (in the container) the moment I spotted the failure for the first time; when I finally did, it had this:

Dec 21 10:18:54 b08-h19-r620container247.example.com journalctl[39]: Failed to create inotify watch: Too many open files
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: systemd-journal-flush.service: main process exited, code=exited, status=1/FAILURE
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: inotify_init1() failed: Too many open files
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: inotify_init1() failed: Too many open files

Does work: fs.inotify.max_user_instances

In the end I have run into some issue and the very last comment there had a thing I have not seen before. I have ended up with:

# cat /etc/sysctl.d/40-max-user-watches.conf
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
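
To apply the new values without a reboot, reloading the sysctl configuration should be enough:

# sysctl --system   # re-reads /etc/sysctl.d/*.conf (among others)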

Default on a different machine is:

# sysctl -a 2>&1 | grep fs.inotify.max_user_
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

Looks like increasing fs.inotify.max_user_instances helped and our containers are stable.

2017-11-04

Working local DNS for your libvirtd guests

Update 2017-12-25: a possibly better way: Definitive solution to libvirt guest naming

This is basically just a copy&paste of commands from these great posts: [Howto] Automated DNS resolution for KVM/libvirt guests with a local domain and Automatic DNS updates from libvirt guests, which already saved me a lot of typing. So, with my favorite domain example.com:

Make libvirtd's dnsmasq act as an authoritative nameserver for the example.com domain (the important bit is the <domain name='example.com' localOnly='yes'/> line):

# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>2ed15952-d1c0-4819-bde5-c8f7278ce3ac</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:a4:40:a7'/>
  <domain name='example.com' localOnly='yes'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

And restart that network:

# virsh net-edit default   # do the edits here
# virsh net-destroy default
# virsh net-start default

Now configure NetworkManager to start its own dnsmasq, which acts as your local caching nameserver and forwards all requests for the example.com domain to the 192.168.122.1 nameserver (which is libvirtd's dnsmasq):

# cat /etc/NetworkManager/conf.d/localdns.conf
[main]
dns=dnsmasq
# cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf
server=/example.com/192.168.122.1

And restart NetworkManager:

# systemctl restart NetworkManager

Now if I have a guest with its hostname set to "satellite.example.com" (check HOSTNAME=... in /etc/sysconfig/network on RHEL 6 and below, or hostnamectl set-hostname ... on RHEL 7), I can ping it by hostname from both the virtualization host and other guests on that host. If you have some old OS release on the guest (like RHEL 6.5 from what I have tried; 6.8 does not need this), set the hostname with DHCP_HOSTNAME=... in /etc/sysconfig/network-scripts/ifcfg-eth0 (on the guest) to make this work.
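
To check the whole chain, I can query libvirtd's dnsmasq directly and then go through the local caching one; a quick sketch:

$ dig +short satellite.example.com @192.168.122.1   # ask libvirtd's dnsmasq directly
$ dig +short satellite.example.com                  # go through NetworkManager's local dnsmasq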

2017-08-13

Quick Python performance tuning cheat-sheet

Just a few commands without any context:

Profiling with cProfile

Profiling helped me to find the slowest functions, because when optimizing I need to focus on those (best ratio of work needed vs. benefit). In my case it pointed at a function which did some unnecessary calculations over and over again:

$ python -m cProfile -o cProfile-first_try.out ./layout-generate.py ...
$ python -m pstats cProfile-first_try.out 
Welcome to the profile statistics browser.
cProfile-first_try.out% sort
Valid sort keys (unique prefixes are accepted):
cumulative -- cumulative time
module -- file name
ncalls -- call count
pcalls -- primitive call count
file -- file name
line -- line number
name -- function name
calls -- call count
stdname -- standard name
nfl -- name/file/line
filename -- file name
cumtime -- cumulative time
time -- internal time
tottime -- internal time
cProfile-first_try.out% sort tottime
cProfile-first_try.out% stats 10
Sat Aug 12 23:19:40 2017    cProfile-first_try.out

         18508294 function calls (18501563 primitive calls) in 8.369 seconds

   Ordered by: internal time
   List reduced from 2447 to 10 due to restriction <10>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    27837    4.230    0.000    5.015    0.000 ./utils_matrix2layout.py:14(get_distance_matrix_2d)
    10002    1.356    0.000    1.513    0.000 ./utils_matrix2layout.py:244(get_measured_error_2d)
  5674796    0.572    0.000    0.572    0.000 /usr/lib64/python2.7/collections.py:90(__iter__)
  5340664    0.219    0.000    0.219    0.000 {math.sqrt}
  5432768    0.189    0.000    0.189    0.000 {abs}
   230401    0.183    0.000    0.183    0.000 /usr/lib64/python2.7/collections.py:71(__setitem__)
        1    0.178    0.178    0.282    0.282 ./utils_matrix2layout.py:543(count_angles_layout)
    10018    0.119    0.000    0.345    0.000 /usr/lib64/python2.7/_abcoll.py:548(update)
        1    0.102    0.102    6.749    6.749 ./utils_matrix2layout.py:393(iterate_evolution)
     1142    0.092    0.000    0.111    0.000 /usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py:1299(svd)

To explain the columns, the Instant User’s Manual says:

tottime
for the total time spent in the given function (and excluding time made in calls to sub-functions)
cumtime
is the cumulative time spent in this and all subfunctions (from invocation till exit). This figure is accurate even for recursive functions.

Let's compile to C with Cython

Simply performing this on the module which does most of the work gave me about a 20% speedup:

# dnf install python2-Cython
$ cython utils_matrix2layout.py
$ gcc `python2-config --cflags --ldflags` -shared utils_matrix2layout.c -o utils_matrix2layout.so
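
Python should then prefer the compiled utils_matrix2layout.so over the .py file when importing; a quick way to confirm which file actually gets loaded:

$ python2 -c 'import utils_matrix2layout; print(utils_matrix2layout.__file__)'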

There is much more that could be done to optimize it, but that would need additional work, so not now :-) Some helpful links: