Which processes have the most open files and consume the most memory?

For some testing, I wanted to watch the number of open files per process and the memory consumed (1) by all processes (2) of the same name, to get a global overview. Graphing this over time is another exercise which can show trends.

E.g. the following count of open files (this includes all libraries loaded by a binary, open sockets, ...) comes from a freshly installed Spacewalk server from last evening, and it is not surprising IMO:

# lsof | cut -d ' ' -f 1 | sort | uniq -c | sort -n | tail
    121 cobblerd
    121 sshd
    122 gdbus
    131 master
    264 gssproxy
    282 gmain
    344 tuned
   1256 httpd
   4390 postgres
  25432 java
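The count for a single process can be cross-checked via /proc, assuming a Linux system. Note that lsof's per-process line count above also includes memory-mapped libraries, while this counts only actual file descriptors:

```shell
# count file descriptors currently held open by one process
# (here: the current shell, via the $$ PID)
ls /proc/$$/fd | wc -l
```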

And this is the total memory for all processes with the same name, from the same server; again, nothing unexpected:

# ps --no-headers -eo rss,comm >a; for comm in $( sed 's/^\s*[0-9]\+\s*\(.*\)$/\1/' a | sort -u ); do size=$( grep "\s$comm$" a | sed 's/^\s*\([0-9]\+\)\s*.*$/\1/' | paste -sd+ - | bc ); echo "$size $comm"; done | sort -n | tail
16220 tuned
18104 beah-fwd-backen
18664 beah-srv
23544 firewalld
24432 cobblerd
26088 systemd
26176 beah-beaker-bac
71760 httpd
227900 postgres
1077956 java
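The same per-command aggregation can be done in a single awk pass, without the temporary file (a sketch; it assumes command names without spaces, as awk only looks at the second column):

```shell
# sum the RSS column per command name, then sort like above
ps --no-headers -eo rss,comm \
  | awk '{ rss[$2] += $1 } END { for (c in rss) print rss[c], c }' \
  | sort -n | tail
```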

BTW man ps says the following about RSS (which is used above):

resident set size, the non-swapped physical memory that a task has used (in kiloBytes).
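For a single process, the same value can also be read directly from /proc on any Linux system:

```shell
# VmRSS in /proc/<pid>/status is the same resident set size that ps reports
grep VmRSS /proc/self/status
```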


Serializing one task in an ansible playbook

In my workflow, I'm running a playbook on all hosts from my inventory, but in the middle I need to execute one command on a different system (let's creatively call it the "central server") for each host in the inventory. What's bad is that the command is not capable of running in parallel, so I need to serialize it a bit. The initial version, which does not do any serialization, was:

- hosts: all
  remote_user: root
  tasks:
    - name: "Configure something on host"
      command: ...
    - name: "Configure something on central server for each host"
      command: >
        some_command --host "{{ ansible_fqdn }}"
      delegate_to: centralserver.example.com
    - name: "Configure something else on host"
      command: ...

But "some_command" cannot run multiple times in parallel and I cannot fix that, so this is the first way I used to serialize it (so it runs only once on the central server at any time):

- hosts: all
  remote_user: root
  tasks:
    - name: "Configure something on host"
      command: ...
- hosts: all
  remote_user: root
  serial: 1
  tasks:
    - name: "Configure something on central server for each host"
      command: >
        some_command --host "{{ ansible_fqdn }}"
      delegate_to: centralserver.example.com
- hosts: all
  remote_user: root
  tasks:
    - name: "Configure something else on host"
      command: ...

So I have created 3 plays from the previous 1 in my playbook, where the middle one is serialized by the "serial: 1" option. I have not used "forks: 1", because you can set that value only in ansible.cfg or on the ansible-playbook command line.

Another way was to keep only one play in the playbook, run the given task only once and iterate over the whole inventory:

- hosts: all
  remote_user: root
  tasks:
    - name: "Configure something on host"
      command: ...
    - name: "Configure something on central server for each host"
      command: >
        some_command --host "{{ item }}"
      with_items: "{{ groups['all'] }}"
      run_once: true
      delegate_to: centralserver.example.com
    - name: "Configure something else on host"
      command: ...

In my case I needed the hostname, so in the command I used the host variable {{ hostvars[item]['ansible_fqdn'] }}.
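That variant of the task then looks like this (a sketch; the task name, placeholder command and central server name are reused from the example above):

```
    - name: "Configure something on central server for each host"
      command: >
        some_command --host "{{ hostvars[item]['ansible_fqdn'] }}"
      with_items: "{{ groups['all'] }}"
      run_once: true
      delegate_to: centralserver.example.com
```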


Running dockerd on VM so containers can be reached from other VMs

Recently I needed this kind of setup for some testing, so I wanted to share it. This way, all your libvirt guests can talk directly to all your docker containers and vice versa, all nicely isolated on one system. All involved pieces are RHEL7.
[Figure: schema of docker containers running in a libvirt/KVM guest, all on one network]
It is not perfect (I'm weak at networking): the IP dockerd assigns to your container can conflict with some VM's IP. This is because docker assigns IPs from the defined range sequentially, while VMs get random IPs from the same range assigned by libvirtd. I have also seen some disconnects from the Docker VM when starting containers there, and sshing to a container from the docker VM was lagging as well.
Libvirt is just a default configuration with its default network.
On one of the guests I have installed Docker (on RHEL7 it is in the rhel-7-server-extras-rpms repository) and changed its configuration to use a (to-be-created) custom bridge:
[root@docker1 ~]# grep ^OPTIONS /etc/sysconfig/docker
OPTIONS='--selinux-enabled -b=bridge0'
As I had already started Docker, I wanted to remove the default docker0 bridge it created, so simply:
[root@docker1 ~]# ip link set docker0 down   # first bring it down
[root@docker1 ~]# brctl delbr docker0   # delete it (brctl is in bridge-utils package)
Now create a new bridge which will get a "public" IP (in the scope of libvirt's network) assigned:
[root@docker1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
[root@docker1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bridge0 
[root@docker1 ~]# service network restart
[root@docker1 ~]# service docker restart
This way containers get IPs from the same range as the virtual machines.
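The contents of the two ifcfg files are not shown above; a minimal sketch of what they could look like (the DHCP choice and the exact key set are assumptions here; the idea is that eth0 is enslaved to bridge0, which gets the IP):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
ONBOOT=yes
BRIDGE=bridge0

# /etc/sysconfig/network-scripts/ifcfg-bridge0 (sketch)
DEVICE=bridge0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
```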


Mirroring git repository and making it accessible via git://

I needed to make an existing git repository, where I cannot change the way it is served, accessible via git://... It took some googling, so here is the story:
First, install the git-daemon package. For RHEL 7, the package lives in the RHEL Server Optional repository.

Second, start the git daemon via systemd services. I was confused here, because git-daemon ships with a git@.service file, which is a template unit file, but it does not contain the magic %i or %I placeholder.

# rpm -ql git-daemon | grep systemd

Fortunately I do not need to know things; being able to find them is enough. Basically you have to start (and enable) git.socket and enable it in the firewall.
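The concrete commands for that, assuming systemd and firewalld defaults (firewalld ships a predefined "git" service for port 9418):

```shell
systemctl enable --now git.socket
firewall-cmd --permanent --add-service=git
firewall-cmd --reload
```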

Next, clone a copy of the git repository. This one is easy: you just need to create a RepoName.git directory in /var/lib/git/ (the default; see the git@.service file) owned by user nobody (as the git daemon runs under that user by default; see the service file):

# mkdir /var/lib/git/RepoName.git
# chown nobody:nobody /var/lib/git/RepoName.git
# runuser -u nobody /bin/bash
$ cd /var/lib/git/
$ git clone --bare https://gitservice.example.com/RepoName.git
$ touch RepoName.git/git-daemon-export-ok   # this marks repo as exportable by daemon

Optionally, enable git's archive protocol on the repo. Put the following into RepoName.git/config:

[daemon]
        uploadarch = true

Last, make the bare repo update itself periodically from the source. It looks like a plain git fetch is not enough (in a bare clone it does not update the local branches), so fetch with an explicit refspec:

# runuser -u nobody -- crontab -e
@hourly cd /var/lib/git/RepoName.git; git fetch -q origin master:master

Update 2017-03-21: If it can happen that somebody changes the past in the repo, it would be good to add --force to the git fetch command you run in cron, so local branches are overwritten when there is some non-fast-forward change in the upstream repo.

$ git fetch origin master:master
From https://gitlab.cee.redhat.com/satellite5qe/RHN-Satellite
 ! [rejected]        master     -> master  (non-fast-forward)

I also added || echo "Fetch of RepoName.git failed" at the end of the cron command so I'll be warned when the repo fails to sync.
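With both updates applied, the whole crontab entry could look like this (RepoName is still the placeholder from above):

```
@hourly cd /var/lib/git/RepoName.git && git fetch -q --force origin master:master || echo "Fetch of RepoName.git failed"
```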

To test if it works, just clone that with git clone git://gitmirror.example.com/RepoName.git.


Repeat command until it passes in ansible playbook

I found numerous solutions, but they did not work for me. Maybe it changed in Ansible 2.0 (I'm on ansible-). So here is what worked for me.
I needed to repeat a package installation command until it passed (i.e. returned exit code 0; it was failing because of extreme conditions with memory allocation issues):
    - name: "Install katello-agent"
      action: ...
      register: installed
      until: "{{ installed.rc }} == 0"
      retries: 10
      delay: 10
Note that although action: might look like something used only in old Ansible versions, it seems to be the current way to do these do-until loops.