tag:blogger.com,1999:blog-13649188009379468082024-02-02T07:39:36.030+01:00Jan Hutař's blogjhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.comBlogger41125tag:blogger.com,1999:blog-1364918800937946808.post-91797827278612080922023-12-10T22:35:00.004+01:002023-12-10T22:36:01.959+01:00First steps with Fedora IoT on Raspberry Pi 4<p>This is just a quick description, a brain dump, of how we started measuring temperature and moisture in various rooms of our flat. This first part describes setting up a server that will collect and present the data.</p><p>Here I was basically just following the <a href="https://www.redhat.com/sysadmin/fedora-iot-raspberry-pi">How to install Fedora IoT on Raspberry Pi 4</a> post.<br /></p><p style="text-align: left;">First, I installed the prerequisites on my Fedora workstation:</p><p style="text-align: left;"><span style="font-family: courier;">$ sudo dnf install gnupg2 arm-image-installer</span></p><p style="text-align: left;">Next, download the image and the Fedora GPG key so I can verify the signature of the downloaded image:</p><p style="text-align: left;"><span style="font-family: courier;">$ wget https://download.fedoraproject.org/pub/alt/iot/39/IoT/aarch64/images/Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz<br />$ wget https://download.fedoraproject.org/pub/alt/iot/39/IoT/aarch64/images/Fedora-IoT-39-aarch64-20231103.1-CHECKSUM<br />$ wget https://fedoraproject.org/fedora.gpg</span><br /></p><p style="text-align: left;">Now verify the signature and check that the downloaded key's fingerprint matches what the Fedora team publishes on the page listing <a href="https://fedoraproject.org/security">current GPG key fingerprints</a>.<br /></p><p style="text-align: left;"><span style="font-family: courier;">$ gpgv --keyring ./fedora.gpg Fedora-IoT-39-aarch64-20231103.1-CHECKSUM<br />gpgv: Signature made Mon 06 Nov 2023 03:03:23 PM CET<br />gpgv: using RSA key E8F23996F23218640CB44CBE75CF5AC418B8E74C<br />gpgv: Good signature from "Fedora (39) <fedora-39-primary@fedoraproject.org>"<br /><br />$ gpg --show-keys fedora.gpg | grep -C 1 E8F23996F23218640CB44CBE75CF5AC418B8E74C<br />pub rsa4096 2022-08-09 [SCE]<br /> E8F23996F23218640CB44CBE75CF5AC418B8E74C<br />uid Fedora (39) <fedora-39-primary@fedoraproject.org></span></p><p style="text-align: left;">Also check the checksum of the downloaded image:</p><p style="text-align: left;"><span style="font-family: courier;">$ sha256sum -c Fedora-IoT-39-aarch64-20231103.1-CHECKSUM<br />Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz: OK<br />sha256sum: WARNING: 17 lines are improperly formatted</span><br /></p><p style="text-align: left;">I guess that warning is there because the checksum file also contains the GPG signature, which the sha256sum utility dislikes, so I did not worry about it:<br /></p><p style="text-align: left;"><span style="font-family: courier;">$ cat Fedora-IoT-39-aarch64-20231103.1-CHECKSUM<br />-----BEGIN PGP SIGNED MESSAGE-----<br />Hash: SHA256<br /><br /># Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz: 712162312 bytes<br />SHA256 (Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz) = bb10ed4469f6ac1448162503b68f84e96f8e8410e5c8c9a4a56b5406bf13dff2<br />-----BEGIN PGP SIGNATURE-----<br /><br />iQI[...]<br />-----END PGP SIGNATURE-----</span><br /></p><p style="text-align: left;">Now I put the SD card into a USB reader and connected it.
It shows up nicely in <i>lsblk</i> output as <i>/dev/sda</i>:<br /><br /><span style="font-family: courier;">$ lsblk <br />NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS<br />sda 8:0 1 29.7G 0 disk <br />├─sda1 8:1 1 512M 0 part <br />└─sda2 8:2 1 4.4G 0 part <br />zram0 252:0 0 8G 0 disk [SWAP]<br />nvme0n1 259:0 0 476.9G 0 disk <br />├─nvme0n1p1 259:1 0 1G 0 part /boot<br />├─nvme0n1p2 259:2 0 32G 0 part [SWAP]<br />└─nvme0n1p3 259:3 0 443.9G 0 part <br /> └─luks-c9494ef2-8c28-4817-befb-8ac43ff79ee3 253:0 0 443.9G 0 crypt /home<br /> /</span></p><p style="text-align: left;"><br /></p><p style="text-align: left;">So now I should have everything needed to write the Fedora IoT image to the card:</p><p style="text-align: left;"><br /></p><span style="font-family: courier;">$ sudo arm-image-installer --image Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz --media /dev/sda --addkey /home/jhutar/.ssh/id_rsa.pub --norootpass --resizefs --target=rpi4 -y<br />[sudo] password for jhutar: <br /><br />=====================================================<br />= Selected Image: <br />= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz<br />= Selected Media : /dev/sda<br />= U-Boot Target : rpi4<br />= Root Password will be removed.<br />= Root partition will be resized<br />= SSH Public Key /home/jhutar/.ssh/id_rsa.pub will be added.<br />=====================================================<br /></span> <span style="font-family: courier;"><br />*****************************************************<br />*****************************************************<br />******** WARNING! ALL DATA WILL BE DESTROYED ********<br />*****************************************************<br />*****************************************************<br />= Writing: <br />= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz <br />= To: /dev/sda ....<br />4282384384 bytes (4.3 GB, 4.0 GiB) copied, 243 s, 17.6 MB/s<br />1024+0 records in<br />1024+0 records out<br />4294967296 bytes (4.3 GB, 4.0 GiB) copied, 243.92 s, 17.6 MB/s<br />= Writing image complete!<br />= Resizing /dev/sda ....<br />Checking that no-one is using this disk right now ... 
OK<br /><br />Disk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectors<br />Disk model: UHSII uSD Reader<br />Units: sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disklabel type: dos<br />Disk identifier: 0xc1748067<br /><br />Old situation:<br /><br />Device Boot Start End Sectors Size Id Type<br />/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16<br />/dev/sda2 1028096 3125247 2097152 1G 83 Linux<br />/dev/sda3 3125248 8388607 5263360 2.5G 83 Linux<br /><br />/dev/sda3: <br />New situation:<br />Disklabel type: dos<br />Disk identifier: 0xc1748067<br /><br />Device Boot Start End Sectors Size Id Type<br />/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16<br />/dev/sda2 1028096 3125247 2097152 1G 83 Linux<br />/dev/sda3 3125248 62333951 59208704 28.2G 83 Linux<br /><br />The partition table has been altered.<br />Calling ioctl() to re-read partition table.<br />Syncing disks.<br />e2fsck 1.46.5 (30-Dec-2021)<br />/dev/sda3 has unsupported feature(s): FEATURE_C12<br />e2fsck: Get a newer version of e2fsck!<br /><br />root: ********** WARNING: Filesystem still has errors **********<br /><br />resize2fs 1.46.5 (30-Dec-2021)<br />Please run 'e2fsck -f /dev/sda3' first.<br /><br />= Raspberry Pi 4 Uboot is already in place, no changes needed.<br />= Removing the root password.<br />= Adding SSH key to authorized keys.<br /><br />= Installation Complete! Insert into the rpi4 and boot.</span><p style="text-align: left;">There are some errors there, right? Well, I ignored them. The RPi booted nicely and I was able to set up everything (more on that in some later post), but then I ran out of storage. Only then did I notice the root filesystem was not extended (exactly as the error message says).</p><p style="text-align: left;">After some online searching I figured out I needed <i>e2fsprogs-1.47.0</i> or newer and (at the time?) it was only available in Fedora 39. So I upgraded and then I was able to write the image just fine:</p><p style="text-align: left;"><span style="font-family: courier;">$ sudo arm-image-installer --image Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz --media /dev/sda --addkey /home/jhutar/.ssh/id_rsa.pub --norootpass --resizefs --target=rpi4 -y<br />[sudo] password for jhutar: <br /><br />=====================================================<br />= Selected Image: <br />= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz<br />= Selected Media : /dev/sda<br />= U-Boot Target : rpi4<br />= Root Password will be removed.<br />= Root partition will be resized<br />= SSH Public Key /home/jhutar/.ssh/id_rsa.pub will be added.<br />=====================================================<br /> <br />*****************************************************<br />*****************************************************<br />******** WARNING! ALL DATA WILL BE DESTROYED ********<br />*****************************************************<br />*****************************************************<br />= Writing: <br />= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz <br />= To: /dev/sda ....<br />4282384384 bytes (4.3 GB, 4.0 GiB) copied, 245 s, 17.5 MB/s<br />1024+0 records in<br />1024+0 records out<br />4294967296 bytes (4.3 GB, 4.0 GiB) copied, 245.85 s, 17.5 MB/s<br />= Writing image complete!<br />= Resizing /dev/sda ....<br />Checking that no-one is using this disk right now ... 
OK<br /><br />Disk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectors<br />Disk model: UHSII uSD Reader<br />Units: sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disklabel type: dos<br />Disk identifier: 0xc1748067<br /><br />Old situation:<br /><br />Device Boot Start End Sectors Size Id Type<br />/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16<br />/dev/sda2 1028096 3125247 2097152 1G 83 Linux<br />/dev/sda3 3125248 8388607 5263360 2.5G 83 Linux<br /><br />/dev/sda3: <br />New situation:<br />Disklabel type: dos<br />Disk identifier: 0xc1748067<br /><br />Device Boot Start End Sectors Size Id Type<br />/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16<br />/dev/sda2 1028096 3125247 2097152 1G 83 Linux<br />/dev/sda3 3125248 62333951 59208704 28.2G 83 Linux<br /><br />The partition table has been altered.<br />Calling ioctl() to re-read partition table.<br />Syncing disks.<br />e2fsck 1.47.0 (5-Feb-2023)<br />Pass 1: Checking inodes, blocks, and sizes<br />Pass 2: Checking directory structure<br />Pass 3: Checking directory connectivity<br />Pass 4: Checking reference counts<br />Pass 5: Checking group summary information<br />root: 32041/164640 files (0.6% non-contiguous), 449099/657920 blocks<br />resize2fs 1.47.0 (5-Feb-2023)<br />Resizing the filesystem on /dev/sda3 to 7401088 (4k) blocks.<br />The filesystem on /dev/sda3 is now 7401088 (4k) blocks long.<br /><br />= Raspberry Pi 4 Uboot is already in place, no changes needed.<br />= Removing the root password.<br />= Adding SSH key to authorized keys.<br /><br />= Installation Complete! Insert into the rpi4 and boot.</span></p><p style="text-align: left;">Stick the card into the RPi, connect power and an ethernet cable and voila, I'm now able to SSH to the RPi. I got the IP from my router's management console, from the DHCP leases section. <br /></p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-41530031903381127822023-08-10T22:39:00.003+02:002023-09-13T21:17:03.734+02:00Kinda SQL "join" in Prometheus<p>I'm using the Prometheus query language, PromQL, quite a bit these days. But all I do are very simple queries like <code>sum(...)</code> or <code>rate(...[5m])</code> on an OpenShift cluster I work with.</p>
<p>For a few weeks now, my inner self was bothered by one slightly more complex thing: filtering one metric by a label from a different metric - something like a JOIN in the SQL world. Specifically, I wanted to see the number of pods running on each cluster node with the "worker" role.</p>
<p>We have (I'm on OpenShift 4.13) <code>kube_node_role{role="worker"}</code> (AFAICT this is what is called a "vector" in PromQL), which has these labels:</p>
<pre>Name container endpoint job namespace node prometheus role service Value
kube_node_role kube-rbac-proxy-main https-main kube-state-metrics openshift-monitoring ip-1-2-3-4.ec2.internal openshift-monitoring/k8s worker kube-state-metrics 1
kube_node_role kube-rbac-proxy-main https-main kube-state-metrics openshift-monitoring ip-1-2-3-5.ec2.internal openshift-monitoring/k8s worker kube-state-metrics 1
[...]
</pre>
<p>And we have <code>kube_pod_info</code> with these labels:</p>
<pre>Name container created_by_kind created_by_name endpoint host_ip host_network job namespace node pod pod_ip priority_class prometheus service uid Value
kube_pod_info kube-rbac-proxy-main <none> <none> https-main 10.201.24.232 false kube-state-metrics openshift-etcd ip-1-2-3-6.ec2.internal etcd-guard-ip-10-201-24-232.ec2.internal 10.128.2.14 system-cluster-critical openshift-monitoring/k8s kube-state-metrics a2eec7b0-9f29-42b4-853d-6919d963ffa1 1
kube_pod_info kube-rbac-proxy-main <none> <none> https-main 10.201.24.232 false kube-state-metrics openshift-etcd ip-1-2-3-6.ec2.internal revision-pruner-13-ip-10-201-24-232.ec2.internal 10.128.2.4 system-node-critical openshift-monitoring/k8s kube-state-metrics df5cdd67-b0f5-4896-b0b0-85095a9f3122 1
</pre>
<p>We will use the <code>on(...)</code> and <code>group_left(...)</code> PromQL operators. I had some issues understanding what these do, so here is my interpretation:</p>
<p><code>*</code>: because the values are always 1 in these vectors, it is safe to multiply them.</p>
<p><code>on(...)</code> allows me to define the common label(s) that should be used to match the two different vectors.</p>
<p><code>group_left(...)</code> is the one I keep forgetting: it enables many-to-one matching, where multiple series on the left side (one per pod) may match a single series on the right side (one per node), and the label(s) listed in <code>group_left(...)</code>, here <code>role</code>, are copied from the right-side vector onto the result.</p>
<p>And this is the final query I used:</p>
<pre>
sum(
  kube_pod_info{} * on(node) group_left(role) kube_node_role{role="worker"}
) by(node)
</pre>
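<p>The result then looks something like this (the values here are illustrative) - one series per worker node, the value being the number of pods running on it:</p>
<pre>
{node="ip-1-2-3-4.ec2.internal"} 24
{node="ip-1-2-3-5.ec2.internal"} 31
</pre>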
<p>These links helped me a lot:</p>
<ul>
<li><a href="https://www.robustperception.io/how-to-have-labels-for-machine-roles/">How to have labels for machine roles</a></li>
<li><a href="https://iximiuz.com/en/posts/prometheus-vector-matching/">Prometheus Cheat Sheet - How to Join Multiple Metrics (Vector Matching)</a></li>
<li><a href="https://prometheus.io/docs/prometheus/latest/querying/operators/">PromQL operators</a></li>
</ul>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-58165960856563378712022-11-20T21:36:00.002+01:002022-11-20T21:36:15.406+01:00Tekton notes<p>Some time ago I was tasked to create a pipeline in Tekton and here are some of my notes I would have liked to have a few days back :-)</p><ol style="text-align: left;"><li>It is not that hard. It is just a fancy way to split up your shell automation script :-)</li><li>Tasks are not that useful on their own (I think); you have to stack them into a Pipeline, but Tekton's <a href="https://tekton.dev/docs/getting-started/tasks/">Getting started with Tasks</a> is a nice start. Once you need more details, see <a href="https://tekton.dev/docs/pipelines/tasks/">Tasks</a>.</li><li>Pipelines are the core thing and starting with <a href="https://tekton.dev/docs/getting-started/pipelines/">Getting Started with Pipelines</a> helped me a lot. Later I was looking into <a href="https://tekton.dev/docs/pipelines/pipelines/">Pipelines</a> as well.</li><li>The blog post <a href="https://earthly.dev/blog/building-k8s-tekton/">Building in Kubernetes Using Tekton</a> was also very helpful. I also used my company's <a href="https://access.redhat.com/documentation/en-us/openshift_container_platform/4.11/html-single/cicd/index">CI/CD guide</a> here and there.</li><li><a href="https://hub.tekton.dev/">Tekton Hub</a> is full of tasks (and more) and I was able to easily see the documentation for them and, more importantly, the actual YAML behind them - having practical examples of what tasks can look like beyond simple hello-world tasks was very helpful. E.g. see <a href="https://hub.tekton.dev/tekton/task/kubernetes-actions">kubernetes-actions</a> and <a href="https://hub.tekton.dev/tekton/task/git-clone">git-clone</a> or <a href="https://hub.tekton.dev/tekton/task/git-cli">git-cli</a>.<br /></li><li>To test things, I used <a href="https://kind.sigs.k8s.io/docs/user/quick-start/">Kind</a> as the "Getting started" guide suggested, and Tekton installed there really easily.</li><li>Creating a user on Kind so I could follow some of the Tekton how-tos out there that build containers using Tekton was beyond my abilities. I did not need to build images, so I'm good.</li><li>To be able to talk to the app running in the Kind cluster, I used <a href="https://kind.sigs.k8s.io/docs/user/ingress#ingress-nginx">Ingress NGINX</a> and its <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target">rewrite rule</a> annotation, as my app did not like extra data in the URI. My specific example: <a href="https://github.com/jhutar/my-tekton-perfscale-experiment/blob/16c3789b0fda7d209cf7d1616bf78994bcc14768/learning/our-demo-app-manually/perfcale-demo-app-ingress.yaml">perfcale-demo-app-ingress.yaml</a>.<br /></li><li><a href="https://tekton.dev/docs/pipelines/pipelines/#using-results">Results</a> are quite a simple concept. You just configure them in the task, and in the script you redirect the value (their size is quite limited) to a filename stored in a variable.</li><li>When something does not make sense, you can always add a step to your task with <span style="font-family: courier;">sleep 1000</span> and <span style="font-family: courier;">kubectl exec -ti pod/... -- bash</span>.</li><li>Every pipeline run name has to be unique. 
It would be boring to create new ones with <span style="font-family: courier;">kubectl apply -f ...</span> for each of my attempts without some script, but having <span style="font-family: courier;"><a href="https://github.com/jhutar/my-tekton-perfscale-experiment/blob/16c3789b0fda7d209cf7d1616bf78994bcc14768/pipeline-run.yaml#L4">generateName</a></span> in the pipeline run metadata and using <span style="font-family: courier;">kubectl create -f ...</span> saved my day.<br /></li></ol><p>In the end my <a href="https://github.com/jhutar/my-tekton-perfscale-experiment/blob/16c3789b0fda7d209cf7d1616bf78994bcc14768/pipeline.yaml">pipeline</a> worked like this:</p><ol style="text-align: left;"><li>Clones the required repos:</li><ol><li>Demo application: <a href="https://github.com/jhutar/perfscale-demo-app">perfscale-demo-app</a></li><li>YAMLs and misc: <a href="https://github.com/jhutar/my-tekton-perfscale-experiment">my-tekton-perfscale-experiment</a></li><li>Results repo: <a href="https://github.com/jhutar/my-tekton-perfscale-experiment-results">my-tekton-perfscale-experiment-results</a></li></ol><li>Deploys the demo application (no need to build images as it is done by <a href="https://quay.io/">quay.io</a>)<br /></li><ol><li>It is a simple bank-like application exposing a REST API</li><li>There is a <a href="https://locust.io/">Locust</a> framework based perf test included with the application that stresses the API and measures RPS</li><li>The application consists of one pod with PostgreSQL and another one with the application itself and the Gunicorn application server</li></ol><li>Populates test data into the application (code for it is built into the demo application for ease of use)<br /></li><li>Runs the Locust framework based perf test from the demo application's repository, but wrapped in a thin <a href="https://github.com/redhat-performance/opl/">OPL helper</a> that stores the test results in a nice JSON</li><li>Runs a script that loads historical results for the same test with the same parameters and determines if the new result is a PASS or a FAIL</li><li>Adds the new result into the results repository and pushes it to GitHub</li><li>Deletes the demo app deployment</li></ol><p>The commands I have used most when working on the pipeline were:</p><ul style="text-align: left;"><li><span style="font-family: courier;">kubectl apply --filename pipeline.yaml</span> - to apply changes I have done to the pipeline<br /></li><li><span style="font-family: courier;">kubectl create --filename pipeline-run.yaml</span> - to create a new pipeline run with a random suffix<br /></li><li><span style="font-family: courier;">tkn pipelinerun logs --follow --last --all --prefix</span> - to follow logs of the current pipeline run</li><li><span style="font-family: courier;">tkn pipelinerun delete --all --force</span> - to remove all previous pipeline runs<br /></li></ul>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-18126528632747997302021-11-06T01:19:00.004+01:002021-11-06T01:19:42.840+01:00Use Google Chat webhook API to send message to channel<p>Sending a message to Google Chat (chat.google.com, recently integrated into mail.google.com/chat/) is surprisingly simple with their <a href="https://developers.google.com/chat/how-tos/webhooks">webhook API</a>. 
It just took me some time to figure out the data structure to send (although it is very simple, as I found on the <a href="https://developers.google.com/chat/quickstart/incoming-bot-python">Incoming webhook with Python</a> page):</p>
<pre>
curl -X POST -H "Content-Type: application/json; charset=UTF-8" --data '{"text": "Hello @jhutar, how are you?"}' "https://chat.googleapis.com/v1/spaces/.../messages?key=...&token=..."
{
    "name": "spaces/.../messages/...",
    "sender": {
        "name": "users/...",
        "displayName": "Jenkins incomming webhook",
        "avatarUrl": "",
        "email": "",
        "domainId": "",
        "type": "BOT",
        "isAnonymous": false
    },
    "text": "Hello @jhutar",
    "cards": [],
    "previewText": "",
    "annotations": [],
    "thread": {
        "name": "spaces/.../threads/..."
    },
    "space": {
        "name": "spaces/...",
        "type": "ROOM",
        "singleUserBotDm": false,
        "threaded": true,
        "displayName": "Name of the channel"
    },
    "fallbackText": "",
    "argumentText": "Hello @jhutar, how are you?",
    "attachment": [],
    "createTime": "2021-10-11T22:07:39.490063Z",
    "lastUpdateTime": "2021-10-11T22:07:39.490063Z"
}
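
# For reuse, the call wraps easily into a tiny helper (my sketch, not from the
# linked docs; WEBHOOK_URL would hold the full URL with the key and token):
send_chat() { curl -s -X POST -H "Content-Type: application/json; charset=UTF-8" --data "{\"text\": \"$1\"}" "$WEBHOOK_URL"; }
send_chat "Hello @jhutar, how are you?"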
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-71125558516024005962021-11-05T00:15:00.001+01:002021-11-05T00:15:11.779+01:00Using redirect() on https:// site handled by Flask -> Gunicorn -> Nginx redirects me to http<p>And this might be hard to notice, as we usually configure Nginx to also redirect all http requests to https, so in the end you end up on the correct link, but going through http is not nice and it can also break CORS, as I was told.</p>
<p>There are two parts to the problem.</p>
<p>First, Nginx needs to set certain headers when proxying an application running in Gunicorn (e.g. see them in <a href='https://docs.gunicorn.org/en/stable/deploy.html#nginx-configuration'>Deploying Gunicorn behind Nginx</a>):</p>
<pre>
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $http_host;
proxy_pass http://my_app;
</pre>
<p>Second, the Flask app needs to know to use the content of these headers to overwrite the normal request metadata (this is called a <a href='https://werkzeug.palletsprojects.com/en/2.0.x/middleware/proxy_fix/'>Proxy Fix</a> and is brought to us by Werkzeug, which is a Flask dependency):</p>
<pre>
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix
app = Flask(__name__, instance_relative_config=True)
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)
</pre>
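<p>To check the fix from the command line, one can hit Gunicorn directly with the forwarding header set - the <code>Location</code> of a redirect should now come back as https (a sketch assuming Gunicorn listens on localhost:8000 and the app has some redirecting route):</p>
<pre>
$ curl -sI -H 'X-Forwarded-Proto: https' http://localhost:8000/some-redirecting-route | grep '^Location'
Location: https://...
</pre>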
<p>Obligatory note: see the docs linked above, as these numbers are actually important from a security point of view.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-87358077566462774052021-10-14T15:57:00.001+02:002021-10-14T15:57:49.159+02:00Accessing Red Hat OpenShift Streams for Apache Kafka from Python<p>Recently Red Hat launched a way to get a <a href="https://developers.redhat.com/products/red-hat-openshift-streams-for-apache-kafka/getting-started">managed Kafka instance</a> and you can get one for 2 days for free. There is a limit of 1 MB per second. So far I had only been using Kafka without any auth and without any encryption, so here is what I had to do to make it work - typing it here so I do not need to reinvent it once I forget :-) I'm using <a href="https://kafka-python.readthedocs.io/">python-kafka</a>.</p>
<p>I have created a cluster and under its "Connection" menu item I got the bootstrap server <code>jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443</code>. It also advised me to create a service account, so I created one and it generated a "Client ID" like <code>srvc-acct-00000000-0000-0000-0000-000000000000</code> and a "Client secret" like <code>00000000-0000-0000-0000-000000000000</code>. Although the "SASL/OAUTHBEARER" authentication method is recommended, as of now it is too complicated for my poor head, so I used "SASL/PLAIN", where you just use the "Client ID" as the username and the "Client secret" as the password. To create a topic, there is a UI as well.</p>
<p>To create the producer:</p>
<pre>
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443',
    sasl_plain_username='srvc-acct-00000000-0000-0000-0000-000000000000',
    sasl_plain_password='00000000-0000-0000-0000-000000000000',
    security_protocol='SASL_SSL',
    sasl_mechanism='PLAIN',
)
</pre>
<p>And the consumer needs the same parameters:</p>
<pre>
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    '<topic>',
    bootstrap_servers='jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443',
    sasl_plain_username='srvc-acct-00000000-0000-0000-0000-000000000000',
    sasl_plain_password='00000000-0000-0000-0000-000000000000',
    security_protocol='SASL_SSL',
    sasl_mechanism='PLAIN',
)
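
# Minimal usage sketch (my addition, not from the docs): send one message
# and print whatever arrives on the topic given to KafkaConsumer above
producer.send('<topic>', b'hello')
producer.flush()
for message in consumer:
    print(message.value)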
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-22883572951202149962020-06-19T08:53:00.000+02:002020-06-19T08:53:28.104+02:00How to access oldish Dell DRAC console? Install old Firefox and Java in Docker containerSometimes I need to access the DRAC console (i.e. "remote screen") of some older Dell system - in this case it is a PowerEdge R610 and "About" says "Integrated Dell Remote Access Controller 6 - Enterprise, Version 1.98, © 2008-2011 Dell Inc.". I have tried a bunch of browsers on my not-so-recent Fedora 30 and it failed. One solution I found is to access the console with Firefox and the IcedTea plugin from Fedora 27. A VM feels too heavy for this use case, so just allow connections to my X server:<br />
<blockquote class="tr_bq">
# xhost local:root </blockquote>
start a Fedora 27 container with some extra vars and mounts:<br />
<blockquote class="tr_bq">
# docker run \<br />
--network host \<br />
-e DISPLAY=:0.0 \<br />
-v /tmp/.X11-unix:/tmp/.X11-unix \<br />
-v /root/.Xauthority:/root/.Xauthority:rw \<br />
-ti fedora:27 /bin/bash</blockquote>
and then in the container install everything needed and run the browser:<br />
<blockquote class="tr_bq">
# dnf -y install firefox xorg-x11-xauth icedtea-web<br />
# firefox</blockquote>
Now I'm able to open the console, hooray!<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiULtz3bj-hnrjwhhqCRVL3xLEozCrLL8-SXKTMHn9B4H9aHkJp1k7aqlNzabXZLyxwbpBc1j1x2C4GSzPFo_IF0y9iPbigS_v3kmLOX9mSqsRPYzsrzf-_lVPFnV987L35mbItEAMciVjI/s1600/Screenshot_2020-06-19_08-47-49.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="744" data-original-width="1281" height="231" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiULtz3bj-hnrjwhhqCRVL3xLEozCrLL8-SXKTMHn9B4H9aHkJp1k7aqlNzabXZLyxwbpBc1j1x2C4GSzPFo_IF0y9iPbigS_v3kmLOX9mSqsRPYzsrzf-_lVPFnV987L35mbItEAMciVjI/s400/Screenshot_2020-06-19_08-47-49.jpg" width="400" /></a></div>
<br />jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-36153388354555893732020-06-14T21:45:00.000+02:002020-06-14T21:46:02.343+02:00Dumping my notes on Jenkins shared library use in declarative pipeline<div>
Recently I tasked myself to work on how we organize our Jenkins jobs code. We are using <a href="https://www.jenkins.io/doc/book/pipeline/syntax/#declarative-pipeline">Jenkins declarative pipeline</a> (officially that is a "simplified and opinionated syntax on top of the Pipeline
sub-systems", but basically it is "nicely structured, but they forbid you to use mostly anything fancy in your Groovy code").</div>
<div>
<br /></div>
<div>
In our case we have lots of jobs (and the number is slowly growing) which run tests from different directories in the same way, but with different parameters. We also have another set of jobs that check something and in some cases trigger the test jobs. This all loudly calls for sharing the code, so I wanted to take a look at how to do it.</div>
<div>
<br /></div>
<div>
Here is the list of tabs I'm closing now that I'm done :)</div>
<br />
<ul style="text-align: left;">
<li><a href="https://www.jenkins.io/doc/book/pipeline/shared-libraries/">Shared libraries</a> - Create another git repo with <span style="font-family: "courier";">vars/</span> directory and share the code there</li>
<li><a href="https://www.jenkins.io/doc/book/pipeline/shared-libraries/#defining-declarative-pipelines">Defining Declarative Pipelines</a> - You can define function that implements whole declarative pipeline (you can also define custom step, but you can not define shared stage)</li>
<li><a href="http://docs.groovy-lang.org/next/html/documentation/working-with-collections.html#Collections-Maps">Groovy Maps</a> - If you want to pass lots of parameters to the function, passing a Map is very handy and easy to read - in Python equivalent would be passing a dict I think</li>
<li><a href="https://stackoverflow.com/questions/39832862/jenkins-cannot-define-variable-in-pipeline-stage">Cannot define variable in pipeline stage</a> - In declarative pipeline you can not define variable directly, use this workaround</li>
<li><a href="https://www.jenkins.io/doc/pipeline/steps/workflow-durable-task-step/#sh-shell-script">sh: Shell Script</a> - Jenkins help for sh step</li>
<li><a href="https://stackoverflow.com/questions/36547680/how-to-do-i-get-the-output-of-a-shell-command-executed-using-into-a-variable-fro">Catching both output and error code</a> - spoiler: <span style="font-family: "courier";">try { ... } catch ( ... ) { ... }</span> is also forbidden in declarative pipeline, use <span style="font-family: "courier";">script { ... }</span> again</li>
<li><a href="https://www.jenkins.io/doc/book/pipeline/syntax/#when">when</a> - this is way how to skip a stage in declarative pipeline based on something</li>
<li><a href="https://code-maven.com/groovy-files">Writing file in groovy</a> - This would probably work, but Jenkins was giving me lots of security related warnings</li>
<li><a href="https://stackoverflow.com/questions/52306401/library-variable-in-jenkins-shared-library">Library variable in jenkins shared library</a> - how to get variables from <span style="font-family: "courier";">vars/someVariables.groovy</span> in <span style="font-family: "courier";">vars/someFunction.groovy</span></li>
</ul>
<div>
<br /></div>
jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-17742802762329637252020-06-03T22:37:00.001+02:002020-06-03T22:37:58.784+02:00Different Numpy results on different systems<p>Recently my wife, in her compute-intensive project, noticed a strange issue where the same input data and the same code produced different output on different hosts in the cloud she is using. She tracked it down to this simple Python & Numpy test case:</p>
<pre>
#!/usr/bin/env python
import numpy
a = [[0.67115835, -0.74131401], [0.74131401, 0.67115835]]
b = [[-4.95494273, -1.77170756, ...], [1.87737564, 4.99951546, ...]]
c = numpy.matmul(a, b)
print(c)
</pre>
<p>On one host it returned (the correct result):</p>
<pre>
[[-4.71727605 -4.89530718 -4.71727605 -4.89530718 -4.71727605 -4.89530718
...
</pre>
<p>On a different host it returned (the wrong result):</p>
<pre>
[[ 0.34761728 0.12429531 0.34761728 0.12429531 -0.13170853 -0.35074431
...
</pre>
<p>We googled a bit and found some tips:</p>
<ul>
<li><a href='https://stackoverflow.com/questions/38228088/same-python-code-same-data-different-results-on-different-machines'>Same Python code, same data, different results on different machines</a>: There is a suggestion to use <code>export MKL_CBWR=AUTO</code> - this did not help</li>
<li><a href='https://software.intel.com/en-us/forums/intel-distribution-for-python/topic/721634'>Different results on different computers when using Scipy stack compiled with MKL</a>: Some more tips are here: <code>export MKL_CBWR=AVX; export OMP_NUM_THREADS=1</code> or with <code>export MKL_CBWR=SSE4_2</code> - this did not help either (anyway, our Numpy does not seem to be built with this Intel Math Kernel Library - see the ldd output below)</li>
<li><a href='https://github.com/numpy/numpy/issues/11500'>How can I make sure svd having same result on different machine</a>: Here this is mentioned: <code>export NUMEXPR_NUM_THREADS=1; export OPENBLAS_NUM_THREADS=1</code> - no luck, but finally: <strong><code>export OPENBLAS_CORETYPE=prescott</code></strong> - bingo! (applied as shown right after this list)</li>
</ul>
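<p>So the fix, applied to the reproducer (the script name here is illustrative):</p>
<pre>
$ export OPENBLAS_CORETYPE=prescott
$ python3 testcase.py # now prints the same, correct matrix on both hosts
</pre>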
<p>According to the OpenBLAS usage instructions (OpenBLAS is "an optimized BLAS (Basic Linear Algebra Subprograms) library", if you have the same knowledge about it as I do), <code>OPENBLAS_CORETYPE</code> is an environment variable which controls the kernel selection. Looking at the <a href="https://en.wikipedia.org/wiki/Pentium_4">Prescott</a> CPU description, it was launched in 2004, so it is probably a safe default. Some more details about our setup:</p>
<p>Numpy in our setup is linked with these libraries:</p>
<pre>
ldd $( rpm -ql python3-numpy | grep '\.so$' ) | grep -v '\.so:$' | sed 's/([0-9a-zx]\+)/(...)/' | sort -u
/lib64/ld-linux-x86-64.so.2 (...)
libc.so.6 => /lib64/libc.so.6 (...)
libdl.so.2 => /lib64/libdl.so.2 (...)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (...)
libgfortran.so.5 => /lib64/libgfortran.so.5 (...)
libm.so.6 => /lib64/libm.so.6 (...)
libopenblasp.so.0 => /lib64/libopenblasp.so.0 (...)
libpthread.so.0 => /lib64/libpthread.so.0 (...)
libpython3.7m.so.1.0 => /lib64/libpython3.7m.so.1.0 (...)
libquadmath.so.0 => /lib64/libquadmath.so.0 (...)
libutil.so.1 => /lib64/libutil.so.1 (...)
linux-vdso.so.1 (...)
</pre>
<p>The code is packaged in Singularity containers and is running on the <a href='https://www.metacentrum.cz/'>Metacentrum</a> cloud. The two machines we hit were as follows - the one with the correct result:</p>
<pre>
Singularity> tail -n 28 /proc/cpuinfo
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel Xeon E3-12xx v2 (Ivy Bridge)
stepping : 9
microcode : 0x1
cpu MHz : 2199.998
cache size : 16384 KB
physical id : 15
siblings : 1
core id : 0
cpu cores : 1
apicid : 15
initial apicid : 15
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc rep_good nopl xtopology pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt
+tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm kaiser fsgsbase smep erms xsaveopt arat
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
Singularity> uname -a
Linux [hostname] 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4 (2018-08-21) x86_64 x86_64 x86_64 GNU/Linux
</pre>
<p>The other host - the one with the wrong results:</p>
<pre>
Singularity> tail -n 28 /proc/cpuinfo
processor : 63
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
stepping : 4
microcode : 0x200004d
cpu MHz : 2399.392
cache size : 22528 KB
physical id : 1
siblings : 32
core id : 15
cpu cores : 16
apicid : 63
initial apicid : 63
fpu : yes
fpu_exception : yes
cpuid level : 22
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology
+nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault
+epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt
+clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit
bogomips : 4201.71
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Singularity> uname -a
Linux [hostname] 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2 (2020-04-29) x86_64 x86_64 x86_64 GNU/Linux
</pre>
<p>Packages in the container are:</p>
<ul>
<li>python3-3.7.4-1.fc30.x86_64</li>
<li>python3-numpy-1.16.4-2.fc30.x86_64</li>
</ul>
<p>If you want to try it, the full test case is here:</p>
<pre>
import numpy
a = [[0.67115835,-0.74131401],[0.74131401,0.67115835]]
b = [[-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,-4.95494273,-1.77170756,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,0.64695557,-3.83073022,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,-1.91809893,2.14768601,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.99713467,-0.2208969,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936,3.96850733,4.57202936],[1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,1.87737564,4.99951546,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,3.8703295,-0.52141675,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,-2.14706037,1.84069058,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,0.15277681,-3.98429871,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674,-2.93121093,-5.91274674]]
c = numpy.matmul(a,b)
print(c)
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-78666288256989600692020-05-22T08:14:00.004+02:002020-06-03T21:28:53.464+02:00Breaking voice in Google Meet (Hangouts) with Firefox<p>Lots of people kept telling me how many distortions there are when I'm talking over Google Meet - not that everything I say is gold, but sometimes I just want to get an answer to my question :-) I'm using some (cheap) KOSS headset which connects via USB and integrates its own sound card.</p>
<p>After some digging this is what I did:</p>
<ol>
<li>Enabled the Echo/Noise-Cancellation module in PulseAudio (<a href="https://www.freedesktop.org/wiki/Software/PulseAudio/">PulseAudio</a> is a sound system in Linux - it is a proxy for sound applications) and disabled automatic analog gain control: <a href="https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting#Enable_Echo/Noise-Cancellation">Enable Echo/Noise-Cancellation</a> (AFAICT this means PulseAudio won't attempt to automatically increase the volume of the mic when I'm quiet) - you might not need this step, as it is hard for me to believe a sound server would produce results this bad (note that you run <code>pulseaudio -k</code> as a normal user - that "$" - and I had to restart the Firefox I had been using to play some sound to hear the difference)</li>
<li>Then I disabled automatic gain control and friends in Firefox: <a href="https://wiki.archlinux.org/index.php/Firefox/Tweaks#Disable_WebRTC_audio_post_processing">Disable WebRTC audio post processing</a></li>
<li>Hearing what you are recording was very useful: <a href="https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting#Echo_test">Echo test</a> - see the snippet after this list (to stop it, just use <code>$ pactl unload-module module-loopback</code>)</li></ol><div>Note I'm on fedora-release-30-6.noarch, firefox-76.0-2.fc30.x86_64 and pulseaudio-12.2-9.fc30.x86_64.<br /></div>
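<p>For reference, the echo test from the last point boils down to something like this - <code>module-loopback</code> routes the mic straight back to the output:</p>
<pre>
$ pactl load-module module-loopback latency_msec=1
$ pactl unload-module module-loopback
</pre>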
jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-46347935150040522062020-04-03T10:55:00.001+02:002020-04-03T10:55:27.450+02:00Running insecure registry via Podman, starting on reboot<p>This is quite simple and there is a lot of documentation out there, so this just puts it all in one place so I do not need to look for it next time I want to install this "full stack solution":</p>
<ul>
<li><a href='https://computingforgeeks.com/create-docker-container-registry-with-podman-letsencrypt/'>Setup Docker Container Registry with Podman & Let’s Encrypt SSL</a></li>
<li><a href='https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/running_containers_as_systemd_services_with_podman'>Starting Containers with systemd</a></li>
</ul>
<h2>Install Podman</h2>
<pre>
# subscription-manager repos --enable rhel-7-server-extras-rpms
# yum install podman
</pre>
<h2>Start and configure registry</h2>
<pre>
# lvcreate data_xyz --size 25G --name docker_registry
# mkfs.xfs /dev/mapper/data_xyz-docker_registry
# tail -n 1 /etc/fstab
/dev/mapper/data_xyz-docker_registry /var/lib/registry xfs defaults 0 0
# mount /var/lib/registry
# podman run --privileged -d --name registry-srv -p 5000:5000 -v /var/lib/registry:/var/lib/registry registry:2
</pre>
<h2>Surviving reboot</h2>
<pre>
# cat /etc/systemd/system/registry-srv-container.service
[Unit]
Description=Docker registry container
[Service]
Restart=always
ExecStart=/usr/bin/podman start -a registry-srv
ExecStop=/usr/bin/podman stop -t 30 registry-srv
[Install]
WantedBy=multi-user.target
# systemctl enable registry-srv-container.service
# systemctl restart registry-srv-container.service
# systemctl status registry-srv-container.service
</pre>
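<p>A quick sanity check that it will really come back after a reboot (my addition, not from the linked guides):</p>
<pre>
# systemctl is-enabled registry-srv-container.service
enabled
# reboot
...
# podman ps --filter name=registry-srv
</pre>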
<h2>Push to it</h2>
<pre>
# grep 'registries.insecure' -A 1 /etc/containers/registries.conf
[registries.insecure]
registries = ['your_hostname:5000']
# podman pull busybox
# podman tag docker.io/library/busybox $( hostname ):5000/busybox
# podman push $( hostname ):5000/busybox
</pre>
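<p>To verify the push really landed in the registry, the local copy can be removed and pulled back (assuming the same <code>registries.insecure</code> entry on the pulling host):</p>
<pre>
# podman rmi $( hostname ):5000/busybox
# podman pull $( hostname ):5000/busybox
</pre>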
<h2>See registry's API</h2>
<pre>
# curl -s "http://$( hostname ):5000/v2/_catalog?n=100" | json_reformat
{
"repositories": [
"busybox"
]
}
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-35079965951568060682020-02-12T21:15:00.000+01:002020-02-12T21:15:32.539+01:00Changing Slack status from command-line<p>I'm used to tracking my working time with a custom script and I have a keyboard shortcut for the start and stop actions. One thing the script does is that when I "stop" my work, it sets my IRC nick to "jhutar_afk" and when I "start" my work it sets the nick back to plain "jhutar". This is handy e.g. when taking a lunch break or going for some errands. The same is possible with Slack:</p>
<p>I started with the <a href='https://medium.com/slack-developer-blog/how-to-set-a-slack-status-from-other-apps-ab4eef871339'>How to set a Slack status from other apps</a> article. It is very easy.</p>
<ol>
<li>Create <a href='https://api.slack.com/legacy/custom-integrations/legacy-tokens'>a legacy token</a> (I know, it is legacy - I need to investigate how to use the current way :-/) and put it into a variable: <code>token='xoxp-0000000000-000000000000-000000000000-00000000000000000000000000000000'</code></li>
<li>Construct a json to send: <code>profile='{"status_text": "Away from keyboard", "status_emoji": ":tea:"}'</code></li>
<li>Send that to the API: <code> curl -X POST https://slack.com/api/users.profile.set --silent --data "profile=$profile" --data "token=$token"</code></li>
</ol>
<p>This way I can set various statuses. Then I realized that what I really want is to set my <a href="https://api.slack.com/docs/presence-and-status">presence</a> (that green or empty/black dot next to your nick). That is easy as well:</p>
<ol>
<li>Again (see above), prepare your token <code>token='xoxp-0000000000-000000000000-000000000000-00000000000000000000000000000000'</code></li>
<li>Use the Slack API: <code>curl -X POST https://slack.com/api/users.setPresence --silent --data "presence=away" --data "token=$token"</code> (to set yourself away) or use <code>presence=auto</code> (for a normal mode when Slack decides based on your activity)</li>
</ol>
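<p>Wrapped together, the presence call could look like this in the tracking script (a sketch; the function name is mine):</p>
<pre>
function slack_presence() {
    # Usage: slack_presence away|auto
    local token='xoxp-0000000000-000000000000-000000000000-00000000000000000000000000000000'
    curl -X POST https://slack.com/api/users.setPresence --silent --data "presence=$1" --data "token=$token"
}
</pre>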
<p>Given how long I avoided actually adding it to my script, it was very easy in the end :-)</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-76765922657236806532020-02-05T09:59:00.002+01:002020-02-05T10:11:28.454+01:00My Prometheus@OpenShift cheat-sheet<p>Prometheus is the monitoring solution in OpenShift and I'm reading some basics out of it after some performance tests. Here are the queries I'm using:</p>
<p>Get CPU consumption by pods xyz...:</p>
<pre>
sum(pod_name:container_cpu_usage:sum{pod_name=~'xyz.*',namespace='qa'})
</pre>
<p>Now for memory usage (these "POD" and "''" container names seem to be doubling the value):</p>
<pre>
sum(container_memory_usage_bytes{namespace='qa', pod_name=~'xyz.*', container_name!='POD', container_name!=''})
</pre>
<p>Also see these nice <a href='https://prometheus.io/docs/prometheus/latest/querying/examples/'>examples</a> on how to construct a query.</p>
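<p>For a quick one-off check from a terminal, the instant query endpoint works too (a sketch; note that curl POSTs the query as form data, which the Prometheus API accepts):</p>
<pre>
$ TOKEN=$( oc whoami -t )
$ curl -s -k -H "Authorization: Bearer $TOKEN" \
    --data-urlencode "query=sum(container_memory_usage_bytes{namespace='qa'})" \
    https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query
</pre>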
<p>To query Prometheus <a href='https://prometheus.io/docs/prometheus/latest/querying/api/'>via the API</a>, I used a range query and this Python code:</p>
<pre>
import requests
from requests.packages.urllib3.exceptions import InsecureRequestWarning

assert start is not None and end is not None, \
    "We need timerange to approach Prometheus"
# Get data from Prometheus
token = 'your `oc whoami -t`'
url = 'https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query_range' # I'm running this inside the cluster, so I can use the internal hostname
headers = {
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json',
}
params = {
    'query': monitoring_query, # this will be some query from above
    'step': monitoring_step, # using 60 seconds here
    'start': start.strftime('%s'),
    'end': end.strftime('%s'),
}
requests.packages.urllib3.disable_warnings(InsecureRequestWarning) # security is hard ;)
response = requests.get(url, headers=headers, params=params, verify=False)
# Check that what we got back seems OK
response.raise_for_status()
json_response = response.json()
assert json_response['status'] == 'success'
assert 'data' in json_response
assert 'result' in json_response['data']
assert len(json_response['data']['result']) == 1
assert 'values' in json_response['data']['result'][0]
data = [float(i[1]) for i in json_response['data']['result'][0]['values']]
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-28930557497174998932019-04-04T07:15:00.000+02:002019-04-04T07:25:45.360+02:00Checking if filesystem supports d_type via Ansible<p>I had a task to add an assert to an Ansible playbook to check that the root <code>/</code> filesystem <a href='https://linuxer.pro/2017/03/what-is-d_type-and-why-docker-overlayfs-need-it/'>supports d_type</a> (i.e. "directory entry type", which is important for Docker / Podman). Here is the result:</p>
<pre>
- name: Read root filesystem type and device
  set_fact:
    root_fstype: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | map(attribute='fstype') | join(',') }}"
    root_device: "{{ ansible_mounts | selectattr('mount', 'equalto', '/') | map(attribute='device') | join(',') }}"

- name: If root filesystem is xfs, get more info from it
  command:
    xfs_info "{{ root_device }}"
  register: xfs_info
  ignore_errors: true
  when: "root_fstype != 'ext4'"

- name: Check that root filesystem supports directory entry type (aka d_type)
  assert:
    that:
      - "root_fstype == 'ext4' or ( root_fstype == 'xfs' and 'ftype=1' in xfs_info.stdout )"
</pre>
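<p>For a quick manual check of the same thing outside Ansible (on an xfs root filesystem, where it should print <code>ftype=1</code>):</p>
<pre>
# xfs_info / | grep -o 'ftype=[01]'
ftype=1
</pre>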
<p>First, we extract the root filesystem type (e.g. "ext4" or "xfs") and device (e.g. <code>/dev/mapper/centos_something-root</code>) from the Ansible facts obtained by the <a href='https://docs.ansible.com/ansible/latest/modules/setup_module.html'>setup module</a> (more info on how we <a href='https://stackoverflow.com/questions/31895602/ansible-filter-a-list-by-its-attributes#31896249'>get one value from a list of dicts based on another value</a>; use <code>ansible -u root -i inventory.ini -m setup all</code> to see all the facts). Then we load additional info via the <code>xfs_info</code> utility if the fs type is "xfs". And the last step is finally to assert <strong>d_type</strong> support: "ext4" is a clear win; when we got "xfs", "ftype=1" in the <code>xfs_info</code> output is needed.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-80006143943021584372019-03-14T10:14:00.000+01:002019-03-14T10:14:10.800+01:00Local variable in bash<p>Just a quick explanation of how <code>local</code> works in Bash that I have been sending to somebody. Take this code:</p>
<pre>
$ cat /tmp/aaa
function without_local() {
    variable1='hello'
    echo "Function without_local: $variable1"
}

function with_local() {
    local variable2='world'
    echo "Function with_local: $variable2"
}
echo "(1) variable1='$variable1'; variable2='$variable2'"
without_local
echo "(2) variable1='$variable1'; variable2='$variable2'"
with_local
echo "(3) variable1='$variable1'; variable2='$variable2'"
</pre>
<p>Run it and notice that <code>variable1</code> behaves as a global, while <code>variable2</code> won't leave the function's context:</p>
<pre>
$ bash /tmp/aaa
(1) variable1=''; variable2=''
Function without_local: hello
(2) variable1='hello'; variable2=''
Function with_local: world
(3) variable1='hello'; variable2=''
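
$ # Related gotcha: `local` is itself a command, so `local foo=$( false )`
$ # returns local's exit status (0), not the substitution's - assign separately:
$ bash -c 'f() { local foo; foo=$( false ); echo "exit status of false: $?"; }; f'
exit status of false: 1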
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-79926460944453018452018-11-16T08:16:00.000+01:002018-11-16T08:16:03.192+01:00Difference in bash's $@ and $* and how it is expanded<p>I keep forgetting about this and I'm always confused about what is happening, but it is not that difficult. Example:</p>
<pre>
$ function measurement_add() { python -c "import sys; print sys.argv[1:]" $@; }
$ measurement_add "Hello world" 1
['Hello', 'world', '1']
$ function measurement_add() { python -c "import sys; print sys.argv[1:]" $*; }
$ measurement_add "Hello world" 1
['Hello', 'world', '1']
$ function measurement_add() { python -c "import sys; print sys.argv[1:]" "$@"; }
$ measurement_add "Hello world" 1
['Hello world', '1']
$ function measurement_add() { python -c "import sys; print sys.argv[1:]" "$*"; }
$ measurement_add "Hello world" 1
['Hello world 1']
</pre>
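<p>One more variant worth trying: with a custom IFS, "$*" joins the parameters using its first character (see the man page excerpt below):</p>
<pre>
$ function measurement_add() { IFS=','; python -c "import sys; print sys.argv[1:]" "$*"; }
$ measurement_add "Hello world" 1
['Hello world,1']
</pre>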
Looking into <code>man bash</code>, into the <em>Special Parameters</em> section:
<pre>
   *      Expands to the positional parameters, starting from one. When
          the expansion is not within double quotes, each positional
          parameter expands to a separate word. In contexts where it is
          performed, those words are subject to further word splitting and
          pathname expansion. <strong>When the expansion occurs within double
          quotes, it expands to a single word with the value of each
          parameter separated by the first character of the IFS special
          variable.</strong> That is, "$*" is equivalent to "$1c$2c...", where c
          is the first character of the value of the IFS variable. If IFS
          is unset, the parameters are separated by spaces. If IFS is
          null, the parameters are joined without intervening separators.
   @      Expands to the positional parameters, starting from one. <strong>When
          the expansion occurs within double quotes, each parameter
          expands to a separate word.</strong> That is, "$@" is equivalent to "$1"
          "$2" ... If the double-quoted expansion occurs within a word,
          the expansion of the first parameter is joined with the begin‐
          ning part of the original word, and the expansion of the last
          parameter is joined with the last part of the original word.
          When there are no positional parameters, "$@" and $@ expand to
          nothing (i.e., they are removed).
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-83849990821230240482018-09-28T14:03:00.001+02:002018-09-28T14:03:42.380+02:00Hide sidebar when viewing message in NeoMutt<p>When I needed to copy&paste from an email I was reading in Mutt, I had to enter edit mode (press 'e') so the sidebar got out of my way. So I was thinking: would it be possible to hide the sidebar when you enter reading mode (i.e. the "pager") and then show it again when you leave the pager back to the screen with the list of your emails (i.e. "index" mode)?</p>
<pre>
message-hook ~A "set sidebar_visible = no"
macro pager q "<enter-command>set sidebar_visible = yes<enter><exit>"
</pre>
<p>So, this way the sidebar is made invisible <a href="http://www.mutt.org/doc/manual/#message-hook">when you are viewing a message</a> (that "<a href="http://www.mutt.org/doc/manual/#patterns">~A</a>" is a Mutt pattern meaning "any message"), and when you press "q" to close the pager, it sets the sidebar visible again and exits the pager.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-18288595595887071542018-04-18T03:13:00.001+02:002018-04-18T03:13:27.940+02:00Creating Singularity containerFor some project I had to create a <a href="http://singularity.lbl.gov/">Singularity</a> container (because of the environment where it needs to run). Singularity is a container technology used by scientists. It turned out to be very simple. Although there is no recent enough Singularity package in the regular Fedora repos, they have a nice guide on how to <a href="http://singularity.lbl.gov/install-linux#build-an-rpm-from-source">build your own packages</a> and it worked for me on the first try. First, the <a href="http://singularity.lbl.gov/docs-recipes">"Singularity" build file</a> (similar to a "Dockerfile") - I have based it on a recent Fedora Docker image:
<pre>
$ cat Singularity
Bootstrap: docker
From: fedora:latest
%help
    This is a container for ... project
    See https://gitlab.com/...
    Email ...
%labels
    Homepage https://gitlab.com/...
    Author ...
    Maintainer ...
    Version 0.1
%files
    /home/where/is/your/project /projectX
%post
    dnf -y install python2-biopython python2-numpy python2-tabulate python2-scikit-learn pymol mono-core python2-unittest2 python2-svgwrite python2-requests
    chown -R 1000:1000 /projectX # probably not important
%test
    cd /projectX
    python -m unittest discover
%environment
    export LC_ALL=C
%runscript
    exec /projectX/worker.sh
</pre>
The majority of the above is not needed: e.g. "%help" has a completely free form, keys in "%labels" do not seem to be codified, and "%test", which is run as the last step of the build process, is also optional. To <a href='http://singularity.lbl.gov/docs-build-container'>build it</a>:
<pre>
$ sudo singularity build --writable projectX.simg Singularity # *.simg is a native format of singularity-2.4.6
$ sudo singularity build --writable projectX.img projectX.simg # where the project is supposed to run, there is 2.3.2 which needs older *.img, so convert *.simg into it
</pre>
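After the build, a few quick smoke tests can be done on the image. This is just a sketch; <code>singularity inspect</code> is my assumption based on singularity-2.4 (it may not exist in older versions):
<pre>
$ singularity inspect projectX.simg   # show the %labels metadata (2.4+, assumption)
$ sudo singularity test projectX.simg # re-run the %test section of the recipe
$ singularity run projectX.simg       # execute the %runscript
</pre>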
My original idea was to have the project in a writable container (hence the "--writable" option above), but that would require me to run it as root again (or I'm missing something), so I have ended up with running the container in read-only mode and <a href='http://singularity.lbl.gov/docs-mount'>mounting my project into it</a> to have a read-write-able directory where I can generate the data:
<pre>
$ echo "cd /projectX; ./worker.sh" \
| singularity exec --bind projectX/:/projectX projectX.img bash
</pre>
So far it looks like it just works.jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-63127298384373437032018-02-08T08:50:00.004+01:002018-02-08T08:50:56.508+01:00TaskWarrior and listing tasks completed in a specified time range<p>Besides a mountain of inefficient papers (the good thing about paper is that you can lose it easily), I'm using <a href="https://taskwarrior.org/">TaskWarrior</a> to manage my TODOs. Today I'm creating a list of things I have finished in the last (fiscal) year, so its <a href="https://taskwarrior.org/docs/report.html">reporting</a> and <a href="https://taskwarrior.org/docs/filter.html">filtering</a> capabilities come in handy. But as a not-too-advanced user, it took me some time to discover how to list tasks completed in a specified time range:</p>
<pre>
$ task end.after:2017-03-01 end.before:2018-03-01 completed
ID UUID Created Completed Age P Project Tags R Due Description
[...]
- 5248839d 2017-01-24 2017-03-20 1.0y H Prepare 'Investigating differences in code coverage reports' presentation
[...]
</pre>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-8227741601253816892018-02-01T12:17:00.001+01:002018-02-20T14:51:51.490+01:00Running wsgi application in OpenShift v.3 for the first time<p>For some time I have been running a publicly available web application the Brno People team uses to determine technical interests of job candidates. The app was running on OpenShift v.2, but that was discontinued and I had to port it to <a href='https://www.openshift.com/'>OpenShift</a> v.3. I was postponing the task for multiple months and got to the state where the v.2 instances were finally turned off. It turned out that porting is not that hard. This is what I have done.</p>
<p>Note that I'm using a Red Hat employee account, so some paths might be different when OpenShift is being used "normally" (you will see something like <code>....starter-us-east-1.openshift.com</code> instead of my <code>....rh-us-east-1.openshift.com</code>).</p>
<div style="float: right;">
<a href="https://www.openshift.com/" title="Powered by OpenShift Online">
<img alt="Powered by OpenShift Online" src="https://www.openshift.com/images/logos/powered_by_openshift.png">
</a>
</div>
<p>Because I need to put some private data into the image, I want the image to <a href='https://stackoverflow.com/questions/48560320/when-i-push-image-into-openshift-registry-is-it-private'>be accessible only</a> from my OpenShift Online account. Anyway, I have created a Dockerfile based on the <a href='https://github.com/fedora-cloud/Fedora-Dockerfiles/tree/master/python'>Fedora Dockerfile template for Python</a> (is it official?) like this:</p>
<pre>
FROM fedora
MAINTAINER Jan Hutar <jhutar@redhat.com>
RUN dnf -y update && dnf clean all
RUN dnf -y install subversion python mod_wsgi && dnf clean all
RUN ...
VOLUME ["/xyz/data/"]
WORKDIR /xyz
EXPOSE 8080
USER root
CMD ["python", "/xyz/application"]
</pre>
<p>The TODO list for the container is: move to Python 3 (so I do not need to install python2 and its dependencies), figure out how to have the private data available to the container without being part of it (it is quite a big directory structure), go through these nice <a href='https://docs.openshift.com/online/creating_images/guidelines.html'>General Container Image Guidelines</a> and explore this <a href='https://docs.openshift.com/online/creating_images/metadata.html'>Image Metadata thingy</a>.</p>
<p>Once I had that, I needed to <a href='https://stackoverflow.com/questions/40357773/unable-to-push-docker-image-to-openshift-origin-docker-registry'>log in to OpenShift's registry</a>, build my image locally, test it and push it:</p>
<pre>
sudo docker build --tag xyz .
sudo docker run -ti --publish 8080:8080 --volume $( pwd )/data/:/xyz/data/ xyz # now I can check if all looks sane with `firefox http://localhost:8080`
oc whoami -t # this shows token I can use below
sudo docker login -u <username> -p <token> registry.rh-us-east-1.openshift.com
sudo docker tag xyz registry.rh-us-east-1.openshift.com/xyz/xyz
sudo docker push registry.rh-us-east-1.openshift.com/xyz/xyz
</pre>
<p>Now I have used the Console on https://console.rh-us-east-1.openshift.com/ to create a new application, then added a deployment to that application with <em>Add to Project -> Deploy Image</em> and selected (well, I could <a href='https://docs.openshift.com/online/dev_guide/application_lifecycle/new_app.html'>use the cli tool oc</a> for that — see the sketch after this list):</p>
<ul>
<li>surprisingly you do not choose "Image Name" here</li>
<li>but you choose "Image Stream Tag" with:<ul>
<li>Namespace: selftest</li>
<li>Image Stream: selftest</li>
<li>Tag: latest</li>
</ul></li>
</ul>
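<p>For reference, the <code>oc</code> equivalent mentioned above might look roughly like this — a sketch only, assuming the image stream is named selftest and lives in the current project:</p>
<pre>
oc new-app --image-stream=selftest:latest --name=xyz   # create the app from the image stream
oc status                                              # check what was created
</pre>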
<p>The next step looks logical, but I got stuck on it for some time before OpenShift folks helped me (thanks Jiří!). I just needed to be aware of the <a href='https://docs.openshift.com/online/architecture/additional_concepts/storage.html#pv-restrictions'>OpenShift Online Restrictions</a>.</p>
<p>So, because I wanted persistent storage and because my account uses Amazon EC2, I cannot use the "Shared Access (RWX)" storage type (useful when a new pod is starting while the old pod is still running), so I had to change the way new pods start: first stop the old one, then start the new one: <em>Applications -> Deployments -> my deployment -> Actions -> Edit -> Strategy Type: Recreate</em>. I have created a storage with the "RWO (Read-Write-Once)" access mode, added it to the deployment (<em>... -> Actions -> Add Storage</em>) and made sure that this storage is the only one attached to the deployment (<em>... -> Actions -> Edit YAML</em> and check that the keys <code>spec.template.spec.containers.volumeMounts</code> and <code>spec.template.spec.volumes</code> only contain the one volume you have just attached). In my case, there is this in the YAML definition:</p>
<pre>
[...]
spec:
  containers:
    - env:
        [...]
      volumeMounts:
        - mountPath: /xyz/data
          name: volume-jt8t6
      [...]
  volumes:
    - name: volume-jt8t6
      persistentVolumeClaim:
        claimName: xyz-storage-claim
[...]
</pre>
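<p>A quick way to double-check the same thing from the CLI (a sketch; <code>xyz</code> as the deployment config name is my placeholder):</p>
<pre>
oc get pvc                                  # the claim should be listed as Bound
oc describe dc xyz | grep -A 3 -i volumes   # only the one attached volume should show up
</pre>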
<p>When working with this, I have also used <em>... -> Actions -> Pause Rollouts</em>. It is also possible to configure environment variables for a deployment in <em>... -> Actions -> Edit -> Environment Variables</em>, which is useful to pass passwords and such into your app (so I do not need to store them in the image). In the app I use something like <code>import os; SMTP_SERVER_PASSWORD = os.getenv('XYZ_SMTP_SERVER_PASSWORD', default='')</code> to read that.</p>
<p>To make the app available to the outside world, I have created a route in <em>Applications -> Routes -> Create Route</em>. It created a domain like <code>http://<span title='This was a route name I have used'>xyz-route</span>-xyz.6923.rh-us-east-1.openshiftapps.com</code> for me.</p>
<p>Now it looks like everything works for me and I'm kinda surprised how easy it was. I plan to get a nicer domain and configure its CNAME DNS record, and to explore the monitoring possibilities OpenShift has. I'll see how it goes.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-21877446652371098672017-12-25T22:17:00.000+01:002017-12-25T22:17:09.550+01:00Monitoring Satellite 5 with PCP (Performance Co-Pilot)During some performance testing we have done, I have used <a href='http://pcp.io/'>PCP</a> to monitor basic stats about Red Hat Satellite 5 (it could be applied to Spacewalk as well). I was unable to make it fully sufficient, but maybe somebody could fix and enhance it. I have taken a lot from <a href='https://lukas.zapletalovi.com/'>lzap</a>. First of all, install PCP (the PostgreSQL and Apache <abbr title="Performance Metric Domain Agents">PMDAs</abbr> live in the RHEL Optional repo as of now; in CentOS 7 they seem to be directly in the base repo):
<pre>
subscription-manager repos --enable rhel-6-server-optional-rpms
yum -y install pcp pcp-pmda-postgresql pcp-pmda-apache
subscription-manager repos --disable rhel-6-server-optional-rpms
</pre>
Now start services:
<pre>
chkconfig pmcd on
chkconfig pmlogger on
service pmcd restart
service pmlogger restart
</pre>
Install the PostgreSQL and Apache monitoring plugins:
<pre>
cd /var/lib/pcp/pmdas/postgresql
./Install # select "c(ollector)" when it asks
cd /var/lib/pcp/pmdas/apache
echo -e "<Location /server-status>\n SetHandler server-status\n Allow from all\n</Location>\nExtendedStatus On" >>/etc/httpd/conf/httpd.conf
service httpd restart
./Install
# Configure hot proc
cat >/var/lib/pcp/pmdas/proc/hotproc.conf <<EOF
> #pmdahotproc
> Version 1.0
> fname == "java" || fname == "httpd"
> EOF
</pre>
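Before pumping anything to Graphite, it is worth checking that the PMDAs actually export data; <code>pminfo</code> and <code>pmval</code> come with the core pcp package:
<pre>
pminfo apache | head             # list the Apache metrics the PMDA provides
pminfo postgresql | head         # same for PostgreSQL
pmval -s 3 apache.busy_servers   # sample one metric three times
</pre>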
And because I have a Graphite/Grafana setup available, I was pumping selected metrics there (from RHEL 6, which uses SysV init):
<pre>
# tail -n 1 /etc/rc.local
pcp2graphite --graphite-host carbon.example.com --prefix "pcp-jhutar." --host localhost - kernel.all.load mem.util.used mem.util.swapCached filesys.full network.interface.out.bytes network.interface.in.bytes disk.dm.read disk.dm.write apache.requests_per_sec apache.bytes_per_sec apache.busy_servers apache.idle_servers postgresql.stat.all_tables.idx_scan postgresql.stat.all_tables.seq_scan postgresql.stat.database.tup_inserted postgresql.stat.database.tup_returned postgresql.stat.database.tup_deleted postgresql.stat.database.tup_fetched postgresql.stat.database.tup_updated filesys.full hotproc.memory.rss &
</pre>
<h2>Problems I had with this</h2>
For reasons I have not investigated closely, after some time the PostgreSQL data were no longer visible in Grafana.
I was also unable to get the hotproc data into Grafana.
I also experimented with PCP's emulation of Graphite and its Grafana, but PCP's Graphite lacks filters, which makes it hard to use and impractical for anything beyond simple stats.jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-77183377182050171372017-12-22T07:03:00.004+01:002017-12-22T07:03:58.473+01:00"Error: Too many open files" when inside Docker container<h2>Does not work: various ulimit settings for the daemon</h2>
<p>We have a container built from this <a href='https://github.com/redhat-performance/satellite-performance/blob/7888124d697b9ab6020d026d6c50a6859eb0198a/playbooks/satellite/roles/docker-host/files/Dockerfile'>Dockerfile</a>, running RHEL 7 with an oldish docker-1.10.3-59.el7.x86_64. Containers are started with:</p>
<pre>
# for i in $( seq 500 ); do
docker run -h "$( hostname -s )container$i.example.com" -d --tmpfs /tmp --tmpfs /run -v /sys/fs/cgroup:/sys/fs/cgroup:ro <strong>--ulimit nofile=10000:10000</strong> r7perfsat
done
</pre>
<p>and we have set limits for the docker service on the docker host:</p>
<pre>
# cat /etc/systemd/system/docker.service.d/limits.conf
[Service]
LimitNOFILE=10485760
LimitNPROC=10485760
</pre>
<p>but we still saw "Too many open files" issues inside the containers. It could happen when installing a package with yum (resulting in a corrupted RPM database; <code>rm -rf /var/lib/rpm/__db.00*; rpm --rebuilddb</code> saved it though) and when enabling a service (our containers have systemd in them on purpose):</p>
<pre>
# systemctl restart osad
Error: Too many open files
# echo $?
0
</pre>
<p>Because I was stupid, I had not checked the journal (in the container) at the moment when I spotted the failure for the first time:</p>
<pre>
Dec 21 10:18:54 b08-h19-r620container247.example.com journalctl[39]: Failed to create inotify watch: Too many open files
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: systemd-journal-flush.service: main process exited, code=exited, status=1/FAILURE
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: inotify_init1() failed: Too many open files
Dec 21 10:18:54 b08-h19-r620container247.example.com systemd[1]: inotify_init1() failed: Too many open files
</pre>
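<p>Note the errors point at inotify instances rather than plain file descriptors. A quick way to see how close you are to the limit on the docker host (a sketch; it counts inotify file descriptors across all processes by their symlink target):</p>
<pre>
# find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
</pre>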
<h2>Does work: fs.inotify.max_user_instances</h2>
<p>In the end I ran into an <a href='https://github.com/moby/moby/issues/30287'>issue</a> and the very last comment there had a thing I have not seen before. I have ended up with:</p>
<pre>
# cat /etc/sysctl.d/40-max-user-watches.conf
<strong>fs.inotify.max_user_instances=8192</strong>
fs.inotify.max_user_watches=1048576
</pre>
<p>The defaults on a different machine are:</p>
<pre>
# sysctl -a 2>&1 | grep fs.inotify.max_user_
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
</pre>
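<p>To load the new values without a reboot (a small sketch; <code>sysctl --system</code> re-reads all the /etc/sysctl.d/ snippets):</p>
<pre>
# sysctl --system
# sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
</pre>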
<p>Looks like increasing <strong>fs.inotify.max_user_instances</strong> helped and our containers are now stable.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com1tag:blogger.com,1999:blog-1364918800937946808.post-83802995380705377472017-11-04T22:11:00.001+01:002017-12-25T22:19:14.128+01:00Working local DNS for your libvirtd guests<p>Update 2017-12-25: a possibly better way: <a href="https://lukas.zapletalovi.com/2017/10/definitive-solution-to-libvirt-guest-naming.html">Definitive solution to libvirt guest naming</a></p>
<p>This is basically just a copy&paste of commands from these great posts: <a href="https://liquidat.wordpress.com/2017/03/03/howto-automated-dns-resolution-for-kvmlibvirt-guests-with-a-local-domain/">[Howto] Automated DNS resolution for KVM/libvirt guests with a local domain</a> and <a href='https://m0dlx.com/blog/Automatic_DNS_updates_from_libvirt_guests.html'>Automatic DNS updates from libvirt guests</a>, which already saved me a lot of typing. So, with my favorite domain:</p>
<p>Make libvirtd's dnsmasq act as an authoritative nameserver for the example.com domain:</p>
<pre>
# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>2ed15952-d1c0-4819-bde5-c8f7278ce3ac</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:a4:40:a7'/>
  <strong><domain name='example.com' localOnly='yes'/></strong>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
</pre>
<p>And restart that network:</p>
<pre>
# virsh net-edit default # do the edits here
# virsh net-destroy default
# virsh net-start default
</pre>
<p>Now configure NetworkManager to start its own dnsmasq which acts like your local caching nameserver and forwards all requests for example.com domain to 192.168.122.1 nameserver (which is libvirtd's dnsmasq):</p>
<pre>
# cat /etc/NetworkManager/conf.d/localdns.conf
[main]
dns=dnsmasq
# cat /etc/NetworkManager/dnsmasq.d/libvirt_dnsmasq.conf
server=/example.com/192.168.122.1
</pre>
<p>And restart NetworkManager:</p>
<pre>
# systemctl restart NetworkManager
</pre>
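<p>To verify the resolution chain works, you can query both dnsmasq instances directly — a quick check, assuming a guest named satellite.example.com already holds a DHCP lease from the default network:</p>
<pre>
$ dig +short satellite.example.com @192.168.122.1   # ask libvirtd's dnsmasq directly
$ dig +short satellite.example.com                  # through NetworkManager's dnsmasq
</pre>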
<p>Now if I have a guest with its hostname set to "satellite.example.com" (check <code>HOSTNAME=...</code> in <code>/etc/sysconfig/network</code> on RHEL 6 and below, or use <code>hostnamectl set-hostname ...</code> on RHEL 7), I can ping it by hostname from both the virtualization host and other guests on that host. If you have some old OS release on the guest (like RHEL 6.5 from what I have tried; 6.8 does not need this), set the hostname with <code>DHCP_HOSTNAME=...</code> in <code>/etc/sysconfig/network-scripts/ifcfg-eth0</code> (on the guest) to make this work.</p>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-17166007143864993942017-08-13T00:14:00.000+02:002017-08-13T00:14:37.787+02:00Quick Python performance tuning cheat-sheet<p>Just a few commands without any context:</p>
<h2>Profiling with cProfile</h2>
<p>This helped me to find the slowest functions, because when optimizing I need to focus on those (best ratio of work needed vs. benefit). It helped me find a function which did some unnecessary calculations over and over again:</p>
<pre>$ python -m cProfile -o cProfile-first_try.out ./layout-generate.py ...
$ python -m pstats cProfile-first_try.out
Welcome to the profile statistics browser.
cProfile-first_try.out% <em>sort</em>
Valid sort keys (unique prefixes are accepted):
cumulative -- cumulative time
module -- file name
ncalls -- call count
pcalls -- primitive call count
file -- file name
line -- line number
name -- function name
calls -- call count
stdname -- standard name
nfl -- name/file/line
filename -- file name
cumtime -- cumulative time
time -- internal time
tottime -- internal time
cProfile-first_try.out% <em>sort tottime</em>
cProfile-first_try.out% <em>stats 10</em>
Sat Aug 12 23:19:40 2017 cProfile-first_try.out
18508294 function calls (18501563 primitive calls) in 8.369 seconds
Ordered by: internal time
List reduced from 2447 to 10 due to restriction <10>
   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    27837    4.230    0.000    5.015    0.000 ./utils_matrix2layout.py:14(get_distance_matrix_2d)
    10002    1.356    0.000    1.513    0.000 ./utils_matrix2layout.py:244(get_measured_error_2d)
  5674796    0.572    0.000    0.572    0.000 /usr/lib64/python2.7/collections.py:90(__iter__)
  5340664    0.219    0.000    0.219    0.000 {math.sqrt}
  5432768    0.189    0.000    0.189    0.000 {abs}
   230401    0.183    0.000    0.183    0.000 /usr/lib64/python2.7/collections.py:71(__setitem__)
        1    0.178    0.178    0.282    0.282 ./utils_matrix2layout.py:543(count_angles_layout)
    10018    0.119    0.000    0.345    0.000 /usr/lib64/python2.7/_abcoll.py:548(update)
        1    0.102    0.102    6.749    6.749 ./utils_matrix2layout.py:393(iterate_evolution)
     1142    0.092    0.000    0.111    0.000 /usr/lib64/python2.7/site-packages/numpy/linalg/linalg.py:1299(svd)
</pre>
<p>To explain the columns, the <a href="https://docs.python.org/2/library/profile.html#instant-user-s-manual">Instant User’s Manual</a> says:</p>
<dl>
<dt>tottime</dt>
<dd>for the total time spent in the given function (and <strong>excluding</strong> time made in calls to sub-functions)</dd>
<dt>cumtime</dt>
<dd>is the cumulative time spent in <strong>this and all subfunctions</strong> (from invocation till exit). This figure is accurate even for recursive functions.</dd>
</dl>
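<p>If you prefer a non-interactive dump over the statistics browser, the same report can be produced with the standard <code>pstats</code> module in a one-liner:</p>
<pre>
$ python -c "import pstats; pstats.Stats('cProfile-first_try.out').sort_stats('tottime').print_stats(10)"
</pre>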
<h2>Let's compile to C with <a href='http://docs.cython.org/en/latest/src/quickstart/build.html'>Cython</a></h2>
<p>Simply performing this on the module which does most of the work gave me about a 20% speedup:</p>
<pre># dnf install python2-Cython
$ cython utils_matrix2layout.py
$ gcc `python2-config --cflags --ldflags` -shared utils_matrix2layout.c -o utils_matrix2layout.so
</pre>
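<p>To confirm the compiled module is the one actually being imported (CPython should pick the .so over the .py in the same directory), a quick check:</p>
<pre>
$ python -c "import utils_matrix2layout; print utils_matrix2layout.__file__"   # should print the .so, not the .py
</pre>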
<p>There is much more to do to optimize it, but that would need additional work, so not now :-) Some helpful links:</p>
<ul>
<li>use python2-config to <a href='https://stackoverflow.com/questions/34637319/hello-world-program-in-cython-fails-with-gcc-after-installation-of-python-dev-an'>get compile and linking options</a></li>
<li>when you want to create a *.so instead of an executable, you need to <a href='https://stackoverflow.com/questions/11116399/crt1-o-in-function-start-undefined-reference-to-main-in-linux'>use <code>-shared</code></a></li>
<li>a blog with a <a href='http://abijithkp.me/technology/python/how-to/2014/07/26/how-to-use-cython-compiler/'>summary and a nice FAQ</a></li>
</ul>jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0tag:blogger.com,1999:blog-1364918800937946808.post-63873758474984069662017-06-03T15:40:00.000+02:002017-06-03T15:40:20.085+02:00Hard times with Ansible's to_datetime filter<p>I was a bit stupid. It took me some time to figure out how this is supposed to work, so here it is.</p>
<p>In Ansible 2.2 there is a new "<a href='http://docs.ansible.com/ansible/playbooks_filters.html#other-useful-filters'>to_datetime</a>" filter (see bottom of that section) which transforms datetime string to datetime object.</p>
<p>Basic usage to convert a string to a datetime object; not that useful in this form:</p>
<pre>
$ ansible -c local -m debug -a "var=\"'2017-06-01 20:30:40'|to_datetime\"" localhost
localhost | SUCCESS => {
"'2017-06-01 20:30:40'|to_datetime": "2017-01-06 20:30:40",
"changed": false
}
</pre>
<p>You can parse a datetime string with an arbitrary format (see the Python documentation for <a href='https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior'>formatting options</a>):</p>
<pre>
$ ansible -c local -m debug -a "var=\"'06/01/2017'|to_datetime('%m/%d/%Y')\"" localhost
localhost | SUCCESS => {
"'06/01/2017'|to_datetime('%m/%d/%Y')": "2017-06-01 00:00:00",
"changed": false
}
</pre>
<p>In my case I wanted to parse the start and end date of some registered task in an Ansible playbook (so in the playbook the string to parse would be <code>registered_variable.start</code>). Maybe you do not want a datetime object, but a UNIX timestamp (notice the extra parentheses):</p>
<pre>
$ ansible -c local -m debug -a "var=\"('2017-06-01 20:30:40.123456'|to_datetime('%Y-%m-%d %H:%M:%S.%f')).strftime('%s')\"" localhost
localhost | SUCCESS => {
"('2017-06-01 20:30:40.123456'|to_datetime('%Y-%m-%d %H:%M:%S.%f')).strftime('%s')": "1496341840",
"changed": false
}
</pre>
<p>But actually I just wanted to know how much time a given task took, so I can simply subtract two datetime objects and then use <code>.seconds</code> of the resulting timedelta object:</p>
<pre>
$ ansible -c local -m debug -a "var=\"( '2017-06-01 20:30:40.123456'|to_datetime('%Y-%m-%d %H:%M:%S.%f') - '2017-06-01 20:29:35.234567'|to_datetime('%Y-%m-%d %H:%M:%S.%f') ).seconds\"" localhost
localhost | SUCCESS => {
"( '2017-06-01 20:30:40.123456'|to_datetime('%Y-%m-%d %H:%M:%S.%f') - '2017-06-01 20:29:35.234567'|to_datetime('%Y-%m-%d %H:%M:%S.%f') ).seconds": "64",
"changed": false
}
</pre>
<p>In pre-2.2 versions, you can use this inefficient call of the local <code>date</code> command (you do not have to worry about that ugly '\\\"' escaping when in a playbook):</p>
<pre>
$ ansible -c local -m debug -a "var=\"lookup('pipe', 'date -d \\\"2017-06-01 20:30:40.123456\\\" +%s')\"" localhost
localhost | SUCCESS => {
"changed": false,
"lookup('pipe', 'date -d \"2017-06-01 20:30:40.123456\" +%s')": "1496341840"
}
</pre>
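<p>One caveat I should mention: <code>.seconds</code> only holds the sub-day remainder of a timedelta, so for anything that could run across a day boundary, <code>.total_seconds()</code> (Python 2.7+) is safer:</p>
<pre>
$ ansible -c local -m debug -a "var=\"( '2017-06-03 00:00:40'|to_datetime('%Y-%m-%d %H:%M:%S') - '2017-06-01 00:00:00'|to_datetime('%Y-%m-%d %H:%M:%S') ).total_seconds()\"" localhost
</pre>
<p>Here the difference is 2 days and 40 seconds, so this prints 172840.0, whereas <code>.seconds</code> alone would report just 40.</p>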
Good luck!jhutarhttp://www.blogger.com/profile/03223464780225808571noreply@blogger.com0