2024-08-29

Troubles pasting rich-text content from the CLI to Confluence

Using a custom script, I generate some Markdown content (because it is easy to write), maybe a status report or something. For example this:

$ cat /tmp/report.md
# Monday
* Watching cat videos
* Fixing what I broke at Fry

Now you want to paste it into, say, a Google Doc. The first step is to convert it to HTML:

$ cat /tmp/report.md | multimarkdown
<h1 id="monday">Monday</h1>

<ul>
<li>Watching cat videos</li>
<li>Fixing what I broke at Fry</li>
</ul>

Or using Pandoc:

$ cat /tmp/report.md | pandoc --from=markdown --to=html
<h1 id="monday">Monday</h1>
<ul>
<li>Watching cat videos</li>
<li>Fixing what I broke at Fry</li>
</ul>

Now, to transfer it, a very convenient way I was using is to copy it to the clipboard (and then just paste it in the editor with Ctrl+V):

$ cat /tmp/report.md | multimarkdown | xclip -sel clip -t "text/html"

When I needed to paste into the Atlassian Confluence WYSIWYG editor, it did not work for me for some reason - only plain text was pasted and the formatting was lost. But copying from a normal web page worked. What is the difference? Thanks to this great answer, I examined what the clipboard looks like when I copy a snippet from a web browser and noticed this:

$ # Selected and copied something from the web browser
$ xclip -o -selection clipboard -t TARGETS
TIMESTAMP
TARGETS
MULTIPLE
SAVE_TARGETS
text/html
text/_moz_htmlcontext
text/_moz_htmlinfo
UTF8_STRING
COMPOUND_TEXT
TEXT
STRING
text/plain;charset=utf-8
text/plain
text/x-moz-url-priv
$ xclip -o -selection clipboard -t text/html
<meta http-equiv="content-type" content="text/html; charset=utf-8"><ol>
<li>copy something from you web browser</li>
<li>investigate available types</li>
</ol>

So it looks like I need that <meta http-equiv="content-type" content="text/html; charset=utf-8"> string, so let's add it:

$ (echo '<meta http-equiv="content-type" content="text/html; charset=utf-8">'; cat /tmp/report.md | multimarkdown) | xclip -sel clip -t "text/html"

And here we go, pasting to Confluence works now!

Note: For some reason I do not understand, it now seems to work the old way too, once I pasted the new way for the first time :-/ Maybe the above is not needed at all, YMMV.
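To save some typing, the whole thing could be wrapped in a small shell function (a minimal sketch; the md2clip name is made up):

md2clip() {
    # Convert Markdown from stdin to HTML, prepend the charset hint
    # and put the result on the clipboard as text/html.
    (echo '<meta http-equiv="content-type" content="text/html; charset=utf-8">'; multimarkdown) \
        | xclip -sel clip -t "text/html"
}

$ md2clip < /tmp/report.md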

2023-12-10

First steps with Fedora IoT on Raspberry Pi 4

This is just a quick description, a brain dump, of how we started measuring temperature and moisture in various rooms in our flat. This first part describes setting up the server that will collect and present the data.

Here I was basically just following the How to install Fedora IoT on Raspberry Pi 4 post.

First, I installed prerequisites on my Fedora workstation:

$ sudo dnf install gnupg2 arm-image-installer

Now download the image, the checksum file, and the Fedora GPG key so I can verify the signature of the downloaded image:

$ wget https://download.fedoraproject.org/pub/alt/iot/39/IoT/aarch64/images/Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz
$ wget https://download.fedoraproject.org/pub/alt/iot/39/IoT/aarch64/images/Fedora-IoT-39-aarch64-20231103.1-CHECKSUM
$ wget https://fedoraproject.org/fedora.gpg

Now verify the signature and check that the downloaded key's fingerprint matches what the Fedora team publishes on their page listing current GPG key fingerprints.

$ gpgv --keyring ./fedora.gpg Fedora-IoT-39-aarch64-20231103.1-CHECKSUM
gpgv: Signature made Mon 06 Nov 2023 03:03:23 PM CET
gpgv:                using RSA key E8F23996F23218640CB44CBE75CF5AC418B8E74C
gpgv: Good signature from "Fedora (39) <fedora-39-primary@fedoraproject.org>"

$ gpg --show-keys fedora.gpg | grep -C 1 E8F23996F23218640CB44CBE75CF5AC418B8E74C
pub   rsa4096 2022-08-09 [SCE]
      E8F23996F23218640CB44CBE75CF5AC418B8E74C
uid                      Fedora (39) <fedora-39-primary@fedoraproject.org>

Also check the checksum of the downloaded image:

$ sha256sum -c Fedora-IoT-39-aarch64-20231103.1-CHECKSUM
Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz: OK
sha256sum: WARNING: 17 lines are improperly formatted

I guess that warning in the output is there because the checksum file also contains the GPG signature, which the sha256sum utility dislikes, so I did not worry about it:

$ cat Fedora-IoT-39-aarch64-20231103.1-CHECKSUM
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz: 712162312 bytes
SHA256 (Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz) = bb10ed4469f6ac1448162503b68f84e96f8e8410e5c8c9a4a56b5406bf13dff2
-----BEGIN PGP SIGNATURE-----

iQI[...]
-----END PGP SIGNATURE-----
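If the warning bothers you, feeding sha256sum only the checksum line should silence it (GNU sha256sum understands this BSD-style "SHA256 (file) = hash" format):

$ grep '^SHA256 ' Fedora-IoT-39-aarch64-20231103.1-CHECKSUM | sha256sum -c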

Now I put the SD card into a USB reader and connected it. It shows up nicely in the lsblk output as /dev/sda:

$ lsblk
NAME                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                             8:0    1  29.7G  0 disk  
├─sda1                                          8:1    1   512M  0 part  
└─sda2                                          8:2    1   4.4G  0 part  
zram0                                         252:0    0     8G  0 disk  [SWAP]
nvme0n1                                       259:0    0 476.9G  0 disk  
├─nvme0n1p1                                   259:1    0     1G  0 part  /boot
├─nvme0n1p2                                   259:2    0    32G  0 part  [SWAP]
└─nvme0n1p3                                   259:3    0 443.9G  0 part  
  └─luks-c9494ef2-8c28-4817-befb-8ac43ff79ee3 253:0    0 443.9G  0 crypt /home
                                                                         /


So now I should have everything needed to write the Fedora IoT image to the card:


$ sudo arm-image-installer --image Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz --media /dev/sda --addkey /home/jhutar/.ssh/id_rsa.pub --norootpass --resizefs --target=rpi4 -y
[sudo] password for jhutar:

=====================================================
= Selected Image:
= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz
= Selected Media : /dev/sda
= U-Boot Target : rpi4
= Root Password will be removed.
= Root partition will be resized
= SSH Public Key /home/jhutar/.ssh/id_rsa.pub will be added.
=====================================================

*****************************************************
*****************************************************
******** WARNING! ALL DATA WILL BE DESTROYED ********
*****************************************************
*****************************************************
= Writing:
= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz
= To: /dev/sda ....
4282384384 bytes (4.3 GB, 4.0 GiB) copied, 243 s, 17.6 MB/s
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 243.92 s, 17.6 MB/s
= Writing image complete!
= Resizing /dev/sda ....
Checking that no-one is using this disk right now ... OK

Disk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectors
Disk model: UHSII uSD Reader
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc1748067

Old situation:

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16
/dev/sda2 1028096 3125247 2097152 1G 83 Linux
/dev/sda3 3125248 8388607 5263360 2.5G 83 Linux

/dev/sda3:
New situation:
Disklabel type: dos
Disk identifier: 0xc1748067

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1028095 1026048 501M 6 FAT16
/dev/sda2 1028096 3125247 2097152 1G 83 Linux
/dev/sda3 3125248 62333951 59208704 28.2G 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
e2fsck 1.46.5 (30-Dec-2021)
/dev/sda3 has unsupported feature(s): FEATURE_C12
e2fsck: Get a newer version of e2fsck!

root: ********** WARNING: Filesystem still has errors **********

resize2fs 1.46.5 (30-Dec-2021)
Please run 'e2fsck -f /dev/sda3' first.

= Raspberry Pi 4 Uboot is already in place, no changes needed.
= Removing the root password.
= Adding SSH key to authorized keys.

= Installation Complete! Insert into the rpi4 and boot.

There are some errors there, right? Well, I ignored them. The RPi booted nicely and I was able to set up everything (more on that in some later post), but then I ran out of storage. Only then did I notice the root filesystem was not extended (exactly as the error message says).

After some online searching I figured out that I needed e2fsprogs 1.47.0 or newer, and (at the time?) it was only available in Fedora 39.
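A quick way to check which version is installed (a standard rpm query on Fedora):

$ rpm -q e2fsprogs

So I upgraded, and then I was able to write the image just fine: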

$ sudo arm-image-installer --image Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz --media /dev/sda --addkey /home/jhutar/.ssh/id_rsa.pub --norootpass --resizefs --target=rpi4 -y
[sudo] password for jhutar:

=====================================================
= Selected Image:                                 
= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz
= Selected Media : /dev/sda
= U-Boot Target : rpi4
= Root Password will be removed.
= Root partition will be resized
= SSH Public Key /home/jhutar/.ssh/id_rsa.pub will be added.
=====================================================
 
*****************************************************
*****************************************************
******** WARNING! ALL DATA WILL BE DESTROYED ********
*****************************************************
*****************************************************
= Writing:
= Fedora-IoT-39.20231103.1-20231103.1.aarch64.raw.xz
= To: /dev/sda ....
4282384384 bytes (4.3 GB, 4.0 GiB) copied, 245 s, 17.5 MB/s
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 245.85 s, 17.5 MB/s
= Writing image complete!
= Resizing /dev/sda ....
Checking that no-one is using this disk right now ... OK

Disk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectors
Disk model: UHSII uSD Reader
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc1748067

Old situation:

Device     Boot   Start     End Sectors  Size Id Type
/dev/sda1  *       2048 1028095 1026048  501M  6 FAT16
/dev/sda2       1028096 3125247 2097152    1G 83 Linux
/dev/sda3       3125248 8388607 5263360  2.5G 83 Linux

/dev/sda3:
New situation:
Disklabel type: dos
Disk identifier: 0xc1748067

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1  *       2048  1028095  1026048  501M  6 FAT16
/dev/sda2       1028096  3125247  2097152    1G 83 Linux
/dev/sda3       3125248 62333951 59208704 28.2G 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
e2fsck 1.47.0 (5-Feb-2023)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
root: 32041/164640 files (0.6% non-contiguous), 449099/657920 blocks
resize2fs 1.47.0 (5-Feb-2023)
Resizing the filesystem on /dev/sda3 to 7401088 (4k) blocks.
The filesystem on /dev/sda3 is now 7401088 (4k) blocks long.

= Raspberry Pi 4 Uboot is already in place, no changes needed.
= Removing the root password.
= Adding SSH key to authorized keys.

= Installation Complete! Insert into the rpi4 and boot.

Stick the card into the RPi, connect the power and Ethernet cables, and voilà, I'm now able to SSH to the RPi. I got its IP address from my router's management console, in the DHCP leases section.
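Since arm-image-installer removed the root password and added my SSH key, logging in is simply (the IP address below is just an example):

$ ssh root@192.168.0.123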

2023-08-10

Kinda SQL "join" in Prometheus

I'm using the Prometheus query language, PromQL, quite a bit these days. But all I do are very simple queries like sum(...) or rate(...[5m]) on an OpenShift cluster I work with.

For a few weeks now, my inner self was bothered by one slightly more complex thing: filtering one metric by a label from a different metric - something like a JOIN in the SQL world. Specifically, I wanted to see the number of pods running on each cluster node with the "worker" role.

We have (I'm on OpenShift 4.13) kube_node_role{role="worker"} (AFAICT this is what we call an "instant vector" in PromQL), which has these labels:

kube_node_role{container="kube-rbac-proxy-main", endpoint="https-main", job="kube-state-metrics", namespace="openshift-monitoring", node="ip-1-2-3-4.ec2.internal", prometheus="openshift-monitoring/k8s", role="worker", service="kube-state-metrics"} 1
kube_node_role{container="kube-rbac-proxy-main", endpoint="https-main", job="kube-state-metrics", namespace="openshift-monitoring", node="ip-1-2-3-5.ec2.internal", prometheus="openshift-monitoring/k8s", role="worker", service="kube-state-metrics"} 1
[...]

And we have kube_pod_info with these labels:

kube_pod_info{container="kube-rbac-proxy-main", created_by_kind="<none>", created_by_name="<none>", endpoint="https-main", host_ip="10.201.24.232", host_network="false", job="kube-state-metrics", namespace="openshift-etcd", node="ip-1-2-3-6.ec2.internal", pod="etcd-guard-ip-10-201-24-232.ec2.internal", pod_ip="10.128.2.14", priority_class="system-cluster-critical", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", uid="a2eec7b0-9f29-42b4-853d-6919d963ffa1"} 1
kube_pod_info{container="kube-rbac-proxy-main", created_by_kind="<none>", created_by_name="<none>", endpoint="https-main", host_ip="10.201.24.232", host_network="false", job="kube-state-metrics", namespace="openshift-etcd", node="ip-1-2-3-6.ec2.internal", pod="revision-pruner-13-ip-10-201-24-232.ec2.internal", pod_ip="10.128.2.4", priority_class="system-node-critical", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", uid="df5cdd67-b0f5-4896-b0b0-85095a9f3122"} 1

We will use the on(...) and group_left(...) PromQL operators. I had some issues understanding what these do, so here is my interpretation:

* Because the values are always 1 in these vectors, it is safe to multiply them.

* on(...) allows me to define the common label(s) that should be used to match the two different vectors.

* group_left(...) tells PromQL the match is many-to-one: the left-hand side (kube_pod_info, many pods per node) can have multiple series matching a single series on the right-hand side (kube_node_role, one per node), and the label(s) listed in group_left() (here role) are copied from the right-hand side into the result.

And this is the final query I used:

sum(
    kube_pod_info{} * on(node) group_left(role) kube_node_role{role="worker"}
) by(node)
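
The same query can also be run outside the console, against the standard Prometheus HTTP API (a sketch; the prometheus.example.com:9090 address is a placeholder for your instance):

$ curl -s 'http://prometheus.example.com:9090/api/v1/query' \
    --data-urlencode 'query=sum(kube_pod_info{} * on(node) group_left(role) kube_node_role{role="worker"}) by(node)'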

Various online explanations of PromQL vector matching helped me a lot here.

2022-11-20

Tekton notes

Some time ago I was tasked with creating a pipeline in Tekton, and here are some of my notes I would have liked to have a few days back :-)

  1. It is not that hard. It is just a fancy way to split up your shell automation script :-)
  2. Tasks are not that useful on their own (I think), you have to stack them into a Pipeline, but Tekton's Getting started with Tasks is a nice start. Once you need more details, see Tasks.
  3. Pipelines are the core thing and starting with Getting Started with Pipelines helped me a lot. Later I was looking into Pipelines as well.
  4. The blog post Building in Kubernetes Using Tekton was also very helpful. I also used my company's CI/CD guide here and there.
  5. Tekton Hub is full of tasks (and more) and I was able to easily see their documentation and, more importantly, the actual YAML behind them - having practical examples of what tasks can look like beyond simple hello-world ones was very helpful. E.g. see kubernetes-actions and git-clone or git-cli.
  6. To test things, I used Kind as the "Getting started" guide suggested, and Tekton installed there really easily.
  7. Creating a user on Kind so I could follow some of the Tekton how-tos out there that build containers using Tekton was beyond my abilities. I did not need to build images, so I'm good.
  8. To be able to talk to the app running in the Kind cluster, I used Ingress NGINX and its rewrite rule annotation, as my app did not like extra data in the URI. My specific example: perfcale-demo-app-ingress.yaml.
  9. Results are quite a simple concept. You just declare them in the task, and in the script you redirect the value (their size is quite limited) to a filename stored in some variable.
  10. When something does not make sense, you can always add a step to your task with sleep 1000 and kubectl exec -ti pod/... -- bash.
  11. Every pipeline run name has to be unique. It would be boring to create new ones with kubectl apply -f ... for each of my attempts without some script, but having generateName in the pipeline run metadata and using kubectl create -f ... saved my day (see the sketch below this list).
In the end my pipeline worked like this:

  1. Clones the required repos:
    1. Demo application: perfscale-demo-app
    2. YAMLs and misc: my-tekton-perfscale-experiment
    3. Results repo: my-tekton-perfscale-experiment-results
  2. Deploys the demo application (no need to build images as that is done by quay.io):
    1. It is a simple bank-like application exposing a REST API
    2. There is a Locust-based perf test included with the application that stresses the API and measures RPS
    3. The application consists of one pod with PostgreSQL and another with the application itself and the Gunicorn application server
  3. Populates test data into the application (the code for it is built into the demo application for ease of use)
  4. Runs the Locust-based perf test from the demo application's repository, wrapped in a thin OPL helper that stores the test results in a nice JSON
  5. Runs a script that loads historical results for the same test with the same parameters and determines whether the new result is a PASS or FAIL
  6. Adds the new result to the results repository and pushes it to GitHub
  7. Deletes the demo app deployment

The commands I used most when working on the pipeline were:

  • kubectl apply --filename pipeline.yaml - to apply changes I made to the pipeline
  • kubectl create --filename pipeline-run.yaml - to create a new pipeline run with a random suffix
  • tkn pipelinerun logs --follow --last --all --prefix - to follow logs of the current pipeline run
  • tkn pipelinerun delete --all --force - to remove all previous pipeline runs


2021-11-06

Use Google Chat webhook API to send message to channel

Sending a message to Google Chat (chat.google.com, recently integrated into mail.google.com/chat/) is surprisingly simple with their webhook API. It just took me some time to figure out the data structure to send (although it is very simple, as I found on the Incoming webhook with Python page):

curl -X POST -H "Content-Type: application/json; charset=UTF-8" --data '{"text": "Hello @jhutar, how are you?"}' "https://chat.googleapis.com/v1/spaces/.../messages?key=...&token=..."
{
  "name": "spaces/.../messages/...",
  "sender": {
    "name": "users/...",
    "displayName": "Jenkins incomming webhook",
    "avatarUrl": "",
    "email": "",
    "domainId": "",
    "type": "BOT",
    "isAnonymous": false
  },
  "text": "Hello @jhutar",
  "cards": [],
  "previewText": "",
  "annotations": [],
  "thread": {
    "name": "spaces/.../threads/..."
  },
  "space": {
    "name": "spaces/...",
    "type": "ROOM",
    "singleUserBotDm": false,
    "threaded": true,
    "displayName": "Name of the channel"
  },
  "fallbackText": "",
  "argumentText": "Hello @jhutar, how are you?",
  "attachment": [],
  "createTime": "2021-10-11T22:07:39.490063Z",
  "lastUpdateTime": "2021-10-11T22:07:39.490063Z"
}

2021-11-05

Using redirect() on an https:// site served by Flask -> Gunicorn -> Nginx redirects me to http

This might be hard to notice, as we usually configure Nginx to also redirect all http requests to https, so in the end you land on the correct link, but going through http is not nice and it can also break CORS, as I was told.

There are two parts of the problem.

First, Nginx needs to set certain headers when proxying the application running in Gunicorn (e.g. see them in Deploying Gunicorn behind Nginx):

proxy_set_header    Host                $host;
proxy_set_header    X-Real-IP           $remote_addr;
proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
proxy_set_header    X-Forwarded-Proto   $scheme;
proxy_set_header    X-Forwarded-Host    $http_host;
proxy_pass  http://my_app;
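
For completeness, the my_app name in proxy_pass refers to an upstream block defined elsewhere in the Nginx config, something like this (a sketch; the address of the Gunicorn socket is made up):

upstream my_app {
    server 127.0.0.1:8000;
}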

Second, the Flask app needs to know to use the content of these headers to override the normal request metadata (this is called Proxy Fix and is brought to us by Werkzeug, which is a Flask dependency):

from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__, instance_relative_config=True)
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

Obligatory note: see the docs linked above, as these numbers are actually important from a security point of view.
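
To verify the fix, you can check where a redirecting route points now (the URL is made up; after the change the Location header should show an https:// URL):

$ curl -sI https://example.com/some-redirecting-route | grep -i '^location:'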

2021-10-14

Accessing Red Hat OpenShift Streams for Apache Kafka from Python

Recently Red Hat launched a way to get a managed Kafka instance, and you can get one for 2 days for free. There is a limit of 1 MB per second. So far I had only been using Kafka without any auth and without any encryption, so here is what I had to do to make it work - typing it here so I do not need to reinvent it once I forget :-) I'm using kafka-python.

I have created a cluster and under its "Connection" menu item I got the bootstrap server jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443. It also advised me to create a service account, so I created one and it generated a "Client ID" like srvc-acct-00000000-0000-0000-0000-000000000000 and a "Client secret" like 00000000-0000-0000-0000-000000000000. Although the "SASL/OAUTHBEARER" authentication method is recommended, as of now it is too complicated for my poor head, so I used "SASL/PLAIN", where you just use the "Client ID" as the username and the "Client secret" as the password. To create a topic, there is a UI as well.

To create the producer and consumer:

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443',
    sasl_plain_username='srvc-acct-00000000-0000-0000-0000-000000000000',
    sasl_plain_password='00000000-0000-0000-0000-000000000000',
    security_protocol='SASL_SSL',
    sasl_mechanism='PLAIN',
)

And the consumer needs the same parameters:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    '<topic>',
    bootstrap_servers='jhutar--c-jc--gksg-rukm-fu-a.bf2.kafka-stage.rhcloud.com:443',
    sasl_plain_username='srvc-acct-00000000-0000-0000-0000-000000000000',
    sasl_plain_password='00000000-0000-0000-0000-000000000000',
    security_protocol='SASL_SSL',
    sasl_mechanism='PLAIN',
)
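
A quick round-trip test with the objects above (a minimal sketch; '<topic>' is whatever topic you created in the UI):

# Send one message and make sure it is actually delivered before we move on.
producer.send('<topic>', b'hello from kafka-python')
producer.flush()

# Print every message that arrives on the topic.
for message in consumer:
    print(message.topic, message.offset, message.value)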