Some time ago I was tasked with creating a pipeline in Tekton, and here are some notes I wish I had known a few days earlier :-)
- It is not that hard. It is just a fancy way to split up your shell automation script :-)
- Tasks are not that useful on their own (I think); you have to stack them into a Pipeline, but Tekton's Getting started with Tasks is a nice start. Once you need more details, see Tasks.
- Pipelines are the core thing, and starting with Getting Started with Pipelines helped me a lot. Later I looked into Pipelines as well.
- The blog post Building in Kubernetes Using Tekton was also very helpful. I also used my company's CI/CD guide here and there.
- Tekton Hub is full of tasks (and more), and I was able to easily see their documentation and, more importantly, the actual YAML behind them - having practical examples of what tasks can look like beyond simple hello-world tasks was very helpful. E.g. see kubernetes-actions and git-clone or git-cli.
- To test things, I used Kind as the "Getting started" guide suggested, and Tekton installed there really easily.
- Creating a user on Kind to be able to follow some of the Tekton how-tos out there that build a container using Tekton was beyond my abilities. I did not need to build images, so I was good.
- To be able to talk to the app running in the Kind cluster, I used Ingress NGINX and its rewrite rule annotation, as my app did not like extra data in the URI. My specific example: perfcale-demo-app-ingress.yaml.
- Results are quite a simple concept. You just declare them in the task, and in the script you redirect the value (their size is quite limited) to the file whose path is provided in a variable.
- When something does not make sense, you can always add a step with sleep 1000 to your task and then poke around with kubectl exec -ti pod/... -- bash.
- Every pipeline run name has to be unique. It would have been tedious to create new ones with kubectl apply -f ... on each of my attempts without some script, but having generateName in the pipeline run metadata and using kubectl create -f ... saved my day.
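To tie the last few notes together, here is a minimal sketch (task name, image, and script are made up for illustration, not the actual YAML from my pipeline) of a Task that writes a result, with a commented-out debug step, plus a PipelineRun using generateName:

```yaml
# Illustrative Task: writes a result by redirecting output to the
# file Tekton provides at $(results.<name>.path)
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: report-commit
spec:
  results:
    - name: commit
      description: Short SHA of the checked-out commit (results are size-limited)
  steps:
    - name: compute
      image: alpine/git
      script: |
        #!/bin/sh
        git rev-parse --short HEAD > $(results.commit.path)
    # Uncomment when something does not make sense, then inspect the
    # running pod with: kubectl exec -ti pod/... -- sh
    #- name: debug
    #  image: alpine/git
    #  script: sleep 1000
---
# Illustrative PipelineRun: generateName lets you `kubectl create -f`
# it repeatedly without name clashes (pipeline name is a placeholder)
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: perfscale-run-
spec:
  pipelineRef:
    name: perfscale-pipeline
```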
In the end, my pipeline worked like this:
- Clones the required repos:
- Demo application: perfscale-demo-app
- YAMLs and misc: my-tekton-perfscale-experiment
- Results repo: my-tekton-perfscale-experiment-results
- Deploys the demo application (no need to build images, as that is done by quay.io)
- It is a simple bank-like application exposing a REST API
- There is a Locust-based perf test included with the application that stresses the API and measures RPS
- The application consists of one pod with PostgreSQL and another with the application itself running under the Gunicorn application server
- Populates test data into the application (the code for this is built into the demo application for ease of use)
- Runs the Locust-based perf test from the demo application's repository, wrapped in a thin OPL helper that stores the test results in nice JSON
- Runs a script that loads historical results for the same test with the same parameters and determines whether the new result is a PASS or a FAIL
- Adds the new result into the results repository and pushes it to GitHub
- Deletes the demo app deployment
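The flow above could be sketched as a Pipeline along these lines. This is only my rough reconstruction - the task names, the repo URL, and the custom tasks (run-locust-test, compare-results, delete-deployment) are hypothetical placeholders, not the real YAML from the repos linked above:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: perfscale-pipeline
spec:
  workspaces:
    - name: shared
  tasks:
    - name: clone-app
      taskRef:
        name: git-clone          # from Tekton Hub
      workspaces:
        - name: output
          workspace: shared
      params:
        - name: url
          value: https://github.com/example/perfscale-demo-app   # placeholder URL
    - name: deploy-app
      runAfter: [clone-app]
      taskRef:
        name: kubernetes-actions # from Tekton Hub
      params:
        - name: script
          value: kubectl apply -f deployment.yaml                # illustrative
    - name: run-perf-test
      runAfter: [deploy-app]
      taskRef:
        name: run-locust-test    # hypothetical custom task wrapping the OPL helper
    - name: compare-and-push
      runAfter: [run-perf-test]
      taskRef:
        name: compare-results    # hypothetical: PASS/FAIL vs. history, push to GitHub
  finally:
    - name: cleanup
      taskRef:
        name: delete-deployment  # hypothetical: removes the demo app deployment
```

Putting the cleanup task under finally means it runs even when an earlier task fails, which matches how I wanted the demo app torn down after every attempt.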
The commands I used most when working on the pipeline were:
- kubectl apply --filename pipeline.yaml - to apply changes I had made to the pipeline
- kubectl create --filename pipeline-run.yaml - to create a new pipeline run with a random suffix
- tkn pipelinerun logs --follow --last --all --prefix - to follow logs of the current pipeline run
- tkn pipelinerun delete --all --force - to remove all previous pipeline runs