update YAML file using YQ in GitlabCI - gitlab-ci

So basically, I have this command that runs in GitLab CI to update a field in a YAML configuration file before packaging and pushing a Helm chart.
yq -i -y ".pod.image.imageTag="${CI_COMMIT_SHORT_SHA}"" deployment/values.yaml
values.yaml
pod:
  image:
    repository: my.private.repo/my-project
    imageTag: 'latest'
nodegroupName: "nessie-nodegroup"
But I keep getting this error.
jq: error: syntax error, unexpected IDENT, expecting $end (Unix shell quoting issues?)
.pod.image.imageTag=4c0118bf
The variable is clearly being read, but it looks like I'm doing something wrong in the yq command.
Any ideas where that error is coming from? Wrapping the whole filter in single quotes doesn't expand the environment variable, obviously; I already tried that.
Update:
Trying with:
yq -i -y '.pod.image.imageTag="${CI_COMMIT_SHORT_SHA}"' deployment/values.yaml
and
yq -i -y .pod.image.imageTag="${CI_COMMIT_SHORT_SHA}" deployment/values.yaml
didn't work either.

With the -y option I assume you are using the kislyuk/yq implementation.
Use jq's --arg option to introduce values from shell:
yq -i -y --arg tag "${CI_COMMIT_SHORT_SHA}" '.pod.image.imageTag=$tag' deployment/values.yaml
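To see why the original command fails, look at how the shell parses the nested double quotes: the first double-quoted string ends right before ${...}, so the value is spliced into the filter with no quotes around it, and jq sees a bare identifier. A minimal sketch, with CI_COMMIT_SHORT_SHA set by hand for illustration:

```shell
#!/bin/sh
# Stand-in for the value GitLab CI would inject
CI_COMMIT_SHORT_SHA=4c0118bf
# Same quoting as the failing command: the outer double quotes close before
# ${...}, so no quotes survive around the value in the filter string.
printf '%s\n' ".pod.image.imageTag="${CI_COMMIT_SHORT_SHA}""
# Prints: .pod.image.imageTag=4c0118bf  <- bare word, hence "unexpected IDENT"
```

With --arg the value arrives as a proper jq string and no shell quoting gymnastics are needed.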

Since the question has been tagged jq, it might be worth mentioning that gojq, the Go implementation of jq, supports YAML, so e.g.:
CI_COMMIT_SHORT_SHA=foo
gojq --yaml-input --yaml-output --arg tag "${CI_COMMIT_SHORT_SHA}" '
.pod.image.imageTag=$tag
' values.yaml
produces
pod:
  image:
    imageTag: foo
    repository: my.private.repo/my-project
nodegroupName: nessie-nodegroup
Notice, though, that gojq sorts the keys.

Related

Kubernetes rolling update with updating value in deployment file

I wanted to share a solution I implemented with Kubernetes and get your opinion on the best practice in such a case. I'm still new to Kubernetes.
I wanted to be able to update my application by restarting my deployment's pod, which already executes all the necessary actions in its start command.
I'm using microk8s, and I wanted to simply go to the right folder, execute microk8s kubectl apply -f myfilename, and let Kubernetes handle the rest with a rolling update.
My issue was how to set a dynamic value inside my .yaml file so that the command would detect a change and start the process.
I planned a bash script to do the job, like the following:
file="my-file-deployment.yaml"
oldstr=`grep 'my' $file | xargs`
timestamp="$(date +"%Y-%m-%d-%H:%M:%S")"
newstr="value: my-version-$timestamp"
sed -i "s/$oldstr/$newstr/g" $file
echo "old version : $oldstr"
echo "Replaced String : $newstr"
sudo microk8s kubectl apply -f $file
In my deployment.yaml file I set the following env:
env:
  - name: version
    value: my-version-2022-09-27-00:57:15
Switching the timestamp gives a new value each time; then I launch the command:
microk8s kubectl apply -f myfilename
It is working great for the moment. I still have to configure a startupProbe to get a smoother rolling update, because I'm seeing a few seconds of downtime, which isn't cool.
Is there a better solution to work with rolling update using microk8s?
If you are trying to trigger a rolling update on your deployment (assuming it is a deployment), you can patch the deployment and let the cluster handle the rollout. Here's a trick I use and it's literally a one-liner:
kubectl -n {namespace} patch deployment {name-of-your-deployment} \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
This will patch your deployment, adding an annotation to the template block. In this way, the cluster thinks there is a change requiring an update to the deployment's pods, and will cycle them while following the rollingUpdate clause.
The date +'%s' will resolve to a different number each time so every time you run this, it will cause the cluster to cycle the deployment's pods.
We use this trick to force a rolling update when we have done an update that requires our pods to be restarted.
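The patch payload itself can be built and inspected without touching a cluster; a sketch of just that part (the variable names are mine):

```shell
#!/bin/sh
# Build the same JSON body the one-liner passes to kubectl -p
ts=$(date +'%s')
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"${ts}\"}}}}}"
printf '%s\n' "$patch"
```

Echoing it first is a cheap way to confirm the escaping is intact before handing it to kubectl.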
You can accompany this with the rollout status command to wait for the update to complete:
kubectl rollout status deployment/{name-of-your-deployment} -n {namespace}
So a complete line would be something like this if I wanted to rolling update my nginx deployment and wait for it to complete:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
&& kubectl rollout status deployment/nginx -n nginx
One caveat, though: kubectl patch does not change the YAMLs on disk, so if you want a copy of the change recorded locally, e.g. for auditing purposes, similar to what you are doing at the moment, you can adapt this to do a dry run and redirect the output to a file:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
--dry-run=client \
-o yaml >patched-nginx.yaml

Invalid YAML ebextensions

I'm trying to deploy an app version to Elastic Beanstalk, but my config file's formatting is incorrect.
There are a lot of escaped quotes, so I don't think this is correct, but I'm not sure how to resolve it.
This is the line that's causing issues;
command: "sudo bash -c 'echo \"<img src=\'http://www.foo.com/img/custom_500.png\' alt=\'500\' style=\'left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px\'/>\"' > custom_50x.html"
Try it without the opening and closing quotes, like this:
command: sudo bash -c 'echo \"<img src=\'http://www.foo.com/img/custom_500.png\' alt=\'500\' style=\'left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px\'/>\"' > custom_50x.html
A useful tool for quickly determining whether something is wrong is an online YAML parser.
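If the three nested quote levels keep fighting back, another way out (an alternative sketch, not ebextensions syntax itself) is to have the shell write the file through a quoted heredoc, where nothing needs escaping at all:

```shell
#!/bin/sh
# Quoted heredoc delimiter ('EOF'): the shell does no expansion or quote
# processing on the body, so the mixed single/double quotes pass through as-is.
cat > custom_50x.html <<'EOF'
<img src='http://www.foo.com/img/custom_500.png' alt='500' style='left:50%;top:50%;position:fixed;margin-top:-235px;margin-left:-200px'/>
EOF
cat custom_50x.html
```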

Rabbitmqadmin can't open with no error

I installed RabbitMQ correctly, and it is working. I also enabled the RabbitMQ management plugin with:
rabbitmq-plugins enable rabbitmq_management
After that, rabbitmqadmin commands do not seem to do anything, and no error is displayed:
root@jessie:/usr# rabbitmqadmin --help
root@jessie:/usr#
What can I do?
First, make sure you have Python installed, and check your Python version using the commands below:
python -V or python3 -V
If it's Python 3, you have to change the shebang line of the rabbitmqadmin script as below; otherwise it won't work:
#!/usr/bin/env python3
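Rewriting that shebang can be scripted with sed; a sketch that operates on a stand-in file so it is self-contained (point f at the real script, often /usr/local/bin/rabbitmqadmin, which is an assumption about your install):

```shell
#!/bin/sh
# Stand-in copy for the demo; replace with the path to the real script
f=./rabbitmqadmin
printf '#!/usr/bin/env python\nprint("placeholder")\n' > "$f"
# Replace the first line with a python3 shebang (GNU sed -i)
sed -i '1s|^#!.*|#!/usr/bin/env python3|' "$f"
head -n 1 "$f"
# Prints: #!/usr/bin/env python3
```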
Now make the script executable (chmod +x rabbitmqadmin) and run it as below.
To list queues:
./rabbitmqadmin -f tsv -q list queues

zsh ssh-add -L parse error near `-L'

I'm trying to run
ssh-add -L
(or any other dashed option), and zsh returns zsh: parse error near `-L'. It's the first time I've seen zsh do that, and it doesn't happen with any other command.
Any ideas?
The first thing to find out is whether ssh-add is an alias or a shell function rather than the binary executable /usr/bin/ssh-add.
Second, try to run the same command in a ZSH session without your custom ZSH configuration. To get a clean environment, run
env -i TERM=$TERM LC_ALL=$LC_ALL LANG=$LANG zsh -f
Then try ssh-add -L again and let us know what you see.
Moreover, please post the output of the following:
uname -a
zsh --version
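The alias-or-function check can be done in one line; a portable sketch (in zsh itself, whence -v ssh-add gives the same information):

```shell
#!/bin/sh
# Report what the shell resolves a name to; fall back to a message if absent
check() { type "$1" 2>/dev/null || echo "$1: not found"; }
check ssh-add
check ls
```

If the output says ssh-add is an alias or function, the definition (in .zshrc or a plugin) is what's mangling the `-L` argument.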

Scripts installed by the deb package have wrong prefix

Building our own deb packages, we've run into the issue of having to patch some scripts manually so they get the proper prefix.
In particular,
We're building mono
We're using official tarballs.
The scripts that end up with the wrong prefix are mcs, xbuild, nunit-console4, etc.
An example of a wrong script:
#!/bin/sh
exec /root/7digital-mono/mono/bin/mono \
--debug $MONO_OPTIONS \
/root/7digital-mono/mono/lib/mono/2.0/nunit-console.exe "$@"
What should be the correct end result:
#!/bin/sh
exec /usr/bin/mono \
--debug $MONO_OPTIONS \
/usr/lib/mono/2.0/nunit-console.exe "$@"
The workaround we're using in our build-package script before calling dpkg-buildpackage:
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/mcs
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/xbuild
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console2
sed -i s,`pwd`/mono,/usr,g $TARGET_DIR/bin/nunit-console4
Now, what is the CORRECT way to fix this? Full debian package creation scripts here.
Disclaimer: I know there are preview packages of Mono 3 here! But those don't work for Squeeze.
The proper way is to not call ./configure --prefix=$TARGET_DIR.
That tells all the binaries/scripts/... that the installed files will end up in ${TARGET_DIR}, whereas they really should end up in /usr.
You can use the DESTDIR variable (as in make install DESTDIR=${TARGET_DIR}) to redirect the installation target at install time: the files will end up in ${TARGET_DIR}/${prefix}, but will only have ${prefix} "built in".
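The prefix/DESTDIR split can be demonstrated with a toy install step (the paths and the wrapper script are illustrative, not mono's actual build):

```shell
#!/bin/sh
prefix=/usr                 # what configure bakes into generated scripts
DESTDIR=$PWD/stage          # staging root, used only at install time
mkdir -p "$DESTDIR$prefix/bin"
# A generated wrapper references $prefix only; DESTDIR never leaks into it
printf '#!/bin/sh\nexec %s/bin/mono "$@"\n' "$prefix" > "$DESTDIR$prefix/bin/mcs"
cat "$DESTDIR$prefix/bin/mcs"
# Prints a script containing: exec /usr/bin/mono "$@"
```

The files sit under ./stage/usr/... for packaging, but every embedded path says /usr, which is exactly what the installed package needs.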