I am trying to deploy an umbrella chart to a Kubernetes cluster that contains the rabbitmq operator + rabbitmq, so two sub-charts in total.
The operator sub-chart first deploys the CRD ("kind: RabbitmqCluster") needed by the rabbitmq sub-chart, and everything is installed correctly when I install the umbrella chart: I see two containers, the operator and an instance of rabbitmq.
The problem arises when I uninstall the umbrella chart (helm uninstall ...): the rabbitmq operator is removed (since it is a "kind: Deployment") but not the rabbitmq instance it created. To remove that, I have to manually run kubectl delete rabbitmqcluster <name-of-instance>.
Is there a way to do this automatically when helm uninstall is run, or am I barking up the wrong tree?
One way to solve this is to use a Helm hook annotation to turn a Job into a pre-delete operation.
Then, in the spec of the Job, you can run kubectl commands using a public image or whatever image you like:
containers:
  - name: kubectl
    image: "k8s.gcr.io/hyperkube:v1.12.1"
    imagePullPolicy: "IfNotPresent"
    command:
      - /bin/sh
      - -c
      - >
        kubectl delete rabbitmqcluster {{ .Release.Name }}-rabbitmq -n {{ .Release.Namespace }};
        sleep 10;
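For completeness, a sketch of what the full pre-delete Job could look like; the hook annotations are the important part, while the Job name and the ServiceAccount (which needs RBAC permission to delete RabbitmqCluster resources) are illustrative assumptions, not something from the charts above:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-rabbitmq-cleanup"
  annotations:
    # The hook annotation is what turns this Job into a pre-delete operation
    "helm.sh/hook": pre-delete
    # Remove the Job once it has finished successfully
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      # Assumed: a ServiceAccount bound to RBAC rules that allow deleting
      # RabbitmqCluster resources; the name here is just a placeholder
      serviceAccountName: {{ .Release.Name }}-pre-delete
      restartPolicy: Never
      containers:
        - name: kubectl
          image: "k8s.gcr.io/hyperkube:v1.12.1"
          command:
            - /bin/sh
            - -c
            - >
              kubectl delete rabbitmqcluster {{ .Release.Name }}-rabbitmq -n {{ .Release.Namespace }};
              sleep 10;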
I wanted to share a solution I implemented with Kubernetes and get your opinion on the best practice for this kind of case. I'm still new to Kubernetes.
My problem: I want to be able to update my application by restarting my deployment's pod, which already executes all the necessary update actions in its start command.
I'm using microk8s and I wanted to just go to the right folder, execute microk8s kubectl apply -f myfilename, and let Kubernetes handle the rest with a rolling update.
My issue was how to set a dynamic value inside my .yaml file so the command would detect a change and start the process.
I planned to use a bash script that does the job, like the following:
file="my-file-deployment.yaml"
# Grab the current line containing the version value (xargs trims the whitespace)
oldstr=$(grep 'my' "$file" | xargs)
timestamp="$(date +"%Y-%m-%d-%H:%M:%S")"
newstr="value: my-version-$timestamp"
# Replace the old version string with one carrying the current timestamp
sed -i "s/$oldstr/$newstr/g" "$file"
echo "old version : $oldstr"
echo "Replaced String : $newstr"
sudo microk8s kubectl apply -f "$file"
In my deployment.yaml file I set the following env:
env:
  - name: version
    value: my-version-2022-09-27-00:57:15
I switch the timestamp to a new value, then I launch the command:
microk8s kubectl apply -f myfilename
It is working great for the moment. I still have to configure a startupProbe to get a smoother rolling update, because I currently have a few seconds of downtime, which isn't great.
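Something like the following (placed under the container in the deployment spec) is what I have in mind for the probes; the path, port and timings are placeholders I still need to validate:

readinessProbe:
  httpGet:
    path: /healthz   # placeholder path
    port: 8080       # placeholder port
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30
  periodSeconds: 2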
Is there a better solution to work with rolling update using microk8s?
If you are trying to trigger a rolling update on your deployment (assuming it is a deployment), you can patch the deployment and let the cluster handle the rollout. Here's a trick I use and it's literally a one-liner:
kubectl -n {namespace} patch deployment {name-of-your-deployment} \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
This will patch your deployment, adding an annotation to the template block. In this way, the cluster thinks there is a change requiring an update to the deployment's pods, and will cycle them while following the rollingUpdate clause.
The date +'%s' will resolve to a different number each time so every time you run this, it will cause the cluster to cycle the deployment's pods.
We use this trick to force a rolling update when we have done an update that requires our pods to be restarted.
You can accompany this with the rollout status command to wait for the update to complete:
kubectl rollout status deployment/{name-of-your-deployment} -n {namespace}
So a complete line would be something like this if I wanted to rolling update my nginx deployment and wait for it to complete:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
&& kubectl rollout status deployment/nginx -n nginx
One caveat, though: kubectl patch does not make changes to the YAML files on disk. So if you want a copy of the change recorded locally, for example for auditing purposes (similar to what you are doing at the moment), you can adapt this to run as a dry-run and redirect the output to a file:
kubectl -n nginx patch deployment nginx \
-p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" \
--dry-run=client \
-o yaml >patched-nginx.yaml
I have a problem with my GitLab job:
.render:
  stage: validate
  image: alpine:3.7
  before_script: |
    function render() {
      apk add --no-cache gettext
      envsubst < $1 > temp-file;
      cp temp-file $1
      rm temp-file
      cat $1
    }
This job has a function that replaces the environment variables in the file given to it. The job is based on the alpine image in order to get the envsubst command. ✔️
This .render job is then extended by other validation jobs, one of them being a job for validating Helm charts. ✔️
But the issue is that helm is not installed in the alpine image. Do I have to install it? I would have preferred to use the .render job like a tool, rather than extending it. ❌
What is the best way for one job to have both the envsubst command and the helm command at the same time?
render-test:
  extends: .render
  stage: validate
  variables:
    ...
  script: |
    helm version
This job will fail, since helm is not present in the image.
The solution I found was to build an image that has both the envsubst command and the helm command. The image is hamuto/tools:1.0.0.
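If you want to build such an image yourself, a minimal sketch could look like the following; the base image and Helm version are arbitrary choices for illustration, not necessarily what hamuto/tools uses:

FROM alpine:3.17

# gettext provides envsubst; curl is needed to download the Helm release
RUN apk add --no-cache gettext curl

# Install a pinned Helm release (version chosen only as an example)
RUN curl -fsSL https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz | tar -xz -C /tmp \
    && mv /tmp/linux-amd64/helm /usr/local/bin/helm \
    && rm -rf /tmp/linux-amd64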
I have deployed the Bitnami redis-cluster using its Helm chart. I followed the link below for the redis-cluster deployment:
https://github.com/bitnami/charts/tree/master/bitnami/redis-cluster
I want to enable Redis logs, but I'm not sure how to do it: in the current Bitnami Redis image, the configuration file located at /opt/bitnami/redis/etc/redis.conf has the logfile parameter set to an empty string (logfile "").
Is there any way to enable the Redis server logs on each pod?
You can use the helm --set flag to overwrite the default values in the redis.conf file. (Note that the variable needs to be quoted in the --set argument, since the value contains a space.)
# Add your custom configuration
$ export CUSTOM_CONFIG="logfile /data/logs/file.log"
# Apply it while installing
$ helm install redis bitnami/redis-cluster --set redis.configmap="$CUSTOM_CONFIG"
You can check it from inside the pod:
$ kubectl exec -it redis-redis-cluster-0 -- cat /opt/bitnami/redis/etc/redis.conf
.
.
.
logfile /data/logs/file.log
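The same setting can also go in a values file instead of --set; this assumes the redis.configmap key shown above accepts the configuration as a string when supplied via -f:

# values.yaml
redis:
  configmap: |-
    logfile /data/logs/file.log

$ helm install redis bitnami/redis-cluster -f values.yaml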
I'm new to using Docker and docker-compose so apologies if I have some of the terminology wrong.
I've been provided with a Dockerfile and docker-compose.yml and have successfully got the images built and the container up and running (by running docker-compose up -d). However, I would like to update things to make my process a bit easier, as occasionally I need to restart Apache in the container (WordPress) by accessing it using:
docker exec -it 89a145b5ea3e /bin/bash
Then typing:
service apache2 restart
My first problem is that there are two other services that I need to run for my project to work correctly and these don't automatically restart when I run the above service apache2 restart command.
The two commands I need to run are:
service memcached start
service cron start
I would like to know how to always run these commands when apache2 is restarted.
Secondly, I would like to configure my Dockerfile or docker-compose.yml (not sure where I'm supposed to be adding this) so that this behaviour is baked into the container/image when it is built.
I've managed to install the services by adding them to my Dockerfile but can't figure out how to get them to run when the container is restarted.
Below are the contents for relevant files:
Dockerfile:
FROM wordpress:5.1-php7.3-apache
RUN yes | apt-get update -y \
    && apt-get install -y vim \
    && apt-get install -y net-tools \
    && apt-get install -y memcached \
    && apt-get install -y cron
docker-compose.yml:
version: "3.3"

services:
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql:consistent
    ports:
      - "3303:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: vagrant
      MYSQL_DATABASE: wp_database
      MYSQL_USER: root
      MYSQL_PASSWORD: vagrant

  wordpress:
    container_name: my-site
    build: .
    depends_on:
      - db
    volumes:
      - ./my-site-wp:/var/www/html/:consistent
    ports:
      - "8001:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: vagrant
      WORDPRESS_DB_NAME: wp_database

volumes:
  db_data:
  my-site-wp:
...occasionally I need to restart Apache on the container (WordPress)...
Don't do that. It's a really, really bad habit. You're treating the container like a server where you go in and fix things that break. Think of it like it's a single application -- if it breaks, restart the whole dang thing.
docker-compose restart wordpress
Or restart the whole stack, even.
docker-compose restart
Treat your containers like cattle not pets:
Simply put, the “cattle not pets” mantra suggests that work shouldn’t grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle – self-sufficient, easily replaced and manageable by the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.
For each container in the compose file, you can set the command key in the yaml, which defines what the container runs every time it starts up (it overrides the image's default command). On the other hand, RUN commands in the Dockerfile only run while the image is being built. Ex:
db:
  image: mysql:5.7
  volumes:
    - ./db_data:/var/lib/mysql:consistent
  command: # bash command goes here
  ports:
    - "3303:3306"
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: vagrant
    MYSQL_DATABASE: wp_database
    MYSQL_USER: root
    MYSQL_PASSWORD: vagrant
However, this is not what you are after. Why would you mess with one container from another container? The depends_on flag takes care of starting the downstream services in the right order. It seems your memcached instance isn't containerized on its own, and hence you are trying to fit it into the application-level logic, which is the antithesis of Docker. This logic should live at the infrastructure level, on the machine or in the orchestrator (e.g. Kubernetes).
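As a rough sketch of that idea (the memcached image tag and the exact service layout here are illustrative, not taken from your files), memcached could become its own compose service:

services:
  memcached:
    image: memcached:1.6-alpine
    restart: always

  wordpress:
    build: .
    depends_on:
      - db
      - memcached
    restart: always

That way docker-compose restart memcached (or restarting the whole stack) manages its lifecycle, instead of running service memcached start inside the WordPress container.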
GitLab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, then select .gitlab-ci.yml and the Docker template). The file looks like this, and it works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work; you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry URL can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, so here's what I used in GitLab's Project > Settings > CI/CD > Variables:
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>, so the registry URL needs to be updated: use index.docker.io/<username>/<project>.
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it verbose, so I kept the full registry URL.
This answer should also work for other registries; just update the environment variables accordingly. 🙌
To expand on GabLeRoux's answer:
I had issues at the push stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index. part), I was able to push successfully.