I have defined an environment variable for a container from a ConfigMap, but I want changes to be applied automatically when the variable's value changes in the ConfigMap.
Maybe we can target an environment variable through a volume path!?
In the following lines I'll try to present an idea (it can be considered a solution, at least for the moment). It consists of mounting the ConfigMap values as a volume:
spec:
  containers:
    - name: <container-name>
      ...
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config # just an example
  volumes:
    - name: config-volume
      configMap:
        name: <name-of-configmap>
        items:
          - key: <key-in-configmap>
            path: keys
As a result we will get the value of our ConfigMap key inside a volume file (/etc/config/keys). We can verify this by executing these commands:
kubectl exec -it <name-of-pod> -- sh   # get a shell to the running container/pod
cat /etc/config/keys                   # print the projected value of the key
Note: there is a delay from the moment the ConfigMap is updated to the moment the keys are projected into the Pod (it can be as long as the kubelet ConfigMap sync period plus the TTL of the ConfigMap cache in the kubelet).
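If the application cannot re-read the mounted file by itself, a rough sketch of how to react to the change (assuming inotify-tools is available in the image, and "reload-my-app" stands in for whatever reload hook your app provides) is to watch the projected directory:
#!/bin/sh
# ConfigMap volume updates arrive via an atomic symlink swap inside the mount dir,
# so watch the directory for create/move/modify events and trigger a reload each time.
while inotifywait -e create,moved_to,modify /etc/config/ >/dev/null 2>&1; do
    reload-my-app   # hypothetical: replace with your app's own reload mechanism
done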
Take a look at this to make it clearer. Best regards.
Propagation of ConfigMap changes has been discussed for a long time and is still not implemented: https://github.com/kubernetes/kubernetes/issues/22368
I suggest using the helm upgrade process (or similar) to just roll out the same version of the app with the new settings. This way you get additional controls: you can do a rolling update, you can roll back, you can do a canary release, and so on.
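If you manage the app with Helm, a common sketch of that (assuming your chart renders the ConfigMap from templates/configmap.yaml) is to embed a checksum of the ConfigMap in the pod template annotations, so every helm upgrade with changed settings also triggers a rolling update:
# deployment.yaml (excerpt)
spec:
  template:
    metadata:
      annotations:
        # re-rendered whenever configmap.yaml changes, which changes the pod spec and forces a rollout
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}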
I would like to achieve something similar to the below, but in AWS EKS.
In my kubeadm Kubernetes cluster, when I have to collect an application log from a pod, I run the command below:
kubectl cp podName:/path/to/application/logs/logFile.log /location/on/master/
But with AWS I am not sure how to collect logs like the above.
One workaround is to persist the logs on S3 with a PV and PVC and then get them from there:
volumeMounts:
  - name: logs-sharing
    mountPath: /opt/myapp/AppServer/logs/container
volumes:
  - name: logs-sharing
    persistentVolumeClaim:
      claimName: logs-sharing-pvc
Another way would be to use a sidecar container with a logging agent.
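As a minimal sketch of that sidecar idea (the image, paths, and names below are only illustrative), the app and a small tailer container share an emptyDir, so the log file ends up on stdout where kubectl logs or a cluster-level agent can pick it up:
containers:
  - name: myapp
    image: <your-app-image>            # placeholder for the application image
    volumeMounts:
      - name: app-logs
        mountPath: /opt/myapp/AppServer/logs
  - name: log-tailer                   # sidecar that streams the file to stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /logs/logFile.log']
    volumeMounts:
      - name: app-logs
        mountPath: /logs
volumes:
  - name: app-logs
    emptyDir: {}
With that in place, kubectl logs <pod-name> -c log-tailer streams the file without copying anything off the node.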
But I would appreciate any workaround that is as easy as the one I mentioned above that I follow with kubeadm.
...as easy as the one I mentioned above that I follow with kubeadm
kubectl cp is a standard Kubernetes command; you can use it with EKS, AKS, GKE, etc.
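As a rough sketch of that on EKS (the cluster name, region, and paths are placeholders): point kubectl at the cluster with the AWS CLI, and kubectl cp then behaves exactly as it does on kubeadm:
# write/update the kubeconfig entry for the EKS cluster
aws eks update-kubeconfig --region us-east-1 --name <cluster-name>
# copy the log file out of the pod, same command as before
kubectl cp podName:/path/to/application/logs/logFile.log ./logFile.log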
I have several VMs running gitlab-runner, and I'm using GitLab CI to deploy microservices onto those VMs. Now I want to monitor those VMs with Prometheus and Grafana, but I need to set up node-exporter/cadvisor etc. on those VMs.
My idea is to use GitLab CI to define a common job for those VMs.
I have already written the docker-compose.yml and .gitlab-ci.yml.
version: '3.8'
services:
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
  cadvisor:
    image: google/cadvisor
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - "8080:8080"
deploy-workers:
  tags:
    - worker
  stage: deploy-workers
  script:
    - docker-compose -f docker-compose.worker.yaml pull
    - docker-compose -f docker-compose.worker.yaml down
    - docker-compose -f docker-compose.worker.yaml up -d
Then I registered the runner on all my VMs with the 'worker' tag.
However, only one worker job is triggered during CI.
I have about 20 VMs to go.
Does anyone have suggestions?
This is probably not a good way to deploy your services onto virtual machines. You don't want to just launch your GitLab CI job and hope that it results in what you want, and managing each VM separately is going to be both tedious and error-prone.
What you probably want is a method that gives you a declarative way to define/describe your infrastructure, the state that infrastructure should be configured into, and the applications running on it.
For example, you could:
Use a proper orchestrator, such as docker swarm or Kubernetes AND/OR
Use a provisioning tool, such as Ansible connected to each VM, or if your VMs run in the cloud, Terraform or similar.
In both these examples, you can leverage these tools from a single GitLab CI job and deploy changes to all of your VMs/clusters at once.
Using docker swarm
For example, instead of running your docker-compose on 20 hosts, you can join all 20 VMs to the same docker swarm.
Then in your compose file, you create a deploy key specifying how many replicas you want across the swarm, including numbers per node. Or use mode: global to simply specify you want one container of the service per host in your cluster.
services:
  node-exporter:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
    # ...
  cadvisor:
    deploy:
      mode: global # deploy exactly one container per node in the swarm
Then running docker stack deploy from any manager node will do the right thing to all your swarm worker nodes. Docker swarm will also automatically restart your containers if they fail.
See deploy reference.
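For a rough idea of the mechanics (the stack name "monitoring" below is just a placeholder), joining the VMs into a swarm and rolling the stack out from a single CI job looks roughly like this:
# on the first VM: initialise the swarm and print the worker join command
docker swarm init
docker swarm join-token worker   # run the printed `docker swarm join ...` on the other VMs
# then, from a manager node (e.g. a single GitLab CI job):
docker stack deploy -c docker-compose.worker.yaml monitoring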
Using swarm (or any orchestrator) has a lot of other benefits, too, like health checking, rollbacks, etc. that will make your deployment process a lot safer and more maintainable.
If you must use a job per host
Set a unique tag for each runner on each VM. Then use a parallel matrix with a job set to each tag.
job:
  parallel:
    matrix:
      - RUNNER: [vm1, vm2, vm3, vm4, vm5] # etc.
  tags:
    - $RUNNER
See run a matrix of parallel jobs
You want to make sure the tag is unique and covers all your hosts, or you may run the same job on the same host multiple times.
This will let you do what you were seeking to do. However, it's not an advisable practice. As a simple example: there's no guarantee that your docker-compose up will succeed and you may just take down your entire cluster all at once.
I run the Redis image with docker-compose.
I passed redis.conf (and Redis says "configuration loaded").
In redis.conf I added a user:
user pytest ><password> ~pytest/* on #set #get
And yet I can communicate with Redis anonymously,
even with this line uncommented:
requirepass <password>
The Redis docs on Security and ACL do not explain how to restrict access for everyone. Probably I do not understand something fundamental.
my docker-compose.yaml:
version: '3'
services:
  redis:
    image: redis:latest
    ports:
      - 6379:6379
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 6000s
      timeout: 30s
      retries: 50
    restart: always
    volumes:
      - redis-db:/data
      - redis.conf:/usr/local/etc/redis/redis.conf
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
volumes:
  redis-db:
  redis.conf:
And yet I can communicate with Redis anonymously, even with this line uncommented
Because there's a default user, and you didn't disable it. If you want to totally disable anonymous access, you should add the following to your redis.conf:
user default off
Secondly, the configuration for user 'pytest' is incorrect. If you want to allow user 'pytest' only the set and get commands on the given key pattern, you should configure it as follows:
user pytest ><password> ~pytest/* on +set +get
You also need to ensure that docker-compose is actually using your config file.
Assuming you have redis.conf in the same directory as your docker-compose.yml, the 'volumes' section in the service declaration would be:
- ./redis.conf:/usr/local/etc/redis/redis.conf
and also remove the named volume declaration at the bottom:
redis.conf:
Users would still be able to connect to Redis, but without AUTH they can't perform any action if you enable
requirepass <password>
The right way to restrict GET, SET operations on the keys pytest/* would be
user pytest ><password> ~pytest/* on +set +get
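As a quick sanity check of the ACLs once Redis has actually loaded your redis.conf (the password below is a placeholder), you can exercise them with redis-cli:
# without credentials, access should now be rejected
redis-cli GET pytest/foo                                        # -> NOAUTH / NOPERM error
# authenticate as the ACL user (Redis 6+ supports --user/--pass)
redis-cli --user pytest --pass '<password>' SET pytest/foo bar  # allowed: +set on ~pytest/*
redis-cli --user pytest --pass '<password>' GET other/key       # rejected: key outside ~pytest/*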
I have a Kubernetes Redis Pod whose data I need to back up/restore via dump.rdb. When restoring, I put dump.rdb under /data and launch the Pod with this config:
containers:
  - name: redis
    volumeMounts:
      - mountPath: /data/
        name: data-volume
volumes:
  - name: data-volume
    hostPath:
      path: /data/
      type: Directory
It can see the dump.rdb from the host's /data dir, but when Redis saves any changes in the Pod, it only updates the /data dir within the Pod, not on the host. My goal is to be able to back up the dump.rdb on the host, so I need the dump.rdb on the host to get updated too. What am I missing here?
Const's question helped to find the solution for Joe.
Joe missed the place where the file was stored.
My suggestion: try using an NFS volume for storing and restoring backups; it may be easier than using the hostPath.
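A minimal sketch of that NFS idea (the server address and export path below are placeholders, and it assumes an NFS export reachable from the nodes): mount the export at /data so dump.rdb is written to the share instead of a node-local directory:
containers:
  - name: redis
    volumeMounts:
      - mountPath: /data
        name: backup-volume
volumes:
  - name: backup-volume
    nfs:
      server: <nfs-server-ip>   # placeholder: address of your NFS server
      path: /exports/redis      # placeholder: exported directory holding dump.rdb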
I need to run two identical containers behind Traefik which have to accept requests coming in on multiple ports. To do this I am using Docker service labels. The problem I am running into is that when I use Docker service labels and try to scale up to two containers, I get an error message about the backend already being defined.
Using the normal labels (traefik.frontend, traefik.port etc.) works fine, but adding the extra labels (traefik.whoami.frontend, traefik.whoami.port etc.) seems to break things.
Docker compose file:
version: '2'
services:
  whoami:
    image: emilevauge/whoami
    networks:
      - web
    labels:
      - "traefik.http.frontend.rule=Host:whoami.docker.localhost"
      - "traefik.http.port=80"
      - "traefik.http.frontend.entryPoints=http"
      - "traefik.http.frontend.backend=whoami"
      - "traefik.soap.frontend.rule=Host:whoami.docker.localhost"
      - "traefik.soap.port=8443"
      - "traefik.soap.frontend.entryPoints=soap"
      - "traefik.soap.frontend.backend=whoami"
networks:
  web:
    external:
      name: traefik_webgateway
Scale up:
$ docker-compose scale whoami=2
Creating and starting whoami_whoami_2 ... done
Traefik error log:
proxy_1 | time="2017-10-23T15:37:16Z" level=error msg="Near line 39 (last key parsed 'backends.backend-whoami.servers'): Key 'backends.backend-whoami.servers.service' has already been defined."
Can anyone tell me what I'm doing wrong here or if there is another way to map two ports to a container?
Thanks!
There was a bug with Docker replicas management.
A fix will be merged in the next release: https://github.com/containous/traefik/pull/2314.