How to restart a container instead of deleting the pod and recreating it? - apache

There are cases when you want to restart a specific container instead of deleting the pod and letting Kubernetes recreate it.
I have one pod running an Apache container. I edited the Apache config file (SSL certificate, virtual host, port changes, etc.).
Now I want to restart the apache2 service, but without recreating the pod.
I tried, inside the pod:
service apache2 restart
but that also recreates the pod and the configuration reverts again.

This is not how it is supposed to work.
You should not change anything inside the POD.
If your POD dies or crashes, Kubernetes should just start a new one and everything should work.
Also keep in mind that you cannot scale a POD whose configuration was altered by hand.
Please check the Kubernetes docs: Configure a Pod to Use a ConfigMap.
You can use a ConfigMap to provide the configuration file.
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps.
A ConfigMap can be created from the content of a file:
$ kubectl create configmap config-data --from-file=config_data.txt
or it can be declared in a .yml manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-data
data:
  db_name: colors_db
  table_name: purple
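To tie this back to the Apache question, here is a minimal sketch of mounting such a ConfigMap over the virtual-host config. The ConfigMap name, the key apache-vhost.conf, the mount path, and the httpd image are illustrative assumptions, not taken from the question:
apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
    - name: apache
      image: httpd:2.4                 # assumed image; use whatever Apache image you already run
      volumeMounts:
        - name: apache-config
          # overlay only this one file, leaving the rest of the conf directory intact
          mountPath: /usr/local/apache2/conf/extra/httpd-vhosts.conf
          subPath: apache-vhost.conf
  volumes:
    - name: apache-config
      configMap:
        name: apache-config            # e.g. created with: kubectl create configmap apache-config --from-file=apache-vhost.conf
Changing the ConfigMap and rolling the pod then picks up the new configuration, which is the intended workflow instead of editing files inside a running container.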
Alternatively, this might be done by creating a Secret. A Secret can be declared like this (plain values go under stringData; the data field expects base64-encoded values):
apiVersion: v1
kind: Secret
metadata:
  name: secret-data
type: Opaque
stringData:
  username: my-username
  password: my-password
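The same Secret can also be created imperatively; a sketch using the same illustrative values:
$ kubectl create secret generic secret-data --from-literal=username=my-username --from-literal=password=my-password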
I recommend reading Kubernetes recipe: store nginx config with ConfigMap and reverse-proxy requests from your domain to your Github page.
There are also other options, like mounting a path with the needed configuration into the new POD.
I advise you to check Configure a Pod to Use a PersistentVolume for Storage.

Check this:
You can also create a new Dockerfile to override the Apache Dockerfile and change the CMD line, but it's more complicated.
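A rough sketch of what that could look like, assuming the official httpd image and illustrative file names (not taken from the question):
FROM httpd:2.4
# bake the edited vhost/SSL configuration and certificates into the image
COPY my-vhost.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY server.crt server.key /usr/local/apache2/conf/
# keep Apache in the foreground so the container does not exit
CMD ["httpd-foreground"]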

Related

Custom DNS record and SSL certificate in docker container

I am facing an issue with a self-signed certificate and a DNS record in the hosts file inside a docker container. We have multiple Linux servers running docker swarm. There is a docker service where I need to copy the self-signed certificate and create a DNS record manually with docker exec every time the service restarts. There is a mapped volume for the docker service. How can I map the container's DNS file (/etc/hosts) and /usr/local/share/ca-certificates so that these live in a mapped place and there are no issues when the container restarts?
Use docker configs.
Something like:
docker config create my_public-certificate-v1 public.crt
docker service create --config src=my_public-certificate-v1,target=/usr/local/share/ca-certificates/example.com.crt ...
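For the /etc/hosts part of the question, docker service create also accepts a --host flag (host:ip mappings), which may remove the need for the manual docker exec; the hostname and IP below are placeholders:
docker service create --host myservice.internal:10.0.0.5 --config src=my_public-certificate-v1,target=/usr/local/share/ca-certificates/example.com.crt ...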

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7 (master, worker).
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster is built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via Docker Desktop ui 2.1.0.1
the installer created config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using proxy
Issue: kubectl commands to this endpoint were going through the proxy; I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just making sure that this URL doesn't go through the proxy, in my case in bash I used export no_proxy=$no_proxy,*.docker.internal
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be what is written after server: in your config.
Copy-paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say... myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly it will trust the cluster (I tried renaming the file and it no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
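(For what it's worth, certificate-authority-data is just the base64-encoded PEM certificate, so if you prefer to keep the inline form you could, roughly, with GNU coreutils' base64:)
base64 -w 0 myCert.crt
and paste the single-line output back as the value of certificate-authority-data.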
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as a root user. I fixed it by exporting kubelet.conf to environment variable.
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by adding --insecure-skip-tls-verify to the end of kubectl commands, as a one-time workaround.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME                    STATUS    AGE       VERSION
centos-k8s2             Ready     3d        v1.7.5
localhost.localdomain   Ready     3d        v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you that were late to the thread like I was and none of these answers worked for you I may have the solution:
When I copied over my .kube/config file to my windows 10 machine (with kubectl installed) I didn't change the IP address from 127.0.0.1:6443 to the master's IP address which was 192.168.x.x. (running windows 10 machine connecting to raspberry pi cluster on the same network). Make sure that you do this and it may fix your problem like it did mine.
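In other words, the server line in the copied config has to point at the master, e.g.:
server: https://192.168.x.x:6443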
On GCP
Check your gcloud version:
localMacOS# gcloud version
Run:
localMacOS# gcloud container clusters get-credentials <clusterName> --zone=<zoneName>
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list?
ref: x509, marketplace deployments on GCP, Kubernetes
I got this because I was not connected to the office's VPN
In case of this error you should export the kubecfg, which contains the certs:
kops export kubecfg "your cluster-name"
export KOPS_STATE_STORE=s3://"paste your S3 store"
Now you should be able to access and see the resources of your cluster.
This is an old question but in case that also helps someone else here is another possible reason.
Let's assume that you have deployed Kubernetes with user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, it will give you this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content to a local PC from a master node, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

OpenShift Origin Build - unable to use git as a source

I'm trying to do a simple build of a nodejs app I wrote in OpenShift Origin using the following yaml:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "dyn-kickstart"
spec:
  triggers:
    - type: "GitHub"
      github:
        secret: "secret101"
  source:
    git:
      uri: git@bitbucket.org:serverninja02/dynamic-kickstart.git
    sourceSecret:
      name: "github"
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: .
      forcePull: true
      noCache: true
  output:
    to:
      kind: "DockerImage"
      name: "docker-registry-default.apps.reedfamily.local/serverninja/dynamic-kickstart:v0.0.1"
The command I'm running to create the build:
$ cat dynamic-kickstart.yml | oc create -f -
What I'm running into is that the build service account doesn't seem to be able to access the github url to clone:
Cloning "git@bitbucket.org:serverninja02/dynamic-kickstart.git" ...
error: build error: Warning: Permanently added 'bitbucket.org,192.168.1.81' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I did follow the instructions on creating the ssh-privatekey secret, placing it in the secret store, and linking it to the build service account. I also double-checked the key and verified through ssh forwarding that I can log into the OpenShift node and ssh git@bitbucket.org.
I'm not sure what I'm doing wrong but even with using the http git url and making it a public repo, it still doesn't work as it complains about the peer certificate not being trusted:
Cloning "https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git" ...
error: build error: fatal: unable to access 'https://serverninja02@bitbucket.org/serverninja02/dynamic-kickstart.git/': Peer's certificate issuer has been marked as not trusted by the user.
At this point, I'm unsure where to go with this as OpenShift Origin doesn't seem to want to build anything from git as a source.
Any help or suggestions would be greatly appreciated!
OpenShift Version: 1.3.0
OpenShift Kubernetes Version: v1.3.0+52492b4
This is a flat network behind a router. DNS is on Active Directory with a wildcard entry for *.apps.reedfamily.local.
This is a test bed environment in a .local domain. However I'm using this build to potentially build this out as a POC for my company to host OpenShift.
I figured out the answer to my problem!!! So I'll share:
The /etc/resolv.conf was configured automatically during the build of my OpenShift nodes when I ran openshift-ansible. Unfortunately, there was a search domain placed in /etc/resolv.conf that must have been causing issues.
# Generated by NetworkManager
search apps.reedfamily.local
nameserver 192.168.1.40
Once I removed "search apps.reedfamily.local", that fixed the problem immediately on the next build!

Kubernetes add ca certificate to pods' trust root

In my 10-machine bare-metal Kubernetes cluster, one service needs to call another https-based service which uses a self-signed certificate.
However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails saying it can't validate the x.509 certificate.
All pods are based on Ubuntu docker images. However, the way to add a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) no longer works in this pod. And of course, even if I succeeded in adding the CA cert to the trust root on one pod, it's gone when another pod is spun up.
I searched the Kubernetes documentation and was surprised to find nothing except configuring certs to talk to the API service, which is not what I'm looking for. It should be quite a common scenario if any secure channel is needed between pods. Any ideas?
If you want to bake the cert in at buildtime, edit your Dockerfile adding the commands to copy the cert from the build context and update the trust. You could even add this as a layer to something from docker hub etc.
COPY my-cert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
If you're trying to update the trust at runtime things get more complicated. I haven't done this myself, but you might be able to create a configMap containing the certificate, mount it into your container at the above path, and then use an entrypoint script to run update-ca-certificates before your main process.
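A rough sketch of that entrypoint idea (untested; the script name and the assumption that the image ships update-ca-certificates are mine):
#!/bin/sh
# entrypoint.sh: refresh the system trust store from certs mounted under
# /usr/local/share/ca-certificates/, then hand off to the real command
update-ca-certificates
exec "$@"
The pod's original command would then be passed as arguments, e.g. ENTRYPOINT ["/entrypoint.sh"] plus the main process as CMD in the image.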
Update (edit): read Option 3.
I can think of 3 options to solve your issue if I was in your scenario:
Option 1.) (The only complete solution I can offer, my other solutions are half solutions unfortunately, credit to Paras Patidar/the following site:)
https://medium.com/@paraspatidar/add-ssl-tls-certificate-or-pem-file-to-kubernetes-pod-s-trusted-root-ca-store-7bed5cd683d
1.) Add certificate to config map:
Let's say your pem file is my-cert.pem
kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
2.) Mount the configmap as a volume into the existing CA root location of the container:
Mount that config map's file one-to-one via a volume mount into the directory /etc/ssl/certs/, as a file, for example:
apiVersion: v1
kind: Pod
metadata:
  name: cacheconnectsample
spec:
  containers:
    - name: cacheconnectsample
      image: cacheconnectsample:v1
      volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-cert.pem
          subPath: my-cert.pem
          readOnly: false
      ports:
        - containerPort: 80
      command: [ "dotnet" ]
      args: [ "cacheconnectsample.dll" ]
  volumes:
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
So I believe the idea here is that /etc/ssl/certs/ is the location of tls certs that are trusted by pods, and the subPath method allows you to add a file without wiping out the contents of the folder, which would contain the k8s secrets.
If all pods share this mountPath, then you might be able to add a PodPreset and ConfigMap to every namespace, but that's in alpha and is only helpful for static namespaces. (But if this were true then all your pods would trust that cert.)
Option 2.) (Half solution/idea that doesn't exactly answer your question but solves your problem; I'm fairly confident it will work in theory. It will require research on your part, but I think you'll find it's the best option:)
In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt Free + a public domain name to replace the self-signed cert with a public cert.
(cert-manager's end result is to auto-generate a k8s TLS secret signed by Let's Encrypt Free in your cluster; it has a dns01 challenge that can be used to prove you own the domain, which means you should be able to leverage that solution even without an ingress / even if the cluster is only meant for a private network.)
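For illustration only, a rough sketch with cert-manager v1 APIs and a Cloudflare dns01 solver (issuer/certificate names, email, domain, and the API-token secret are placeholders I have assumed):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-service-tls
spec:
  secretName: internal-service-tls    # the TLS secret cert-manager will create
  dnsNames:
    - internal-service.example.com
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
The calling pods then reach the service via a name covered by a publicly trusted certificate, so nothing has to be added to their trust store.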
Edit: Option 3.) (After gaining more hands on experience with Kubernetes)
I believe that switchboard.op's answer is probably the best one and should be the accepted answer. This "can" be done at runtime, but I'd argue that it should never be done at runtime; doing it at runtime is super hacky and full of edge cases, and there's no universal way of doing it.
Also, it turns out that my Option 1 is only half correct.
Mounting the ca.crt on the pod alone isn't enough. After that file is mounted on the pod you'd need to run a command to trust it. And that means you probably need to override the pod's startup command. For example, you can't just run the default startup command (say, connect to the database); you'd have to overwrite the default startup script with a hand-jammed one that updates the trusted CA certs and then connects to the database. And the problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal at-runtime solution that you can apply to all pods in the cluster to update their ca.crt isn't impossible, but would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible though, if you just need to dynamically update it for a single pod.)
switchboard.op's answer is the way I'd do it if I had to do it: build a new custom docker image with your custom ca.crt baked in and trusted. This is a universal solution, it greatly simplifies the YAML side, and it's relatively easy to do on the docker image side.
Just for curiosity, here is an example manifest utilizing the init container approach.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  # in my case it is the CloudFlare CA used to sign certificates for origin servers
  origin_ca_rsa_root.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  initContainers:
    - name: init
      # image: ubuntu
      # command: ["/bin/sh", "-c"]
      # args: ["apt -qq update && apt -qq install -y ca-certificates && update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      # alternative image with preinstalled ca-certificates utilities
      image: grafana/alpine:3.15.4
      command: ["/bin/sh", "-c"]
      args: ["update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      volumeMounts:
        - name: demo
          # note - we need to change the extension to .crt here
          mountPath: /usr/local/share/ca-certificates/origin_ca_rsa_root.crt
          subPath: origin_ca_rsa_root.pem
          readOnly: false
        - name: tmp
          mountPath: /artifact
          readOnly: false
  containers:
    - name: demo
      # note - even though the init container is Alpine based and this one is Ubuntu based, everything still works
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp
          mountPath: /etc/ssl/certs
          readOnly: false
  volumes:
    - name: demo
      configMap:
        name: demo
    # will be used to pass files between the init container and the actual container
    - name: tmp
      emptyDir: {}
and its usage:
kubectl apply -f demo.yml
kubectl exec demo -c demo -- curl --resolve foo.bar.com:443:10.0.14.14 https://foo.bar.com/swagger/v1/swagger.json
kubectl delete -f demo.yml
notes:
replace foo.bar.com to your domain name
replace 10.0.14.14 to ingress controller cluster IP
you may want to add the -vv flag to see more details
Indeed it is kind of ugly and monstrous, but at least it does work and is proof of concept. Workarounds with simple ConfigMap do not work because curl reads ca-certificates.crt, which is not modified in that approach.

How to configure Let's encrypt certificates for nginx inside a docker image?

I know how to configure Let's Encrypt for nginx. I'm having a hard time configuring Let's Encrypt with nginx inside a docker image. Let's Encrypt certificates are symlinked in the /etc/letsencrypt/live folder and I don't have permission to view the real certificate files inside /etc/letsencrypt/archive.
Can someone suggest a way out ?
I'll add my mistake; maybe someone will find it useful.
I mounted the /live directory of letsencrypt and not the whole letsencrypt directory tree.
The problem with this:
The /live folder just holds symlinks to the /archive folder, which was not mounted into the docker container with my approach.
(In fact I even mounted a /certs folder that symlinked to the live folder, because I had that certs folder in the development environment; same problem, the real (symlinked) files were not mounted.)
All problems went away when I mounted /etc/letsencrypt instead of /live
A part of my docker-compose.yml
services:
  ngx:
    image: nginx
    container_name: ngx
    ports:
      - 80:80
      - 443:443
    links:
      - php-fpm
    volumes:
      - ../../com/website:/var/www/html/website
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./nginx_conf.d/:/etc/nginx/conf.d/
      - ./nginx_logs/:/var/log/nginx/
      - ../whereever/you/haveit/etc/letsencrypt/:/etc/letsencrypt
The last line in that config is the important one. Changed it from
- ./certs/:/etc/nginx/certs/
And /certs was a symlink to /etc/letsencrypt/live in my case. This can not work as I described above.
If anyone is having this problem, I've solved it by mounting the folders into the docker container.
I've mounted both the /etc/letsencrypt and /etc/ssl folders into docker.
Docker has a -v flag to mount volumes. Don't forget to open port 443 for the container.
Depending on how you mount it, it's possible to enable https in the docker container without changing nginx paths.
docker run -d -p 80:80 -p 443:443 -v /etc/letsencrypt/:/etc/letsencrypt/ -v /etc/ssl/:/etc/ssl/ <image name>
If you are using nginx, Docker and Letsencrypt you might like the following Github project: https-portal.
It automates a lot of manual actions, and makes it easy to manage your configurations using docker-compose. From the README:
Features
Test Locally
Redirections
Automatic Container Discovery
Hybrid Setup with Non-Dockerized Apps
Multiple Domains
Serving Static Sites
Share Certificates with Other Apps
HTTP Basic Auth
How it works
obtains an SSL certificate for each of your subdomains from Let's Encrypt.
configures Nginx to use HTTPS (and force HTTPS by redirecting HTTP to HTTPS)
sets up a cron job that checks your certificates every week and renews them if they expire within 30 days.
For some background.. The project was also discussed on Hacker News: HTTPS-Portal: Automated HTTPS server powered by Nginx, Let’s Encrypt and Docker
(Disclaimer: I have no affiliation to the project, just a user)