I'm trying to build a Rancher cluster (3 nodes). I set it up with a Rancher-generated self-signed SSL certificate. Everything works fine except the cattle-node-agent pods, which log this:
level=fatal msg="Get https://rancher-test.mycompany.com: x509: certificate is valid for ingress.local, not rancher-test.mycompany.com".
I set everything up according to the documentation, using the official Helm repositories for the Rancher deployment and cert-manager. Rancher version 2.4.5 (stable).
All hosts in the cluster can resolve rancher-test.mycompany.com. For this test I don't want to use a CA-signed certificate.
CERT-MANAGER install via: helm upgrade --install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.0
RANCHER-SERVER install via: helm upgrade --install rancher --namespace cattle-system tmp/rancher/ --set hostname=host1 --set hostname2=host2 --set hostname3=host3 --set replicas=3
Does anyone have a similar issue? Thank you.
It looks like you are getting the ingress controller's default ("fake") certificate, which is only valid for ingress.local.
Please try the following: helm upgrade --install rancher --namespace cattle-system tmp/rancher/ --set hostname=rancher-test.mycompany.com --set replicas=3
Also, please run kubectl -n cattle-system get ingress rancher -o yaml and verify the correct hostname.
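For completeness, one way to confirm you are being served the ingress default certificate is to inspect the SANs on whatever cert the endpoint presents. The snippet below fabricates a cert whose only SAN is ingress.local, just to show what the check looks like; against the real server you would run openssl s_client -connect rancher-test.mycompany.com:443 instead of reading a local file:

```shell
# Make a demo cert whose only SAN is ingress.local (mimicking the ingress
# controller's default certificate; requires OpenSSL 1.1.1+ for -addext):
openssl req -x509 -newkey rsa:2048 -nodes \
  -subj "/CN=Kubernetes Ingress Controller Fake Certificate" \
  -addext "subjectAltName=DNS:ingress.local" \
  -keyout /tmp/fake.key -out /tmp/fake.crt -days 1 2>/dev/null

# Listing its SANs shows it is not valid for rancher-test.mycompany.com:
openssl x509 -in /tmp/fake.crt -noout -ext subjectAltName
```

If the SAN list printed against your real endpoint shows only ingress.local, the hostname was not set correctly on the Rancher chart.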
Related
I'm having some difficulty understanding Consul end-to-end TLS. For reference, I'm using Consul in Kubernetes (via the hashicorp/consul Helm chart). Only one datacenter and Kubernetes cluster - no external parties or concerns.
I have configured my override values.yaml file like so:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
All other values are as default from the shipped values.yaml file.
This works, and the Consul client logs suggest that all agents are connecting nicely over TLS, with the relevant certs and keys being created (as I understand it) by Consul's auto-encrypt feature.
What I don't understand is how to initiate an HTTPS connection from an application on Kubernetes, running in a Pod, to a Consul server. Since the Pod's container (presumably) does not have the Consul root CA cert in its trust store, all HTTPS calls fail, as per the wget example below:
# Connect to Pod:
laptop$> kubectl exec -it my-pod sh
# Attempt valid HTTPS connection:
my-pod$> wget -q -O - https://consul.service.consul:8501
Connecting to consul.service.consul:8501 (10.110.1.131:8501)
ssl_client: consul.service.consul: certificate verification failed: unable to get local issuer certificate
wget: error getting response: Connection reset by peer
# Retry, but ignore certificate validity issues:
my-pod$> wget --no-check-certificate -q -O - https://consul.service.consul:8501/v1/status/leader
"10.110.1.131:8300"
How am I supposed to enforce end-to-end (verified) HTTPS connections from my apps on Kubernetes to Consul if the container does not recognize the certificate as valid?
Am I misunderstanding something about certificate propagation?
Many thanks - Aaron
Solved, with thanks to HashiCorp on their Consul discussion forum.
Create a Kubernetes secret named consul with a key named CONSUL_GOSSIP_ENCRYPTION_KEY and an appropriate encryption key value.
Generate the value using consul keygen.
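If the consul binary isn't handy, a key of the same shape (32 random bytes, base64-encoded, which is what consul keygen emits) can be produced with openssl; the kubectl line is commented out because it assumes cluster access:

```shell
# Generate a 32-byte, base64-encoded gossip key (same format as `consul keygen`):
key=$(openssl rand -base64 32)
echo "$key"

# Store it in the secret the chart's values reference (assumes kubectl access):
# kubectl create secret generic consul \
#   --from-literal=CONSUL_GOSSIP_ENCRYPTION_KEY="$key"
```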
Install the hashicorp/consul Helm chart with a values-override.yaml such as the one below:
global:
  datacenter: sandbox
  gossipEncryption:
    secretName: "consul"
    secretKey: "CONSUL_GOSSIP_ENCRYPTION_KEY"
  tls:
    enabled: true
    httpsOnly: true
    enableAutoEncrypt: true
    serverAdditionalDNSSANs: ["'consul.service.consul'"]
server:
  replicas: 3
  bootstrapExpect: 3
  storage: 20Gi
dns:
  clusterIP: 172.20.53.53
ui:
  service:
    type: 'LoadBalancer'
syncCatalog:
  enabled: true
Create an example Pod spec to represent our application.
Ensure it mounts the Consul server CA cert secret.
Ensure the Pod’s container has HOST_IP exposed as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: test-pod
spec:
  volumes:
    - name: consul-consul-ca-cert
      secret:
        secretName: consul-consul-ca-cert
  hostNetwork: false
  containers:
    - name: consul-test-pod
      image: alpine
      imagePullPolicy: IfNotPresent
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: ["/bin/sh"]
      args: ["-c", "while true; do sleep 24h; done"]
      volumeMounts:
        - name: consul-consul-ca-cert
          mountPath: /consul/tls/ca
Upon creation of the Pod, kubectl exec into it, and ensure the ca-certificates and curl packages are installed (I’m using Alpine Linux in this example).
(curl is purely for testing purposes)
#> apk update
#> apk add ca-certificates curl
Copy the mounted Consul server CA certificate into /usr/local/share/ca-certificates/ and execute update-ca-certificates to add it to the system root CA store.
#> cp /consul/tls/ca/tls.crt /usr/local/share/ca-certificates/consul-server-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul server is now accessible (and trusted) over HTTPS as below:
#> curl https://consul.service.consul:8501/v1/status/leader
## No TLS errors ##
We also want to talk to the Consul client (instead of the server) over HTTPS, for performance reasons.
Since the Consul client has its own CA cert, we need to retrieve that from the server.
This requires the consul-k8s binary, so we need to get that.
#> cd /usr/local/bin
#> wget https://releases.hashicorp.com/consul-k8s/0.15.0/consul-k8s_0.15.0_linux_amd64.zip # (or whatever latest version is)
#> unzip consul-k8s_0.15.0_linux_amd64.zip
#> rm consul-k8s_0.15.0_linux_amd64.zip
Get the Consul client CA cert and install it via update-ca-certificates :
#> consul-k8s get-consul-client-ca -server-addr consul.service.consul -server-port 8501 -output-file /usr/local/share/ca-certificates/consul-client-ca.crt
#> update-ca-certificates # might give a trivial warning - ignore it
The Consul client is now accessible (and trusted) over HTTPS as below:
#> curl https://$HOST_IP:8501/v1/status/leader
## No TLS errors ##
We can also access the Consul KV service from the client without issue:
#> curl https://$HOST_IP:8501/v1/kv/foo/bar/baz
## No TLS errors ##
Naturally, all of the above should be automated by the implementer. These manual steps are purely for demonstration purposes.
I've installed an OpenShift cluster for testing purposes, and since I'm behind a corporate network, I need to include some Root Certificates in any Pod that wants to make external requests. What can I do to inject those certificates automatically at Pod creation?
I'm running OpenShift Origin (OKD) 3.11 in a local CentOS 7 VM, with a GlusterFS storage provisioning on top of it. I already had multiple issues with the VM itself, which gave me errors when trying to access the network: x509: certificate signed by unknown authority. I fixed that by adding my corporation root certificates in /etc/pki/ca-trust/source/anchors and by running the update-ca-trust command.
When I was running for example the docker-registry deployment in the OpenShift cluster, since the created Pods didn't have access to the host root certificates, they gave again x509: certificate signed by unknown authority errors when trying to pull images from docker.io. I resolved that by creating a ConfigMap containing all needed root certificates, and mounting them in a volume on the registry deployment config.
I thought I only needed to mount a volume in every deployment config that wants to reach the external network. But then I provisioned a Jenkins instance and realised something new: when a pipeline runs, Jenkins creates a Pod with an adapted agent (for example, a Spring Boot app will need a Maven agent). Since I have no control over those created pods, they can't have the mounted volume with all the root certificates. So for instance I have a pipeline that runs helm init --client-only before releasing my app chart, and this command gives an x509: certificate signed by unknown authority error, because that pod doesn't have the root certificates.
x509 Error screenshot
I found that a PodPreset could be the perfect way to resolve my problem, but when I enable this feature in the cluster and create the PodPreset, no new pod is populated. I read on the OpenShift documentation that PodPresets are no longer supported as of 3.7, so I think that it could be the reason it is not working.
OpenShift docs screenshot
Here is my PodPreset definition file:
kind: PodPreset
apiVersion: settings.k8s.io/v1alpha1
metadata:
  name: inject-certs
spec:
  selector: {}
  volumeMounts:
    - mountPath: /etc/ssl/certs/cert1.pem
      name: ca
      subPath: cert1.pem
    - mountPath: /etc/ssl/certs/cert2.pem
      name: ca
      subPath: cert2.pem
    - mountPath: /etc/ssl/certs/cert3.pem
      name: ca
      subPath: cert3.pem
    - mountPath: /etc/ssl/certs/cert4.pem
      name: ca
      subPath: cert4.pem
    - mountPath: /etc/ssl/certs/cert5.pem
      name: ca
      subPath: cert5.pem
    - mountPath: /etc/ssl/certs/cert6.pem
      name: ca
      subPath: cert6.pem
  volumes:
    - configMap:
        defaultMode: 420
        name: ca-pemstore
      name: ca
I don't know if there is any way to make PodPresets work on OpenShift 3.11, or if there is another solution to inject certs file like this in created pods. This would be really great.
The Red Hat CoP on GitHub contains a project with a PodPreset admission webhook controller you can use:
https://github.com/redhat-cop/podpreset-webhook
Basically, you deploy that project and change the apiVersion in your PodPreset to apiVersion: redhatcop.redhat.io/v1alpha1.
I'm using Kubernetes and I recently updated my admin certs used in the kubeconfig. However, after I did that, all the helm commands fail thus:
Error: Get https://cluster.mysite.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: x509: certificate signed by unknown authority
kubectl works as expected:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-1-0-34.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-1-51.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
ip-10-1-10-120.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-10-135.eu-central-1.compute.internal Ready <none> 27d v1.7.10+coreos.0
ip-10-1-11-71.eu-central-1.compute.internal Ready <none> 42d v1.7.10+coreos.0
ip-10-1-12-199.eu-central-1.compute.internal Ready <none> 8d v1.7.10+coreos.0
ip-10-1-2-110.eu-central-1.compute.internal Ready master 42d v1.7.10+coreos.0
As far as I've been able to read, helm is supposed to use the same certificates as kubectl, which leaves me wondering why kubectl works but helm doesn't.
This is a production cluster with internal releases handled through helm charts, so solving this is imperative.
Any hints would be greatly appreciated.
As a workaround you can try to disable certificate verification. Helm uses the kube config file (by default ~/.kube/config). You can add insecure-skip-tls-verify: true for the cluster section:
clusters:
- cluster:
    server: https://cluster.mysite.com
    insecure-skip-tls-verify: true
  name: default
Did you already try to reinstall helm/tiller?
kubectl delete deployment tiller-deploy --namespace kube-system
helm init
Also check if you have configured an invalid certificate in the cluster configuration.
In my case, I was running a single self-managed cluster, and the config file also contained certificate-authority-data, so following the above answer threw the error below:
Error: Kubernetes cluster unreachable: Get "https://XX.XX.85.154:6443/version?timeout=32s": x509: certificate is valid for 10.96.0.1, 172.31.25.161, not XX.XX.85.154
And my config was:
- cluster:
    certificate-authority-data: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
So I had to remove the certificate-authority-data:
- cluster:
    server: https://54.176.85.154:6443
    insecure-skip-tls-verify: true
Use --insecure-skip-tls-verify to skip TLS verification via the command line:
helm repo add stable --insecure-skip-tls-verify https://charts.helm.sh/stable
In my case the error was caused by an untrusted certificate from the Helm repository.
Downloading the certificate and specifying it using the --ca-file option solved the issue (at least in Helm version 3).
helm repo add --ca-file /path/to/certificate.crt repoName https://example/repository
--ca-file string, verify certificates of HTTPS-enabled servers using this CA bundle
Adding insecure-skip-tls-verify: true under the cluster entry in my /home/centos/.kube/config file fixed my issue.
My config file now looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/centos/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Nov 2021 20:51:44 EDT
        provider: minikube.sigs.k8s.io
        version: v1.23.2
      name: cluster_info
    server: https://192.168.49.2:8443
    insecure-skip-tls-verify: true
  name: minikube
contexts:
I encountered an edge case for this. You can also get this error if you have multiple kubeconfig files referenced in the KUBECONFIG variable, and more than one file has clusters with the same name.
In my case it was an old version of Helm (v3.6.3); after I upgraded to v3.9.0 (brew upgrade helm), everything worked again.
Although adding the repo with --ca-file did the trick, when I then tried to download from that repo with the command below, I still got x509: certificate signed by unknown authority:
helm dependency update helm/myStuff
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "myRepo" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 18 charts
Downloading myService from repo https://myCharts.me/
Save error occurred: could not download https://myCharts.me/stuff.tgz ...
x509: certificate signed by unknown authority
Deleting newly downloaded charts, restoring pre-update state
What I needed to do, apart from adding the repo with --ca-file, was to download the repository certificate and install it as the current user (on Windows):
Place all certificates in the following store: Trusted Root Certification Authorities.
After installing the certificate I also needed to restart the computer. After the restart, open the browser and paste the repo URL; it should connect without a warning, trusting the site (this is how you know you installed the certificate successfully).
You can then go ahead and run the command; it should pick up the certificate this time.
helm dependency update helm/myStuff
....
Saving 18 charts
Downloading service1 from repo https://myCharts.me/
Downloading service2 from repo https://myCharts.me/
....
I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running on CentOS 7, Kubernetes 1.7, with one master and one worker.
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
the cluster is built using kubeadm with the default certificates on the pki directory
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
using Windows 10
Kubernetes installed via Docker Desktop ui 2.1.0.1
the installer created config file at ~/.kube/config
the value in ~/.kube/config for server is https://kubernetes.docker.internal:6443
using proxy
Issue: kubectl commands to this endpoint were going through the proxy; I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just making sure that this URL doesn't go through the proxy, in my case in bash I used export no_proxy=$no_proxy,*.docker.internal
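For reference, a quick way to check whether proxy variables are in play, plus the bypass described above (the docker.internal pattern is specific to Docker Desktop):

```shell
# Show any proxy-related environment variables currently set:
env | grep -i proxy || echo "no proxy variables set"

# Exempt the Docker Desktop endpoint from the proxy:
export no_proxy="$no_proxy,*.docker.internal"
echo "$no_proxy"
```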
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy paste stuff starting from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (these lines included) into a new text file, say... myCert.crt If there are multiple entries, copy all of them.
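The copy-paste step can also be scripted: sed can pull every PEM block out of the s_client output. The snippet below demonstrates this on a locally generated certificate standing in for the real s_client output:

```shell
# Stand-in for `openssl s_client -showcerts` output: a file containing a cert.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/showcerts.txt -days 1 2>/dev/null

# Extract everything between the BEGIN/END CERTIFICATE markers (inclusive):
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
  /tmp/showcerts.txt > /tmp/myCert.crt

# Sanity-check that the result parses as a certificate:
openssl x509 -in /tmp/myCert.crt -noout -subject
```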
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(it assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly it will trust the cluster (tried renaming the file and it no longer trusted afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
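For reference, certificate-authority-data is simply the PEM certificate base64-encoded on a single line, which is easy to verify locally (the throwaway cert below is purely illustrative):

```shell
# Create a throwaway self-signed cert to stand in for the cluster CA:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# certificate-authority-data is the PEM file base64-encoded on one line:
data=$(base64 /tmp/demo-ca.crt | tr -d '\n')

# Decoding it yields the original PEM again:
printf '%s' "$data" | base64 -d > /tmp/decoded.crt
diff /tmp/demo-ca.crt /tmp/decoded.crt && echo "round-trip ok"
```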
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
here devops1-218400 is my project name. Replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, it simply worked by appending --insecure-skip-tls-verify to the kubectl command, as a one-off.
Sorry I wasn't able to provide this earlier, I just realized the cause:
So on the master node we're running a kubectl proxy
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this and voila the error was gone.
I'm now able to do
kubectl get nodes
NAME STATUS AGE VERSION
centos-k8s2 Ready 3d v1.7.5
localhost.localdomain Ready 3d v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you that were late to the thread like I was and none of these answers worked for you I may have the solution:
When I copied over my .kube/config file to my windows 10 machine (with kubectl installed) I didn't change the IP address from 127.0.0.1:6443 to the master's IP address which was 192.168.x.x. (running windows 10 machine connecting to raspberry pi cluster on the same network). Make sure that you do this and it may fix your problem like it did mine.
On GCP:
Check your SDK: gcloud version
Then run:
gcloud container clusters get-credentials 'clusterName' --zone='zoneName'
Get clusterName and zoneName from your console, here: https://console.cloud.google.com/kubernetes/list
I got this because I was not connected to the office's VPN
In case of this error you should export the kubecfg, which contains the certs: kops export kubecfg "your cluster-name" and export KOPS_STATE_STORE=s3://"paste your S3 store".
Now you should be able to access and see the resources of your cluster.
This is an old question but in case that also helps someone else here is another possible reason.
Let's assume that you have deployed Kubernetes with user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so Kubernetes can load the configuration from the .kube dir.
Update: when copying the ~/.kube/config file content from a master node to a local PC, make sure to replace the hostname of the load balancer with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.
In my 10-machine bare-metal Kubernetes cluster, one service needs to call another HTTPS-based service which uses a self-signed certificate.
However, since this self-signed certificate is not added to the pods' trusted root CAs, the call fails, saying it can't validate the x.509 certificate.
All pods are based on Ubuntu Docker images. However, the usual way to add a CA cert to the trust list on Ubuntu (using dpkg-reconfigure ca-certificates) no longer works in these pods. And even if I succeeded in adding the CA cert to the trust root on one pod, it would be gone when another pod is started.
I searched the Kubernetes documentation and was surprised to find nothing except configuring certs to talk to the API service, which is not what I'm looking for. This should be quite a common scenario whenever a secure channel is needed between pods. Any ideas?
If you want to bake the cert in at build time, edit your Dockerfile, adding commands to copy the cert from the build context and update the trust store. You could even add this as a layer on top of something from Docker Hub, etc.
COPY my-cert.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
If you're trying to update the trust at runtime things get more complicated. I haven't done this myself, but you might be able to create a configMap containing the certificate, mount it into your container at the above path, and then use an entrypoint script to run update-ca-certificates before your main process.
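A rough sketch of that entrypoint-script idea; the mount path /etc/mounted-ca and the file names are assumptions, and update-ca-certificates is the Debian/Ubuntu/Alpine tool:

```shell
# Write an entrypoint that trusts a mounted CA cert before exec'ing the real command.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
set -e
# /etc/mounted-ca is where the ConfigMap with the cert is assumed to be mounted
if [ -f /etc/mounted-ca/ca.crt ]; then
  cp /etc/mounted-ca/ca.crt /usr/local/share/ca-certificates/extra-ca.crt
  update-ca-certificates
fi
# hand off to whatever command the container was given
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh
sh -n /tmp/entrypoint.sh && echo "entrypoint syntax ok"
```

In the pod spec you would then set this script as the container's command, with the original process as its arguments.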
Update (see option 3 below):
I can think of 3 options to solve your issue if I were in your scenario:
Option 1.) (The only complete solution I can offer; my other solutions are only half solutions, unfortunately. Credit to Paras Patidar/the following site:)
https://medium.com/@paraspatidar/add-ssl-tls-certificate-or-pem-file-to-kubernetes-pod-s-trusted-root-ca-store-7bed5cd683d
1.) Add the certificate to a ConfigMap:
Let's say your PEM file is my-cert.pem:
kubectl -n <namespace-for-config-map-optional> create configmap ca-pemstore --from-file=my-cert.pem
2.) Mount the ConfigMap as a volume at the existing CA root location of the container:
Mount that ConfigMap's file in a one-to-one file relationship via a volume mount into the directory /etc/ssl/certs/, for example:
apiVersion: v1
kind: Pod
metadata:
  name: cacheconnectsample
spec:
  containers:
    - name: cacheconnectsample
      image: cacheconnectsample:v1
      volumeMounts:
        - name: ca-pemstore
          mountPath: /etc/ssl/certs/my-cert.pem
          subPath: my-cert.pem
          readOnly: false
      ports:
        - containerPort: 80
      command: [ "dotnet" ]
      args: [ "cacheconnectsample.dll" ]
  volumes:
    - name: ca-pemstore
      configMap:
        name: ca-pemstore
So I believe the idea here is that /etc/ssl/certs/ is the location of tls certs that are trusted by pods, and the subPath method allows you to add a file without wiping out the contents of the folder, which would contain the k8s secrets.
If all pods share this mountPath, then you might be able to add a PodPreset and ConfigMap to every namespace, but PodPresets are in alpha and are only helpful for static namespaces. (If that worked, though, all your pods would trust that cert.)
Option 2.) (A half solution/idea that doesn't exactly answer your question but solves your problem. I'm fairly confident it will work in theory, though it will require research on your part; I think you'll find it's the best option:)
In theory you should be able to leverage cert-manager + external-dns + Let's Encrypt (free) + a public domain name to replace the self-signed cert with a public cert.
(cert-manager's end result is to auto-generate a k8s TLS secret signed by Let's Encrypt in your cluster; it has a dns01 challenge that can be used to prove you own the domain, which means you should be able to leverage that solution even without an ingress / even if the cluster is only meant for a private network.)
Edit: Option 3.) (After gaining more hands on experience with Kubernetes)
I believe that switchboard.op's answer is probably the best/should be the accepted answer. This "can" be done at runtime, but I'd argue that it should never be: doing it at runtime is super hacky and full of edge cases, and there's no universal way of doing it.
It also turns out that my Option 1 is only half correct.
Mounting the ca.crt on the pod alone isn't enough. After that file is mounted, you need to run a command to trust it, which means you probably need to override the pod's startup command. For example, you can't just connect to the database (the default startup command) and then update the trusted CA certs; you'd have to overwrite the default startup script with a hand-jammed one that updates the trusted CA certs and then connects to the database. The problem is that Ubuntu, RHEL, Alpine, and others have different locations where you have to mount the CA certs, and sometimes different commands to trust them, so a universal runtime solution you could apply to all pods in the cluster isn't impossible, but would require tons of if statements and mutating webhooks/complexity. (A hand-crafted per-pod solution is very possible, though, if you just need to dynamically update it for a single pod.)
switchboard.op's answer is the way I'd do it if I had to: build a new custom Docker image with your custom ca.crt baked in as trusted. This is a universal solution that greatly simplifies the YAML side, and it's relatively easy to do on the Docker image side.
Just out of curiosity, here is an example manifest utilizing the init container approach.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo
data:
  # in my case it is the CloudFlare CA used to sign certificates for origin servers
  origin_ca_rsa_root.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    name: demo
spec:
  nodeSelector:
    kubernetes.io/os: linux
  initContainers:
    - name: init
      # image: ubuntu
      # command: ["/bin/sh", "-c"]
      # args: ["apt -qq update && apt -qq install -y ca-certificates && update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      # alternative image with preinstalled ca-certificates utilities:
      image: grafana/alpine:3.15.4
      command: ["/bin/sh", "-c"]
      args: ["update-ca-certificates && cp -r /etc/ssl/certs/* /artifact/"]
      volumeMounts:
        - name: demo
          # note - we need to change the extension to .crt here
          mountPath: /usr/local/share/ca-certificates/origin_ca_rsa_root.crt
          subPath: origin_ca_rsa_root.pem
          readOnly: false
        - name: tmp
          mountPath: /artifact
          readOnly: false
  containers:
    - name: demo
      # note - even though the init container is alpine-based and this one is ubuntu-based, everything still works
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp
          mountPath: /etc/ssl/certs
          readOnly: false
  volumes:
    - name: demo
      configMap:
        name: demo
    # will be used to pass files between the init container and the actual container
    - name: tmp
      emptyDir: {}
and its usage:
kubectl apply -f demo.yml
kubectl exec demo -c demo -- curl --resolve foo.bar.com:443:10.0.14.14 https://foo.bar.com/swagger/v1/swagger.json
kubectl delete -f demo.yml
notes:
replace foo.bar.com to your domain name
replace 10.0.14.14 to ingress controller cluster IP
you may want to add -vv flag to see more details
Indeed it is kind of ugly and monstrous, but at least it works and is a proof of concept. Workarounds with a simple ConfigMap do not work, because curl reads ca-certificates.crt, which is not modified by that approach.