okd "x509: certificate is valid for 172.30.0.1, not 10.0.0.1" - openshift-origin

I am new to OKD. I have OKD deployed using https://github.com/Kubeinit/kubeinit and it is working fine. I want to access this cluster from another machine (with kubectl commands).
The kubeconfig contains "server: https://api.okdcluster.kubeinit.local:6443", a hostname that is only known to the OKD host.
master1 - 10.0.0.1
worker1 - 10.0.0.2
worker2 - 10.0.0.3
I want this to be changed to the master IP, server: https://10.0.0.1:6443, so that I can access the cluster from the other machine.
Error seen:
$ kubectl get pods -A
Unable to connect to the server: x509: certificate is valid for 172.30.0.1, not 10.0.0.1
It works if I pass --insecure-skip-tls-verify. Is there a way I can add 10.0.0.1 to the certificates?
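A quick way to see what the API server certificate actually covers is to inspect its Subject Alternative Names. A minimal sketch, assuming openssl is available on the client machine (host and port taken from the question):

openssl s_client -connect 10.0.0.1:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# The SAN list will typically show the service IP (172.30.0.1) and DNS
# names such as api.okdcluster.kubeinit.local, but not the node IP 10.0.0.1.
# If the DNS name is listed, one workaround (an assumption, not from the
# post) is to resolve that name locally instead of rewriting the server URL:
echo '10.0.0.1 api.okdcluster.kubeinit.local' | sudo tee -a /etc/hosts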

Related

Minio does not seem to recognize TLS/https certificates

I have been searching for hours now to make Minio work with self-signed TLS certs using Docker.
According to the documentation, certs just need to be placed at /root/.minio/certs/CAs or /root/.minio/ inside the Minio container.
I tried both with no success.
This is how I start Minio (using SaltStack):
minio:
  docker_container.running:
    - order: 10
    - hostname: backup
    - container_name: backup
    - binds:
      - /root/backup:/data
      - /srv/salt/minio/certs:/root/.minio
    - image: minio/minio:latest
    - port_bindings:
      - 10.10.10.1:9000:443
    - environment:
      - MINIO_BROWSER=off
      - MINIO_ACCESS_KEY=BlaBlaBla
      - MINIO_SECRET_KEY=BlaBlaBla
    - privileged: false
    - entrypoint: sh
    - command: -c 'mkdir -p /data/backup && /usr/bin/minio server --address ":443" /data'
    - restart_policy: always
If I do "docker logs minio" I just get to see http instead of https:
Endpoint: http://172.17.0.3:443 http://127.0.0.1:443
Both the public and private keys are mounted at the correct location inside the container, but Minio does not seem to recognize them.
Can somebody help? Do I need to add some extra parameter here?
Thanks in advance.
Per the docs (https://docs.minio.io/docs/how-to-secure-access-to-minio-server-with-tls.html), your keys must be named public.crt and private.key, respectively, and mounted at ~/.minio/certs (e.g. /root/.minio/certs). The CAs directory is for the public certs of other servers you want to trust, for example in a distributed setup.
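A minimal sketch of that layout against the SaltStack state above (file names per the Minio docs; server.crt/server.key are hypothetical source names):

# Place the key pair under the host directory that gets mounted, using
# the names Minio expects:
mkdir -p /srv/salt/minio/certs
cp server.crt /srv/salt/minio/certs/public.crt
cp server.key /srv/salt/minio/certs/private.key
# Then bind that directory to ~/.minio/certs inside the container, i.e.
# change the bind in the state to:
#   - /srv/salt/minio/certs:/root/.minio/certs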
Alternatively, you don't need to set up certs in Minio at all. Run an nginx server in front of it and reverse-proxy the Minio port (e.g. 127.0.0.1:9000), then configure the cert in the nginx server block. That solves the whole problem.
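A minimal sketch of that nginx approach (server name, cert paths, and config location are placeholders, not from the original post):

# Terminate TLS in nginx and proxy plain HTTP to Minio on 127.0.0.1:9000:
cat > /etc/nginx/conf.d/minio.conf <<'EOF'
server {
    listen 443 ssl;
    server_name backup.example.com;                     # placeholder name
    ssl_certificate     /etc/nginx/certs/public.crt;    # placeholder paths
    ssl_certificate_key /etc/nginx/certs/private.key;
    location / {
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
    }
}
EOF
nginx -s reload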

Dokku + letsencrypt: able to get ssl for subdomain, but not root domain

I am using the server-side CLI to get an SSL certificate for my web app (following these instructions: https://github.com/dokku/dokku-letsencrypt).
After following the setup I ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-28 23:12:10,728:INFO:__main__:1317: Generating new account key
2020-04-28 23:12:11,686:INFO:__main__:1343: By using simp_le, you implicitly agree to the CA's terms of service: https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf
2020-04-28 23:12:12,017:INFO:__main__:1406: Generating new certificate private key
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
2020-04-28 23:12:14,757:INFO:__main__:396: Saving account_key.json
2020-04-28 23:12:14,758:INFO:__main__:396: Saving account_reg.json
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
Reproduced so it's easier to read, the error was:
2020-04-28 23:12:14,753:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4241725520
I did a lot of googling around and the most promising post I found on the subject was this one:
https://veryjoe.com/tech/2019/07/06/HTTPS-dokku.html
The post suggested checking my Dokku domains for misconfiguration and looking for missing network listeners.
I ran dokku domains:report to check for the misconfiguration. This returned:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
And I then ran dokku network:report to check for missing listeners:
root@taaalk:~# dokku network:report
=====> taaalk network information
Network attach post create:
Network attach post deploy:
Network bind all interfaces: false
Network web listeners: 172.17.0.4:5000
After talking things through with a friend we tried adding an 'A' record to my DNS with the host 'taaalk.taaalk.co'.
I then ran:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 13:39:58,623:INFO:__main__:1406: Generating new certificate private key
2020-04-30 13:40:03,879:INFO:__main__:396: Saving fullchain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving chain.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving cert.pem
2020-04-30 13:40:03,880:INFO:__main__:396: Saving key.pem
-----> Certificate retrieved successfully.
-----> Installing let's encrypt certificates
-----> Unsetting DOKKU_PROXY_PORT
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000
-----> Setting config vars
DOKKU_PROXY_PORT_MAP: http:80:5000 https:443:5000
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Configuring taaalk.taaalk.co...(using built-in template)
-----> Creating https nginx.conf
Enabling HSTS
Reloading nginx
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
Which was successful.
However, now taaalk.taaalk.co has an SSL, but taaalk.co does not.
I don't know where to go from here. I feel it makes sense to change the vhost from taaalk.taaalk.co to taaalk.co, but I am not sure if this is correct or how to do it. The Dokku documentation does not seem to cover changing the vhost name: http://dokku.viewdocs.io/dokku/configuration/domains/
Thank you for any help.
Update
I changed the vhost to taaalk.co, so I now have:
root@taaalk:~# dokku domains:report
=====> taaalk domains information
Domains app enabled: true
Domains app vhosts: taaalk.co
Domains global enabled: true
Domains global vhosts: taaalk.co
However, I still get the following error:
root@taaalk:~# dokku letsencrypt taaalk
=====> Let's Encrypt taaalk
-----> Updating letsencrypt docker image...
0.1.0: Pulling from dokku/letsencrypt
Digest: sha256:af5f8529c407645e97821ad28eba328f4c59b83b2141334f899303c49fc07823
Status: Image is up to date for dokku/letsencrypt:0.1.0
docker.io/dokku/letsencrypt:0.1.0
Done updating
-----> Enabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
-----> Getting letsencrypt certificate for taaalk...
- Domain 'taaalk.co'
darkhttpd/1.12, copyright (c) 2003-2016 Emil Mikulic.
listening on: http://0.0.0.0:80/
2020-04-30 17:01:12,996:INFO:__main__:1406: Generating new certificate private key
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Debugging tips: -v improves output verbosity. Help is available under --help.
-----> Certificate retrieval failed!
-----> Disabling ACME proxy for taaalk...
[ ok ] Reloading nginx configuration (via systemctl): nginx.service.
done
root@taaalk:~#
Again, reproduced below for ease of reading:
2020-04-30 17:01:46,068:ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
The fix was quite simple. First I made A records for both www and the root of my URL, pointing at my server.
I then set my vhosts to be both taaalk.co and www.taaalk.co with dokku domains:add taaalk www.taaalk.co, etc...
I then removed all the certs associated with taaalk.co with dokku certs:remove taaalk.
I then ran dokku letsencrypt taaalk and everything worked fine.
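Consolidated, a sketch of that sequence (app name and domains as in the question; the A records are set at the DNS provider first):

# A records for taaalk.co and www.taaalk.co point at the server, then:
dokku domains:add taaalk www.taaalk.co   # vhosts cover both names
dokku certs:remove taaalk                # drop the stale certs
dokku letsencrypt taaalk                 # re-issue for all vhosts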
To anyone looking on who tried what Joshua did and still didn't get letsencrypt to generate certs:
My problem was that I didn't have any port mapping for port 80 on dokku, so letsencrypt was unable to communicate with the server to authorise the new cert, giving this error:
ERROR:__main__:1388: CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/4277663330
Challenge validation has failed, see error log.
Silly me - I had removed the http port 80 mapping in dokku as I thought it was unnecessary.
To fix the problem I just added the port mapping again:
dokku proxy:ports-add myapp http:80:4000
(Note: my app connects to port 4000, hence the above; your port may be different.)
And then ran dokku letsencrypt:
dokku letsencrypt myapp
This sequence is important: setting the proxy ports correctly allows letsencrypt to connect and auto-renew the TLS certs again.
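To confirm the mapping before re-running letsencrypt, a quick check (a sketch; proxy:ports is the listing counterpart of proxy:ports-add in this Dokku version):

dokku proxy:ports myapp
# The listing must include an http mapping on host port 80, otherwise
# the HTTP-01 challenge can never reach the app.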

Running an apache container on a port > 1024

I've built a docker image based on httpd:2.4. In my k8s deployment I've defined the following securityContext:
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
In order to get this container to run properly as non-root, Apache needs to be configured to bind to a port > 1024 instead of the default 80. As far as I can tell this means changing Listen 80 in httpd.conf to Listen {some port > 1024}.
When I run the docker image I've built normally (i.e. on the default port 80) I have the following port settings:
deployment
  spec.template.spec.containers[0].ports[0].containerPort: 80
service
  spec.ports[0].targetPort: 80
  spec.ports[0].port: 8080
ingress
  spec.rules[0].http.paths[0].backend.servicePort: 8080
Given these settings the service becomes accessible at the host URL provided in the ingress manifest. Again, this is without the changes to httpd.conf. When I make those changes (using Listen 8000) and add the securityContext section to the deployment, I change the various manifests accordingly:
deployment
  spec.template.spec.containers[0].ports[0].containerPort: 8000
service
  spec.ports[0].targetPort: 8000
  spec.ports[0].port: 8080
ingress
  spec.rules[0].http.paths[0].backend.servicePort: 8080
Yet for some reason, when I try to access a URL that should be working I get a 502 Bad Gateway error. Have I set the ports correctly? Is there something else I need to do?
Check if the pod is Running
kubectl get pods
kubectl logs pod_name
Check if the URL is accessible within the pod
kubectl exec -it <pod_name> -- bash
$ curl http://localhost:8000
If the above didn't work, check your httpd.conf.
Check with the service name
kubectl exec -it <ingress pod_name> -- bash
$ curl http://svc:8080
You can check the ingress logs too.
In order to get this container to run properly as non-root apache needs to be configured to bind to a port > 1024, as opposed to the default 80
You got it; that's the hard requirement for making the Apache container run as non-root, so this change needs to be made at the container level, not in Kubernetes abstractions like the Deployment's Pod spec or the Service/Ingress resource definitions. So the only thing left in your case is to build a custom httpd image with a listening port > 1024. The same approach applies to NGINX Docker containers.
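A minimal sketch of such an image (the sed pattern assumes the stock httpd:2.4 config at /usr/local/apache2/conf/httpd.conf with its default Listen 80 line; the image tag is hypothetical):

cat > Dockerfile <<'EOF'
FROM httpd:2.4
# Bind Apache to an unprivileged port so it can run as a non-root user.
RUN sed -ri 's/^Listen 80$/Listen 8000/' /usr/local/apache2/conf/httpd.conf
EXPOSE 8000
EOF
docker build -t my-httpd:8000 .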
One key piece of information about the containerPort field in the Pod spec, which you are trying to adjust manually, and which is not so apparent: it is there primarily for informational purposes and does not actually open a port at the container level. According to the Kubernetes API reference:
Not specifying a port here DOES NOT prevent that port from being
exposed. Any port which is listening on the default "0.0.0.0" address
inside a container will be accessible from the network. Cannot be updated.
I hope this helps you move on.

How to run remote code as user with certificate from a worker node

I created a user on the master.
First I created a key and certificate for him: dan.key and dan.crt
Then I registered it in Kubernetes:
kubectl config set-credentials dan \
--client-certificate=/tmp/dan.crt \
--client-key=/tmp/dan.key
This is the ~/.kube/config:
users:
- name: dan
  user:
    as-user-extra: {}
    client-certificate: /tmp/dan.crt
    client-key: /tmp/dan.key
I want to be able to run commands from a remote worker as the user I created.
I know how to do it with service account token:
kubectl --server=https://192.168.0.13:6443 --insecure-skip-tls-verify=true --token="<service_account_token>" get pods
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
I followed this question:
kubectl unable to connect to server: x509: certificate signed by unknown authority
I tried what he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
Unable to connect to the server: x509: certificate signed by unknown authority
I copied the certificate and the key to the remote worker and ran:
[workernode tmp]$ kubectl --server=https://192.168.0.13:6443 --client-certificate=/tmp/dan.crt --client-key=/tmp/dan.key get pods
Unable to connect to the server: x509: certificate signed by unknown authority
You were missing the critical piece of data telling kubectl how to trust the https: part of that request, namely --certificate-authority=/path/to/kubernetes/ca.pem
You didn't encounter that error while using --token=... because of the --insecure-skip-tls-verify=true, which you should definitely, definitely not do.
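Concretely, the working invocation would look something like this (the CA path is an assumption; the cluster CA often lives at /etc/kubernetes/pki/ca.crt on the master and must be copied to the worker first):

kubectl --server=https://192.168.0.13:6443 \
  --certificate-authority=/tmp/ca.crt \
  --client-certificate=/tmp/dan.crt \
  --client-key=/tmp/dan.key \
  get pods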
I tried what he wrote:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
But I am still receiving:
You have followed the wrong piece of advice from whatever article you were reading; that --accept-hosts flag only controls the remote hostnames from which kubectl proxy will accept connections, and has nothing to do with TLS at all.

Docker secure connection with ssh port forwarding

I made an SSH tunnel to forward a port on my laptop to a port on the remote host (your-mv.com):
ssh -nfNT -L 3376:your-mv.com:3376 login@server.com
Then I changed DOCKER_HOST and set up the Docker TLS variables:
export DOCKER_HOST=localhost:3376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/my/path
And I ran:
docker ps
But I got an error:
Get https://localhost:3376/v1.26/containers/json: x509: certificate is valid for your-mv.com, not localhost
Could you help me figure out what I did wrong and how to overcome this problem?
Update
The common name on my laptop's cert is subject= /CN=kenenbek. The CA's common name is subject= /CN=cert-authority.com, and the remote host's common name is subject= /CN=your-vm.com.
DOCKER_TLS_VERIFY is set and the certificate carries the Common Name your-mv.com, but DOCKER_HOST points at localhost, so hostname verification fails.
Do not set DOCKER_TLS_VERIFY.
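One reading of that advice, as a sketch: keep the tunnel and the client certs, but use the --tls flag (TLS without server-name verification) instead of DOCKER_TLS_VERIFY. Note this weakens security in much the same way --insecure-skip-tls-verify does for kubectl:

unset DOCKER_TLS_VERIFY
export DOCKER_HOST=tcp://localhost:3376
export DOCKER_CERT_PATH=/my/path
docker --tls ps   # TLS is still used, but the server certificate name is not checked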