After having changed the IP configuration of the cluster (all external IPs changed, the internal private IPs remained the same), some kubectl commands no longer work for any container. The pods are all up and running, and seem to find each other without problems. Here is the output:
bronger@penny:~$ time kubectl logs jb-plus--prod-615777041-71s09
Error from server (InternalError): Internal error occurred: Authorization error (user=kube-apiserver-kubelet-client, verb=get, resource=nodes, subresource=proxy)
real 0m30,539s
user 0m0,441s
sys 0m0,021s
Apparently, there is a 30-second timeout, after which the authorisation error appears.
What may cause this?
I run Kubernetes 1.8 with Weave Net.
Based on the symptom, the new IP is missing from the API server certificate. Use the command below to check which names the certificate is valid for:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep DNS
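Since the change was to IP addresses, it may also help to print the full Subject Alternative Name block (DNS names and IP addresses) instead of only the DNS entries:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
If the new external IP is indeed missing, a hedged sketch for regenerating the certificate on a kubeadm-managed cluster (the backup directory and the IP are placeholders; on kubeadm 1.8 the equivalent is the older kubeadm alpha phase certs form of this command):
# Back up the old API server cert and key, then regenerate them with the new external IP as an extra SAN.
mkdir -p /root/pki-backup
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<new-external-ip>
# Restart the kube-apiserver static pod (or the kubelet) so it picks up the new certificate.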
I have successfully created a Graylog server (in a Docker container) that ingests logs from Filebeat on a separate machine.
However, I would of course like to have the messages encrypted. I am attempting to set this up, but I cannot seem to get Graylog to accept the connection; instead it is always reset by peer:
{"log.level":"error","#timestamp":"2023-01-04T15:08:57.746+0100","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":150},"message":"Failed to connect to backoff(async(tcp://<graylog_ip>:5044)): read tcp 192.168.178.99:54372-\><graylog_ip>:5044: read: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}
(Without TLS the connection works as intended: a new line appears in Graylog every time one is added to my test log file.)
Setup Details
I created filebeat.crt and filebeat.key files with openssl. I confirmed that the hostname in the certificate is the same as the hostname of the server that Graylog runs on:
openssl genrsa -out filebeat.key 2048
openssl req -new -x509 -key filebeat.key -out filebeat.crt -days 3650
To my knowledge, a CA should not be required since I copied the key over myself: Filebeat can simply encrypt the data it sends with filebeat.crt, and the server can then decrypt it with filebeat.key (perhaps this is not correct of me to imagine?).
I then copied both files to the server and the local machine. In my Compose file I mounted the key into the Graylog container and restarted it. Then I changed the input configuration that was previously working to have:
bind_address: 0.0.0.0
charset_name: UTF-8
no_beats_prefix: false
number_worker_threads: 12
override_source: <empty>
port: 5044
recv_buffer_size: 1048576
tcp_keepalive: false
tls_cert_file: /etc/graylog/server/filebeat.crt
tls_client_auth: disabled
tls_client_auth_cert_file: <empty>
tls_enable: true
tls_key_file: /etc/graylog/server/filebeat.key
tls_key_password: ********
Then in filebeat I have the following configuration (I also tried converting and using filebeat.pem for the certificate, but no change):
output.logstash:
  hosts: ["<graylog_ip>:5044"]
  ssl.certificate: '/etc/pki/tls/certs/filebeat.crt'
  ssl.key: '/etc/pki/tls/private/filebeat.key'
I really cannot see the issue; any help would be greatly appreciated!
First, try to debug filebeat using
/usr/bin/filebeat -e -d '*' -c filebeat_5.1.2.conf
You will probably discover that a CA is needed, or something like that.
But my best guess is that Filebeat tries to verify the hostname against the certificate name, and your generated certificate may not have a CN identical to the hostname.
The proper solution is to use:
ssl.verification_mode: none
Well, this solution works for me.
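If you would rather keep certificate verification enabled, a hedged alternative is to regenerate the self-signed certificate with the Graylog host's name in it and let Filebeat trust it explicitly (the hostname below is a placeholder; the -addext flag needs OpenSSL 1.1.1 or newer):
# Re-issue the self-signed cert so its CN/SAN matches the name Filebeat connects to.
openssl req -new -x509 -key filebeat.key -out filebeat.crt -days 3650 \
  -subj "/CN=graylog.example.com" \
  -addext "subjectAltName=DNS:graylog.example.com"
# filebeat.yml: connect by that hostname and trust the cert instead of disabling verification.
output.logstash:
  hosts: ["graylog.example.com:5044"]
  ssl.certificate_authorities: ['/etc/pki/tls/certs/filebeat.crt']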
I have been struggling with this issue for a few days. I am trying to connect to my DB from Robo 3T and Studio 3T, but I get the same error with both programs:
Note: I can access the instance by SSH from my terminal, which means that the certificate is fine, the EC2 endpoint is fine, the port, etc.; so the problem should be somewhere else, right?
SSH Tunnel error: I/O error: Not ASN.1 data
Stacktrace:
|/ SSH Tunnel error: I/O error: Not ASN.1 data
|___/ I/O error: Not ASN.1 data
But, as I said before, I can connect by SSH without any issue:
ssh -i "cert.pem" ec2-muyser@ec2-54-244-36-226.us-west-2.compute.amazonaws.com
I checked all the steps described in the AWS article below, and I also disabled TLS in the cluster parameter group, as suggested in point 5, but I am still having the issue.
https://aws.amazon.com/es/premiumsupport/knowledge-center/documentdb-cannot-connect/
I just edited the post to add a few screenshots of my Robo 3T config.
Regards.
I verified the same steps and I am able to connect successfully.
It looks like you are on macOS and didn't select Self-signed Certificate as recommended in the documentation:
https://docs.aws.amazon.com/documentdb/latest/developerguide/robo3t.html
These are two additional settings you need to apply on macOS:
i) If you are on a Linux/macOS client machine, you might have to change the permissions of your private key using the following command:
chmod 400 /fullPathToYourPemFile/.pem
ii) If you are on macOS Catalina or above, choose Self-signed Certificate as the Authentication Method, because macOS does not accept certificates with a validity greater than 825 days.
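To take the GUI tools out of the picture, a hedged check from the terminal (the DocumentDB endpoint, user, and database credentials are placeholders) is to open the SSH tunnel yourself and connect through it:
# Forward local port 27017 through the EC2 host to the DocumentDB cluster endpoint.
ssh -i "cert.pem" -N -L 27017:<docdb-cluster-endpoint>:27017 ec2-muyser@ec2-54-244-36-226.us-west-2.compute.amazonaws.com
# In a second terminal, connect through the tunnel (TLS was disabled in the cluster
# parameter group above, so a plain connection is enough for this test).
mongo --host 127.0.0.1 --port 27017 --username <user> --password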
I have a MicroK8s cluster and expose the API server at my domain.
The server.crt and server.key in /var/snap/microk8s/1079/certs need to be replaced with ones that include my domain.
Otherwise, as expected, I get the error:
Unable to connect to the server: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not mydonaim.com
With the help of cert-manager I have produced certificates and replaced them, and my system works well.
Problem: every time the server is restarted, server.crt and server.key are generated again in /var/snap/microk8s/1079/certs. My custom certs are deleted, making the API server unreachable remotely.
How can I stop the system from doing that all the time?
Workaround?
Should I place my certificates elsewhere and edit config files like /var/snap/microk8s/1079/args/kube-controller-manager with the path to those certificates? Are those config files auto-replaced as well?
Cluster information:
Kubernetes version: 1.16.3
Cloud being used: Bare metal, single-node cluster
Installation method: Ubuntu Server with Snaps
Host OS: Ubuntu 18.04.3 LTS
It looks like there is an existing issue that describes copying and modifying /var/snap/microk8s/current/certs/csr.conf.template to include any extra IP or DNS entries for the generated certificates.
Please be aware of the proposed updates in https://discuss.kubernetes.io/t/services-and-ports/11263/6. The following command needed to be run in my environment:
sudo microk8s refresh-certs
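As a sketch of the csr.conf.template approach, the extra entries go in the [ alt_names ] section of /var/snap/microk8s/current/certs/csr.conf.template (the index numbers below are illustrative; continue from the last existing DNS.N / IP.N entries, then run the refresh-certs command above or restart MicroK8s):
# [ alt_names ] section of csr.conf.template -- illustrative indices
DNS.6 = mydonaim.com
IP.3 = <public-ip>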
When I do a docker pull from inside a container that uses /var/run/docker.sock to run Docker (docker inside docker), I get this error:
FATA[0000] Error response from daemon: v1 ping attempt failed with error: Get https://registry.com:5000/v1/_ping: x509: certificate has expired or is not yet valid. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry registry.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/registry.com:5000/ca.crt
So I followed the instructions and added the ca.crt to that directory, and also added the insecure-registry option to /etc/default/docker, but the error didn't go away.
I wonder where the daemon behind /var/run/docker.sock looks for the cert when I pull from inside the container, especially since pulling works from outside (on the host) with the same config (ca.crt in the right folder and the insecure option also added).
/var/run/docker.sock is not the thing that is looking for a cert. That is simply the socket that you use to communicate with dockerd. When you do a pull, you are asking the docker daemon to go talk to a registry.
Where did you get the ca.crt file? Is it really the signing certificate for your registry.com:5000 server's certificate? Did you put it in /etc/docker/certs.d/registry.com:5000/ca.crt on the host where dockerd is running, or inside the container?
That ca.crt file belongs where the daemon is running. Double check that you have that correct file in the correct place on the host, and that should fix the issue.
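For reference, the layout the daemon expects on the machine where dockerd actually runs, using the registry name from the error message above, is:
# Place the registry's CA certificate where dockerd looks for it.
sudo mkdir -p /etc/docker/certs.d/registry.com:5000
sudo cp ca.crt /etc/docker/certs.d/registry.com:5000/ca.crt
# A daemon restart is only needed if /etc/default/docker (e.g. --insecure-registry) was also changed.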
Got it to work now; the solution is to restart the Docker daemon inside the container. I had actually tried that before, but the Docker service kept going down after the restart, which made me think it was the Docker service from the host.
The reason I could not restart the Docker service is that /var/run/docker.pid still existed, which prevented Docker from starting again. So I deleted that pid file and Docker restarted successfully.
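A hedged sketch of that recovery sequence inside the container, assuming the daemon is managed with the classic service wrapper (which the use of /etc/default/docker suggests):
# Remove the stale pid file left behind by the previous daemon, then restart it.
rm -f /var/run/docker.pid
service docker restart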
I'm trying to renew my SSL certificate, but there is some problem I'm probably missing. After I've done the following steps the server keeps using the old certificate and I don't know why.
Here's what I have done:
Create a new CSR file (domain.csr) and key file (domain.key):
openssl req -new -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr
Copy the CSR file content, paste it to my SSL provider, and get approval.
Get 5 files from them and upload them to the server (domain.der, domain.pem, domain.cer, chain.cer, domain.p7b).
Set, in the Apache ssl.conf file, SSLCertificateFile (domain.cer) and SSLCertificateKeyFile (domain.key); see the sketch after these steps.
Restart Apache.
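For reference, a minimal sketch of the relevant directives (paths are placeholders based on the file names above; on Apache older than 2.4.8 the intermediate chain goes in a separate SSLCertificateChainFile directive):
<VirtualHost *:443>
    # Placeholder paths; point these at wherever the renewed files were uploaded.
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/domain.cer
    SSLCertificateKeyFile   /etc/ssl/private/domain.key
    SSLCertificateChainFile /etc/ssl/certs/chain.cer
</VirtualHost>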
For some reason my server is still using my old certificate.
Is there something I'm doing wrong?
Well, you figured it out yourself, but in case anyone else is in the same situation, here are some of the things you can check.
First up, check whether this works locally by running the following openssl command on the server (a crucial step we skipped!):
openssl s_client -connect localhost:443
This will show the cert presented to the client by Apache. If that's not the right one, then you know the Apache config is at fault. If it is the right one, then the problem lies somewhere downstream.
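If the box serves several names, a hedged variant (the hostname is a placeholder) that selects the right vhost via SNI and prints only the subject and validity dates makes the old-versus-new comparison easier:
echo | openssl s_client -connect localhost:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -subject -dates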
In your case you terminate SSL at the load balancer and forgot to change the cert there. Another issue could be the browser caching the SSL cert (restart it, Ctrl+F5 to force a refresh, or better yet try another browser or a third-party website like ssllabs.com).
Assuming it's a problem with Apache, you need to check the config to confirm that all instances of the cert have been replaced. The command below will show all the vhosts and which config file each is configured in:
/usr/local/apache2/bin/apachectl -S
Alternatively just use standard find and grep unix commands to search your Apache config for the old or new cert:
find /usr/local/apache2/conf -name "*.conf" -exec grep olddomain.cer {} \; -print
Both those commands assume apache is installed in /usr/local/apache2 but change the path as appropriate.
If all looks good and you've definitely restarted Apache, then you can try a full stop and start, as I have noticed that a graceful restart of Apache doesn't always pick up new config. Before starting the web server back up again, check that you can't connect from your browser (to ensure you're connecting to the server you think you're connecting to), and check that the process is down with the following command:
ps -ef | grep httpd
and then finally start it again.
Another thing to check is that the cert you are installing is the one you think it is, using this openssl command to print out the cert details (assuming the cert is in x509 format but there are similar commands for other formats):
openssl x509 -in domain.cer -text
And last but not least, check the Apache log files to see if there are any errors in there, though I would expect that to mean no cert is loaded rather than just the old one.
Good answer from @Barry.
Another aspect is that Apache is not the front-most web server. From this conversation, it is possible that there are other web servers in front of Apache.
Something like nginx. In our case it was AWS ELB, and we had to change the cert in the ELB for the new certificate to be served.
We had a similar problem to what @Akshay describes above.
In order for the server to update the certificates, we had to run some commands for the Google Cloud Compute Engine load balancer:
gcloud compute target-https-proxies update
Hope this helps someone who is using GCP to host Apache.
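For completeness, a hedged sketch of that sequence with placeholder resource names: upload the renewed certificate as a new SSL certificate resource, then point the HTTPS proxy at it.
# Placeholder names: new-cert, my-https-proxy.
gcloud compute ssl-certificates create new-cert \
  --certificate=domain.cer --private-key=domain.key
gcloud compute target-https-proxies update my-https-proxy \
  --ssl-certificates=new-cert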