How to configure an SSL certificate to be used by Kubernetes with Google Cloud?

I am trying to deploy my app to a Google Cloud cluster using the kubectl command, behind a corporate proxy that requires a ".crt" certificate file for HTTPS requests.
I already ran the gcloud container clusters get-credentials... command, and it also asked for a certificate. I followed Google's instructions, configured my certificate file without any issue, and it worked.
But when I run kubectl get pods, I get the following message:
"Unable to connect to the server: x509: certificate signed by unknown authority"
How can I configure my certificate file to be used by the kubectl command?
I searched for this, but the steps I found were too complicated. Could I just run something like this:
kubectl --set_ca_file /path/to/my/cert
Thank you

The short answer, as far as I know, is no.
Here [1] you can see a step-by-step guide for the easiest way I have found so far. It is not a one-liner, but it is the closest thing to one.
After you have your cert files, you need to run this:
gcloud compute ssl-certificates create test-ingress-1 \
    --certificate [FIRST_CERT_FILE] --private-key [FIRST_KEY_FILE]
Then you need to create your YAML file with the configuration (the link includes two examples) and run this command:
kubectl apply -f [NAME_OF_YOUR_FILE].yaml
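For reference, a minimal sketch of what such a YAML file might look like when referencing the pre-shared certificate created above (the Ingress and Service names are placeholders of mine, not from the linked guide):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "test-ingress-1"
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 80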
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl

Related

MinIO operator on minikube is not working

I'm trying to use the MinIO operator on a minikube (1 node) deployed on an EC2 machine.
The operator deploys correctly, and so does the tenant; everything seems fine until I try to connect to the created tenant.
At that point I receive a 500 Internal Server Error, and I'm unable to create buckets or use the mc client that MinIO provides.
I tried creating the tenant both with the MinIO console (using a port-forward) and with the command-line minio command, and both worked.
This is what I see with kubectl:
mc test
kubectl get all -n minio-tenant-aisync
kubectl get all --all-namespaces
I am new to Kubernetes and MinIO then I don't know if I am missing something, could you help me please?
The first mc command that you are running shows there is something listening on port 9000 of your localhost. However, you are getting a TLS verification error because MinIO by default uses a certificate issued by the local Kubernetes certificate authority, and the returned certificate is not valid for the localhost domain. The solution is to add the --insecure flag to your mc command (and include it in all subsequent commands unless you use a valid certificate), i.e.:
./mc alias set minio https://localhost:9000 [accesskey] [secretkey] --insecure
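Every subsequent mc command then needs the same flag until a valid certificate is installed; for example, to create a bucket (the bucket name is just an example):
./mc mb minio/mybucket --insecure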

Change domain for load balancer's SSL certificates

I am going around in circles for the past hour trying to change the domain for HTTP(S) Load Balancer's SSL certificates.
I can't seem to find an option in the console or CLI to change/update the domains. After creating a new one, I cannot delete the old one because it is attached to the load balancer. To remove the old SSL certificate, I have to delete the LB and its dependencies and go through all the steps to create the load balancer again.
May I know if it is a bug or expected behavior?
Thanks.
Before you can delete an SSL certificate, you must first update each target proxy that references the certificate. For each target proxy, run the appropriate gcloud update command to update the target proxy's CERTIFICATE_LIST such that it no longer includes the SSL certificate you need to delete.
Please find below the steps for replacing SSL certificates.
1. Create a new SSL certificate resource. The new SSL certificate must have a unique name within the project.
2. Update the target proxy so that its list of SSL certificates includes the new SSL certificate in the first position, making it the primary certificate. After the new certificate, include any existing SSL certificates that you want to retain, and exclude the old SSL certificate that you no longer need. To avoid downtime, run a single gcloud command with the --ssl-certificates flag. For example:
For external HTTP(S) load balancers:
Use the gcloud compute target-https-proxies update command with the --global flag:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --global \
    --ssl-certificates=new-ssl-cert,other-certificates \
    --global-ssl-certificates
For internal HTTP(S) load balancers:
gcloud compute target-https-proxies update TARGET_PROXY_NAME \
    --region REGION \
    --ssl-certificates=new-ssl-cert,other-certificates \
    --global-ssl-certificates
For SSL proxy load balancers:
Use the gcloud compute target-ssl-proxies update command with the --ssl-certificates flag:
gcloud compute target-ssl-proxies update TARGET_PROXY_NAME \
    --ssl-certificates=new-ssl-cert,other-certificates
Verify that the load balancer is serving the replacement certificate by running the following OpenSSL command:
echo | openssl s_client -showcerts -connect IP_ADDRESS:443 -verify 99 -verify_return_error
Wait 15 minutes to ensure that the replacement certificate is available to all Google Front Ends (GFEs).
(Optional) Delete the old SSL certificate.
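As a sketch of the bookend commands (the certificate names and key/cert file paths here are illustrative, not from the docs), creating the new certificate resource in step 1 and this optional final deletion might look like:
gcloud compute ssl-certificates create new-ssl-cert \
    --certificate=server.crt --private-key=server.key --global
gcloud compute ssl-certificates delete old-ssl-cert --global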
For further reading, please follow the links below:
Deleting/replacing SSL certificates:
https://cloud.google.com/load-balancing/docs/ssl-certificates/self-managed-certs#delete-ssl-cert-resource
Replacing an existing SSL certificate
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs#replace-ssl

kubectl unable to connect to server: x509: certificate signed by unknown authority

I'm getting an error when running kubectl on one machine (Windows).
The k8s cluster is running CentOS 7, Kubernetes 1.7 (one master, one worker).
Here's my .kube\config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.10.12.7:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:localhost.localdomain
  name: system:node:localhost.localdomain@kubernetes
current-context: system:node:localhost.localdomain@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:localhost.localdomain
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
The cluster was built using kubeadm with the default certificates in the pki directory.
kubectl unable to connect to server: x509: certificate signed by unknown authority
One more solution in case it helps anyone:
My scenario:
- Windows 10
- Kubernetes installed via the Docker Desktop UI 2.1.0.1
- the installer created a config file at ~/.kube/config
- the value for server in ~/.kube/config is https://kubernetes.docker.internal:6443
- using a proxy
Issue: kubectl commands to this endpoint were going through the proxy. I figured it out after running kubectl --insecure-skip-tls-verify cluster-info dump, which displayed the proxy's HTML error page.
Fix: just make sure this URL doesn't go through the proxy; in my case, in bash, I used export no_proxy=$no_proxy,*.docker.internal
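If you use PowerShell rather than bash, a rough (untested) equivalent would be:
$env:NO_PROXY = "$env:NO_PROXY,.docker.internal"
(many tools treat no_proxy entries as domain suffixes, so the leading-dot form is the safer bet there)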
So kubectl doesn't trust the cluster, because for whatever reason the configuration has been messed up (mine included). To fix this, you can use openssl to extract the certificate from the cluster:
openssl.exe s_client -showcerts -connect IP:PORT
IP:PORT should be whatever is written after server: in your config.
Copy and paste everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- (those lines included) into a new text file, say myCert.crt. If there are multiple entries, copy all of them.
Now go to .kube\config and instead of
certificate-authority-data: <wrongEncodedPublicKey>
put
certificate-authority: myCert.crt
(this assumes you put myCert.crt in the same folder as the config file)
If you made the cert correctly, kubectl will trust the cluster (I tried renaming the file, and it no longer trusted the cluster afterwards).
I wish I knew what encoding certificate-authority-data uses, but after a few hours of googling I resorted to this solution, and looking back I think it's more elegant anyway.
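As it turns out, certificate-authority-data is simply the base64-encoded PEM certificate, so if you prefer to keep the inline field you can encode the extracted cert yourself; a sketch assuming GNU coreutils base64 (on macOS, plain base64 without -w0):
base64 -w0 myCert.crt   # paste the output as the value of certificate-authority-data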
Run:
gcloud container clusters get-credentials standard-cluster-1 --zone us-central1-a --project devops1-218400
Here devops1-218400 is my project name; replace it with your project name.
I got the same error while running $ kubectl get nodes as the root user. I fixed it by pointing the KUBECONFIG environment variable at kubelet.conf:
$ export KUBECONFIG=/etc/kubernetes/kubelet.conf
$ kubectl get nodes
In my case, simply adding --insecure-skip-tls-verify to the end of my kubectl commands worked as a one-time fix.
Sorry I wasn't able to provide this earlier; I just realized the cause:
On the master node we were running a kubectl proxy:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*'
I stopped this, and voila, the error was gone.
I'm now able to do
kubectl get nodes
NAME                    STATUS    AGE    VERSION
centos-k8s2             Ready     3d     v1.7.5
localhost.localdomain   Ready     3d     v1.7.5
I hope this helps those who stumbled upon this scenario.
In my case, I resolved this issue by copying the kubelet configuration to my home kube config:
cat /etc/kubernetes/kubelet.conf > ~/.kube/config
This was happening because my company's network does not allow self-signed certificates through. Try switching to a different network.
For those of you who were late to the thread like I was, and for whom none of these answers worked, I may have the solution:
When I copied my .kube/config file over to my Windows 10 machine (with kubectl installed), I didn't change the IP address from 127.0.0.1:6443 to the master's IP address, which was 192.168.x.x (the Windows 10 machine was connecting to a Raspberry Pi cluster on the same network). Make sure that you do this; it may fix your problem like it did mine.
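Instead of editing the file by hand, the same change can be made with kubectl config; a sketch assuming the default kubeadm cluster name kubernetes (substitute your master's real IP):
kubectl config set-cluster kubernetes --server=https://192.168.x.x:6443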
On GCP:
Check your gcloud version:
localMacOS# gcloud version
Then run:
localMacOS# gcloud container clusters get-credentials CLUSTER_NAME --zone=ZONE_NAME
Get CLUSTER_NAME and ZONE_NAME from your console, here: https://console.cloud.google.com/kubernetes/list
ref: x509 errors with Marketplace deployments on GCP / Kubernetes
I got this because I was not connected to the office's VPN.
If you are using kops, you should export the kubecfg, which contains the certs:
export KOPS_STATE_STORE=s3://YOUR_S3_STORE
kops export kubecfg YOUR_CLUSTER_NAME
Now you should be able to access and see the resources of your cluster.
This is an old question, but in case it helps someone else, here is another possible reason.
Let's assume that you deployed Kubernetes as user x. If the .kube dir is under /home/x and you connect to the node as root or as user y, you will get this error.
You need to switch to that user's profile so kubectl can load the configuration from the .kube dir.
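A quick sketch (with user x as in the example above): either switch to that user, or point KUBECONFIG at their config:
su - x -c 'kubectl get nodes'                        # run kubectl as the user who deployed the cluster
KUBECONFIG=/home/x/.kube/config kubectl get nodes    # or stay as root and reuse that user's config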
Update: When copying the ~/.kube/config file from a master node to a local PC, make sure to replace the load balancer's hostname with a valid IP. In my case the problem was related to the DNS lookup.
Hope this helps.

Uploading SSL certificates to IAM

I have 4 certificates which I received from a CA (SSL):
Root CA Certificate - 123.crt
Intermediate CA Certificate - 456.crt
Intermediate CA Certificate - 789.crt
Your PositiveSSL Certificate - 654.crt
I have generated circuit.pem (the private key) and csr.pem, through which I got these certificates.
Now, I want to upload these certificates to IAM using:
aws iam upload-server-certificate --server-certificate-name certificate_object_name --certificate-body file://public_key_certificate_file --private-key file://privatekey.pem --certificate-chain file://certificate_chain_file
AWS - Working with Server Certificates
But I am not able to figure out which one is my server certificate, or how to upload all of my certificates.
Please help me with the above command for my certificates.
I tried:
aws iam upload-server-certificate --server-certificate-name MyServerCertificate --certificate-body file://www_advisorcircuit_com.crt --private-key file://circuit.pem --certificate-chain file://COMODORSAAddTrustCA.crt
I am getting this error:
A client error (InvalidClientTokenId) occurred when calling the UploadServerCertificate operation: The security token included in the request is invalid.
I have to say, getting this to work was a huge pain in the ass. Basically, you are missing the user configuration details. You have to create a user with Amazon's IAM service here: https://console.aws.amazon.com/iam/home. Pay attention to what your region is in the URL; you'll need it later. So create a user, attach a policy (I attached AdministratorAccess), click "Create Access Key", download the credentials for the user, and use them to run:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json
Some caveats on getting the certificate install command to work: make sure the files have readable permissions... I think I specified 664. I specified the .pem extension on all the files... I believe AWS prefers the old-school style key files, so I had to run:
openssl rsa -in my_key.key -text > new_key.pem
An additional hint (because that's what happened to me):
Run echo $AWS_ACCESS_KEY_ID and echo $AWS_SECRET_ACCESS_KEY to check whether these environment variables are set.
No matter what you pass to aws configure, the environment variables will override it.
Configuration Settings and Precedence
Yes, this is tricky even when you have configured all IAM access for a user and are trying to upload a certificate using their access keys.
I ran into this problem many times. Here is how I solved it.
When the user does not have the required IAM access (such as Upload Server Certificate): make sure the user has the right access; maybe try again after granting the user full IAM access.
Of course, the region and the other user details should be correct, as discussed in previous answers.
I was also trying in an older terminal session (one that had been running for more than 24 hours): relaunch the terminal and try the same command. Yes, I observed this issue twice; I simply relaunched the terminal, ran the same command, and it worked.
Command with absolute paths:
aws iam upload-server-certificate --server-certificate-name mycertificate \
    --certificate-body file:///Users/raushan/Downloads/com/certificate.pem \
    --private-key file:///Users/raushan/Downloads/com/private_key.pem \
    --certificate-chain file:///Users/raushan/Downloads/com/CertChain.pem
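Relatedly, if you need to assemble the certificate-chain file yourself from the intermediates listed in the question, the usual order is the leaf's issuer first, then upward (the root is generally not required); a hedged sketch using those file names:
openssl x509 -in 654.crt -noout -subject -issuer   # confirm which intermediate issued the leaf
cat 456.crt 789.crt > CertChain.pem                # intermediates in issuing order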

The command heroku ssl says my domains have no certificate installed

I just want to say that this is not normally something I do, but I have been tasked with it recently...
I have followed the heroku documentation for setting up SSL closely, but I am still encountering a problem.
I have added my cert to heroku using the following command:
heroku certs:add path_to_crt path_to_key
This part seems to work. I receive a message saying:
Adding SSL Endpoint to my_app ... done
I have also set up a CNAME with my hosting service to point to the endpoint associated with the cert command above. However, when I browse to the site I still receive an SSL error. It says my certificate isn't trusted and points to the *.heroku.com certificate, not the one I have just uploaded.
I have noticed that when I execute the following command:
heroku ssl
I receive the following:
my_domain_name has no certificate
My assumption is that there should be a certificate associated with this domain at this point.
Any ideas?
Edit: It appears that I did not wait long enough for the certificate stuff to trickle through the internets... however, my question regarding the "heroku ssl" command still puzzles me.
The heroku ssl command is for legacy certificates:
$ heroku ssl -h
Usage: heroku ssl
list legacy certificates for an app
The command you need is heroku certs, which will output the relevant certificate info for that project.
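For example (the app name here is a placeholder):
$ heroku certs --app my_app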