Kubernetes x509 authentication to multiple clusters: error: You must be logged in to the server (Unauthorized)

I am currently following this tutorial: https://betterprogramming.pub/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe
My goal is to enable access to multiple Kubernetes clusters using client x509 certificates. It works fine with one kubeconfig file, but when I add more separate kubeconfigs to the $KUBECONFIG environment variable, or combine them into one large config file, I can still only access one cluster. When trying to access the resources of any of the other clusters, I get the following message:
error: You must be logged in to the server (Unauthorized)
I merged two config files into one using the method described here: https://medium.com/@jacobtomlinson/how-to-merge-kubernetes-kubectl-config-files-737b61bd517d
This is what the merged config file looks like (note: some cluster-specific information is redacted):
https://i.stack.imgur.com/HgtGC.png
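For reference, a merged kubeconfig for two clusters generally has the shape below; all names and data fields are placeholders, not the redacted values from the screenshot:

apiVersion: v1
kind: Config
clusters:
- name: cluster-one
  cluster:
    server: https://cluster-one.example.com:6443
    certificate-authority-data: <base64 CA of cluster one>
- name: cluster-two
  cluster:
    server: https://cluster-two.example.com:6443
    certificate-authority-data: <base64 CA of cluster two>
users:
- name: user-one
  user:
    client-certificate-data: <base64 cert issued by cluster one>
    client-key-data: <base64 key>
- name: user-two
  user:
    client-certificate-data: <base64 cert issued by cluster two>
    client-key-data: <base64 key>
contexts:
- name: one
  context:
    cluster: cluster-one
    user: user-one
- name: two
  context:
    cluster: cluster-two
    user: user-two
current-context: one

Each context pairs a cluster with a user, and the client certificate in a user entry is only accepted by a cluster whose CA signed it.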
Other than that, I checked that both clusters have an approved CSR.
Note: I use kubectx to switch between contexts, so maybe the issue lies there.

Related

Chrome Browser refusing to connect to AWS Cloud Former instance over HTTPS

I'm struggling to use AWS CloudFormer to generate a CloudFormation template. I have already launched the CloudFormer stack twice and attempted to connect to the associated DNS name for the EC2 instance generated each time, and I keep receiving the error pictured below.
I have already tried to create a new SSL certificate for the EC2 instance via AWS Certificate Manager, but AWS does not allow this for EC2 instances. I'm not very familiar with SSL/HTTPS processes and would appreciate any guidance on next steps I should pursue to address/troubleshoot this.
Upon further research, I found that others have reported the same SSL certificate issue. Has anyone else seen this with CloudFormer recently?
CloudFormer uses self-signed certificates that are generated by the stack. This is the normal browser warning when the browser encounters a self-signed certificate. For your purposes, you can simply click the link at the bottom of the warning page (Proceed to EC2-xxx (Unsafe)) and ignore the warning. You will connect successfully in spite of the warning.
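If you want to confirm what the browser is objecting to, you can inspect the certificate from a shell; for a self-signed certificate the subject and issuer lines are identical (the hostname is a placeholder):

openssl s_client -connect ec2-xxx.example.com:443 -servername ec2-xxx.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer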
SSL certificates require a domain. On AWS you can set one up with Certificate Manager, but it will still have an issue until you also configure Route 53 correctly.

How do you supply your Applications with TLS Certificates centrally in Openshift?

I am currently struggling with the following task. I don't want to include my TLS certificates in my templates because:
I don't want to check credentials into source control while still checking in the templates
I am using multiple applications with the same certificate, and I don't want to have to update repos just because I distribute another certificate
Now my approach is this: I am using Jenkins for my build pipelines, and I have a repo that is used just for certificate management. It runs when updated and distributes the certificate and private key to OpenShift secrets on various clusters.
When processing the template of an application, I retrieve the information from the secret and set the values in the route. And here's where things get tricky: I can only use single-line values, because
OpenShift templates will not accept multiline parameters with oc process
Secrets will not store multiline values
So the solution seemed easy: just store the certificate with \n escapes and set it in the route that way. However, OpenShift will not accept single-line certificates, resulting in the error:
spec.tls.key: Invalid value: "redacted key data": tls: found a certificate rather than a key in the PEM for the private key
Now the solution could be to insert the certificate as multiple lines directly into the template file before processing and applying it to the cluster, but that seems a bit hacky to me. So my question is:
How can you centrally manage TLS certificates for your applications and set them correctly in the templates you're applying?
Secrets can be multiple lines. You can create a secret using a certificate file, and mount that secret as a file into your containers. See here for how to create secrets from files:
https://kubernetes.io/docs/concepts/configuration/secret/
Use the openshift command line tool instead of kubectl.
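For example, a minimal sketch assuming the PEM files are on disk (resource names are placeholders):

# create a TLS secret from the certificate and key files (multiline content is fine)
oc create secret tls app-tls --cert=tls.crt --key=tls.key

# mount it into an existing deployment config as files under /etc/tls
oc set volume dc/my-app --add --name=tls --type=secret --secret-name=app-tls --mount-path=/etc/tls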
For certificates, there is something called cert-manager:
https://docs.cert-manager.io/en/latest/
This will generate certs as needed. You might want to take a look.
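A hedged sketch of what a cert-manager request looks like; the exact API group and fields depend on the cert-manager version, and the issuer and DNS name below are placeholders:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: app-cert
spec:
  secretName: app-tls        # cert-manager writes the cert and key into this secret
  dnsNames:
  - app.example.com
  issuerRef:
    name: my-issuer          # an Issuer/ClusterIssuer you have configured
    kind: ClusterIssuer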
In order to centrally manage TLS certificates for the applications, you can create a shared secret and consume it from each application via a volume mount.
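For example, the relevant part of a pod spec might look like this (names are placeholders); every container that mounts the volume sees the certificate and key as ordinary files:

containers:
- name: my-app
  image: my-app:latest
  volumeMounts:
  - name: tls
    mountPath: /etc/tls
    readOnly: true
volumes:
- name: tls
  secret:
    secretName: app-tls      # contains tls.crt and tls.key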

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters, so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation, and I decided I want to add authentication via a static password file. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
Where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file that contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was down).
After a reboot the whole system was working again.
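For anyone repeating this, the change amounts to roughly the following; the file path is a placeholder, and the static password file is CSV with one password,username,uid entry per line:

# append the new flag to the kube-apiserver arguments in the manifest,
# e.g. "--basic-auth-file=/etc/kubernetes/users.csv"
sudo vi /etc/kubernetes/manifests/kube-apiserver.json

# create the static password file (format: password,username,uid)
echo 'mypassword,admin,1' | sudo tee /etc/kubernetes/users.csv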
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the kubeconfig from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
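Roughly these steps, with the master's address as a placeholder:

# copy the admin kubeconfig from the master to the laptop
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/admin.conf

# point kubectl at it and proxy the API (and dashboard) to localhost
export KUBECONFIG=~/.kube/admin.conf
kubectl proxy
# then browse to http://127.0.0.1:8001/ui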
I just hit this in a similar use case: the API server was crashing after adding an option with a file path.
I was able to solve it, and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make the file available to the pod with a hostPath volume, as sketched below.
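A sketch of the manifest addition, assuming the password file lives at /etc/kubernetes/users.csv on the host; this is the YAML form newer kubeadm versions write, and the JSON variant needs the same volumes/volumeMounts entries:

spec:
  containers:
  - name: kube-apiserver
    # ...existing command flags, plus --basic-auth-file=/etc/kubernetes/users.csv
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/users.csv
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/users.csv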

Uploading SSL Certificate to AWS Elastic Load Balancer

My SSL Certificate on my AWS Elastic Load Balancer is going to expire very soon and I need to replace it with a new one.
I've got the new certificate / bundle / key uploaded to IAM, but it won't show in the drop-down in the load balancer settings that should let me choose the certificate to apply.
Here is the output when I run:
aws iam list-server-certificates
To my mind this shows that I have uploaded the new certificate to IAM OK. The top certificate in the list is the one that is due to expire any moment now, and the other two are ones I have recently uploaded with the intention of replacing it (they are actually two attempts to upload using the same PEM files).
The image below shows that only one certificate is available to apply to the load balancer. Unfortunately, it is the one that is about to expire.
The one thing that does strike me as a little odd is that the certificate name in the dropdown, ptdsslcert, is different from the names in the aws iam list-server-certificates output, even though it is the same certificate that expires imminently.
I'm really stuck here, and if I don't figure this out soon I'm going to have an expired certificate on my domain, so I would really appreciate any help with this.
The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Although it's hard to guess the specific local machine configuration issue that resulted in the behavior observed, as noted in the comments, this appeared to be an issue where aws cli was using two different sets of credentials to access two different services, and these two sets of credentials were actually from two different AWS accounts.
The ServerCertificateName returned by the API (accessed through the CLI) should have matched the certificate name shown in the console drop-down for Elastic Load Balancer certificate selection.
The composition of ARNs (Amazon Resource Names) varies by service, but often includes the AWS account number. In this case, the account number shown in the CLI output did not match what was visible in the AWS console... leading to the conclusion that the issue was that an AWS account other than the intended one was being accessed by aws cli.
As cross-confirmed by the differing display names, the "existing" certificate, uploaded a year ago, may have had the same content but was in fact a different IAM entity than the one seen in the dropdown, as the two certificates were associated with entirely different accounts.
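Two quick checks help diagnose this kind of mix-up: the first shows which account and principal the CLI is actually calling AWS as, and the second shows where its credentials are coming from:

aws sts get-caller-identity
aws configure list

If the account number in the first command's output differs from the one visible in the console's ARNs, the CLI and the console are looking at different accounts.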

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kind of like clicking the Launch Instance link and following the "Create a New Instance" wizard.
I read about the command ec2-run-instances, and I tried running it in PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1
--availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!
Can someone tell me what's wrong? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud, and shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a
corresponding private key that are associated with your account. [...]
You can either specify your credentials with the --private-key and
--cert parameters every time you issue a command or you can create environment variables that point to the credential files on your local
system. If the environment variables are properly configured, you can
omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somewhere?
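So either add --cert to the original command, or export both variables once, per the quoted documentation; the paths below are placeholders for your own credential files:

export EC2_PRIVATE_KEY=/full/path/MyPrivateKey.pem
export EC2_CERT=/full/path/MyCert.pem
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1 --availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --group MyRDP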
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
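For instance, boto picks up an IAM user's access keys from the standard environment variables, so no X.509 material is needed at all (values are placeholders):

export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX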
Good luck!