I am using the following versions:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Here, I am trying to authenticate a user that uses x509 certs, using the custom script below that I put together from a few online forums and the Kubernetes docs.
#!/bin/bash
cluster=test-operations-k8
namespace=demo
username=jack
openssl genrsa -out $username.pem 2048
openssl req -new -key $username.pem -out $username.csr -subj "/CN=$username"
cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-$username
spec:
  groups:
  - system:authenticated
  request: $(cat $username.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
kubectl certificate approve user-request-$username
kubectl get csr user-request-$username -o jsonpath='{.status.certificate}' | base64 -d > $username.crt
kubectl --kubeconfig ~/.kube/config-$username config set-cluster $cluster --insecure-skip-tls-verify=true --server=https://$cluster.eastus.cloudapp.azure.com
kubectl --kubeconfig ~/.kube/config-$username config set-credentials $username --client-certificate=$username.crt --client-key=$username.pem --embed-certs=true
kubectl --kubeconfig ~/.kube/config-$username config set-context $cluster --cluster=$cluster --user=$username
kubectl --kubeconfig ~/.kube/config-$username config use-context $cluster
echo "Config file for $username has been created successfully !"
But while getting resources I get the below error:
error: You must be logged in to the server (Unauthorized)
Can someone please advise what needs to be done to fix this issue?
Please note that the appropriate roles and rolebindings have also been created; I have not listed them here.
Make sure the CA used to sign the CSRs (the --cluster-signing-cert-file file given to kube-controller-manager) is in the --client-ca-file bundle given to kube-apiserver (which is what authenticates client certs presented to the apiserver).
Also ensure the certificate requested is a client certificate (it has client auth in the usages field).
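For reference, a minimal sketch of the CSR step with client auth requested, reusing the $username and $namespace variables from the script above (the CA path in the verify command is an assumption, adjust it to your cluster):

cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user-request-$username
spec:
  groups:
  - system:authenticated
  request: $(cat $username.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
# after approving and downloading $username.crt, confirm it is a client cert
# and that it chains to the CA that kube-apiserver was given in --client-ca-file
openssl x509 -in $username.crt -noout -text | grep -A1 "Extended Key Usage"
openssl verify -CAfile /etc/kubernetes/pki/ca.crt $username.crt   # CA path is an assumption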
I have a simple Python Kafka producer, and I'm trying to access the Strimzi Kafka cluster on GKE, but I'm getting the following error:
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.key.location failed: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch"}
Here is the Kafka producer code:
from confluent_kafka import Producer
kafkaBrokers='<host>:<port>'
caRootLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cacerts.pem'
certLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cert.pem'
keyLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/key.pem'
password='<password>'
conf = {
    'bootstrap.servers': kafkaBrokers,
    'security.protocol': 'SSL',
    'ssl.ca.location': caRootLocation,
    'ssl.certificate.location': certLocation,
    'ssl.key.location': keyLocation,
    'ssl.key.password': password
}
topic = 'my-topic1'
producer = Producer(conf)
for n in range(100):
    producer.produce(topic, key=str(n), value="val -> " + str(n))
producer.flush()
To get the PEM files from the secrets (PKCS#12 files), here are the commands used:
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.p12}' | base64 -d > user2.p12
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.password}' | base64 -d > user2.password
# to get the user private key, i.e. key.pem
openssl pkcs12 -in user2.p12 -nodes -nocerts -out key.pem -passin pass:<passwd>
# CARoot - extract cacerts.cer
openssl pkcs12 -in ca.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > cacerts.cer
# convert to pem format
openssl x509 -in cacerts.cer -out cacerts.pem
# get the ca.crt from the secret
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# convert to pem
openssl x509 -in ca.crt -out cert.pem
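As a sanity check for this kind of error, the extracted certificate and private key can be compared; librdkafka rejects the pair if they do not belong together. A sketch, assuming the file names above and an RSA key:

openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5
# the two digests must be identical; if they differ, the cert and key are not a matching pair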
Any ideas how to fix this issue?
Please note:
I'm able to access the Kafka cluster using the command-line Kafka producer/consumer over SSL.
This is fixed; see below for the expected configuration:
'ssl.ca.location' -> CA root (the certifying authority used to sign all the user certs)
'ssl.certificate.location' -> user certificate (the client cert used to authenticate to the Kafka brokers)
'ssl.key.location' -> user private key
The above error was due to an incorrect user cert being used; it must match the user private key.
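For completeness, a hedged sketch of pulling the matching user certificate out of the same user2.p12 bundle the key came from (file names and the <passwd> placeholder as above; -clcerts keeps only the client certificate):

openssl pkcs12 -in user2.p12 -clcerts -nokeys -out cert.pem -passin pass:<passwd>
# cert.pem now pairs with the key.pem extracted earlier; cacerts.pem remains the CA root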
I'm trying to sort out certificates for applications deployed to a k8s cluster (running on docker-for-win, WSL2 on Windows 10 20H2).
I would like to use DNS names to connect to services, e.g. registry.default.svc.cluster.local, which I've verified is reachable. I created a certificate by following these steps:
Create an openssl.conf with content
[ req ]
default_bits = 2048
prompt = no
encrypt_key = no
distinguished_name = req_dn
req_extensions = req_ext
[ req_dn ]
CN = *.default.svc.cluster.local
[ req_ext ]
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = *.default.svc.cluster.local
Create csr and key file with openssl req -new -config openssl.conf -out wildcard.csr -keyout wildcard.key
Created a certificate signing request with
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: wildcard_csr
spec:
  groups:
  - system:authenticated
  request: $(cat wildcard.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Approved the request: kubectl certificate approve wildcard_csr
Extracted the crt file: kubectl get csr wildcard_csr -o jsonpath='{.status.certificate}' | base64 -d > wildcard.crt
Deleted the request: kubectl delete csr wildcard_csr.
I've then started a pod with the registry:2 image and configured it to use the wildcard.crt and wildcard.key files.
In a different pod, I then tried to push to that registry, but got the error
Error response from daemon: Get https://registry.default.svc.cluster.local:2100/v2/: x509: certificate signed by unknown authority
So it seems that within the pod, the k8s CA isn't trusted. Am I right in this observation? If so, how can I make k8s trust itself (after all, it was a k8s component that signed the certificate)?
I found a way to achieve this with changes to the YAML only. On my machine (not sure how universal that is), the CA certificate is available in the service-account-token secret default-token-7g75m (run kubectl describe secrets to find the name; look for the secret of type kubernetes.io/service-account-token that contains an entry ca.crt).
So to trust this certificate, add a volume
- name: "kube-certificate"
  secret:
    secretName: "default-token-7g75m"
and to the pod that requires the certificate, add a volumeMount
- name: "kube-certificate"
  mountPath: "/etc/ssl/certs/kube-ca.crt"
  subPath: "ca.crt"
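A quick way to check the mount from inside the pod, sketched under the assumption that curl is available in the image and that the registry listens on port 2100 as above:

curl --cacert /etc/ssl/certs/kube-ca.crt https://registry.default.svc.cluster.local:2100/v2/
# an empty JSON body or an auth challenge means the chain is trusted;
# an "unknown authority" error means the wrong CA was mounted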
I installed Rundeck on a new RHEL 7.7 box, using the rpm method. I can access the server just fine with http, but when I follow the directions in the docs, the server is not accessible from browsers or by curling localhost.
The only error I receive is:
WARN SslContextFactory --- [ main] No supported ciphers from [SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,...(many more ciphers)
Grails application running at https://localhost:4443 in environment: production
curl localhost:4443
curl: (35) Peer reports it experienced an internal error.
Relevant parts of the configuration files are as follows:
/etc/rundeck/profile:
RDECK_JVM="-Drundeck.jaaslogin=$JAAS_LOGIN \
-Djava.security.auth.login.config=$JAAS_CONF \
-Dloginmodule.name=$LOGIN_MODULE \
-Drdeck.config=$RDECK_CONFIG \
-Drundeck.server.configDir=$RDECK_SERVER_CONFIG \
-Dserver.datastore.path=$RDECK_SERVER_DATA/rundeck \
-Drundeck.server.serverDir=$RDECK_INSTALL \
-Drdeck.projects=$RDECK_PROJECTS \
-Drdeck.runlogs=$RUNDECK_LOGDIR \
-Drundeck.config.location=$RDECK_CONFIG_FILE \
-Djava.io.tmpdir=$RUNDECK_TEMPDIR \
-Drundeck.server.workDir=$RUNDECK_WORKDIR \
-Dserver.http.port=$RDECK_HTTP_PORT \
-Drdeck.base=$RDECK_BASE \
-Djdk.tls.ephemeralDHKeySize=jdk8 \
-Drundeck.rundeck.jetty.connector.ssl.excludedCipherSuites=SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA384,SSL_RSA_WITH_AES_256_CBC_SHA256,SSL_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,SSL_ECDH_RSA_WITH_AES_256_CBC_SHA384,SSL_DHE_RSA_WITH_AES_256_CBC_SHA256,SSL_DHE_DSS_WITH_AES_256_CBC_SHA256,SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,SSL_ECDHE_RSA_WITH_AES_256_CBC_SHA,SSL_RSA_WITH_AES_256_CBC_SHA,SSL_ECDH_ECDSA_WITH_AES_256_CBC_SHA,SSL_ECDH_RSA_WITH_AES_256_CBC_SHA,SSL_DHE_RSA_WITH_AES_256_CBC_SHA,SSL_DHE_DSS_WITH_AES_256_CBC_SHA,SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA256,SSL_RSA_WITH_AES_128_CBC_SHA256,SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,SSL_ECDH_RSA_WITH_AES_128_CBC_SHA256,SSL_DHE_RSA_WITH_AES_128_CBC_SHA256,SSL_DHE_DSS_WITH_AES_128_CBC_SHA256,SSL_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,SSL_ECDHE_RSA_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA,SSL_ECDH_ECDSA_WITH_AES_128_CBC_SHA,SSL_ECDH_RSA_WITH_AES_128_CBC_SHA,SSL_DHE_RSA_WITH_AES_128_CBC_SHA,SSL_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,SSL_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,SSL_ECDHE_RSA_WITH_AES_256_GCM_SHA384,SSL_RSA_WITH_AES_256_GCM_SHA384,SSL_ECDH_ECDSA_WITH_AES_256_GCM_SHA384,SSL_ECDH_RSA_WITH_AES_256_GCM_SHA384,SSL_DHE_DSS_WITH_AES_256_GCM_SHA384,SSL_DHE_RSA_WITH_AES_256_GCM_SHA384,SSL_ECDHE_RSA_WITH_AES_128_GCM_SHA256,SSL_RSA_WITH_AES_128_GCM_SHA256,SSL_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,SSL_ECDH_RSA_WITH_AES_128_GCM_SHA256,SSL_DHE_RSA_WITH_AES_128_GCM_SHA256,SSL_DHE_DSS_WITH_AES_128_GCM_SHA256"
#
# Set min/max heap size
#
RDECK_JVM="$RDECK_JVM $RDECK_JVM_SETTINGS"
#
# SSL Configuration - Uncomment the following to enable. Check SSL.properties for details.
#
if [ -n "$RUNDECK_WITH_SSL" ] ; then
RDECK_JVM="$RDECK_JVM -Drundeck.ssl.config=$RDECK_SERVER_CONFIG/ssl/ssl.properties -Dserver.https.port=${RDECK_HTTPS_PORT} -Dorg.eclipse.jetty.util.ssl.LEVEL=DEBUG"
fi
/etc/sysconfig/rundeckd:
export RUNDECK_WITH_SSL=true
export RDECK_HTTPS_PORT=4443
If I add export RDECK_JVM_OPTS="-Dserver.ssl.ciphers=SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384" to /etc/sysconfig/rundeckd I get the following:
[2020-03-29 09:01:51.533] WARN config --- [ main] Weak cipher suite SSL_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 enabled for SslContextFactory#1456dec8[provider=null,keyStore=file:///etc/rundeck/ssl/keystore,trustStore=file:///etc/rundeck/ssl/truststore]
Grails application running at https://localhost:4443 in environment: production
curl: (35) Peer reports it experienced an internal error.
Other configurations:
/etc/rundeck/framework.properties:
framework.server.name = server-dns
framework.server.hostname = server-dns
framework.server.port = 4443
framework.server.url = https://server-dns
framework.rundeck.url = https://server-dns
/etc/rundeck/rundeck-config.properties:
grails.serverURL=https://server-dns:4443
The keystore and truststore exist; I have attempted both self-signed and real certs.
I'm at a loss here. I followed all sorts of guides and advice from the internet leading to my current (mis?)configuration.
Thanks
Edited to fix mistakes in the post.
Maybe you need to reference your keystore/truststore in the ssl.properties file (usually at the /etc/rundeck/ssl/ssl.properties path). I wrote a little guide to set up Rundeck with SSL:
1.- install Rundeck.
rpm -Uvh https://repo.rundeck.org/latest.rpm
yum install rundeck
2.- create the keystore (if you already have a certificate in .key/.crt or .p12 format, skip to 2b):
keytool -keystore /etc/rundeck/ssl/keystore -alias rundeck -genkey -keyalg RSA -keypass password -storepass password
2b.- in case you have your own certificate, do the following:
If you have .crt and .key files, create a .p12 file:
openssl pkcs12 -export -in YOUR.crt -inkey YOUR.key -out NEW.p12
Convert it to a .jks (also if you have only the .p12 file):
keytool -importkeystore -destkeystore keystore -srckeystore NEW.p12 -srcstoretype pkcs12
3.- copy keystore as truststore.
4.- edit /etc/rundeck/ssl/ssl.properties file:
keystore=/etc/rundeck/ssl/keystore
keystore.password=password
key.password=password
truststore=/etc/rundeck/ssl/truststore
truststore.password=password
5.- edit /etc/rundeck/framework.properties file:
framework.server.port = 4443
framework.server.url = https://localhost:4443
6.- edit /etc/rundeck/rundeck-config.properties file:
grails.serverURL=https://localhost:4443
7.- edit/create /etc/sysconfig/rundeckd file:
export RUNDECK_WITH_SSL=true
8.- start the rundeck service.
systemctl start rundeckd
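9.- (optional) verify the setup. A sketch, assuming the default paths and the 'password' placeholders used above:

keytool -list -keystore /etc/rundeck/ssl/keystore -storepass password
curl -k -I https://localhost:4443

The first command confirms the keystore actually contains your key entry; the second should return an HTTP response over TLS once rundeckd is up.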
I am trying to perform Kubernetes webhook authentication using pam_hook, but I get a TLS handshake error at the webhook when the apiserver tries to contact it.
These are the steps I followed:
apiserver IP: 192.168.20.30
pam webhook server IP: 192.168.20.50
Created a server certificate for my pam webhook server using the ca.crt present in /etc/kubernetes/pki/ (on the master node):
openssl genrsa -out ubuntuserver.key 2048
openssl req -new -key ubuntuserver.key -out ubuntuserver.csr -config myconf.conf
openssl x509 -req -in ubuntuserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ubuntuserver.crt -days 10000
Started my pam webhook server using this certificate.
./pam_hook-master -cert-file /root/newca/ubuntuserver.crt -key-file /root/newca/ubuntuserver.key -signing-key rootroot -bind-port 6000
I1109 07:21:41.388836 3882 main.go:327] Starting pam_hook on :6000
Created a kubeconfig file as follows:
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://192.168.20.50:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
Configured the apiserver manifest file in
/etc/kubernetes/manifest/kube-apiserver.yaml
...
- --authentication-token-webhook-config-file=/etc/kubernetes/pki/webhook-config.yaml
- --runtime-config=authorization.k8s.io/v1beta1=true
...
The apiserver gets restarted.
Obtained a token by requesting the pam_hook server (I did this from my master node):
$ curl https://192.168.20.50:6000/token --cacert /etc/kubernetes/pki/ca.crt -u root
Enter host password for user 'root':
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M
Made a request to the apiserver using this token, which in turn should communicate with the webhook to provide authentication. But I am getting a 401 error.
$ curl -vvv --insecure -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M" https://192.168.20.38:6443/api/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
Message at the webhook server is:
> 2017/11/09 07:50:53 http: TLS handshake error from 192.168.20.38:49712: remote error: tls: bad certificate
The docs say that the apiserver sends an HTTP request with the token in its payload. If I try to recreate the same call with curl using the token and ca.crt, it gets authenticated.
> $ curl -X POST https://192.168.20.50:6000/authenticate --cacert /etc/kubernetes/pki/ca.crt -d '{"ApiVersion":"authentication.k8s.io/v1beta1", "Kind": "TokenReview", "Spec":{"Token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiIiLCJleHAiOjE1MTAyMjY3ODksImlhdCI6MTUxMDIyNjE4OSwiaXNzIjoiIiwidXNlcm5hbWUiOiJyb290In0.3LmHBy_anjR62WNqKICCx_b8YWFpF4HSKMWLmyORU0M"}}'
{"apiVersion":"authentication.k8s.io/v1beta1","kind":"TokenReview","status":{"authenticated":true,"user":{"username":"root","uid":"0","groups":["root"]}}}
But when it is requested by the apiserver, the TLS handshake fails.
My understanding is that the TLS verification is done by checking against the certificate-authority file mentioned in webhook-config.yaml, right? If so, the TLS verification should have been successful with ca.crt. But it is failing.
Does that mean the apiserver is performing validation using some other CA? Which CA does it use? How do I get past this TLS verification successfully?
Finally, I've fixed this. The mistake I made was using the IP of the webhook server throughout (192.168.20.50 in this case). I replaced it with the FQDN of the webhook server machine and things worked out.
I changed the IP to the FQDN in the following places:
the CN while generating the server certificate
the server field in the kubeconfig file
$ cat webhook-config.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://ubuntuserver:6000/authenticate
  name: 192.168.20.50
users:
- name: root
  user:
    client-certificate: /etc/kubernetes/pki/client.crt
    client-key: /etc/kubernetes/pki/client.key
current-context: 192.168.20.50
contexts:
- context:
    cluster: 192.168.20.50
    user: root
  name: 192.168.20.50
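For reference, a hedged sketch of regenerating the server certificate with the FQDN (the hostname ubuntuserver is taken from the kubeconfig above; the SAN extension file is an extra assumption, since newer Go-based clients also require a matching subjectAltName, not just the CN):

echo "subjectAltName = DNS:ubuntuserver" > san.cnf
openssl req -new -key ubuntuserver.key -subj "/CN=ubuntuserver" -out ubuntuserver.csr
openssl x509 -req -in ubuntuserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile san.cnf -out ubuntuserver.crt -days 10000
# confirm what the cert actually asserts:
openssl x509 -in ubuntuserver.crt -noout -text | grep -A1 "Subject Alternative Name"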
I have very limited knowledge about TLS certificates. I wanted to enable HTTPS for the Docker daemon. I followed this tutorial but in the end failed to start the Docker daemon.
I am using Docker in an Ubuntu 16.04 VM, and my client and server are the same machine, so I used $hostname as the 'Common Name' throughout the process.
After following the whole process in the Docker documentation, when I run
sudo dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
I get the INFO log "API listen on [::]:2376".
When I use the below command:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
I get a proper response.
But when I reload the daemon and try to start Docker, it fails to start and gives the following message:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
Output of 'journalctl -xe' is:
I copied the necessary certificate to ~/.docker/ and the 'ExecStart' in my /lib/systemd/system/docker.service file is:
ExecStart=/usr/bin/dockerd -H fd:// -H 0.0.0.0:2376 \
--tlsverify --tlscacert=/home/sakib/.docker/ca.pem \
--tlskey=/home/sakib/.docker/key.pem \
--tlscert=/home/sakib/.docker/cert.pem
When I try to communicate with the API I get the following response:
$ curl -X GET https://0.0.0.0:2376/images/json
curl: (35) gnutls_handshake() failed: Certificate is bad
$ docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built: Thu Aug 18 05:33:38 2016
OS/Arch: linux/amd64
An error occurred trying to connect: Get https://EL802:2376/v1.24/version: x509: certificate is valid for $HOST, not EL802
NOTE: EL802 is my hostname, which I set as the HOST environment variable.
I think the problem is with the 'CN' that I chose while creating the client certificate. I created the server and client certificates as below.
Server:
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
Client:
openssl req -subj '/CN=$HOST' -new -key key.pem -out client.csr
My client and server are both my host machine (EL802), which I set as the $HOST variable.
Your picture does not show the full error line, but if the error message is:
pid file found, ensure docker is not running or delete /var/run/docker.pid
Try deleting the pid file, then restart.
Also double-check your docker installation on Ubuntu, and its systemd configuration.
x509: certificate is valid for $HOST, not EL802
That means the certificate has been created with the string $HOST instead of its actual value.
openssl req -subj '/CN=$HOST'
The strong quoting of the single quotes prevents the shell from replacing $HOST with its value. Use double quotes.
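A minimal sketch of the corrected client-side CSR step, assuming $HOST is set to the real hostname (EL802) as in the question:

openssl req -subj "/CN=$HOST" -new -key key.pem -out client.csr   # double quotes so $HOST expands
# after re-signing, confirm the subject actually carries the hostname:
openssl x509 -in cert.pem -noout -subject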