kubernetes - can't access the cluster with new user certificate - ssl

I want to create a new admin user in Kubernetes. I did all the steps for creating and approving the certificate, but when I try to access the API I receive an Unauthorized error.
These are the steps I used to create the admin user:
1/ openssl genrsa -out user.key 2048
2/ openssl req -new -key user.key -out user.csr -subj "/CN=kube-user"
3/
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user
spec:
  request: $(cat user.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
4/ k certificate approve user
5/ k get csr user -o jsonpath='{.status.certificate}' | base64 --decode > user.crt
6/ kubectl config view -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' --raw | base64 --decode - > ca.crt
7/
curl https://$Kube-Master-Ip:6443/api/v1 \
--key user.key \
--cert user.crt \
--cacert ca.crt
8/ and this is what I receive:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
document source: https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/

The step 2 command is wrong: the admin user should be part of the system:masters group. Kubernetes takes the username from the certificate's CN and the groups from its O (organization) fields, so the group has to go into the CSR subject:
openssl req -new -key user.key -out user.csr -subj "/CN=kube-user/O=system:masters"
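For completeness, a minimal sketch of re-running the flow with the corrected subject, reusing the file and CSR names from the question (the usages list here requests client auth, which is what certificates used to authenticate to the API server normally carry):
openssl genrsa -out user.key 2048
openssl req -new -key user.key -out user.csr -subj "/CN=kube-user/O=system:masters"
# remove the old CSR object and resubmit the new request
kubectl delete csr user
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user
spec:
  request: $(cat user.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl certificate approve user
kubectl get csr user -o jsonpath='{.status.certificate}' | base64 --decode > user.crt
# confirm the subject now carries the group before retrying the curl call
openssl x509 -in user.crt -noout -subject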

Related

python confluent kafka client - unable to access Kafka on GKE using SSL

I have a simple Python Kafka producer, and I'm trying to access the Strimzi Kafka cluster on GKE, but I'm getting the following error:
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.key.location failed: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch"}
Here is the Kafka producer code:
from confluent_kafka import Producer
kafkaBrokers='<host>:<port>'
caRootLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cacerts.pem'
certLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cert.pem'
keyLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/key.pem'
password='<password>'
conf = {'bootstrap.servers': kafkaBrokers,
        'security.protocol': 'SSL',
        'ssl.ca.location': caRootLocation,
        'ssl.certificate.location': certLocation,
        'ssl.key.location': keyLocation,
        'ssl.key.password': password
        }
topic = 'my-topic1'
producer = Producer(conf)
for n in range(100):
    producer.produce(topic, key=str(n), value="val -> "+str(n))
producer.flush()
To get the .pem files (from the secrets, which hold PKCS12 files), here are the commands used:
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.p12}' | base64 -d > user2.p12
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.password}' | base64 -d > user2.password
# to get the user private key, i.e. key.pem
openssl pkcs12 -in user2.p12 -nodes -nocerts -out key.pem -passin pass:<passwd>
# CARoot - extract cacerts.cer
openssl pkcs12 -in ca.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > cacerts.cer
# convert to pem format
openssl x509 -in cacerts.cer -out cacerts.pem
# get the ca.crt from the secret
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# convert to pem
openssl x509 -in ca.crt -out cert.pem
Any ideas how to fix this issue?
Please note: I'm able to access the Kafka cluster using the command-line Kafka producer/consumer over SSL.
This is fixed; see below for the configuration that is expected:
'ssl.ca.location' -> CA root (the certifying authority used to sign all the user certs)
'ssl.certificate.location' -> user certificate (presented by the client to authenticate to the broker)
'ssl.key.location' -> user private key
The error above was due to an incorrect user certificate being used; it must match the user private key.
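As a sanity check, here is a sketch assuming the Strimzi KafkaUser secret my-bridge1 also carries the user certificate under the user.crt key: pull the matching certificate straight from the secret and compare its public key against the private key's.
# extract the user certificate that pairs with key.pem from the same secret
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.crt}' | base64 -d > cert.pem
# the certificate and private key match only if these two digests are identical
openssl x509 -pubkey -noout -in cert.pem | openssl md5
openssl pkey -pubout -in key.pem | openssl md5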

import keypair to an existing pkcs12 keystore under a new alias name

I am learning OAuth2 and OpenID Connect and configuring multiple Tomcat servers (a client for the UI, and multiple resource servers for the APIs) to use SSL. So I have created a PKCS12 keystore with a self-signed certificate + private key the following way, and then I pushed it to my 1st Tomcat:
(I know that the commands below can be simplified and combined into one (or two), but I deliberately keep them separate because that way I can see and understand the steps better.)
(1) The keypair was created with openssl this way:
openssl genrsa \
-des3 \
-passout pass:$phrase \
-out id_rsa_$domain.key $numbits
(2) Then I created a Certificate Signing Request with this command:
openssl req \
-new \
-key id_rsa_$domain.key \
-passin pass:$phrase \
-subj "$subj" \
-out $domain.csr
(3) After that I created an x509 certificate:
openssl x509 \
-req \
-days $days \
-in $domain.csr \
-signkey id_rsa_$domain.key \
-passin pass:$phrase \
-out $domain.crt
(4) Finally I created a keystore in PKCS12 format:
pem=$domain.pem
cat id_rsa_$domain.key > $pem
cat $domain.crt >> $pem
openssl pkcs12 \
-export \
-in $pem \
-passin pass:$phrase \
-password pass:$keystore_pwd \
-name $domain \
-out example.com.pkcs12
rm $pem
At the end of this process I have the following files:
id_rsa_authserver.example.com.key: the private (and public) key
authserver.example.com.crt: the self signed certificate
example.com.pkcs12: the keystore
Inside the .pkcs12 file I only have one key-pair entry under the authserver.example.com alias. I have checked the result with KeyStore Explorer as well and everything looks fine and the 1st Tomcat works properly with that keystore.
Then I repeated steps (1), (2) and (3) and generated new files for the order.example.com host machine, so at the end I have two new files:
id_rsa_order.example.com.key
order.example.com.crt
Now I would like to add this new keypair + certificate to my "root" example.com.pkcs12 keystore under the order.example.com alias, so that I keep all the certs I use for my demo in one keystore. I can do it easily with the KeyStore Explorer tool via Tools > Import Key Pair > OpenSSL > browse to the private key and cert files, but that is not good enough for me. I would like to do the import via the command line using OpenSSL.
Unfortunately I have not found the proper openssl command to ADD my 2nd key + cert to the existing keystore.
What is the command that I can use?
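One possible approach, sketched with the file names and passwords used above: plain openssl cannot append to an existing PKCS12 file (pkcs12 -export always writes a fresh one), so the new key and certificate are first wrapped into their own PKCS12 and then merged into the existing keystore with keytool (assumed available, since Tomcat runs on a JDK):
# wrap the new key + cert into a temporary PKCS12 under the desired alias
openssl pkcs12 \
-export \
-inkey id_rsa_order.example.com.key \
-in order.example.com.crt \
-passin pass:$phrase \
-password pass:$keystore_pwd \
-name order.example.com \
-out order.example.com.pkcs12
# merge it into the existing keystore; the entry keeps its alias
keytool -importkeystore \
-srckeystore order.example.com.pkcs12 \
-srcstoretype PKCS12 \
-srcstorepass $keystore_pwd \
-destkeystore example.com.pkcs12 \
-deststoretype PKCS12 \
-deststorepass $keystore_pwd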

Unable to run kafka with self-signed certificates

I am setting up Kafka with SASL_PLAIN and SSL auth. I set one up in a public VPC so that I could use certbot to generate certs, but this one is in a private network that cannot be reached by certbot (and I cannot allow it to be reached either).
So I want to use self-signed certs. I've tried this:
openssl req -new -newkey rsa:4096 \
-days 3650 \
-x509 \
-subj "/CN=$(hostname)" \
-keyout key.pem \
-out cert.pem \
-passout "pass:${PASSWORD}"
openssl pkcs12 -export -out certout -name kafka \
-inkey "key.pem" \
-in "cert.pem" \
-password "pass:${PASSWORD}" \
-passin "pass:${PASSWORD}"
keytool -importkeystore -noprompt \
-srckeystore certout \
-srcstoretype pkcs12 \
-destkeystore /etc/ssl/cert.jks \
-deststoretype pkcs12 \
-srcstorepass "${PASSWORD}" \
-deststorepass "${PASSWORD}"
And in /etc/kafka/server.properties I have:
ssl.keystore.location=/etc/ssl/cert.jks
ssl.truststore.location=/etc/ssl/certs/java/cacerts
The cacerts file comes from the apt package ca-certificates-java (and I ran update-ca-certificates -f too).
And if I try keytool -import -alias kafka -file certout -cacerts it gives the error
keytool error: java.io.IOException: Keystore was tampered with, or password was incorrect
In the logs for kafka, I see every broker spewing this:
INFO [Controller id=2, targetBrokerId=3] Failed authentication with 3.kafka.my.dns/10.1.1.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
(I have changed the URL and IP address in that log)
How can I run kafka with self-signed certs?
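One thing worth checking, sketched under the assumption that the handshake fails because the brokers do not trust each other's self-signed certificates: import the public certificate itself (cert.pem, not the PKCS12 bundle) into a dedicated truststore instead of relying on the system cacerts.
# import the self-signed certificate into its own truststore
# (-cacerts, by contrast, would require the JVM cacerts password, usually "changeit")
keytool -importcert -noprompt \
-alias kafka \
-file cert.pem \
-keystore /etc/ssl/truststore.jks \
-storepass "${PASSWORD}"
server.properties would then point ssl.truststore.location at /etc/ssl/truststore.jks and set ssl.truststore.password accordingly. Since each broker generates its own certificate with CN=$(hostname), every broker's certificate (or a single certificate/CA shared by all brokers) has to end up in that truststore for inter-broker SSL to succeed.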

Trouble getting a self-signed cert working in Postman

I'm finally getting into using SSL on my personal sites, so I started by trying to make a multi-domain self signed cert for my local development (to handle api.mydomain.local, www.mydomain.local, and mydomain.local). I don't know if this was my first mistake, but...
As I couldn't find a single encompassing guide, I started by using two tutorials (from EasyEngine and DeveloperSide) to create my cert and install it on my host (Win10). I then used a DigitalOcean guide to figure out how to setup my Apache on my dev server (a Ubuntu VM); up to there, no big trouble, other than some minor issues caused by working from multiple guides at the same time.
I go ahead and try to hit my API in Chrome; it gives me an untrusted certificate warning as expected, I click through and it works. As far as I know, this means the cert worked? However, when I try to hit my API in Postman, I get an error indicating it can't accept an untrusted cert, which is fine, as there is a tutorial on how to fix that. However, it still doesn't work. I can't figure out what else to do to fix this; am I on the right track? Is my cert completely borked? Did I make a core mistake in trying to do a multi-domain cert?
One thing I did notice is that in the Dev Tools security tab, it says
Subject Alternative Name missing
So I'm not sure if that means my alt names aren't working, but if they weren't, it wouldn't try to load the certificate when I hit it in Chrome, right?
I had a similar issue while writing an article for my website on SSL certificates, so I wrote a shell script for it:
#!/bin/bash
CERT_COMPANY_NAME=${CERT_COMPANY_NAME:=Tarun Lalwani}
CERT_COUNTRY=${CERT_COUNTRY:=IN}
CERT_STATE=${CERT_STATE:=DELHI}
CERT_CITY=${CERT_CITY:=DELHI}
CERT_DIR=${CERT_DIR:=certs}
ROOT_CERT=${ROOT_CERT:=rootCA.pem}
ROOT_CERT_KEY=${ROOT_CERT_KEY:=rootCA.key.pem}
# make directories to work from
mkdir -p $CERT_DIR
create_root_cert(){
# Create your very own Root Certificate Authority
openssl genrsa \
-out $CERT_DIR/$ROOT_CERT_KEY \
2048
# Self-sign your Root Certificate Authority
# Since this is private, the details can be as bogus as you like
openssl req \
-x509 \
-new \
-nodes \
-key ${CERT_DIR}/$ROOT_CERT_KEY \
-days 1024 \
-out ${CERT_DIR}/$ROOT_CERT \
-subj "/C=$CERT_COUNTRY/ST=$CERT_STATE/L=$CERT_CITY/O=$CERT_COMPANY_NAME Signing Authority/CN=$CERT_COMPANY_NAME Signing Authority"
}
create_domain_cert()
{
local FQDN=$1
local FILENAME=${FQDN/\*/wild}
# Create a Device Certificate for each domain,
# such as example.com, *.example.com, awesome.example.com
# NOTE: You MUST match CN to the domain name or ip address you want to use
openssl genrsa \
-out $CERT_DIR/${FILENAME}.key \
2048
# Create a request from your Device, which your Root CA will sign
if [[ ! -z "${SAN}" ]]; then
openssl req -new \
-key ${CERT_DIR}/${FILENAME}.key \
-out ${CERT_DIR}/${FILENAME}.csr \
-subj "/C=${CERT_COUNTRY}/ST=${CERT_STATE}/L=${CERT_CITY}/O=$CERT_COMPANY_NAME/CN=${FQDN}" \
-reqexts san_env -config <(cat /etc/ssl/openssl.cnf <(cat ./openssl-san.cnf))
else
openssl req -new \
-key ${CERT_DIR}/${FILENAME}.key \
-out ${CERT_DIR}/${FILENAME}.csr \
-subj "/C=${CERT_COUNTRY}/ST=${CERT_STATE}/L=${CERT_CITY}/O=$CERT_COMPANY_NAME/CN=${FQDN}"
fi
# Sign the request from Device with your Root CA
if [[ ! -z "${SAN}" ]]; then
openssl x509 \
-sha256 \
-req -in $CERT_DIR/${FILENAME}.csr \
-CA $CERT_DIR/$ROOT_CERT \
-CAkey $CERT_DIR/$ROOT_CERT_KEY \
-CAcreateserial \
-out $CERT_DIR/${FILENAME}.crt \
-days 500 \
-extensions san_env \
-extfile openssl-san.cnf
else
openssl x509 \
-sha256 \
-req -in $CERT_DIR/${FILENAME}.csr \
-CA $CERT_DIR/$ROOT_CERT \
-CAkey $CERT_DIR/$ROOT_CERT_KEY \
-CAcreateserial \
-out $CERT_DIR/${FILENAME}.crt \
-days 500
fi
}
METHOD=$1
ARGS=${*:2}
echo "Called with $METHOD and $ARGS"
if [ -z "${METHOD}" ]; then
echo "Usage ./sslcerts.sh [create_root_cert|create_domain_cert] <args>"
echo "Below are the environment variabls you can use:"
echo "CERT_COMPANY_NAME=Company Name"
echo "CERT_COUNTRY=Country"
echo "CERT_STATE=State"
echo "CERT_CITY=City"
echo "CERT_DIR=Directory where certificate needs to be genereated"
echo "ROOT_CERT=Name of the root cert"
echo "ROOT_CERT_KEY=Name of root certificate key"
else
${METHOD} ${ARGS}
fi
You can change the environment variables at the top and generate a self-signed certificate using the command below:
$ SAN=DNS.1:*.tarunlalwani.com,DNS.2:tarunlalwani.com ./sslcerts.sh create_domain_cert '*.tarunlalwani.com'
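The script references an openssl-san.cnf file next to it (via -reqexts san_env and -extfile); it is not shown above, but a minimal version that picks the value up from the SAN environment variable could look like this:
[san_env]
subjectAltName=${ENV::SAN}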
Edit 1
Earlier browsers used to rely on the FQDN, but now some of them have started using the SAN, which is the "Subject Alternative Name". Generally openssl doesn't come with v3 extensions configured, and SAN is part of the v3 extensions. So when you generate a self-signed certificate it has the correct FQDN (fully qualified domain name) but no SAN. Chrome will show an error for these certificates, but you will see Firefox working fine.
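To confirm the SAN actually made it into the generated certificate (the file name below follows the script's wildcard-to-wild substitution), the extension can be checked with:
openssl x509 -noout -text -in certs/wild.tarunlalwani.com.crt | grep -A1 "Subject Alternative Name"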
PS: Taken from article http://tarunlalwani.com/post/self-signed-certificates-trusting-them/

Ansible X509 certificate missing Subject Alternative Name

I'm using Vagrant and Ansible roles to generate an SSL/TLS certificate, but no matter what I try, the generated certificate is missing the Subject Alternative Name:
- name: Create an SSL security key & CSR (Certificate Signing Request)
  shell: openssl req -new -newkey rsa:2048 -nodes -keyout /etc/apache2/ssl/{{ item.host }}.key -subj "/subjectAltName=DNS.1={{ item.host }}, DNS.2=www.{{ item.host }}, IP.1=192.168.33.11/C={{params['ssl'].country_name}}/ST={{params['ssl'].state}}/L={{params['ssl'].locality}}/O={{params['ssl'].organization}}/CN={{ item.host }}" -out /etc/apache2/ssl/{{ item.host }}.csr
  args:
    executable: "/bin/bash"
  with_items: "{{params['vhosts']}}"
  when: item.ssl is defined and item.ssl
The certificate files get generated but Google Chrome always says
Subject Alternative Name Missing
This is the debug of my environment:
$ openssl version
OpenSSL 1.0.2l 25 May 2017
$ openssl x509 -noout -text -in /etc/apache2/ssl/myhost.dev.crt
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            a2:77:35:c7:6a:72:35:22
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: subjectAltName=DNS.1=myhost.dev, DNS.2=www.myhost.dev, IP.1=192.168.33.11, C=DE, ST=Berlin, L=Berlin, O=Ltd, CN=myhost.dev
        Validity
            Not Before: Jun 12 15:36:58 2017 GMT
            Not After : Jun 10 15:36:58 2027 GMT
        Subject: subjectAltName=DNS.1=myhost.dev, DNS.2=www.myhost.dev, IP.1=192.168.33.11, C=DE, ST=Berlin, L=Berlin, O=Ltd, CN=myhost.dev
Your certificate isn't using X509 extensions. In order to add them to your CSR, you'll need a config file that specifies which extensions to add; the command-line interface isn't friendly enough to let you easily specify X509 extensions directly.
What you could do is use Bash's process substitution with a command that generates a modified config file on the fly when you invoke openssl to generate your CSR:
openssl req \
-new -newkey rsa:2048 \
-subj "{your existing subject}" \
... \
-reqexts SAN \
-config <(
cat /etc/ssl/openssl.cnf
printf '\n[SAN]\nsubjectAltName=DNS:example.com,DNS:www.example.com'
)
Note that process substitution only works in GNU bash, and will not work if the command runs under a plain Bourne-style shell (as /bin/sh often is on Ubuntu-based distros).
This answer was adapted from here.
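A quick way to confirm the extension was picked up, assuming the CSR was written to myhost.dev.csr (the -out argument is elided above):
openssl req -noout -text -in myhost.dev.csr | grep -A1 "Subject Alternative Name"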
After some research on the openssl library and understanding how it works, I realized I was making the mistake of using -x509: adding -x509 creates a certificate, not a request!
I solved my issue by following these main steps:
1. Set up a certificate authority: an entity that issues digital certificates.
2. Create the server or user certificate request.
3. Sign the server certificate request.
4. Add the keys and certificates to your host.
5. Add the certificates to the browser.
I wrote a long step-by-step tutorial on how to achieve this on my blog.
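As a rough illustration of those steps, a minimal sketch reusing the host and subject values from the question (the CA subject below is just an example; the blog post covers the full detail):
# 1. set up a certificate authority (key + self-signed CA certificate)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 1024 \
-subj "/C=DE/ST=Berlin/L=Berlin/O=Ltd/CN=My Dev CA" -out ca.crt
# 2. create the server key and certificate request
openssl genrsa -out myhost.dev.key 2048
openssl req -new -key myhost.dev.key \
-subj "/C=DE/ST=Berlin/L=Berlin/O=Ltd/CN=myhost.dev" -out myhost.dev.csr
# 3. sign the request with the CA, adding the SAN as a v3 extension
openssl x509 -req -in myhost.dev.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-days 500 -sha256 -out myhost.dev.crt \
-extfile <(printf 'subjectAltName=DNS:myhost.dev,DNS:www.myhost.dev,IP:192.168.33.11')
# 4./5. myhost.dev.key and myhost.dev.crt go to Apache; ca.crt is what gets
# imported into the browser's trust store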