libcurl: how to use TPM private key for mutual SSL authentication - ssl

I use the C libcurl library.
I need to do OCSP stapling combined with mutual authentication. For that, I'm modeling my code on the examples below. However, I need the private key of my client certificate to be stored in the TPM chip. Do you know how to do that using tpm2-tss-engine? Thanks for your help.
https://curl.haxx.se/libcurl/c/smtp-ssl.html
https://curl.haxx.se/libcurl/c/CURLOPT_SSLCERT.html
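A minimal, untested sketch of what this might look like in C, assuming curl is built against OpenSSL and tpm2-tss-engine is installed as the OpenSSL engine named "tpm2tss" (the URL, the file names, and the key reference, e.g. a TSS2 PEM key created with tpm2tss-genkey, are placeholders):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

    /* load the tpm2-tss OpenSSL engine and use it for key operations */
    curl_easy_setopt(curl, CURLOPT_SSLENGINE, "tpm2tss");
    curl_easy_setopt(curl, CURLOPT_SSLENGINE_DEFAULT, 1L);

    /* the client certificate stays an ordinary PEM file on disk */
    curl_easy_setopt(curl, CURLOPT_SSLCERT, "client-cert.pem");
    curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM");

    /* key type "ENG" means CURLOPT_SSLKEY is a reference handed to the
       engine, e.g. the TSS2 key file created by tpm2tss-genkey,
       not a plain private key */
    curl_easy_setopt(curl, CURLOPT_SSLKEYTYPE, "ENG");
    curl_easy_setopt(curl, CURLOPT_SSLKEY, "client-key-tss2.pem");

    /* request and verify a stapled OCSP response */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYSTATUS, 1L);

    /* CA bundle used to verify the server certificate */
    curl_easy_setopt(curl, CURLOPT_CAINFO, "root-ca.pem");

    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}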

I was able to use a key stored in the TPM with openssl s_client (it may also be possible with curl), and with that I can make an HTTPS request and receive a response.
OpenSSL can read the key from the TPM through an engine. You can use the s_client command to open the TLS connection, then send your HTTP request over it.
An example command would look like this:
File: http_request.txt (with two newlines at the end)
GET /url/path HTTP/1.0
Host: hostname.com
cat http_request.txt | \
openssl s_client \
-nocommands \
-ign_eof \
-msgfile /dev/null \
-quiet \
-keyform engine \
-engine pkcs11 \
-cert mycertificate.pem \
-CAfile root.ca.pem \
-key 'pkcs11:model=SWTPM;manufacturer=Intel;token=mytoken;object=myobject;type=private;pin-value=mypin' \
-connect hostname.com:443
This allows me to use the TPM to make requests to AWS IoT (iot:AssumeRoleWithCertificate), whose documentation assumes the key is a file on disk: https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html
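If you want to try the same thing with the curl command line rather than s_client, a curl built against OpenSSL can also load an engine-backed key. A rough equivalent, assuming your curl binary has engine support (curl --engine list shows what is available), might be:
curl --engine pkcs11 \
  --key-type ENG \
  --key 'pkcs11:model=SWTPM;manufacturer=Intel;token=mytoken;object=myobject;type=private;pin-value=mypin' \
  --cert mycertificate.pem \
  --cacert root.ca.pem \
  https://hostname.com/url/path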

Related

Certificate issues with centOS7 with curl

I have an issue when using a certificate with curl. I'm running CentOS 7. We managed to get curl working in other places, but not on our dev machine.
What we are trying to do:
sudo curl -X 'GET' 'https://webpage/document' --cert '/localization.crt.pem' --key '/localization.key.pem' -H 'accept: */*' -k
I'm getting this error:
curl: (58) SSL peer cannot verify your certificate.
What I tried (from the CentOS documentation):
https://access.redhat.com/documentation/en-us/red_hat_certificate_system/9/html/administration_guide_common_criteria_edition/importing_certificate_into_nssdb
# PKICertImport -d . -n "client name" -t ",," -a -i certificate.crt.pem -u C
After echo $? we get a 0, so I think it is installed properly?
Any idea on what's wrong would be great.
I have run into this recently on our Linux environments. I've found that this tends to happen if you have an SSL certificate issued that also includes a chain certificate. If that chain is not also configured on your server, OpenSSL considers the certificate invalid.
I would test this using this command:
openssl s_client -showcerts -verify 5 -connect website.com:443
If you see a block like this that means you are missing the certificate chain in your server configuration:
---
SSL handshake has read 2162 bytes and written 401 bytes
Verification error: unable to verify the first certificate
---
Windows fills in the gaps and doesn't mind this type of configuration, but openssl is very particular.
I managed to solve the issue by recompiling curl against OpenSSL, following this tutorial:
Install curl with openssl
Works like a charm :)
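If you hit the same thing, it may help to check which TLS backend your curl build uses before recompiling; the error text above looks like NSS wording, and the stock CentOS 7 curl is typically built against NSS rather than OpenSSL. A quick check:
# the first line names the TLS library this curl binary was built against
curl --version | head -n 1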

python confluent kafka client - unable to access Kafka on GKE using SSL

I have a simple Python Kafka producer, I'm trying to access a Strimzi Kafka cluster on GKE, and I'm getting the following error:
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.key.location failed: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch"}
Here is the Kafka producer code:
from confluent_kafka import Producer
kafkaBrokers='<host>:<port>'
caRootLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cacerts.pem'
certLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/cert.pem'
keyLocation='/Users/karanalang/Documents/Technology/strimzi/gcp_certs_nov28/pem-user2/key.pem'
password='<password>'
conf = {'bootstrap.servers': kafkaBrokers,
        'security.protocol': 'SSL',
        'ssl.ca.location': caRootLocation,
        'ssl.certificate.location': certLocation,
        'ssl.key.location': keyLocation,
        'ssl.key.password': password
        }
topic = 'my-topic1'
producer = Producer(conf)

for n in range(100):
    producer.produce(topic, key=str(n), value="val -> " + str(n))

producer.flush()
To get the PEM files from the secrets (PKCS#12 files), here are the commands used:
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.p12}' | base64 -d > user2.p12
kubectl get secret my-bridge1 -n kafka -o jsonpath='{.data.user\.password}' | base64 -d > user2.password
# get the user private key, i.e. key.pem
openssl pkcs12 -in user2.p12 -nodes -nocerts -out key.pem -passin pass:<passwd>
# CARoot - extract cacerts.cer
openssl pkcs12 -in ca.p12 -cacerts -nokeys -chain | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > cacerts.cer
# convert to pem format
openssl x509 -in cacerts.cer -out cacerts.pem
# get the ca.crt from the secret
kubectl get secret my-cluster-lb-ssl-certs-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# convert to pem
openssl x509 -in ca.crt -out cert.pem
Any ideas how to fix this issue?
Please note: I'm able to access the Kafka cluster using the command-line Kafka producer/consumer over SSL.
This is fixed; please see below the configuration that is expected:
'ssl.ca.location' -> CA root certificate (the certificate authority used to sign all the user certs)
'ssl.certificate.location' -> user certificate (used by the client to authenticate to the Kafka brokers)
'ssl.key.location' -> user private key
The above error was due to an incorrect user cert being used; it must match the user private key.
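A quick way to check for that mismatch (a sketch using the file names from the question): extract the public key from the certificate and from the private key and compare the digests; they must be identical if the cert and key belong together.
openssl x509 -in cert.pem -noout -pubkey | openssl sha256
openssl pkey -in key.pem -pubout | openssl sha256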

How to verify a Microsoft certificate

This is the certificate https://gist.github.com/larytet/2fb447e875831577584592cd99980fd1 (x5t VjWIUjS5JS3eAFdm2dnydlZfY-I)
I am doing
openssl verify -verbose -x509_strict certificate.pem
I am getting
CN = estsclient.coreauth.outlook.com
error 20 at 0 depth lookup: unable to get local issuer certificate
error certificate.pem: verification failed
Where do I find the certificate, the whole chain or the root which should be installed in my system?
You need to provide the CA certificate, and most real CAs list locations where their certificate can be found in an extension of each certificate they issue.
With OpenSSL, you can view this extension:
openssl x509 -text -noout < certificate.pem
Look for the "Authority Information Access extension", and its "CA Issuers" field to find the URL and download the certificate from Microsoft.
Because this file is encoded with DER, it needs to be transcoded to PEM for use with openssl verify:
openssl x509 -inform der < Microsoft\ IT\ TLS\ CA\ 2.crt > Microsoft\ IT\ TLS\ CA\ 2.pem
Because you just downloaded this file from who knows where over HTTP, you need some way to verify its authenticity.
You'll notice that it too lists an issuer, so you can perform this process recursively to obtain the entire certificate chain back to a root certificate that you already trust. Usually, we trust the certificates that come pre-installed on our systems. But, in theory, an attacker could have compromised that set, so people sometimes do find out-of-band means to verify their root CA certificates. What's appropriate for you depends on your application.
The certificates that you download on your way to the trust anchor are "intermediate" certificates; you don't have to trust them directly, because you'll be verifying a chain starting from one of the anchors on your system.
Concatenate the PEM-encoded certificates, including headers and footers, together in a single file of untrusted certificates. In my case, the "Baltimore CyberTrust Root" certificate that issued the "Microsoft IT TLS CA 2" intermediate certificate is pre-installed as a root CA on my system, so I only have to download the Microsoft certificate, and it's the only one in my file of "untrusted" certificates.
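If your chain contains more than one intermediate, that untrusted bundle is just a concatenation of them (the second file name here is hypothetical), which you can then pass as the -untrusted argument below:
cat Microsoft\ IT\ TLS\ CA\ 2.pem other-intermediate.pem > untrusted.pem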
Now you have the necessary information to attempt your original command:
openssl verify -verbose -untrusted Microsoft\ IT\ TLS\ CA\ 2.pem -x509_strict certificate.pem
In case anyone finds this question: I ended up with something like this, plus a lot of comments.
RUN curl --silent https://outlook.com/autodiscover/metadata/json/1 > ./outlook.com.autodiscover.metadata.json.1
RUN pem_file=certificate.pem \
&& echo "-----BEGIN CERTIFICATE-----" > $pem_file \
&& cat ./outlook.com.autodiscover.metadata.json.1 | jq --raw-output '.keys[0].keyvalue.value' >> $pem_file \
&& echo "-----END CERTIFICATE-----" >> $pem_file \
&& cat $pem_file \
&& openssl x509 -text -noout < $pem_file | grep "CA Issuers" \
&& curl https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem > rootCA.pem \
&& curl http://www.microsoft.com/pki/mscorp/Microsoft%20IT%20TLS%20CA%202.crt | openssl x509 -inform der >> rootCA.pem \
&& curl https://cacerts.digicert.com/DigiCertCloudServicesCA-1.crt | openssl x509 -inform der >> rootCA.pem \
&& cat rootCA.pem \
&& cat certificate.pem
RUN pem_file=certificate.pem && openssl verify -verbose -x509_strict -untrusted rootCA.pem $pem_file

Trouble getting a self-signed cert working in Postman

I'm finally getting into using SSL on my personal sites, so I started by trying to make a multi-domain self-signed cert for my local development (to handle api.mydomain.local, www.mydomain.local, and mydomain.local). I don't know if this was my first mistake, but...
As I couldn't find a single encompassing guide, I started by using two tutorials (from EasyEngine and DeveloperSide) to create my cert and install it on my host (Win10). I then used a DigitalOcean guide to figure out how to set up Apache on my dev server (an Ubuntu VM); up to there, no big trouble, other than some minor issues caused by working from multiple guides at the same time.
I go ahead and try to hit my API in Chrome; it gives me an untrusted-certificate warning as expected, I click through, and it works. As far as I know, this means the cert worked? However, when I try to hit my API in Postman, I get an error indicating it can't accept an untrusted cert, which is fine, as Postman has a tutorial on how to fix that. But it still doesn't work. I can't figure out what else to do to fix this; am I on the right track? Is my cert completely borked? Did I make a core mistake in trying to do a multi-domain cert?
One thing I did notice is that in the Dev Tools security tab, it says
Subject Alternative Name missing
So I'm not sure if that means my alt names aren't working, but if they weren't, it wouldn't try to load the certificate when I hit it in Chrome, right?
I had a similar issue while writing an article on SSL certificates for my website, so I wrote a shell script for it:
#!/bin/bash
CERT_COMPANY_NAME=${CERT_COMPANY_NAME:=Tarun Lalwani}
CERT_COUNTRY=${CERT_COUNTRY:=IN}
CERT_STATE=${CERT_STATE:=DELHI}
CERT_CITY=${CERT_CITY:=DELHI}
CERT_DIR=${CERT_DIR:=certs}
ROOT_CERT=${ROOT_CERT:=rootCA.pem}
ROOT_CERT_KEY=${ROOT_CERT_KEY:=rootCA.key.pem}
# make directories to work from
mkdir -p $CERT_DIR
create_root_cert(){
# Create your very own Root Certificate Authority
openssl genrsa \
-out $CERT_DIR/$ROOT_CERT_KEY \
2048
# Self-sign your Root Certificate Authority
# Since this is private, the details can be as bogus as you like
openssl req \
-x509 \
-new \
-nodes \
-key ${CERT_DIR}/$ROOT_CERT_KEY \
-days 1024 \
-out ${CERT_DIR}/$ROOT_CERT \
-subj "/C=$CERT_COUNTRY/ST=$CERT_STATE/L=$CERT_CITY/O=$CERT_COMPANY_NAME Signing Authority/CN=$CERT_COMPANY_NAME Signing Authority"
}
create_domain_cert()
{
local FQDN=$1
local FILENAME=${FQDN/\*/wild}
# Create a Device Certificate for each domain,
# such as example.com, *.example.com, awesome.example.com
# NOTE: You MUST match CN to the domain name or ip address you want to use
openssl genrsa \
-out $CERT_DIR/${FILENAME}.key \
2048
# Create a request from your Device, which your Root CA will sign
if [[ ! -z "${SAN}" ]]; then
openssl req -new \
-key ${CERT_DIR}/${FILENAME}.key \
-out ${CERT_DIR}/${FILENAME}.csr \
-subj "/C=${CERT_COUNTRY}/ST=${CERT_STATE}/L=${CERT_CITY}/O=$CERT_COMPANY_NAME/CN=${FQDN}" \
-reqexts san_env -config <(cat /etc/ssl/openssl.cnf <(cat ./openssl-san.cnf))
else
openssl req -new \
-key ${CERT_DIR}/${FILENAME}.key \
-out ${CERT_DIR}/${FILENAME}.csr \
-subj "/C=${CERT_COUNTRY}/ST=${CERT_STATE}/L=${CERT_CITY}/O=$CERT_COMPANY_NAME/CN=${FQDN}"
fi
# Sign the request from Device with your Root CA
if [[ ! -z "${SAN}" ]]; then
openssl x509 \
-sha256 \
-req -in $CERT_DIR/${FILENAME}.csr \
-CA $CERT_DIR/$ROOT_CERT \
-CAkey $CERT_DIR/$ROOT_CERT_KEY \
-CAcreateserial \
-out $CERT_DIR/${FILENAME}.crt \
-days 500 \
-extensions san_env \
-extfile openssl-san.cnf
else
openssl x509 \
-sha256 \
-req -in $CERT_DIR/${FILENAME}.csr \
-CA $CERT_DIR/$ROOT_CERT \
-CAkey $CERT_DIR/$ROOT_CERT_KEY \
-CAcreateserial \
-out $CERT_DIR/${FILENAME}.crt \
-days 500
fi
}
METHOD=$1
ARGS=${*:2}
echo "Called with $METHOD and $ARGS"
if [ -z "${METHOD}" ]; then
echo "Usage ./sslcerts.sh [create_root_cert|create_domain_cert] <args>"
echo "Below are the environment variabls you can use:"
echo "CERT_COMPANY_NAME=Company Name"
echo "CERT_COUNTRY=Country"
echo "CERT_STATE=State"
echo "CERT_CITY=City"
echo "CERT_DIR=Directory where certificate needs to be genereated"
echo "ROOT_CERT=Name of the root cert"
echo "ROOT_CERT_KEY=Name of root certificate key"
else
${METHOD} ${ARGS}
fi
You can change the environment variables at the top and generate a self-signed certificate as shown below:
$ SAN=DNS.1:*.tarunlalwani.com,DNS.2:tarunlalwani.com ./sslcerts.sh create_domain_cert '*.tarunlalwani.com'
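Note that the script expects an openssl-san.cnf file next to it (referenced via -reqexts/-extensions san_env) which is not shown here; a minimal sketch, assuming the SAN environment variable is meant to be expanded by OpenSSL's config parser, could be:
[ san_env ]
# subjectAltName is taken from the SAN environment variable set on the command line
subjectAltName = ${ENV::SAN}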
Edit 1
Earlier, browsers used to rely on the FQDN, but now some of them have started requiring the SAN, which is the "Subject Alternative Name". Generally openssl doesn't come with the v3 extensions configured, and SAN is part of the v3 extensions. So when you generated a self-signed certificate it had the correct FQDN (fully qualified domain name) in the CN but no SAN. Chrome will show an error for these certificates, but you will see Firefox working fine.
PS: Taken from article http://tarunlalwani.com/post/self-signed-certificates-trusting-them/

Use self signed certificate with cURL?

I have a Flask application running with a self-signed certificate. I'm able to send a curl request using:
curl -v -k -H "Content-Type: application/json" -d '{"data":"value1","key":"value2"}' https://<server_ip>:<port>
The verbose logs show that everything went alright.
I wanted to avoid using the -k (--insecure) option and instead specify a .pem file that curl could use. Looking at the curl man page I found that you could do this using the --cert option.
So I created a .pem file using this:
openssl rsa -in server.key -text > private.pem
CURL throws me this error when using the private.pem file:
curl: (58) unable to use client certificate (no key found or wrong pass phrase?)
Any suggestions? - or is this only possible with a properly signed certificate?
Tnx
This is just another version of this question: Using openssl to get the certificate from a server
Or put more bluntly:
Using curl --cert is wrong here; it is for client certificates.
First, get the certs your server is using:
$ echo quit | openssl s_client -showcerts -servername server -connect server:443 > cacert.pem
(-servername is necessary for SNI so that you get the right virtual server's certificate back)
Then make your curl command line use that set to verify the server in subsequent operations:
$ curl --cacert cacert.pem https://server/ [and the rest]
special teaser
Starting with curl 7.88.0 (to be shipped in February 2023), curl can save the certificates itself with the new %{certs} variable for the -w option. Blogged about here.
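Assuming a curl of at least that version, a rough sketch of grabbing and then reusing the chain could look like this (the first call still needs -k because the certificate is not trusted yet):
curl -k -w "%{certs}" https://server/ -o /dev/null > cacert.pem
curl --cacert cacert.pem https://server/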
To make a request to an HTTPS server through curl, I use the steps below.
Step 1: Generate a self-signed certificate with the command below at the root of the project you want to use it in:
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes
Step 2: Fill in the prompts with the required details, but when you get to Common Name enter localhost, e.g. Common Name (eg, fully qualified host name) []:localhost
Step 3: When your cert.pem and key.pem have been generated, start up your server, then in another terminal or command line run curl --cacert cert.pem https://localhost:443
Note: I use port 443, which is the default HTTPS port; you can use another port, just make sure the cert.pem file path is referenced correctly.