This certificate lacks a "hosts" field. This makes it unsuitable for websites

When I execute this command to generate the Kubernetes certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
why does the cfssl tool show the following?
[root@iZuf63refzweg1d9dh94t8Z ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
> -config=ca-config.json \
> -profile=kubernetes \
> kubernetes-csr.json | cfssljson -bare kubernetes
2019/08/25 20:02:12 [INFO] generate received request
2019/08/25 20:02:12 [INFO] received CSR
2019/08/25 20:02:12 [INFO] generating key: rsa-2048
2019/08/25 20:02:13 [INFO] encoded CSR
2019/08/25 20:02:13 [INFO] signed certificate with serial number 540759253485135214776496461610290604881680785507
2019/08/25 20:02:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
This is my Kubernetes CSR config (kubernetes-csr.json):
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.19.104.230",
    "172.19.150.82",
    "172.19.104.231"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Obviously it contains a hosts field. I am using cfssl version 1.2. Is this a bug?

Update cfssl from v1.2 to v1.3.4 (the latest version):
go get -u github.com/cloudflare/cfssl/cmd/cfssl
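To confirm whether the hosts actually made it into the signed certificate as SANs, a quick check (a sketch, assuming openssl is available and the files produced by cfssljson -bare kubernetes are in the current directory):
# Print the Subject Alternative Name section of the generated certificate
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"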

Related

Self-Signed SSL certificate for local IP

The development certificate created by the command dotnet dev-certs https --trust works. I want to create a self-signed certificate for a local IP in my LAN.
I've created a self-signed certificate by command:
New-SelfSignedCertificate -NotBefore (Get-Date) -NotAfter (Get-Date).AddYears(2) -Subject "192.168.1.100" -KeyAlgorithm "RSA" -KeyLength 2048 -HashAlgorithm "SHA256" -CertStoreLocation "Cert:\LocalMachine\My" -KeyUsage KeyEncipherment -FriendlyName "HTTPS development certificate" -TextExtension @("2.5.29.19={critical}{text}","2.5.29.37={critical}{text}1.3.6.1.5.5.7.3.1","2.5.29.17={critical}{text}DNS=192.168.1.100")
Then I copied the certificate into the trusted folder of the certificate store:
After this I edited appsettings.Development.json:
"Kestrel": {
"Endpoints": {
"localhostHttp": {
"Url": "http://192.168.1.100:5000"
},
"localhostHttps": {
"Url": "https://192.168.1.100:5001",
"Certificate": {
"Subject": "192.168.1.100",
"Store": "Root",
"Location": "CurrentUser",
"AllowInvalid": true
}
}
}
But there is no result.
Is it possible to create a certificate like this?
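One way to see which certificate Kestrel is actually serving, and whether its SAN matches the IP, is a quick openssl check (a sketch, assuming openssl is installed and the app is listening on 192.168.1.100:5001 as configured above):
# Fetch the served certificate and print its Subject Alternative Name
openssl s_client -connect 192.168.1.100:5001 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"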

cfssl gencert fails on generation from json

cfssl v1.2
cfssl gencert -initca ca/ca-csr.json
where the JSON is:
{
  "hosts": [
    "cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CA",
      "L": "Montreal",
      "O": "My Inc.",
      "OU": "QC",
      "ST": "Montreal"
    }
  ]
}
I get an error message:
Must specify bundle target through -cert or -domain
From the example it seems I am using it the same way.
Checking the code, I am not sure how it would reach that condition.
Q: What is the correct way to use it to generate the certificate from the JSON file?
You are using the incorrect binary.
Most likely you went to the releases section and obtained the first binary (cfssl-bundle_*) for your platform and renamed/aliased it to cfssl.
That is not the one that the linked tutorial uses.
Further down in the list of release artifacts you'll find a cfssl_<version>_<platform> binary which is the utility you want.
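A quick way to double-check which binary you actually have on the PATH (a sketch; the version and platform in the artifact name are assumptions, adjust them to your release and OS):
# Download the plain cfssl artifact, not the cfssl-bundle one, and check it
curl -L -o cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
chmod +x cfssl
./cfssl version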

Chaincode container can't connect to the local peer due to certificate signed by unknown authority

First of all I'd like to mention that my setup works like a charm when there's no TLS enabled. It even works in Docker Swarm on AWS.
The problem starts when I enable TLS. When I deploy my .bna file via Composer, my newly created chaincode container produces the following logs:
2017-08-23 13:14:16.389 UTC [Composer] Info -> INFO 001 Setting the Composer pool size to 8
2017-08-23 13:14:16.402 UTC [shim] userChaincodeStreamGetter -> ERRO 002 Error trying to connect to local peer: x509: certificate signed by unknown authority
Error starting chaincode: Error trying to connect to local peer: x509: certificate signed by unknown authority
The funny thing is that this works when deploying the .bna via the Composer Playground (with TLS still enabled in my Fabric)...
Below is my connection profile:
{
  "name": "test",
  "description": "test",
  "type": "hlfv1",
  "orderers": [
    {
      "url": "grpcs://orderer.company.com:7050",
      "cert": "-----BEGIN CERTIFICATE-----blabla1\n-----END CERTIFICATE-----\n"
    }
  ],
  "channel": "channelname",
  "mspID": "CompanyMSP",
  "ca": {
    "url": "https://ca.company.com:7054",
    "name": "ca-company",
    "trustedRoots": [
      "-----BEGIN CERTIFICATE-----\nblabla2\n-----END CERTIFICATE-----\n"
    ],
    "verify": true
  },
  "peers": [
    {
      "requestURL": "grpcs://peer0.company.com:7051",
      "eventURL": "grpcs://peer0.company.com:7053",
      "cert": "-----BEGIN CERTIFICATE-----\nbalbla3\n-----END CERTIFICATE-----\n"
    }
  ],
  "keyValStore": "/home/composer/.composer-credentials",
  "timeout": 300
}
My certs have been generated by the cryptogen tool, hence:
orderers.0.cert contains value of crypto-config/ordererOrganizations/company.com/orderers/orderer.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
peers.0.cert contains value of crypto-config/peerOrganizations/company.com/peers/peer0.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
ca.trustedRoots.0 contains crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/ca.crt
I've got the feeling that my trustedRoots certificate is wrong...
UPDATE
When I do docker inspect chaincode_container I can see that it is missing the ENV variable CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt, while the chaincode container deployed via Playground does have it...
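A minimal way to list just the environment variables of that container (a sketch, assuming the same container name as above):
# Show only the ENV entries of the chaincode container
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' chaincode_container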
When the chaincode image is built, the TLS certificate that it uses to build the trusted roots is the rootcert from:
# TLS Settings
# Note that peer-chaincode connections through chaincodeListenAddress is
# not mutual TLS auth. See comments on chaincodeListenAddress for more info
tls:
  enabled: false
  cert:
    file: tls/server.crt
  key:
    file: tls/server.key
  rootcert:
    file: tls/ca.crt
The TLS certificate that the peer uses to run the gRPC service is the cert one.
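As a sanity check, you can verify which CA actually signed the peer's TLS certificate (a sketch, assuming the standard cryptogen layout where the peer's TLS server certificate sits next to tls/ca.crt as tls/server.crt):
# Should print "OK" if tls/ca.crt is the issuer of the peer's TLS server cert
openssl verify -CAfile crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/ca.crt \
  crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/server.crt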
By the way - You're using the release branch code, not the one in master - is that correct?

logstash http_poller ssl certification issue

I am trying to use the logstash http_poller input to query a server REST API. I downloaded the server's PEM through the browser and generated a JKS file with keytool, but I still get the error "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". I don't know what's wrong.
The config like below:
http_poller {
  urls => {
    restapi => {
      method => get
      url => "https://path_to_resources"
      headers => {
        Accept => "application/json"
      }
      truststore => "/path/generated.truststore.jks"
      truststore_password => "xxx"
      ssl_certificate_validation => false
      auth => {
        user => "xxx"
        password => "xxx"
      }
    }
  }
  request_timeout => 60
  interval => 60000
  codec => "json"
  metadata_target => "http_poller_metadata"
}
}
By the way, what is the impact if ssl_certificate_validation is set to false?
I interpret the OP's intention as hoping to be able to disable TLS verification, which we still can't (as of logstash-7.11.1), so I'll press on with how to build a trust store for these cases. This question was one of my hits while pursuing the same goal.
Some appliances will be running self-signed certificates (another discussion, people...), so a small script to set up such a trust store can be helpful, especially if you are about to set up some automation internally.
Another caveat is that the self-signed certificate still has to have a matching hostname.
Based on the example from https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http_poller.html
NB! Further error checking, etc. is left at your discretion.
#!/bin/bash
# Fetch an http server's TLS certificate and
# create or update a JAVA keystore / truststore
usage () {
  echo "usage: get-cert.sh <hostname>:<port>"
  exit 1
}
TRUSTSTORE=cacert/trust.jks
PARAM=$1
HOSTNAME=$(echo "$PARAM" | cut -d: -f 1)
PORT=$(echo "$PARAM" | cut -d: -f 2)
REST=$(echo "$PARAM" | cut -d: -f 3-)
[ -z "$HOSTNAME" ] && usage
[ -z "$PORT" ] && usage
[ -n "$REST" ] && usage
OUTPUT=$(
  openssl \
    s_client \
    -showcerts \
    -connect "${HOSTNAME}":"${PORT}" </dev/null 2>/dev/null | \
  openssl \
    x509 \
    -outform PEM)
EC=$?
[ $EC -ne 0 ] && { echo "ERROR EC=$EC - $OUTPUT" ; exit $EC ; }
keytool \
  -import \
  -storepass changeit \
  -alias ${HOSTNAME} \
  -noprompt \
  -file <(echo "$OUTPUT") \
  -keystore ${TRUSTSTORE}
I'm using some bash-specific possibilities here. The alternative is to go through temporary files, as per the official example (see the link above).
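A possible usage, assuming the cacert/ directory already exists and that my-appliance.internal:9243 is just a placeholder for your endpoint:
# Import the appliance's certificate into cacert/trust.jks (storepass changeit)
./get-cert.sh my-appliance.internal:9243
Afterwards, point the http_poller config at the generated store, e.g. truststore => "cacert/trust.jks" and truststore_password => "changeit".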
Apparently your certificate is invalid.
Regarding ssl_certificate_validation: it doesn't have a real impact. http_poller is based on Manticore, a Ruby library which relies on Apache HttpClient, which does not support this hook.

What username does the kubernetes kubelet use when contacting the kubernetes API?

So I've been trying to implement ABAC authorization in the kubernetes API, with the following arguments in my kube-api manifest file.
- --authorization-mode=ABAC
- --authorization-policy-file=/etc/kubernetes/auth/abac-rules.json
And the following content in the abac-rules.json file.
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*" }}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "apiGroup": "*", "namespace": "*", "resource": "*", "readonly": false}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "apiGroup": "*", "namespace": "*", "resource": "*", "readonly": false }}
However, the kubelets can't seem to connect to the API servers. I read that the username is taken from the CN field of the -subject in the certificate used to authenticate the connection (see here). In this case that's the FQDN of the host; I've tried that too with no luck.
Any ideas what I'm doing wrong?
Cheers in advance.
Edit:
I'm using Kubernetes version 1.2.2, for both kubectl and the hyperkube Docker image.
Figured out the answer, documenting here for anyone else having the same issue with ABAC.
The kubelet user is defined in the worker configuration, which in my case is a YAML file I store at /etc/kubernetes/worker-kubeconfig.yaml, the content of which is shown below:
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://10.96.17.34:8443
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: default
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
So the user it's connecting with is kubelet.
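As a quick sanity check that these credentials are accepted by the API server (a sketch; assumes kubectl is installed on the worker):
# Authenticate with the kubelet's client certificate defined in the kubeconfig above
kubectl --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml get nodes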
In my case I had created my certificates with CN=${MINION_FQDN}, and since this did not match "kubelet", the ABAC policies weren't met. I regenerated my certificates with the following arguments and now the nodes authenticate successfully :)
# Create worker key
openssl genrsa -out $OUT/${WORKER_HOSTNAME}/worker-key.pem 2048
# Creating Worker CSR...
WORKER_FQDN=${WORKER_FQDN} WORKER_IP=${WORKER_IP} openssl req -new -key $OUT/${WORKER_HOSTNAME}/worker-key.pem -out $OUT/${WORKER_HOSTNAME}/worker.csr -subj "/CN=kubelet" -config $SSL_CONFIG
# Creating Worker Cert
WORKER_FQDN=${WORKER_FQDN} WORKER_IP=${WORKER_IP} openssl x509 -req -in $OUT/${WORKER_HOSTNAME}/worker.csr -CA $CA/ca.pem -CAkey $CA/ca-key.pem -CAcreateserial -out $OUT/${WORKER_HOSTNAME}/worker.pem -days 365 -extensions v3_req -extfile $SSL_CONFIG
The important part of which is this:
-subj "/CN=kubelet"
Hope this helps someone else.