I'm trying to set up Hyperledger Fabric using the new external chaincode service feature. Running the chaincode with the default configuration (no TLS or peer authentication) works as expected.
I've read quite a few tutorials on this topic, but they all use only self-signed certificates, which is not very helpful since I am configuring a production system. I would like to use the certificate authorities (fabric-ca-server) that are already running in my network and that provide the certificates for my orderers, peers, etc.
My question is: how do I generate the root_cert as well as client_cert and client_key using my existing CA? There must be a way to do this using the fabric-ca-client. I've already tried to use the CA certificate of my peer organization, but that did not work (it does not seem to contain the hostname of the chaincode service).
Thank you for your help.
Update:
I've now tried to use the fabric-ca-client's register and enroll commands to register an identity and get myself a TLS enrollment profile.
fabric-ca-client register --caname $CANAME --id.name chaincode --id.secret chainpw --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://chaincode:chainpw@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/msp" --csr.cn diplom-$validK8SHostName --csr.hosts diplom-$validK8SHostName --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://chaincode:chainpw@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/tls" --enrollment.profile tls --csr.hosts diplom-$validK8SHostName --csr.hosts localhost --tls.certfiles $certfile --loglevel error
From the generated tls directory, I took /signcerts/cert.pem, converted it to a single line via awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' ... and pasted it into the connection.json as root_cert.
Similarly, I copied the cert.pem file into the chaincode container and set the environment variable CORE_CHAINCODE_TLS_CLIENT_CACERT_FILE to point to this file. However, the peer still cannot connect to the container.
ClientHandshake -> ERRO 06c Client TLS handshake failed after 752.754µs with error: tls: first record does not look like a TLS handshake
Update 2:
It seems I had set the wrong environment variables on the chaincode service. CORE_CHAINCODE_TLS_CERT_FILE must be set to the generated /signcerts/cert.pem and CORE_CHAINCODE_TLS_KEY_FILE to the private key from /keystore.
The chaincode service seems to accept the certificates now, but the peer complains that they were signed by an unknown authority.
Update 3:
Another bit of progress. It seems I made a mistake in the fabric-ca-client commands: I accidentally set the csr.cn parameter, thereby overwriting my CA hostname. With the following commands I was able to register my chaincode service with my CA and obtain TLS certificates that are valid for my service and check out against the organisation's CA :-)
fabric-ca-client register --caname $CANAME --id.name $NAME --id.secret $PW --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://$NAME:$PW@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/msp" --csr.hosts chain-$validK8SHostName --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://$NAME:$PW@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/tls" --enrollment.profile tls --csr.hosts chain-$validK8SHostName --csr.hosts localhost --tls.certfiles $certfile --loglevel error
For additional information, https://github.com/hyperledgendary/contract-as-a-service is an example repo that shows the chaincode as an external service.
The approach taken there is indeed what Victor has described above and is, AFAIK, a good way to do this.
The Fabric CA docs on initializing a CA server mention:
The fabric-ca-server init command generates a self-signed CA
certificate unless the -u <parent-fabric-ca-server-URL> option is
specified. If the -u is specified, the server’s CA certificate is
signed by the parent Fabric CA server. In order to authenticate to the
parent Fabric CA server, the URL must be of the form
<scheme>://<enrollmentID>:<secret>@<host>:<port>, where <enrollmentID>
and <secret> correspond to an identity with an hf.IntermediateCA
attribute whose value equals true.
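For example, a minimal sketch of that flow (the identity name icaadmin and its secret are placeholders): the identity is first registered on the parent CA with the hf.IntermediateCA attribute, and the child server is then initialized against it:
fabric-ca-client register --caname $CANAME --id.name icaadmin --id.secret icapw --id.attrs 'hf.IntermediateCA=true' --tls.certfiles $certfile
fabric-ca-server init -b admin:adminpw -u https://icaadmin:icapw@$CA_HOST_ADDRESS:$nodePort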
It also says you can provide a key and certificate file that have already been generated:
If you want the Fabric CA server to use a CA signing certificate and
key file which you provide, you must place your files in the location
referenced by ca.certfile and ca.keyfile respectively. Both files must
be PEM-encoded and must not be encrypted. More specifically, the
contents of the CA certificate file must begin with -----BEGIN CERTIFICATE----- and the contents of the key file must begin with
-----BEGIN PRIVATE KEY----- and not -----BEGIN ENCRYPTED PRIVATE KEY-----.
So if you already have a CA server running, it should be possible to get a certificate signed by it, or to create one and include it in your child CA.
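The corresponding section of fabric-ca-server-config.yaml would look roughly like this (a sketch; the paths are placeholders):
ca:
  certfile: /etc/hyperledger/fabric-ca/existing-ca-cert.pem
  keyfile: /etc/hyperledger/fabric-ca/existing-ca-key.pem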
Finally I was able to find an answer to my problem. As stated in my third update, I used the fabric-ca-client to enroll an identity for my chaincode service.
fabric-ca-client register --caname $CANAME --id.name $NAME --id.secret $PW --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://$NAME:$PW@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/msp" --csr.hosts chain-$validK8SHostName --tls.certfiles $certfile --loglevel error
fabric-ca-client enroll -u https://$NAME:$PW@$CA_HOST_ADDRESS:$nodePort --caname $CANAME -M "$chainDir/tls" --enrollment.profile tls --csr.hosts chain-$validK8SHostName --csr.hosts localhost --tls.certfiles $certfile --loglevel error
Please note that validK8SHostName is simply the hostname of my container with the dots replaced by dashes (k8s does not allow dots in service or container names).
These commands generate an msp and a tls folder in my chainDir. The certificates/keys referenced as JSON properties are converted into a single line using awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}'. The environment variables of the chaincode container point to the actual files (I mounted them as secrets in my k8s cluster); see the sketch after the following list.
tls/signcerts -> Certificate needed for the "root_cert" property and the CORE_CHAINCODE_TLS_CERT_FILE env.
tls/keystore -> Private key that is set for the CORE_CHAINCODE_TLS_KEY_FILE env.
tls/tlscacerts -> Certificate needed for the CORE_CHAINCODE_TLS_CLIENT_CA_CERT_FILE env.
msp/signcerts -> Certificate needed for the "client_cert" property.
msp/keystore -> Private key that is set for the "client_key" property.
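Putting it together, the resulting connection.json looks roughly like this (a sketch; the address and the escaped PEM strings are placeholders):
{
  "address": "chain-my-service:9999",
  "dial_timeout": "10s",
  "tls_required": true,
  "client_auth_required": true,
  "root_cert": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
  "client_cert": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
  "client_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
}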
With this setup I can start chaincode containers that are TLS-terminated and only communicate with those peers that have the corresponding certificate and private key combination. All certificates and keys are thereby generated by my organisation's CA.
Related
From the article mosquitto_sub with TLS enabled, I understand that you need to provide a capath or cafile option to mosquitto_sub (and pub), but I am having trouble figuring out where those files/paths come from.
Back in October I was able to run mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --capath ssl/certs from my desktop computer (running Mint 19). That no longer works. I did an apt install ca-certificates and found the .crt files in /usr/share/ca-certificates/mozilla/, but when I used that path, it still gave me: Error: A TLS error occurred.
This is an Ubuntu 18.04 server running Let's Encrypt. I tried to point the --cafile to the chain.pem file referenced in my mosquitto configuration:
allow_anonymous false
password_file /etc/mosquitto/pwfile
listener 1883
listener 8883
certfile /etc/letsencrypt/live/mymosquitto.com/cert.pem
cafile /etc/letsencrypt/live/mymosquitto.com/chain.pem
keyfile /etc/letsencrypt/live/mymosquitto.com/privkey.pem
But that didn't work either. Can someone please help me understand what I should be doing?
From the mosquitto_sub man page:
--capath
Define the path to a directory containing PEM encoded CA certificates that are trusted. Used to enable SSL communication.
For --capath to work correctly, the certificate files must have ".crt" as the file ending and you must run "openssl rehash [path to
capath]" each time you add/remove a certificate.
If you want to use a directory of certs you will have to make sure the openssl rehash command mentioned has been run on that directory.
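For example, on Debian/Ubuntu the system directory /etc/ssl/certs is already maintained in the hashed layout, so something like this should work without a manual rehash (a sketch; credentials are placeholders):
mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --capath /etc/ssl/certs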
If you want to use a file from Let's Encrypt, use --cafile with the fullchain.pem file.
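For example (a sketch; this assumes fullchain.pem has been copied from the server to the client machine):
mosquitto_sub -h mymosquitto.com -p 8883 -v -t 'jim/#' -u <u> -P <pw> --cafile fullchain.pem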
I have rethought my situation. Since my certs get regenerated every 3 months or so, I'm going to have to redo my apps using the new files, so I decided to just go back to rolling my own. I did that using this site: http://www.steves-internet-guide.com/mosquitto-tls/ and I'm back to where I was in October. Thanks to hardillb for the advice.
Jim.
I registered for an account with an MQTT server provider.
They provide 3 ports:
port: 1xxxx
ssl port: 2xxxx
websockets(TLS only): 3xxxx
I am publishing and receiving data on port 1xxxx.
I would like to add encryption though. The mqtt server provider gives a "shared" subdomain but says:
If you want to use a custom domain for your instance you have to provide your own certificate to use with MQTT+TLS and Websockets. Certificates must be PEM encoded and the private key unencrypted. Certs are only stored on your dedicated instance. When certs are installed you can point your domain as a CNAME to hairdresser.cloudmqtt.com.
I added a CNAME record in my domain panel, which I call mqtt.mydomain.com and which resolves to the above subdomain.
In my domain panel I also added SSL from Let's Encrypt (free) to my subdomain mqtt.mydomain.com (which points to the MQTT server domain).
After adding the ssl I downloaded a zip file from the domain panel which contains 3 files:
mqtt.mydomain.com.ca
mqtt.mydomain.com.cert
mqtt.mydomain.com.key
I pasted the contents of the ca file into the CA chain field, the cert file into the Certificate field, and the key file into the Private key field.
I saved everything and restarted the instance (MQTT server).
Then I tried from my computer:
mosquitto_pub -h "mqtt.mydomain.com" -p 1xxxx -i test1 -u test1 -P pass1 -t mytopics/test1 -m "hi everyone" -d -c
This works, but since it is port 1xxxx it is not SSL.
Trying the SSL port:
mosquitto_pub -h "mqtt.mydomain.com" -p 2xxxx -i test1 -u test1 -P pass1 -t mytopics/test1 -m "hi everyone" -d -c --cafile C:\Users\CT\Downloads\certs\mqtt.mydomain.com.ca
gives me this error on cmd:
OpenSSL Error[0]: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Error: A TLS error occurred.
I tried many different commands, like passing the cert file apart from the ca, and even the key file (which is probably wrong, I guess), and I am getting different errors in the server logs, like:
OpenSSL Error: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
OpenSSL Error: error:1408F10B:SSL routines:ssl3_get_record:wrong version number
Client connection from xx.xx.xx.xx failed: error:1408F10B:SSL routines:ssl3_get_record:wrong version number.
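One way to narrow this down (a sketch; 2xxxx stands in for the real SSL port) is to inspect the certificate chain the broker actually serves and compare it against the file passed to --cafile:
openssl s_client -connect mqtt.mydomain.com:2xxxx -servername mqtt.mydomain.com </dev/null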
I am trying to get Redis 6 (with TLS enabled during compilation; tests after compilation were successful) to work. I am using a Let's Encrypt certificate and the following configuration:
tls-port 63790
tls-cert-file /etc/letsencrypt/live/myserver.net/cert.pem
tls-key-file /etc/letsencrypt/live/myserver.net/privkey.pem
tls-ca-cert-dir /etc/letsencrypt/live/myserver.net/
tls-auth-clients no
tls-protocols "TLSv1.2 TLSv1.3"
and this client command from localhost:
redis-cli --tls --cert /etc/letsencrypt/live/myserver.net/cert.pem --key /etc/letsencrypt/live/myserver.net/privkey.pem --cacert /etc/letsencrypt/live/myserver.net/fullchain.pem -h myserver.net -p 63790 -a password
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at myserver.net:63790: SSL_connect failed: certificate verify failed
This is the output from the Redis log:
Error accepting a client connection: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca
When I use the openssl client with the same certificates, I am able to connect and get a PING reply from the Redis server.
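For comparison, the working openssl test was roughly of this shape (a sketch; the exact flags are assumptions):
openssl s_client -connect myserver.net:63790 -CAfile /etc/letsencrypt/live/myserver.net/fullchain.pem -cert /etc/letsencrypt/live/myserver.net/cert.pem -key /etc/letsencrypt/live/myserver.net/privkey.pem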
No matter whether I change
tls-ca-cert-dir /etc/letsencrypt/live/myserver.net/
to tls-ca-cert on the server side, or change
--cacert /etc/letsencrypt/live/myserver.net/fullchain.pem to chain.pem on the client side,
I get the same result. I also tried every value of tls-protocols "" and changed tls-auth-clients no to tls-auth-clients optional, but I am still stuck with the same error.
OpenSSL version is 1.1.1
Redis version is 6.0.8
OS: Ubuntu 20.04
Can you help me find out why TLS is not working, please?
Thank you
Wil
Ahh, SOLVED!
I was providing the wrong CA chain. I had to concatenate the root and intermediate certs downloaded from the Let's Encrypt website into a new file. It may come in handy for someone with the same problem.
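A sketch of that fix (the downloaded file names are assumptions; tls-ca-cert-file is the Redis directive for a CA bundle file):
cat lets-encrypt-r3.pem isrgrootx1.pem > /etc/redis/ca-chain.pem
and then, in redis.conf:
tls-ca-cert-file /etc/redis/ca-chain.pem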
I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the registry container using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I started the container and pointed it to the certificates I obtained using Let's Encrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}' and a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update : Docker now should support SNI.
This means that when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents the default certificate.
The solution could be to change the default certificate of your server to one that is valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
I followed the Docker Registry installation docs precisely, and have a registry running on a remote Ubuntu VM. On that VM, the Docker container is running with the following command:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/auth:/auth \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
registry:2
On the remote VM, I have the following directory structure:
/home/myuser/
certs/
registry.crt
registry.key
/etc/docker/certs.d/myregistry.example.com:5000/
ca.crt
ca.key
The ca.crt is the exact same cert as ~/certs/registry.crt (just renamed); the same goes for ca.key and registry.key. I created the ca.* files per a suggestion from the error output you'll see below.
I am almost 100% sure the CA cert is still valid, although any help ruling that out (e.g. how can I actually tell?) would be appreciated. When I start the container and look at the Docker logs, I don't see any errors.
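To rule that out, openssl can print the validity window and issuer of the cert in question (a sketch, run against the file on the server):
openssl x509 -in certs/registry.crt -noout -subject -issuer -dates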
I then attempt to login from my local laptop (Mac):
docker login myregistry.example.com:5000
It queries me for my username, password and email (although I don't recall ever specifying an email when setting up Basic Auth). After entering these correctly (I have checked and double checked...) I get the following error:
myuser@mymachine:~/tmp$ docker login myregistry.example.com:5000
Username: my_ciuser
Password:
Email: myuser@example.com
Error response from daemon: invalid registry endpoint https://myregistry.example.com:5000/v0/:
unable to ping registry endpoint https://myregistry.example.com:5000/v0/ v2 ping attempt failed with error:
Get https://myregistry.example.com:5000/v2/: x509: certificate has expired or is not yet valid
v1 ping attempt failed with error: Get https://myregistry.example.com:5000/v1/_ping: x509:
certificate has expired or is not yet valid. If this private registry supports only HTTP or
HTTPS with an unknown CA certificate, please add
`--insecure-registry myregistry.example.com:5000` to the daemon's
arguments. In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
So from my perspective, I guess the following are possible:
The CA cert is invalid (if so, why?!?)
The CA cert is an intermediary cert (if so, how can I tell?)
The CA cert is expired (if so, how do I tell?)
This is a bad error message, and some other facet of the registry is not configured properly (if so, how do I troubleshoot further?)
Perhaps my cert is not located in the correct place on the server, or doesn't have the right permissions set (if so, where does the cert need to be?)
Something else that I would never expect in a million years
Any ideas/thoughts?
As said in the error message:
... In the case of HTTPS, if you have access to the registry's CA
certificate, no need for the flag; simply place the CA certificate
at /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
where myregistry.example.com:5000 is your CN with port.
You should copy your ca.crt into each Docker Daemon that will connect to your Docker Registry and put it in this folder: /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
After this action you need to restart the Docker daemon, for example via sudo service docker stop && sudo service docker start on CentOS (or a similar procedure on your OS).
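A minimal sketch of those two steps on a client machine (paths and hostname are placeholders):
sudo mkdir -p /etc/docker/certs.d/myregistry.example.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistry.example.com:5000/ca.crt
sudo systemctl restart docker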
I had a similar error. Then I added my private registry to the insecure-registries list in the daemon settings (in Docker Desktop, under the Docker Engine configuration).
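A sketch of the corresponding daemon.json entry (the registry host is a placeholder; note this disables certificate verification for that registry, so it is a workaround rather than a fix):
{
  "insecure-registries": ["myregistry.example.com:5000"]
}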