Hyperledger Fabric - Peer unable to connect to (raft) Orderer with Mutual TLS

I am running HLF on Kubernetes (3 Raft orderers & 2 peers).
Now, as Raft requires mutual TLS, I had to set up some certificates.
The 3 Raft orderers are able to communicate with each other: they elect a leader, and re-elect another leader when I bring that leader down.
When I set up the peer, I used the same CA to generate the certificates. I am able to create the channel & join it from the peer. However, I have to set CORE_PEER_MSPCONFIGPATH=$ADMIN_MSP_PATH prior to those commands, otherwise I get an Access Denied error.
I am also forced to append the following flags to every peer channel <x> command I run:
--tls --cafile $ORD_TLS_PATH/cacert.pem --certfile $CORE_PEER_TLS_CLIENTCERT_FILE --keyfile $CORE_PEER_TLS_CLIENTKEY_FILE --clientauth
I am able to create, fetch, and join the channel using the admin MSP.
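Put together, a channel create call from the peer therefore ends up looking roughly like this (the channel name and tx file are placeholders, not the exact values from my environment):
# hypothetical example; mychannel and ./mychannel.tx are placeholders
CORE_PEER_MSPCONFIGPATH=$ADMIN_MSP_PATH peer channel create \
  -o orderer-1.hlf-orderers.svc.cluster.local:7050 \
  -c mychannel -f ./mychannel.tx \
  --tls --cafile $ORD_TLS_PATH/cacert.pem \
  --certfile $CORE_PEER_TLS_CLIENTCERT_FILE \
  --keyfile $CORE_PEER_TLS_CLIENTKEY_FILE \
  --clientauth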
Now, once the channel is joined, the peer is unable to connect to the orderer; somehow a bad certificate is being presented.
Orderer Logs
A bad certificate is being used?
2019-08-15 16:07:55.699 UTC [core.comm] ServerHandshake -> ERRO 221 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.130.2.148:53922
2019-08-15 16:07:55.699 UTC [grpc] handleRawConn -> DEBU 222 grpc: Server.Serve failed to complete security handshake from "10.130.2.148:53922": remote error: tls: bad certificate
Peer Logs
These suggest that the orderer's certificate could not be validated against the ca.crt?
2019-08-15 16:10:17.990 UTC [grpc] DialContext -> DEBU 03a parsed scheme: ""
2019-08-15 16:10:17.990 UTC [grpc] DialContext -> DEBU 03b scheme "" not registered, fallback to default scheme
2019-08-15 16:10:17.991 UTC [grpc] watcher -> DEBU 03c ccResolverWrapper: sending new addresses to cc: [{orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}]
2019-08-15 16:10:17.991 UTC [grpc] switchBalancer -> DEBU 03d ClientConn switching balancer to "pick_first"
2019-08-15 16:10:17.991 UTC [grpc] HandleSubConnStateChange -> DEBU 03e pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:18.009 UTC [grpc] createTransport -> DEBU 03f grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:18.012 UTC [grpc] HandleSubConnStateChange -> DEBU 040 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:18.991 UTC [grpc] HandleSubConnStateChange -> DEBU 041 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:19.003 UTC [grpc] createTransport -> DEBU 042 grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:19.003 UTC [grpc] HandleSubConnStateChange -> DEBU 043 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:20.719 UTC [grpc] HandleSubConnStateChange -> DEBU 044 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:20.731 UTC [grpc] createTransport -> DEBU 045 grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:20.733 UTC [grpc] HandleSubConnStateChange -> DEBU 046 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:20.990 UTC [ConnProducer] NewConnection -> ERRO 047 Failed connecting to {orderer-2.hlf-orderers.svc.cluster.local:7050 [OrdererMSP]} , error: context deadline exceeded
I generated the used certificates as follows:
Orderer Admin
fabric-ca-client enroll -u https://u:p@ca.example.com -M ./OrdererMSP
Orderer Node X
As I use the same certificates for TLS, I added the hosts below for TLS purposes:
orderer-x.hlf-orderers.svc.cluster.local #kubernetes
orderer-x.hlf-orderers #kubernetes
orderer-x #kubernetes
localhost #local debug
fabric-ca-client enroll -m orderer-x \
-u https://ox:px@ca.example.com \
--csr.hosts orderer-x.hlf-orderers.svc.cluster.local,orderer-x.hlf-orderers,orderer-x,localhost \
-M orderer-x-MSP
Peer Admin
fabric-ca-client enroll -u https://u:p@ca.example.com -M ./PeerMSP
Peer Node X
fabric-ca-client enroll -m peer-x \
-u https://ox:px@ca.example.com \
--csr.hosts peer-x.hlf-peers.svc.cluster.local,peer-x.hlf-peers,peer-x,localhost \
-M peer-x-MSP
Now, all of these have the same ca.crt (/cacerts/ca.example.com.pem).
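As a quick sanity check, each node cert can be verified against that CA file with openssl (illustrative paths matching the MSP directories above; every cert should print OK):
# verify that the orderer and peer enrollment certs chain to the same CA
openssl verify -CAfile ./cacerts/ca.example.com.pem orderer-x-MSP/signcerts/cert.pem
openssl verify -CAfile ./cacerts/ca.example.com.pem peer-x-MSP/signcerts/cert.pem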
configtx.yaml
Orderer:
    <<: *OrdererDefaults
    OrdererType: etcdraft
    EtcdRaft:
        Consenters:
            - Host: orderer-1.hlf-orderers.svc.cluster.local
              Port: 7050
              ClientTLSCert: orderer-1-MSP/signcerts/cert.pem
              ServerTLSCert: orderer-1-MSP/signcerts/cert.pem
            - Host: orderer-2.hlf-orderers.svc.cluster.local
              Port: 7050
              ClientTLSCert: orderer-2-MSP/signcerts/cert.pem
              ServerTLSCert: orderer-2-MSP/signcerts/cert.pem
            - Host: orderer-3.hlf-orderers.svc.cluster.local
              Port: 7050
              ClientTLSCert: orderer-3-MSP/signcerts/cert.pem
              ServerTLSCert: orderer-3-MSP/signcerts/cert.pem
    Addresses:
        - orderer-1.hlf-orderers.svc.cluster.local:7050
        - orderer-2.hlf-orderers.svc.cluster.local:7050
        - orderer-3.hlf-orderers.svc.cluster.local:7050
I have checked multiple times that the correct certificates are mounted in the correct places and configured.
On the peer side I made sure that:
CORE_PEER_TLS_CLIENTROOTCAS_FILES is set correctly and that the (correct) file gets mounted (CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/var/hyperledger/tls/client/cert/ca.crt")
Likewise for CORE_PEER_TLS_CLIENTKEY_FILE & CORE_PEER_TLS_CLIENTCERT_FILE
CORE_PEER_TLS_CLIENTAUTHREQUIRED is set to true
On the orderer side I made sure that:
ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED is set to true
ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE is set correctly
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY is set correctly
ORDERER_GENERAL_TLS_CLIENTROOTCAS is set correctly
It seems strange to me that the orderers are able to talk to each other (as they are electing leaders), but that the peer is not able to do so.

So it appears that the tlscacerts must be present in the MSP directory(ies) PRIOR to creating the genesis/channel block. Simply mounting them in the pod at runtime is not enough.
My MSP directories (used in configtx.yaml) look like:
admincerts
tlscacerts
cacerts
...
After this, it all started to work.
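As a rough sketch of what that means in practice (profile and channel names below are illustrative, not the ones from my network):
# 1. put the TLS CA cert into the MSP folders referenced by configtx.yaml
mkdir -p OrdererMSP/tlscacerts PeerMSP/tlscacerts
cp cacerts/ca.example.com.pem OrdererMSP/tlscacerts/
cp cacerts/ca.example.com.pem PeerMSP/tlscacerts/
# 2. regenerate the genesis block and channel tx so they embed the TLS CA
configtxgen -profile SampleMultiNodeEtcdRaft -channelID system-channel -outputBlock genesis.block
configtxgen -profile TwoOrgsChannel -channelID mychannel -outputCreateChannelTx mychannel.tx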

It seems like you have got the error below:
E0923 16:30:14.963567129 31166 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate.
E0923 16:30:15.964456710 31166 ssl_transport_security.cc:188] ssl_info_callback: error occured.
According to your details, all seems to be correct.
However, check the following:
certificate signed by unknown authority -> this makes me doubt your certificate mapping a bit.
MAKE SURE
PEER:
CORE_PEER_TLS_ENABLED=true
CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE=/data/maersksea-rca-maersksea-chain.pem
CORE_PEER_TLS_CLIENTCERT_FILE=/data/tls/maersksea-peer-maersksea-client.crt
CORE_PEER_TLS_CLIENTKEY_FILE=/data/tls/maersksea-peer-maersksea-client.key
CORE_PEER_TLS_CLIENTAUTHREQUIRED=true
CORE_PEER_TLS_CLIENTROOTCAS_FILES=/data/maersksea-rca-maersksea-chain.pem
Orderer:
ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true
ORDERER_GENERAL_TLS_CLIENTROOTCAS=[/data/maersksea-rca-maersksea-chain.pem]
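If those still look right, it can also help to inspect what the orderer actually presents during the handshake and compare it with the CA file the peer trusts (paths here are taken from the question; the handshake may still be rejected for a missing client cert, but the server's chain is printed first):
# show the certificate chain served by the orderer on 7050
openssl s_client -connect orderer-2.hlf-orderers.svc.cluster.local:7050 -showcerts </dev/null
# show the subject of the CA file mounted into the peer
openssl x509 -in /var/hyperledger/tls/client/cert/ca.crt -noout -subject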

Related

erlang failed to resolve ipv6 addresses using parameter from rabbitmq

I'm using a RabbitMQ cluster in k8s which has only a pure IPv6 address. inet returns an nxdomain error when resolving the k8s service name.
The parameters passed to Erlang from RabbitMQ are:
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 128 -kernel inetrc '/etc/rabbitmq/erl_inetrc' -proto_dist inet6_tcp"
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
erl_inetrc: |-
{inet6, true}.
When RabbitMQ uses its plugin rabbit_peer_discovery_k8s to invoke the k8s API:
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2019-10-15 07:33:55.000 [debug] <0.238.0> GET https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/tazou/endpoints/zt4-crmq
2019-10-15 07:33:55.015 [debug] <0.238.0> Response: {error,{failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}}
2019-10-15 07:33:55.015 [debug] <0.238.0> HTTP Error {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.015 [info] <0.238.0> Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.016 [error] <0.237.0> CRASH REPORT Process <0.237.0> with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 167 in application_master:init/4 line 138
2019-10-15 07:33:55.016 [info] <0.43.0> Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n
In the k8s console, the address can be resolved:
[rabbitmq]# nslookup -type=AAAA kubernetes.default.svc.cluster.local
Server: 2019:282:4000:2001::6
Address: 2019:282:4000:2001::6#53
kubernetes.default.svc.cluster.local has AAAA address fd01:abcd::1
And inet can return the IPv6 address:
kubectl exec -ti zt4-crmq-0 rabbitmqctl eval 'inet:gethostbyname("kubernetes.default.svc.cluster.local").'
{ok,{hostent,"kubernetes.default.svc.cluster.local",[],inet6,16,
[{64769,43981,0,0,0,0,0,1}]}}
As far as I know, the plugin calls httpc:request to invoke the k8s API. I don't know what the gap is between httpc:request and inet:gethostbyname, and I also don't know what httpc:request uses to resolve the hostname.
I asked about the RabbitMQ plugin; they said the plugin is not aware of how Erlang resolves the address: https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/issues/55.
Is there anything else I could set in erl_inetrc so that Erlang can resolve the IPv6 address? What did I miss in the configuration, or how could I debug this from the Erlang side? I'm new to Erlang.
B.R,
Tao
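One thing I plan to check (an assumption on my side, not something confirmed yet): httpc resolves with the IPv4 address family unless its ipfamily option is changed, which would match the {inet,[inet],nxdomain} error above. A quick experiment via rabbitmqctl eval could be:
kubectl exec -ti zt4-crmq-0 rabbitmqctl eval \
  'httpc:set_options([{ipfamily, inet6}]), httpc:request("https://kubernetes.default.svc.cluster.local/").'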

Kafka inter broker SSL handshake failed

I am trying to set up inter-broker SSL (not client) authentication and keep seeing the following errors:
[2019-05-17 06:33:47,151] INFO [Controller id=1004, targetBrokerId=1004] Failed authentication with /$IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2019-05-17 06:33:47,151] INFO [SocketServer brokerId=1004] Failed authentication with /$IP (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2019-05-17 06:33:47,151] ERROR [Controller id=1004, targetBrokerId=1004] Connection to node 1004 (/$IP:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
My server.properties is:
listeners=PLAINTEXT://$IP:9092,SSL://$IP:9093
security.inter.broker.protocol=SSL
ssl.truststore.password=$PASS
ssl.keystore.password=$PASS
ssl.key.password=$PASS
ssl.endpoint.identification.algorithm=""
ssl.keystore.location=/etc/kafka/kafka.server.keystore.jks
ssl.truststore.location=/etc/kafka/kafka.server.truststore.jks
When I run `openssl s_client -debug -connect $IP:9093 -tls1` I get back a list of certificates and `Secure Renegotiation IS supported`
Despite adding `-Djavax.net.debug=all`, there's nothing in the logs that points to the problem.
Kafka version 2.2
Any ideas?
I had incorrectly set the value as ssl.endpoint.identification.algorithm="" (literal quotes) instead of ssl.endpoint.identification.algorithm= (left empty); fixing that resolved it.
This value's default was changed to https in newer Kafka releases, so setting it to nothing (disabling hostname verification) worked.
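Separately (a generic check, not part of the original fix): it's worth confirming that each broker's keystore really contains its own certificate and that the truststore contains the CA that signed the other brokers' certs:
keytool -list -v -keystore /etc/kafka/kafka.server.keystore.jks
keytool -list -v -keystore /etc/kafka/kafka.server.truststore.jks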

RabbitMQ Server TLS, client alert: Fatal - Certificate Unknown when starting service

I have RabbitMQ configured to enable TLS with certificates. The key, cert, and CA are defined in the .conf file. Upon service startup, an error is thrown. I cannot find the cause, and logging isn't giving any more information at the debug level.
I get a client alert failure and am not certain of the cause.
2019-03-22 10:04:18.690 [info] <0.7.0> Server startup complete; 4 plugins started.
* rabbitmq_amqp1_0
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_connection_helper_sup:start_link() at pid <0.690.0>
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_reader:start_link(<0.690.0>, {acceptor,{0,0,0,0},5671}) at pid <0.691.0>
2019-03-22 10:04:24.909 [info] <0.688.0> TLS server: In state certify received CLIENT ALERT: Fatal - Certificate Unknown
Our certs didn't have the correct type of X509v3 Extended Key Usage on the cert.
For X.509 auth, you'll need to assign client web authentication when creating the certificate:
X509v3 Extended Key Usage:
TLS Web Client Authentication
This won't fix the issue if your certificate CA is broken and can't be verified, but for my issue, this was the resolution.
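A quick way to inspect a certificate for that extension (the path below is just a placeholder):
# the output should list "TLS Web Client Authentication" under X509v3 Extended Key Usage
openssl x509 -in client.pem -noout -text | grep -A1 "Extended Key Usage"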

Rundeck gmail smtp setting

I've been trying for weeks to configure Rundeck to send emails using my Gmail SMTP settings on a CentOS 7 VM. I do have an SSL cert (that I purchased) configured for the server; not sure if that has something to do with it. So far I have:
Converted the "/etc/rundeck/rundeck-config.properties" file to a "/etc/rundeck/rundeck-config.groovy" file.
Updated the "/etc/sysconfig/rundeckd" file with the following settings:
export RUNDECK_WITH_SSL=true
export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy"
Edited my "/etc/rundeck/rundeck-config.groovy" with the following text:
loglevel.default="INFO"
rdeck.base="/var/lib/rundeck"
rss.enabled=false
grails.mail.default.from="user@gmail.com"
dataSource.dbCreate = "update"
dataSource.url = "jdbc:h2:file:/var/lib/rundeck/data/rundeckdb;MVCC=true;TRACE_LEVEL_FILE=4"
grails {
    mail {
        host = "smtp.gmail.com"
        username = "user@gmail.com"
        port = 465
        password = "password"
        props = ["mail.smtp.auth":"true",
                 "mail.smtp.port":"465",
                 "mail.smtp.starttls.enable":"true",
                 "mail.smtp.socketFactory.port":"465",
                 "mail.smtp.socketFactory.fallback":"false"]
    }
}
grails.serverURL="https://rundeck-server:4443"
rundeck.log4j.config.file = "/etc/rundeck/log4j.properties"
If I run it with port 465, I receive the following error.
2019-01-03 14:57:26.260 ERROR --- [eduler_Worker-1] rundeck.services.NotificationService : Error sending notification email to user@gmail.com for Execution 231: Mail server connection failed; nested exception is javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465, response: -1. Failed messages: javax.mail.MessagingException: Could not connect to SMTP host: smtp.gmail.com, port: 465, response: -1
If I configure it using port 587, I receive the following error in "/var/log/rundeck/service.log":
2019-01-03 15:05:52.121 ERROR --- [eduler_Worker-1] rundeck.services.NotificationService : Error sending notification email to user@gmail.com for Execution 233: Mail server connection failed; nested exception is javax.mail.MessagingException: Could not convert socket to TLS;
nested exception is:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. Failed messages: javax.mail.MessagingException: Could not convert socket to TLS;
nested exception is:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Alternatively, I installed mailx, and although there's an error in the logs, it successfully sends an email when I run the command echo "Your message" | mail -v -s "Message Subject" user@gmail.com. The output is:
Resolving host smtp.gmail.com . . . done.
Connecting to x.x.x.x:465 . . . connected.
Error in certificate: Peer's certificate issuer is not recognized.
Comparing DNS name: "smtp.gmail.com"
SSL parameters: cipher=AES-xxx-GCM, keysize=xxx, secretkeysize=xxx,
issuer=CN=Google Internet Authority G3,O=Google Trust Services,C=US
subject=CN=smtp.gmail.com,O=Google LLC,L=Mountain View,ST=California,C=US
220 smtp.gmail.com ESMTP b22sm9658588ios.45 - gsmtp
>>> EHLO rundeck-server
250-smtp.gmail.com at your service, [x.x.x.x]
250-SIZE 35882577
250-8BITMIME
250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-CHUNKING
250 SMTPUTF8
>>> AUTH LOGIN
334 xxxx
>>> xxxx==
334 xxxx
>>> xxxx==
235 2.7.0 Accepted
>>> MAIL FROM:<user@rundeck-server>
250 2.1.0 OK xx - gsmtp
>>> RCPT TO:<user@gmail.com>
250 2.1.5 OK xx - gsmtp
>>> DATA
354 Go ahead xx - gsmtp
>>> .
250 2.0.0 OK xx - gsmtp
>>> QUIT
221 2.0.0 closing connection xx - gsmtp
I added the following to the bottom of my "/etc/mail.rc" file
set smtp=smtps://smtp.gmail.com:465
set smtp-auth=login
set smtp-auth-user=USER.GMAIL.COM
set smtp-auth-password='password'
set ssl-verify=ignore
set nss-config-dir=/etc/pki/nssdb/
It looks like you're using a self-signed certificate or a certificate not trusted by Google; check that.
Related links:
Java: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
http://web.archive.org/web/20080820101722/http://blogs.sun.com/andreas/entry/no_more_unable_to_find
https://news.ycombinator.com/item?id=4920088
http://code.naishe.in/2011/07/looks-like-article-no-more-unable-to.html
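If the JVM running Rundeck really is missing the issuing CA, the usual fix those links describe is importing the certificate into the Java truststore; roughly like this (the alias, file name, and truststore path are illustrative and depend on your JVM):
# grab the certificate presented by smtp.gmail.com and import it into the JVM truststore
openssl s_client -connect smtp.gmail.com:465 -showcerts </dev/null | openssl x509 -outform PEM > gmail-smtp.pem
keytool -importcert -alias gmail-smtp -file gmail-smtp.pem \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit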
I send emails from my Gmail account with these settings in /etc/rundeck/rundeck-config.groovy:
grails {
    mail {
        host = "smtp.gmail.com"
        username = "USER"
        port = 587
        password = "PASSWORD"
        props = ["mail.smtp.starttls.enable":"true","mail.smtp.auth":"true","mail.smtp.socketFactory.port":"587","mail.smtp.socketFactory.fallback":"false"]
    }
}

Hyperledger Fabric - Error on Invoke / TLS handshake failed with error tls: first record does not look like a TLS handshake

Scope: This is a network with one channel composed of 3 Orgs, 1 anchor peer per organization, 1 CA per org and 1 MSP per org.
I'm facing an issue on my Hyperledger Fabric network related to the TLS handshake process that occurs when I make an invoke call to my chaincode (which is correctly installed and instantiated) through the CLI container.
ORDERER
[core.comm] ServerHandshake -> ERRO 01b TLS handshake failed with error tls: first record does not look like a TLS handshake {"server": "Orderer", "remote address": "192.168.0.23:55806"}
CLI
Error: error sending transaction for invoke: could not send: EOF - proposal response: version:1 response:<status:200 >
I couldn't find a solution to this, so any kind of help would be great.
EDIT: I'm also getting warnings like these in the orderer container when I update the anchor peers:
2018-12-12 14:06:00.518 UTC [common.deliver] Handle -> WARN 014 Error reading from 192.168.32.23:43938: rpc error: code = Canceled desc = context canceled
2018-12-12 14:06:00.518 UTC [comm.grpc.server] 1 -> INFO 016 streaming call completed {"grpc.start_time": "2018-12-12T14:06:00.509Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "192.168.32.23:43938", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.958614ms"}
2018-12-12 14:06:00.518 UTC [orderer.common.broadcast] Handle -> WARN 015 Error reading from 192.168.32.23:43940: rpc error: code = Canceled desc = context canceled
2018-12-12 14:06:00.518 UTC [comm.grpc.server] 1 -> INFO 017 streaming call completed {"grpc.start_time": "2018-12-12T14:06:00.511Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "192.168.32.23:43940", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "7.13278ms"}
2018-12-12 14:06:10.328 UTC [comm.grpc.server] 1 -> INFO 018 streaming call completed {"grpc.start_time": "2018-12-12T14:06:05.692Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "192.168.32.13:40886", "grpc.peer_subject": "CN=peer1.farmer.supply-chain-network.com,L=San Francisco,ST=California,C=US", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "4.636199388s"}
Thanks in advance
It seems like the orderer was expecting a TLS connection but the CLI did not connect with TLS.
Did you properly specify --tls --cafile <orderer-cert> during the invoke?
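For reference, a TLS-enabled invoke from the CLI looks roughly like this (the orderer address, channel, chaincode name, and arguments are placeholders, not values from this network):
peer chaincode invoke \
  -o orderer.example.com:7050 \
  --tls --cafile /path/to/orderer-tls-ca.pem \
  -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'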