SSL Error: Handshake failed with fatal error - Querying fabric-sdk-rest server on a Fabric Network with TLS enabled

I started a multi-host Fabric network using Docker Swarm, made up of 1 CA server, 1 orderer, 2 peers (both in Org1, one on PC1 and one on PC2), and 2 CouchDB instances (one for each peer), with fabric-sdk-rest running on PC2.
If I disable TLS in the Fabric network, everything works fine. But if I enable TLS in the network, the SDK cannot connect to the peers and the queries fail.
Here is the configuration of the network and of the fabric-sdk-rest server:
(crypto-config.yaml)
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 0
(datasources.json)
{
  "db": {
    "name": "db",
    "connector": "memory"
  },
  "fabricDataSource": {
    "name": "fabricDataSource",
    "connector": "fabric",
    "keyStoreFile": "/tmp/fabricSDKStore",
    "fabricUser": {
      "username": "Admin@org1.example.com",
      "mspid": "Org1MSP",
      "cryptoContent": {
        "privateKey": "$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/KEY_sk",
        "signedCert": "$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem"
      }
    },
    "COMMENT_orgs": "Referenced by peers to avoid having to configure the same file location multiple times. Change CACertFile locations for your fabric",
    "orgs": [
      { "name": "org1", "CACertFile": "$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem" }
    ],
    "COMMENT_peers": "Configured array is for use with the fabric-sample when running it in a local docker set up. eventURL and publicCertFile not currently used.",
    "peers": [
      { "requestURL": "grpcs://peer1.org1.example.com:7051", "eventURL": "grpcs://peer1.org1.example.com:7053", "orgIndex": "0", "publicCertFile": "$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp/signcerts/peer1.org1.example.com-cert.pem", "hostname": "peer1" }
    ],
    "COMMENT_peers_secure": "UNUSED. This is a copy of the above with grpcs URLs. Replace peers content with this if grpcs urls are needed.",
    "peers-secure": [
      { "requestURL": "grpcs://peer1.org1.example.com:7051", "eventURL": "grpcs://peer1.org1.example.com:7053", "orgIndex": "0", "publicCertFile": "$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp/signcerts/peer1.org1.example.com-cert.pem", "hostname": "peer1" }
    ],
    "orderers": [
      { "url": "grpcs://orderer.example.com:7050", "CACertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/ca/ca.example.com-cert.pem", "publicCertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem", "hostname": "orderer" }
    ],
    "COMMENT_orderers_secure": "UNUSED. This is a copy of the above with grpcs URLs. Replace orderers content with this if grpcs urls are needed.",
    "orderers-secure": [
      { "url": "grpcs://orderer.example.com:7050", "CACertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/ca/ca.example.com-cert.pem", "publicCertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem", "hostname": "orderer" }
    ],
    "COMMENT_channels": "fabric-sdk-node Client class requires channel information to be configured during bootstrap.",
    "channels": [
      { "name": "mychannel", "peersIndex": [0], "orderersIndex": [0] }
    ],
    "channels-first-network": [
      { "name": "mychannel", "peersIndex": [0,1,2,3], "orderersIndex": [0] }
    ]
  }
}
Once the Hyperledger Fabric SDK REST server is started at https://0.0.0.0:3000, when I try to make the GET channels query from the explorer, I get the following error:
error: [fabricconnector.js]: Failed to queryChannels: Error: 14 UNAVAILABLE: Connect Failed
Error not handled for the GET request /api/fabric/1_0/channels: Error: 14 UNAVAILABLE: Connect Failed
at Object.exports.createStatusError ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:1214:28)
at InterceptingListener._callNext ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:590:42)
at InterceptingListener.onReceiveStatus ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:640:8)
at callback ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:867:24)
E0510 10:51:04.780559355 12247 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
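My current guess (an assumption on my part, since the handshake log says certificate verify failed) is that my CACertFile entries point at the identity CA certificates, while with TLS enabled the grpcs connections need to be verified against the TLS CA material that cryptogen also generates (the tlsca PEMs). A quick way to check which CA actually signed the peer's TLS certificate:
openssl s_client -connect peer1.org1.example.com:7051 \
  -CAfile $HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/tlsca/tlsca.org1.example.com-cert.pem </dev/null
# "Verify return code: 0 (ok)" would mean the tlsca PEM is the right trust anchor for the grpcs URLs.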
Has anyone ever seen this error? Can anyone help me get through this, please?

Related

Service discovery with eureka is not working in docker container

When I run my API gateway in a Docker container, it is not able to find my services that are registered in Eureka.
API Gateway
-- ocelot.json
{
  "ReRoutes": [
    {
      "DownstreamPathTemplate": "/api/values",
      "DownstreamScheme": "http",
      "UseServiceDiscovery": true,
      "ServiceName": "sampleservice",
      "UpstreamPathTemplate": "/sample-api/{catchAll}"
    }
  ],
  "GlobalConfiguration": {
    "UseServiceDiscovery": true,
    "ServiceDiscoveryProvider": {
      "Type": "Eureka",
      "Host": "myeurekaserver",
      "Port": "8761"
    }
  }
}
-- appsettings.json for API Gateway
{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": false,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "gateway",
      "hostName": "myeurekaserver",
      "port": "7000"
    }
  }
}
Service configuration -- appsettings.json
{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": true,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "sampleservice",
      "hostName": "myeurekaserver",
      "port": "7001"
    }
  }
}
docker-compose.yml
version: '3.4'
services:
  sampleapi:
    image: ${DOCKER_REGISTRY-}sampleapi
    ports:
      - "7001:80"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: SampleAPI/Dockerfile
  gateway:
    image: ${DOCKER_REGISTRY-}gateway
    ports:
      - "7000:80"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: Gateway/Dockerfile
  myeurekaserver:
    image: ${DOCKER_REGISTRY-}myeurekaserver
    ports:
      - "8761:8761"
    networks:
      - ecnetwork
    build:
      context: .
      dockerfile: MyEurekaServer/Dockerfile
networks:
  ecnetwork:
    external: true
When I run docker-compose up and check http://localhost:8761/, I find my services have been registered in the Eureka server, but when I request http://localhost:7000/sample-api/order it returns:
localhost is currently unable to handle this request. HTTP ERROR 500
I checked my console window, and the API gateway is able to discover the services; here is the log.
gateway_1 | dbug: Steeltoe.Discovery.Eureka.DiscoveryClient[0]
gateway_1 | FetchRegistryDelta returned: OK
gateway_1 | dbug: Steeltoe.Discovery.Eureka.DiscoveryClient[0]
gateway_1 | FetchRegistry succeeded
It's an application error; check your API gateway app.
500 Internal Server Error is a generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
Try to debug your application without Docker.
Check in Docker which port the service is registered on: 7000 or 80?
Then see if port 7000 is accessible to you locally via telnet.
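For example, a hypothetical fix for the sample service's appsettings.json, assuming (based on the 7001:80 mapping in the compose file above) that the container itself listens on 80 and that each service should register under its own compose service name so the gateway can resolve it on ecnetwork:
{
  "eureka": {
    "client": {
      "shouldRegisterWithEureka": true,
      "serviceUrl": "http://myeurekaserver:8761/eureka/",
      "ValidateCertificates": false
    },
    "instance": {
      "appName": "sampleservice",
      "hostName": "sampleapi",
      "port": "80"
    }
  }
}
That way the gateway resolves sampleapi on the Docker network and hits the port the container actually listens on, rather than the host-mapped 7001.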

Problem enabling Mesos API authentication in Mesosphere/DC/OS

Authentication is not enabled for the Mesos APIs by default.
After installing DC/OS, I want to configure Mesos API authentication on it.
I'm going to set up authentication for Mesos APIs such as register_frameworks, run_task, ...
The problem is that after my configuration, the DC/OS GUI and Marathon don't work correctly.
I configured DC/OS as follows:
Mesos environment variable config:
path: /opt/mesosphere/etc/mesos-master
#Authentication part
MESOS_LOG_DIR=/var/log/mesos
#Framework authentication
MESOS_AUTHENTICATORS="crammd5"
MESOS_AUTHENTICATE_FRAMEWORKS=true
MESOS_AUTHENTICATE_HTTP_FRAMEWORKS=true
MESOS_HTTP_FRAMEWORK_AUTHENTICATORS="basic"
MESOS_ACLS=/opt/mesosphere/etc/acls
MESOS_AUTHENTICATE=true
MESOS_CREDENTIALS=/opt/mesosphere/etc/mesos_credentials_auth.json
MESOS_ROLE=foo
Marathon environment variable config:
path:/opt/mesosphere/marathon
#authentication section
MARATHON_MESOS_AUTHENTICATION=enabled
#MARATHON_HTTP_CREDENTIALS=marathon:123456
MARATHON_MESOS_AUTHENTICATION_PRINCIPAL=marathon
MARATHON_MESOS_ROLE=foo
MARATHON_MESOS_AUTHENTICATION_SECRET_file=/opt/mesosphere/etc/marathon.secret
Metronome environment variable config: path: /opt/mesosphere/metronome
METRONOME_MESOS_AUTHENTICATION_ENABLED=true
METRONOME_MESOS_AUTHENTICATION_PRINCIPAL=metronome
METRONOME_MESOS_ROLE=foo
METRONOME_MESOS_AUTHENTICATION_SECRET_FILE= /opt/mesosphere/etc/metronome.secret
/opt/mesosphere/etc/metronome.secret (contains the metronome secret, without a trailing newline)
123456
/opt/mesosphere/etc/marathon.secret (contains the marathon secret, without a trailing newline)
123456
/opt/mesosphere/etc/acls
{
  "run_tasks": [
    {
      "principals": {
        "type": "ANY"
      },
      "users": {
        "type": "ANY"
      }
    }
  ],
  "register_frameworks": [
    {
      "principals": {
        "type": "ANY"
      },
      "roles": {
        "type": "ANY"
      }
    }
  ]
}
/opt/mesosphere/etc/mesos_credentials_auth.json
{
  "credentials": [
    {
      "principal": "principal1",
      "secret": "secret1"
    },
    {
      "principal": "principal2",
      "secret": "secret2"
    },
    {
      "principal": "marathon",
      "secret": "123456"
    },
    {
      "principal": "metronome",
      "secret": "123456"
    }
  ]
}
After enabling this configuration, I restart the services:
systemctl stop dcos-mesos-master.service
systemctl start dcos-mesos-master.service
systemctl stop dcos-marathon.service
systemctl start dcos-marathon.service
systemctl stop dcos-metronome.service
systemctl start dcos-metronome.service
the http://IP/services page in DC/OS doesn't work. I think it's because the Marathon authentication isn't set correctly, because this address doesn't work after enabling the authentication configuration:
http://IP/service/marathon/v2/deployments?_timestamp=1560449507192
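To confirm whether the master actually picked up the authentication flags after the restart (a diagnostic sketch, assuming the master serves its state on the default port 5050), I can query its /flags endpoint:
curl -s http://IP:5050/flags | grep -o '"authenticate[^,]*'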
I got these errors in the Mesos log after enabling Metronome authentication:
I0613 17:35:12.176092 305 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.177258 304 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.177523 304 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
I0613 17:35:12.177582 304 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.178586 302 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.178850 302 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
After searching, I finally found my answer:
These security features are only available in DC/OS Enterprise; you can't configure them in the open source version.
I also opened a GitHub issue with more details (I hope it will be useful):
https://github.com/mesosphere/marathon/issues/6942

My AKS Cluster was brought down, how can I recover?

I have been playing around with load-testing my application on a single agent cluster in AKS. During the testing, the connection to the dashboard stalled and never resumed. My application seems down as well, so I am assuming the cluster is in a bad state.
The API server is restate-f4cbd3d9.hcp.centralus.azmk8s.io
kubectl cluster-info dump shows the following error:
{
  "name": "kube-dns-v20-6c8f7f988b-9wpx9.14fbbbd6bf60f0cf",
  "namespace": "kube-system",
  "selfLink": "/api/v1/namespaces/kube-system/events/kube-dns-v20-6c8f7f988b-9wpx9.14fbbbd6bf60f0cf",
  "uid": "47f57d3c-d577-11e7-88d4-0a58ac1f0249",
  "resourceVersion": "185572",
  "creationTimestamp": "2017-11-30T02:36:34Z",
  "InvolvedObject": {
    "Kind": "Pod",
    "Namespace": "kube-system",
    "Name": "kube-dns-v20-6c8f7f988b-9wpx9",
    "UID": "9d2b20f2-d3f5-11e7-88d4-0a58ac1f0249",
    "APIVersion": "v1",
    "ResourceVersion": "299",
    "FieldPath": "spec.containers{kubedns}"
  },
  "Reason": "Unhealthy",
  "Message": "Liveness probe failed: Get http://10.244.0.4:8080/healthz-kubedns: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)",
  "Source": {
    "Component": "kubelet",
    "Host": "aks-agentpool-34912234-0"
  },
  "FirstTimestamp": "2017-11-30T02:23:50Z",
  "LastTimestamp": "2017-11-30T02:59:00Z",
  "Count": 6,
  "Type": "Warning"
}
There are also some pod sync errors in kube-system.
Example of issue:
az aks browse -g REstate.Server -n REstate
Merged "REstate" as current context in C:\Users\User\AppData\Local\Temp\tmp29d0conq
Proxy running on http://127.0.0.1:8001/
Press CTRL+C to close the tunnel...
error: error upgrading connection: error dialing backend: dial tcp 10.240.0.4:10250: getsockopt: connection timed out
You'll probably need to SSH to the node to see if the kubelet service is running. For the future, you can set resource quotas to keep workloads from exhausting all resources in the cluster nodes.
Resource Quotas - https://kubernetes.io/docs/concepts/policy/resource-quotas/
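A minimal ResourceQuota sketch (illustrative; the namespace and limits are assumptions to adapt to your workload):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: load-test-quota
  namespace: default
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
Apply it with kubectl apply -f quota.yaml; pods in that namespace must then declare requests/limits that fit within the quota.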

Chaincode container can't connect to the local peer due to certificate signed by unknown authority

First of all, I'd like to mention that my setup works like a charm when TLS is not enabled. It works even in Docker Swarm on AWS.
The problem starts when I enable TLS. When I deploy my .bna file via Composer, my newly created chaincode container produces the following logs:
2017-08-23 13:14:16.389 UTC [Composer] Info -> INFO 001 Setting the Composer pool size to 8
2017-08-23 13:14:16.402 UTC [shim] userChaincodeStreamGetter -> ERRO 002 Error trying to connect to local peer: x509: certificate signed by unknown authority
Error starting chaincode: Error trying to connect to local peer: x509: certificate signed by unknown authority
The funny thing is that this works when deploying the .bna via the Composer Playground (while TLS is still enabled in my Fabric)...
Below is my connection profile:
{
  "name": "test",
  "description": "test",
  "type": "hlfv1",
  "orderers": [
    {
      "url": "grpcs://orderer.company.com:7050",
      "cert": "-----BEGIN CERTIFICATE-----blabla1\n-----END CERTIFICATE-----\n"
    }
  ],
  "channel": "channelname",
  "mspID": "CompanyMSP",
  "ca": {
    "url": "https://ca.company.com:7054",
    "name": "ca-company",
    "trustedRoots": [
      "-----BEGIN CERTIFICATE-----\nblabla2\n-----END CERTIFICATE-----\n"
    ],
    "verify": true
  },
  "peers": [
    {
      "requestURL": "grpcs://peer0.company.com:7051",
      "eventURL": "grpcs://peer0.company.com:7053",
      "cert": "-----BEGIN CERTIFICATE-----\nbalbla3\n-----END CERTIFICATE-----\n"
    }
  ],
  "keyValStore": "/home/composer/.composer-credentials",
  "timeout": 300
}
My certs have been generated by cryptogen tool, hence:
orderers.0.cert contains value of crypto-config/ordererOrganizations/company.com/orderers/orderer.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
peers.0.cert contains value of crypto-config/peerOrganizations/company.com/peers/peer0.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
ca.trustedRoots.0 contains crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/ca.crt
I've got the feeling that my trustedRoots certificate is wrong...
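One way to test that feeling (a sketch, using the paths cryptogen generated for my org) is to verify the peer's TLS server certificate against the candidate root:
openssl verify \
  -CAfile crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/ca.crt \
  crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/server.crt
If this reports OK, then tls/ca.crt is the right root for the peer's TLS endpoint, and trustedRoots is probably fine; otherwise it is indeed the wrong certificate.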
UPDATE
When I do docker inspect chaincode_container, I can see that it is missing the ENV variable CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt, while the chaincode container deployed via the Playground does have it...
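A quick way to compare the environment of the failing container with the one the Playground creates (substitute the real container names from docker ps):
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' chaincode_container | grep CORE_PEER_TLS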
When the chaincode image is built, the TLS certificate that it uses to build the trusted roots is the rootcert from:
# TLS Settings
# Note that peer-chaincode connections through chaincodeListenAddress is
# not mutual TLS auth. See comments on chaincodeListenAddress for more info
tls:
  enabled: false
  cert:
    file: tls/server.crt
  key:
    file: tls/server.key
  rootcert:
    file: tls/ca.crt
The TLS certificate that the peer uses to run the gRPC service is the cert one.
By the way - You're using the release branch code, not the one in master - is that correct?

Can't Connect to Service via Marathon-lb using DCOS

I recently went through the tutorial for load balancing apps in DC/OS using marathon-lb (in the example they balance some nginx containers: https://dcos.io/docs/1.9/networking/marathon-lb/marathon-lb-advanced-tutorial/). I am trying to use this approach to internally load balance my own custom application. The custom app I am using is a Play Scala app.
I have the internal marathon-lb set up and can successfully use it for the nginx container, but when I try to use my own Docker image I cannot get this to work. I start up my service with my custom image and can access the service fine by using the IP and port that gets assigned to it (i.e. if the service gets deployed on 10.0.0.0 and is available on port 1234, then curl http://10.0.0.0:1234/ works as expected and I can also make my API calls as defined in my application routes).
However, when I try to access the app through the load balancer (curl -i http://marathon-lb-internal.marathon.mesos:10002, where 10002 is the service port), I get this message:
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
For reference, here is the JSON file I'm using to start my custom service:
{
  "id": "my-app",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my_repo/my_image:1.0.0",
      "network": "BRIDGE",
      "portMappings": [
        { "hostPort": 0, "containerPort": 9000, "servicePort": 10002, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "env", "value": "USER_NAME=user" },
        { "key": "env", "value": "USER_PASSWORD=password" }
      ],
      "forcePullImage": true
    }
  },
  "instances": 1,
  "cpus": 1,
  "mem": 1000,
  "healthChecks": [{
    "protocol": "HTTP",
    "path": "/v1/health",
    "portIndex": 0,
    "timeoutSeconds": 10,
    "gracePeriodSeconds": 10,
    "intervalSeconds": 2,
    "maxConsecutiveFailures": 10
  }],
  "labels": {
    "HAPROXY_GROUP": "internal"
  },
  "uris": [ "https://s3.amazonaws.com/my_bucket/my_docker_credentials" ]
}
I had the same problem and found the solution here:
marathon-lb health check failing on all spray.io containers
You need to add
"HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
to your config so that the REST layer doesn't bark on the health check from Marathon.
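In the app definition above, that label sits next to HAPROXY_GROUP, e.g. (a sketch reusing the value quoted above verbatim):
"labels": {
  "HAPROXY_GROUP": "internal",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS": " http-send-name-header Host\n timeout check {healthCheckTimeoutSeconds}s\n"
}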