Hyperledger Fabric - Error on Invoke / TLS handshake failed with error tls: first record does not look like a TLS handshake

Scope: This is a network with one channel composed of 3 Orgs, 1 anchor peer per organization, 1 CA per org and 1 MSP per org.
I'm facing an issue on my Hyperledger Fabric network related to the TLS handshake process that occurs when I make an invoke call to my chaincode (which is correctly installed and instantiated) through the CLI container.
ORDERER
[core.comm] ServerHandshake -> ERRO 01b TLS handshake failed with error tls: first record does not look like a TLS handshake {"server": "Orderer", "remote address": "192.168.0.23:55806"}
CLI
Error: error sending transaction for invoke: could not send: EOF - proposal response: version:1 response:<status:200 >
I couldn't find a solution to that so any kind of help would be great.
EDIT: I'm also having warnings like this one in the orderer container when I update the anchor peers:
2018-12-12 14:06:00.518 UTC [common.deliver] Handle -> WARN 014 Error reading from 192.168.32.23:43938: rpc error: code = Canceled desc = context canceled
2018-12-12 14:06:00.518 UTC [comm.grpc.server] 1 -> INFO 016 streaming call completed {"grpc.start_time": "2018-12-12T14:06:00.509Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "192.168.32.23:43938", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "8.958614ms"}
2018-12-12 14:06:00.518 UTC [orderer.common.broadcast] Handle -> WARN 015 Error reading from 192.168.32.23:43940: rpc error: code = Canceled desc = context canceled
2018-12-12 14:06:00.518 UTC [comm.grpc.server] 1 -> INFO 017 streaming call completed {"grpc.start_time": "2018-12-12T14:06:00.511Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "192.168.32.23:43940", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "7.13278ms"}
2018-12-12 14:06:10.328 UTC [comm.grpc.server] 1 -> INFO 018 streaming call completed {"grpc.start_time": "2018-12-12T14:06:05.692Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "192.168.32.13:40886", "grpc.peer_subject": "CN=peer1.farmer.supply-chain-network.com,L=San Francisco,ST=California,C=US", "error": "context finished before block retrieved: context canceled", "grpc.code": "Unknown", "grpc.call_duration": "4.636199388s"}
Thanks in advance

It seems like the orderer was expecting a TLS connection, but the CLI did not connect with TLS.
Did you properly specify --tls --cafile <orderer-cert> during the invoke?
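For background, the orderer's "first record does not look like a TLS handshake" error literally means the first bytes it received were not a TLS handshake record: the CLI was speaking plaintext gRPC to a TLS port. A rough sketch of that check in Python (a hypothetical helper, not the actual Fabric/Go code):

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """Rough check in the spirit of the server's error: a TLS connection
    must begin with a handshake record, whose first byte is the content
    type 0x16, followed by the 0x03 protocol-version major byte."""
    return len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03

print(looks_like_tls(b"\x16\x03\x01\x02\x00"))  # TLS ClientHello prefix -> True
print(looks_like_tls(b"PRI * HTTP/2.0"))        # plaintext gRPC (h2c) preface -> False
```

Whenever a server logs this error, one side is plaintext and the other is TLS; enabling --tls on the client (or disabling TLS on the server) resolves the mismatch.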

Related

Tls 1.3 client does not report failed handshake when client certificate verification by server failed

I have a C client using OpenSSL that is failing a test when using a certificate that fails validation on the server side during the server's SSL_do_handshake() call. When the application was using TLS 1.2, the SSL_do_handshake() failure on the server was reported back to the client as a failure return value when the client called SSL_do_handshake().
After upgrading my application to OpenSSL 1.1.1 and TLS 1.3, I noticed that while the validation error still occurs on the server, it is no longer reported back to the client.
I'm aware that the handshake protocol was completely rewritten in TLS 1.3; however, it seems like, with all of the various callbacks available, I should somehow be able to determine on the client side that authentication has failed without having to attempt to write data to the server.
Has anyone else encountered this and can they recommend a path forward?
The server and client in both TLSv1.2 and TLSv1.3 consider the handshake to be complete when they have both written a "Finished" message, and received one from the peer. This is what the handshake looks like in TLSv1.2 (taken from RFC5246):
      Client                                               Server

      ClientHello                  -------->
                                                      ServerHello
                                                     Certificate*
                                               ServerKeyExchange*
                                              CertificateRequest*
                                   <--------      ServerHelloDone
      Certificate*
      ClientKeyExchange
      CertificateVerify*
      [ChangeCipherSpec]
      Finished                     -------->
                                               [ChangeCipherSpec]
                                   <--------             Finished
      Application Data             <------->     Application Data
So here you can see that the client sends its Certificate and Finished messages in its second flight of communication with the server. It then waits to receive the ChangeCipherSpec and Finished messages back from the server before it considers the handshake "complete" and it can start sending application data.
This is the equivalent flow for TLSv1.3 taken from RFC8446:
       Client                                           Server

Key  ^ ClientHello
Exch | + key_share*
     | + signature_algorithms*
     | + psk_key_exchange_modes*
     v + pre_shared_key*       -------->
                                                  ServerHello  ^ Key
                                                 + key_share*  | Exch
                                            + pre_shared_key*  v
                                        {EncryptedExtensions}  ^  Server
                                        {CertificateRequest*}  v  Params
                                               {Certificate*}  ^
                                         {CertificateVerify*}  | Auth
                                                   {Finished}  v
                               <--------  [Application Data*]
     ^ {Certificate*}
Auth | {CertificateVerify*}
     v {Finished}              -------->
       [Application Data]      <------->  [Application Data]
One of the advantages of TLSv1.3 is that it speeds up the time taken to complete a handshake. In TLSv1.3 the client receives the "Finished" message from the server before it sends its own Certificate and Finished messages back. By the time the client sends its "Finished" message, it has already received the server's "Finished", so the handshake is complete and it can immediately start sending application data.
This of course means that the client won't know whether the server has accepted the certificate or not until it next reads data from the server. If it has been rejected then the next thing the client will read will be a failure alert (otherwise it will be normal application data).
I'm aware that the handshake protocol was completely rewritten in TLS 1.3; however, it seems like, with all of the various callbacks available, I should somehow be able to determine on the client side that authentication has failed without having to attempt to write data to the server.
It's not writing data to the server that is important - it is reading data. Only then will you know whether the server has sent an alert or just normal application data. Until that data has been read there are no callbacks available in OpenSSL that will tell you this - because OpenSSL itself does not know due to the underlying protocol.

MQTT Error BR_ERR_BAD_VERSION on Shelly 1PM with Tasmota

I am trying to connect a Shelly 1PM smart power relay to a managed MQTT broker.
The firmware on the device is a custom-built Tasmota 8.3.1 from the dev branch with USE_MQTT_TLS enabled. The port is correctly set to 8883 for TLS, and the broker service is running at mqtt.bosch-iot-hub.com
When the device boots up, I can see the log messages on the serial port as follows:
23:03:03 MQT: Connect failed to mqtt.bosch-iot-hub.com:8883, rc 4. Retry in 10 sec
23:03:14 MQT: Attempting connection...
23:03:14 MQT: TLS connection error: 0
Return code 4 is, according to the Tasmota documentation (https://tasmota.github.io/docs/TLS/), the code for BR_ERR_BAD_VERSION.
This error constant comes from BearSSL and means "Incoming record version does not match the expected version." (according to http://sources.freebsd.org/HEAD/src/contrib/bearssl/tools/errors.c)
Using an online TLS testing tool to check mqtt.bosch-iot-hub.com, it supports only TLS 1.2 (1.3, 1.1 and 1.0 are disabled, as are SSLv2 and SSLv3). The BearSSL website states that it supports TLS 1.2.
I tried raising the log level of Tasmota in my_user_config.h, but it does not log any more verbose or detailed information.
#define SERIAL_LOG_LEVEL LOG_LEVEL_DEBUG_MORE // [SerialLog] (LOG_LEVEL_NONE, LOG_LEVEL_ERROR, LOG_LEVEL_INFO, LOG_LEVEL_DEBUG, LOG_LEVEL_DEBUG_MORE)
What is the error message supposed to mean? Is it a TLS incompatibility of the BearSSL stack or on the service side?
How can I enable verbose logging on Tasmota to see detailed TLS handshake information?
Anything else I am missing?
I appreciate that after 6 months the question may be a little stale; however, the error code is not the TLS one you describe, but rather the return code for the MQTT connection, as described in
https://tasmota.github.io/docs/MQTT/#return-codes-rc
which means your error code corresponds to
4 MQTT_CONNECT_BAD_CREDENTIALS - the username/password were rejected
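For reference, the positive rc values Tasmota prints map onto the MQTT 3.1.1 CONNACK return codes (negative values are PubSubClient transport states). A small lookup sketch; the code numbers come from the MQTT 3.1.1 spec, the helper name is my own:

```python
# MQTT 3.1.1 CONNACK return codes (the positive rc values in Tasmota's log).
CONNACK_CODES = {
    0: "connection accepted",
    1: "unacceptable protocol version",
    2: "identifier rejected",
    3: "server unavailable",
    4: "bad user name or password",
    5: "not authorized",
}

def explain_rc(rc: int) -> str:
    """Translate an MQTT CONNACK return code into a human-readable reason."""
    return CONNACK_CODES.get(rc, "unknown or transport-level code")

print(explain_rc(4))  # -> bad user name or password
```

So rc 4 here points at the broker rejecting the credentials, not at a TLS version problem.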

Erlang fails to resolve IPv6 addresses using parameters from RabbitMQ

I'm using a RabbitMQ cluster in k8s which has only pure IPv6 addresses. inet returns an nxdomain error when resolving the k8s service name.
The parameters passed to Erlang from RabbitMQ are:
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 128 -kernel inetrc '/etc/rabbitmq/erl_inetrc' -proto_dist inet6_tcp"
RABBITMQ_CTL_ERL_ARGS="-proto_dist inet6_tcp"
erl_inetrc: |-
{inet6, true}.
When RabbitMQ uses its rabbit_peer_discovery_k8s plugin to invoke the k8s API:
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-10-15 07:33:55.000 [info] <0.238.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2019-10-15 07:33:55.000 [debug] <0.238.0> GET https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/tazou/endpoints/zt4-crmq
2019-10-15 07:33:55.015 [debug] <0.238.0> Response: {error,{failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}}
2019-10-15 07:33:55.015 [debug] <0.238.0> HTTP Error {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.015 [info] <0.238.0> Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},{inet,[inet],nxdomain}]}
2019-10-15 07:33:55.016 [error] <0.237.0> CRASH REPORT Process <0.237.0> with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 167 in application_master:init/4 line 138
2019-10-15 07:33:55.016 [info] <0.43.0> Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n
In the k8s console, the address can be resolved:
[rabbitmq]# nslookup -type=AAAA kubernetes.default.svc.cluster.local
Server: 2019:282:4000:2001::6
Address: 2019:282:4000:2001::6#53
kubernetes.default.svc.cluster.local has AAAA address fd01:abcd::1
And inet can return the IPv6 address:
kubectl exec -ti zt4-crmq-0 rabbitmqctl eval 'inet:gethostbyname("kubernetes.default.svc.cluster.local").'
{ok,{hostent,"kubernetes.default.svc.cluster.local",[],inet6,16,
[{64769,43981,0,0,0,0,0,1}]}}
As far as I know, the plugin calls httpc:request to invoke the k8s API. I don't know what the gap is between httpc:request and inet:gethostbyname, nor what httpc:request uses to resolve the hostname.
I asked about the RabbitMQ plugin and was told that the plugin is not aware of how Erlang resolves addresses: https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/issues/55.
Is there anything else I could set in erl_inetrc so that Erlang can resolve the IPv6 address? What did I miss in the configuration? Or how could I debug this from the Erlang side? I'm new to Erlang.
B.R,
Tao

Hyperledger Fabric - Peer unable to connect to (raft) Orderer with Mutual TLS

I am running HLF on Kubernetes (3 Raft orderers & 2 peers).
Now, as Raft requires mutual TLS, I had to set up some certificates.
The 3 Raft orderers are able to communicate with each other: they elect a leader, and re-elect another leader when I bring that leader down.
When I set up the peer, I used the same CA to generate the certificates. I am able to create the channel and join it from the peer. However, I have to set CORE_PEER_MSPCONFIGPATH=$ADMIN_MSP_PATH prior to those commands, otherwise I get an Access Denied error.
I am also forced to append the following flags to every peer channel x command I run.
--tls --cafile $ORD_TLS_PATH/cacert.pem --certfile $CORE_PEER_TLS_CLIENTCERT_FILE --keyfile $CORE_PEER_TLS_CLIENTKEY_FILE --clientauth
I am able to create, fetch, and join the channel using the admin MSP.
Now, once the channel is joined, the peer is unable to connect to the orderer; somehow a bad certificate is presented.
Orderer Logs
A bad certificate is used?
2019-08-15 16:07:55.699 UTC [core.comm] ServerHandshake -> ERRO 221 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.130.2.148:53922
2019-08-15 16:07:55.699 UTC [grpc] handleRawConn -> DEBU 222 grpc: Server.Serve failed to complete security handshake from "10.130.2.148:53922": remote error: tls: bad certificate
Peer Logs
These suggest that it could not validate the certificate against ca.crt?
2019-08-15 16:10:17.990 UTC [grpc] DialContext -> DEBU 03a parsed scheme: ""
2019-08-15 16:10:17.990 UTC [grpc] DialContext -> DEBU 03b scheme "" not registered, fallback to default scheme
2019-08-15 16:10:17.991 UTC [grpc] watcher -> DEBU 03c ccResolverWrapper: sending new addresses to cc: [{orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}]
2019-08-15 16:10:17.991 UTC [grpc] switchBalancer -> DEBU 03d ClientConn switching balancer to "pick_first"
2019-08-15 16:10:17.991 UTC [grpc] HandleSubConnStateChange -> DEBU 03e pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:18.009 UTC [grpc] createTransport -> DEBU 03f grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:18.012 UTC [grpc] HandleSubConnStateChange -> DEBU 040 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:18.991 UTC [grpc] HandleSubConnStateChange -> DEBU 041 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:19.003 UTC [grpc] createTransport -> DEBU 042 grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:19.003 UTC [grpc] HandleSubConnStateChange -> DEBU 043 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:20.719 UTC [grpc] HandleSubConnStateChange -> DEBU 044 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, CONNECTING
2019-08-15 16:10:20.731 UTC [grpc] createTransport -> DEBU 045 grpc: addrConn.createTransport failed to connect to {orderer-2.hlf-orderers.svc.cluster.local:7050 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority". Reconnecting...
2019-08-15 16:10:20.733 UTC [grpc] HandleSubConnStateChange -> DEBU 046 pickfirstBalancer: HandleSubConnStateChange: 0xc00260b710, TRANSIENT_FAILURE
2019-08-15 16:10:20.990 UTC [ConnProducer] NewConnection -> ERRO 047 Failed connecting to {orderer-2.hlf-orderers.svc.cluster.local:7050 [OrdererMSP]} , error: context deadline exceeded
I generated the used certificates as follows:
Orderer Admin
fabric-ca-client enroll -u https://u:p@ca.example.com -M ./OrdererMSP
Orderer Node X
As I use the same certificates for TLS, I added the hosts below for TLS purposes:
orderer-x.hlf-orderers.svc.cluster.local #kubernetes
orderer-x.hlf-orderers #kubernetes
orderer-x #kubernetes
localhost #local debug
fabric-ca-client enroll -m orderer-x \
-u https://ox:px@ca.example.com \
--csr.hosts orderer-x.hlf-orderers.svc.cluster.local,orderer-x.hlf-orderers,orderer-x,localhost \
-M orderer-x-MSP
Peer Admin
fabric-ca-client enroll -u https://u:p@ca.example.com -M ./PeerMSP
Peer Node X
fabric-ca-client enroll -m peer-x \
-u https://ox:px@ca.example.com \
--csr.hosts peer-x.hlf-peers.svc.cluster.local,peer-x.hlf-peers,peer-x,localhost \
-M peer-x-MSP
Now, all of these have the same ca.crt (/cacerts/ca.example.com.pem).
configtx.yaml
Orderer:
<<: *OrdererDefaults
OrdererType: etcdraft
EtcdRaft:
Consenters:
- Host: orderer-1.hlf-orderers.svc.cluster.local
Port: 7050
ClientTLSCert: orderer-1-MSP/signcerts/cert.pem
ServerTLSCert: orderer-1-MSP/signcerts/cert.pem
- Host: orderer-2.hlf-orderers.svc.cluster.local
Port: 7050
ClientTLSCert: orderer-2-MSP/signcerts/cert.pem
ServerTLSCert: orderer-2-MSP/signcerts/cert.pem
- Host: orderer-3.hlf-orderers.svc.cluster.local
Port: 7050
ClientTLSCert: orderer-3-MSP/signcerts/cert.pem
ServerTLSCert: orderer-3-MSP/signcerts/cert.pem
Addresses:
- orderer-1.hlf-orderers.svc.cluster.local:7050
- orderer-2.hlf-orderers.svc.cluster.local:7050
- orderer-3.hlf-orderers.svc.cluster.local:7050
I have checked multiple times that the correct certificates are mounted in the correct places and configured.
On the peer side I made sure that:
CORE_PEER_TLS_CLIENTROOTCAS_FILES is set correctly and that the (correct) file gets mounted (CORE_PEER_TLS_CLIENTROOTCAS_FILES: "/var/hyperledger/tls/client/cert/ca.crt")
Idem for CORE_PEER_TLS_CLIENTKEY_FILE & CORE_PEER_TLS_CLIENTCERT_FILE
CORE_PEER_TLS_CLIENTAUTHREQUIRED is set to true
On the orderer side I made sure that:
ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED is set to true
ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE is set correctly
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY is set correctly
ORDERER_GENERAL_TLS_CLIENTROOTCAS is set correctly
It seems strange to me that the orderers are able to talk to each other (as they are electing leaders), but that the peer is not able to do so.
So it appears that the tlscacerts must be present in the MSP directory(ies) PRIOR to creating the genesis/channel block. Simply mounting them in the pod at runtime is not enough.
My msp directories (used in configtx.yaml) look like:
admincerts
tlscacerts
cacerts
...
After this, it all started to work.
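To catch this earlier, one could sanity-check each MSP directory referenced by configtx.yaml before running configtxgen. A minimal sketch (a hypothetical helper, not Fabric tooling; adjust REQUIRED_SUBDIRS to your layout):

```python
import os

# Subdirectories the MSP needed before generating the genesis/channel
# block (per the directory listing above).
REQUIRED_SUBDIRS = ("admincerts", "cacerts", "tlscacerts")

def missing_msp_material(msp_path: str) -> list:
    """Return the required MSP subdirectories that are absent or empty."""
    missing = []
    for name in REQUIRED_SUBDIRS:
        sub = os.path.join(msp_path, name)
        if not os.path.isdir(sub) or not os.listdir(sub):
            missing.append(name)
    return missing
```

Running something like missing_msp_material() over every orderer and peer MSP path before generating the blocks would have flagged the absent tlscacerts folder immediately.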
It seems like you have got the below error:
E0923 16:30:14.963567129 31166 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate.
E0923 16:30:15.964456710 31166 ssl_transport_security.cc:188] ssl_info_callback: error occured.
According to your details, all seems to be correct.
However, check the following:
"certificate signed by unknown authority" -> this makes me somewhat doubt your certificate mapping.
MAKE SURE
PEER:
CORE_PEER_TLS_ENABLED=true
CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.crt
CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/tls/server.key
CORE_PEER_TLS_ROOTCERT_FILE=/data/maersksea-rca-maersksea-chain.pem
CORE_PEER_TLS_CLIENTCERT_FILE=/data/tls/maersksea-peer-maersksea-client.crt
CORE_PEER_TLS_CLIENTKEY_FILE=/data/tls/maersksea-peer-maersksea-client.key
CORE_PEER_TLS_CLIENTAUTHREQUIRED=true
CORE_PEER_TLS_CLIENTROOTCAS_FILES=/data/maersksea-rca-maersksea-chain.pem
Orderer:
ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED=true
ORDERER_GENERAL_TLS_CLIENTROOTCAS=[/data/maersksea-rca-maersksea-chain.pem]

IBM WebSphere Portal V8.5 wcm library syndication

I have a WebSphere Portal Version 8.5 cluster on AIX 7.1 with multiple Virtual Portals, working with managed pages. Each Virtual Portal has its own libraries, plus one shared library for all VPs, using syndication of that library to each VP.
I successfully created the syndication pair between the syndicator (WAS base portal) and the subscriber (Virtual Portal) and tested the connection between them, and all is good (which makes sense, since the VPs are local on the same server). However, when trying to syndicate the library content, it stays in Queued status, and in SystemOut.log I see the following error:
[4/25/17 9:33:53:201 IDT] 00004163 PackageConsum E Unexpected exception thrown while updating subscription: [IceId: Current State: ], exception: com.ibm.workplace.wcm.services.WCMServiceRuntimeException: code: 400
com.ibm.workplace.wcm.services.WCMServiceRuntimeException: code: 400
at com.aptrix.syndication.business.subscriber.CatalogRetrieverTask.getSourceCatalog(CatalogRetrieverTask.java:330)
at com.aptrix.syndication.business.subscriber.CatalogRetrieverTask.process(CatalogRetrieverTask.java:144)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask.processPackage(PackageConsumerTask.java:513)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask.processUpdate(PackageConsumerTask.java:267)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask$1.run(PackageConsumerTask.java:183)
at com.ibm.wps.ac.impl.UnrestrictedAccessImpl.run(UnrestrictedAccessImpl.java:84)
at com.ibm.wps.command.ac.ExecuteUnrestrictedCommand.execute(ExecuteUnrestrictedCommand.java:90)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask.doManagedWork(PackageConsumerTask.java:195)
at com.aptrix.syndication.business.ManagedTask.runWork(ManagedTask.java:62)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmWork.runImpl(AbstractWcmWork.java:162)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork.access$001(AbstractWcmSystemWork.java:40)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork$1.run(AbstractWcmSystemWork.java:92)
at com.ibm.wps.ac.impl.UnrestrictedAccessImpl.run(UnrestrictedAccessImpl.java:84)
at com.ibm.wps.command.ac.ExecuteUnrestrictedCommand.execute(ExecuteUnrestrictedCommand.java:90)
at com.ibm.workplace.wcm.services.repository.PACServiceImpl.runAsPrivileged(PACServiceImpl.java:1878)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork.runImpl(AbstractWcmSystemWork.java:87)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmWork.run(AbstractWcmWork.java:146)
at com.ibm.wps.services.workmanager.impl.WasWorkWrapper.run(WasWorkWrapper.java:44)
at com.ibm.ws.asynchbeans.J2EEContext$RunProxy.run(J2EEContext.java:271)
at java.security.AccessController.doPrivileged(AccessController.java:274)
at com.ibm.ws.asynchbeans.J2EEContext.run(J2EEContext.java:797)
at com.ibm.ws.asynchbeans.WorkWithExecutionContextImpl.go(WorkWithExecutionContextImpl.java:222)
at com.ibm.ws.asynchbeans.ABWorkItemImpl.run(ABWorkItemImpl.java:206)
at java.lang.Thread.run(Thread.java:804)
[4/25/17 9:33:53:222 IDT] 00004163 SyndicationEx W Unsuccessful request to send summary: 400
com.aptrix.deployment.wizard.SyndicatorCommunicationException: Unsuccessful request to send summary: 400
at com.ibm.workplace.wcm.api.syndication.SyndicationExtensionsServiceImpl.sendSummaryToSyndicator(SyndicationExtensionsServiceImpl.java:293)
at com.ibm.workplace.wcm.api.syndication.SyndicationExtensionsServiceImpl.processSubscriberCompleting(SyndicationExtensionsServiceImpl.java:246)
at com.aptrix.syndication.business.subscriber.SubscriberTaskManager.processFailedUpdate(SubscriberTaskManager.java:405)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask.processUpdate(PackageConsumerTask.java:400)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask$1.run(PackageConsumerTask.java:183)
at com.ibm.wps.ac.impl.UnrestrictedAccessImpl.run(UnrestrictedAccessImpl.java:84)
at com.ibm.wps.command.ac.ExecuteUnrestrictedCommand.execute(ExecuteUnrestrictedCommand.java:90)
at com.aptrix.syndication.business.subscriber.PackageConsumerTask.doManagedWork(PackageConsumerTask.java:195)
at com.aptrix.syndication.business.ManagedTask.runWork(ManagedTask.java:62)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmWork.runImpl(AbstractWcmWork.java:162)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork.access$001(AbstractWcmSystemWork.java:40)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork$1.run(AbstractWcmSystemWork.java:92)
at com.ibm.wps.ac.impl.UnrestrictedAccessImpl.run(UnrestrictedAccessImpl.java:84)
at com.ibm.wps.command.ac.ExecuteUnrestrictedCommand.execute(ExecuteUnrestrictedCommand.java:90)
at com.ibm.workplace.wcm.services.repository.PACServiceImpl.runAsPrivileged(PACServiceImpl.java:1878)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmSystemWork.runImpl(AbstractWcmSystemWork.java:87)
at com.ibm.workplace.wcm.services.workmanager.AbstractWcmWork.run(AbstractWcmWork.java:146)
at com.ibm.wps.services.workmanager.impl.WasWorkWrapper.run(WasWorkWrapper.java:44)
at com.ibm.ws.asynchbeans.J2EEContext$RunProxy.run(J2EEContext.java:271)
at java.security.AccessController.doPrivileged(AccessController.java:274)
at com.ibm.ws.asynchbeans.J2EEContext.run(J2EEContext.java:797)
at com.ibm.ws.asynchbeans.WorkWithExecutionContextImpl.go(WorkWithExecutionContextImpl.java:222)
at com.ibm.ws.asynchbeans.ABWorkItemImpl.run(ABWorkItemImpl.java:206)
at java.lang.Thread.run(Thread.java:804)
[4/25/17 9:33:53:227 IDT] 00004163 syndication I Syndication Summary - Subscriber
Syndicator: IntShared_Syn, URL=http://'Was_Server':10039/wps/wcm/connect?MOD=Synd
Subscriber: IntShared_Sub, URL=http://'Was_Server':10039/wps/wcm/connect/'VP_URL_Context'?MOD=Subs
Status: FAILED
Failure Detail: Update failed on subscriber
Unexpected exception thrown while updating subscription: [IceId: Current State: ], exception: com.ibm.workplace.wcm.services.WCMServiceRuntimeException: code: 400
Update Type: REBUILD
Start Date: Tue Apr 25 09:33:53 IDT 2017
Finished Date: Tue Apr 25 09:33:53 IDT 2017
Duration:
Total: 0
Total Failed: 0
[4/25/17 9:33:54:613 IDT] 00000136 syndication I Syndication Summary - Syndicator
Syndicator: IntShared_Syn, URL=http://'Was_Server':10039/wps/wcm/connect?MOD=Synd
Subscriber: IntShared_Sub, URL=http://'VP_HostName':10039/wps/wcm/connect?MOD=Subs
Status: FAILED
Failure Detail: Terminated without confirmation
Returned non-confirmed response: Not confirmed. Unable to contact subscriber. Check the subscriber to ensure it is active and error free. Also review your network connections and your syndication configuration to ensure the subscriber details are correct.
Update Type: REBUILD
Start Date: Tue Apr 25 09:33:53 IDT 2017
Finished Date: Tue Apr 25 09:33:54 IDT 2017
Duration: 1 second
Total: 0
Total Failed: 0
WCM syndication requires HTTP Basic Authentication to be configured and working.
So I needed to make sure that Trust Association is enabled in the WAS Console under Security -> Global Security -> Web and SIP security -> Trust association.
I confirmed that the box that says "Enable trust association" is checked.
I also ensured that the interceptor com.ibm.portal.auth.tai.HTTPBasicAuthTAI exists and that its configuration is correct.
The cause of the error was that the urlBlackList and urlWhiteList fields used the variable ${WpsContextRootPath}, which I found is not set anywhere, so I changed it to /wps instead. The fields are now as follows:
urlBlackList = /wps/myportal*
urlWhiteList = /wps/mycontenthandler*
After restarting the server and retrying syndication, it works!
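To illustrate how those two fields interact (this is my assumption about the matching semantics, using Python's fnmatch as a stand-in for the TAI's prefix globs): a request path is handled by the interceptor when it matches the white list and not the black list.

```python
from fnmatch import fnmatch

URL_BLACKLIST = ["/wps/myportal*"]          # urlBlackList above
URL_WHITELIST = ["/wps/mycontenthandler*"]  # urlWhiteList above

def tai_applies(path: str) -> bool:
    """True if HTTPBasicAuthTAI should handle this request path:
    white-listed and not black-listed (assumed semantics)."""
    if any(fnmatch(path, pattern) for pattern in URL_BLACKLIST):
        return False
    return any(fnmatch(path, pattern) for pattern in URL_WHITELIST)

print(tai_applies("/wps/mycontenthandler/model"))  # -> True
print(tai_applies("/wps/myportal/home"))           # -> False
```

With the unresolved ${WpsContextRootPath} variable in place, neither pattern matched anything, so the syndication requests to /wps/mycontenthandler were never basic-authenticated.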
You may also follow the directions in this link:
https://developer.ibm.com/answers/questions/206675/why-do-i-see-occasionally-see-a-popup-box-with-a-t.html
but setting those parameters disabled the servlet for viewing all items in the libraries...
You can try using the IP address instead of the hostname, or try adding the VP context to the syndicator/subscriber URLs.