[c.s.a.s.f.f.s.m.m.HealthCheckBase: ERROR] - Health Check: service at URL https://cobalt-gke.gk.cobalt.only.sap/auditpublish-dev/greeting responded with message I/O error on GET request for "https://cobalt-gke.gk.cobalt.only.sap/auditpublish-dev/greeting": handshake_failure(40); nested exception is org.bouncycastle.tls.TlsFatalAlert: handshake_failure(40)
Provider: SecureRandom.null algorithm from: BCFIPS_RNG
Provider: Cipher.AES/GCM/NoPadding encryption algorithm from: BCFIPS
Provider: Cipher.AES/GCM/NoPadding decryption algorithm from: BCFIPS
Provider: Cipher.AES/GCM/NoPadding decryption algorithm from: BCFIPS
Provider: Cipher.AES/GCM/NoPadding encryption algorithm from: BCFIPS
Provider: Cipher.AES/GCM/NoPadding encryption algorithm from: BCFIPS
Provider: Cipher.AES/GCM/NoPadding decryption algorithm from: BCFIPS
I am getting this error with Bouncy Castle (FIPS). Any help would be greatly appreciated. Thanks.
I'm attempting to set up KEDA with Azure storage queues for experimentation. I want to authenticate with the storage account's connection string, but the KEDA operator is unable to parse it. The ScaledObject is created and its status is "Ready", but it is unable to get the queue length or talk to the queues at all. I have created an Opaque secret and referenced it through the authenticationRef section as described in the documentation.
The error I see in the logs is:
ERROR azure_queue_scaler error) {"error": "can't parse storage connection string. Missing key or name"}
I have carefully followed the documentation and looked at the KEDA source code but I'm still puzzled.
This is my secret definition:
apiVersion: v1
kind: Secret
metadata:
  name: azure-st-conn-string-secret
type: Opaque
data:
  connectionString: MY_BASE_64_ENCODED_CONNECTION_STRING
And my TriggerAuth and ScaledObjects:
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-queue-auth
spec:
  secretTargetRef:
    - parameter: connection
      name: azure-st-conn-string-secret
      key: connectionString
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-queue-scaledobject
spec:
  scaleTargetRef:
    name: my-deployment-target
  triggers:
    - type: azure-queue
      metadata:
        queueLength: '5'
        queueName: my-queue-name
      authenticationRef:
        name: azure-queue-auth
Having gone through the KEDA source code, this error is thrown when the connection string is not in the expected format. However, I verified that the secret, as created in the cluster, decodes to exactly what the source code expects, i.e.:
DefaultEndpointsProtocol=https;AccountName=THE_ACCOUNT_NAME;AccountKey=THE_ACCOUNT_KEY;EndpointSuffix=core.windows.net
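One thing worth ruling out is a stray trailing newline in the base64 value: `echo` without `-n` appends one, and the decoded string then no longer matches what the parser expects. A quick round-trip check (the account name and key below are placeholders):

```shell
# encode the connection string WITHOUT a trailing newline (printf, not echo)
CONN='DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=abc123;EndpointSuffix=core.windows.net'
ENCODED=$(printf '%s' "$CONN" | base64 | tr -d '\n')

# round-trip to verify the decoded value is byte-identical to the original
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
[ "$DECODED" = "$CONN" ] && echo "round-trip OK"
```

If the round-trip fails for your secret, re-create it with a pipeline like the above, or let `kubectl create secret generic --from-literal=connectionString='...'` do the encoding for you.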
What am I doing wrong?
We have configured two Kafka brokers in application.yaml, one with SASL/Kerberos and the other with SASL/SCRAM. While starting the service, it connects to the SASL/Kerberos broker but gets the error below for the other broker (SASL/SCRAM). When we configure only the SASL/SCRAM broker in application.yaml, it connects without any error.
==============================================================================================
Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE
main] o.a.k.c.s.a.SaslClientAuthenticator Set SASL client state to SEND_HANDSHAKE_REQUEST
main] o.a.k.c.s.a.SaslClientAuthenticator Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
main] o.a.k.c.s.a.SaslClientAuthenticator Set SASL client state to INITIAL
main] o.apache.kafka.common.network.Selector Unexpected error from 100.76.140.194; closing connection
java.lang.NullPointerException: null
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslClientToken(SaslClientAuthenticator.java:389)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendInitialToken(SaslClientAuthenticator.java:296)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:237)
application.yaml:
binders:
  binder1:
    type: kafka
    environment:
      spring:
        cloud:
          stream:
            kafka:
              binder:
                replication-factor: 1
                brokers: ${eventhub.broker.hosts2}
                zkNodes: ${eventhub.zookeper.hosts2}
                configuration:
                  security:
                    protocol: SASL_SSL
                  sasl:
                    mechanism: GSSAPI
                  ssl:
                    truststore:
                      location: ${eventhub.broker.cert.location2}
                      password: ${eventhub.broker.cert.password2}
                jaas:
                  options:
                    useKeyTab: true
                    storeKey: true
                    keyTab: /scratch/kafka/kafka2/krb5.keytab
                    serviceName: kafka
                    principal: kafka/XXXXXXXXXXXXXXXX.COM
              default:
                consumer:
                  autoCommitOffset: false
  binder2:
    type: kafka
    environment:
      spring:
        cloud:
          stream:
            kafka:
              binder:
                brokers: ${eventhub.broker.hosts} # 10.40.158.93:9093
                zkNodes: ${eventhub.zookeper.hosts} # 10.40.158.93:2181
                autoCreateTopics: false
                zkConnectionTimeout: 36000
                headers:
                  - event
                  - sourceSystem
                  - userId
                  - branchCode
                  - kafka_messageKey
                jaas:
                  loginModule: org.apache.kafka.common.security.scram.ScramLoginModule
                  options:
                    username: ${eventhub.broker.user}
                    password: ${eventhub.broker.password}
                configuration:
                  security:
                    protocol: SASL_SSL
                  sasl:
                    mechanism: SCRAM-SHA-256
                  ssl:
                    enabled:
                    truststore:
                      location: ${eventhub.broker.cert.location}
                      password: ${eventhub.broker.cert.password}
When you have multiple clusters with different security contexts within a single application, instead of setting the JAAS configuration through the binder or the java.security.auth.login.config system property, you need to use the approach described in KIP-85: set the sasl.jaas.config property, which takes precedence over the other methods. Because sasl.jaas.config is applied per client, it sidesteps the JVM restriction in which a single JVM-wide static security context is used and any JAAS configuration found after the first one is ignored.
Here is a sample application that demonstrates how to connect to multiple Kafka clusters with different security contexts as a multi-binder application.
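A minimal sketch of that idea for the SCRAM binder above — the JAAS configuration moves inline into the binder's Kafka client configuration (the exact property placement here is an assumption, not copied from the linked sample):

```yaml
binders:
  binder2:
    type: kafka
    environment:
      spring:
        cloud:
          stream:
            kafka:
              binder:
                configuration:
                  security.protocol: SASL_SSL
                  sasl.mechanism: SCRAM-SHA-256
                  # sasl.jaas.config is a per-client setting (KIP-85), so each
                  # binder can carry its own security context
                  sasl.jaas.config: >-
                    org.apache.kafka.common.security.scram.ScramLoginModule required
                    username="${eventhub.broker.user}"
                    password="${eventhub.broker.password}";
```

The Kerberos binder would do the same with its Krb5LoginModule options, keeping the two contexts fully independent.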
I have RabbitMQ configured to enable TLS with certificates; the key, cert, and CA are defined in the .conf file. Upon service startup, an error is thrown. I cannot find the cause, and logging isn't giving any more information at the debug level.
I get a client alert failure and am not certain of the cause.
2019-03-22 10:04:18.690 [info] <0.7.0> Server startup complete; 4 plugins started.
* rabbitmq_amqp1_0
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_connection_helper_sup:start_link() at pid <0.690.0>
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_reader:start_link(<0.690.0>, {acceptor,{0,0,0,0},5671}) at pid <0.691.0>
2019-03-22 10:04:24.909 [info] <0.688.0> TLS server: In state certify received CLIENT ALERT: Fatal - Certificate Unknown
Our certs didn't have the correct X509v3 Extended Key Usage. For X.509 auth, you'll need to assign the client authentication usage when creating the certificate:
X509v3 Extended Key Usage:
TLS Web Client Authentication
This won't fix the issue if your certificate CA is broken and can't be verified, but for my issue, this was the resolution.
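As a sanity check, you can generate a test certificate with that extended key usage and confirm how it shows up (throwaway self-signed cert; `-addext` assumes OpenSSL 1.1.1+):

```shell
# create a throwaway self-signed client certificate carrying the
# "TLS Web Client Authentication" extended key usage
openssl req -x509 -newkey rsa:2048 -nodes -keyout client.key -out client.crt \
  -days 1 -subj "/CN=test-client" \
  -addext "extendedKeyUsage = clientAuth"

# the same inspection works on any existing client certificate
openssl x509 -in client.crt -noout -text | grep -A1 "Extended Key Usage"
```

If the dump of your real client cert lacks the "TLS Web Client Authentication" line, that matches the failure mode described above.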
I configured API Manager to send data to the WSO2 Data Analytics Server.
My configuration on API server is:
Event Receiver Configurations: {tcp://wso2-dac-svc.libre-dev.com:7611}
Data Analyzer Configurations: https://wso2-dac-svc.libre-dev.com:8443
On the DAC server I installed API_Manager_Analytics.car, which has the event receiver definitions.
On the API server I imported the certificate from the DAC server using keytool, and I restarted both servers.
I am seeing following error in the log:
TID: [-1] [] [2016-05-16 16:06:11,417] ERROR {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher} - Error while connection to event receiver {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher}
org.wso2.carbon.databridge.commons.exception.AuthenticationException: Access denied for user admin to login TCP,wso2-dac-svc.libre-dev.com:7611,TCP,wso2-dac-svc.libre-dev.com:7711
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:54)
at org.wso2.carbon.databridge.agent.thrift.DataPublisher.start(DataPublisher.java:273)
at org.wso2.carbon.databridge.agent.thrift.DataPublisher.<init>(DataPublisher.java:161)
at org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher$ReceiverConnectionWorker.run(AsyncDataPublisher.java:843)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.databridge.agent.thrift.exception.AgentAuthenticatorException: Thrift exception
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.ThriftAgentAuthenticator.connect(ThriftAgentAuthenticator.java:51)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.AgentAuthenticator.connect(AgentAuthenticator.java:51)
... 8 more
Caused by: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
at org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.send_connect(ThriftSecureEventTransmissionService.java:82)
at org.wso2.carbon.databridge.commons.thrift.service.secure.ThriftSecureEventTransmissionService$Client.connect(ThriftSecureEventTransmissionService.java:73)
at org.wso2.carbon.databridge.agent.thrift.internal.publisher.authenticator.ThriftAgentAuthenticator.connect(ThriftAgentAuthenticator.java:47)
... 9 more
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1509)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:747)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 15 more
Caused by: sun.security.validator.ValidatorException: No trusted certificate found
at sun.security.validator.SimpleValidator.buildTrustedChain(SimpleValidator.java:394)
at sun.security.validator.SimpleValidator.engineValidate(SimpleValidator.java:133)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1491)
... 23 more
TID: [-1] [] [2016-05-16 16:06:41,363] ERROR {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher} - Reconnection failed fortcp://wso2-dac-svc.libre-dev.com:7611 {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher}
TID: [-1] [] [2016-05-16 16:07:11,004] WARN {org.apache.synapse.core.axis2.TimeoutHandler} - Expiring message ID : urn:uuid:6ed9fae5-d1fb-4cdf-885b-e101e79faf40; dropping message after timeout of : 30 seconds {org.apache.synapse.core.axis2.TimeoutHandler}
TID: [-1] [] [2016-05-16 16:07:11,371] ERROR {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher} - Reconnection failed fortcp://wso2-dac-svc.libre-dev.com:7611 {org.wso2.carbon.databridge.agent.thrift.AsyncDataPublisher}
TID: [-1] [] [2016-05-16 16:07:34,514] WARN {org.apache.synapse.transport.passthru.TargetHandler} - http-outgoing-9: Connection time out while in state: REQUEST_DONE {org.apache.synapse.transport.passthru.TargetHandler}
Basically I cannot get API stats sent to DAS server. Any help is appreciated.
Based on the error log, the issue seems to be with the trust relationship between the API Manager and the DAS Thrift receiver.
The Data Analytics Server Thrift receiver runs on port 7711 and by default uses the 'wso2carbon.jks' keystore located in /repository/resources/security. This keystore is configured in /repository/conf/carbon.xml as shown below:
<Security>
    <!--
        KeyStore which will be used for encrypting/decrypting passwords
        and other sensitive information.
    -->
    <KeyStore>
        <!-- Keystore file location -->
        <Location>${carbon.home}/repository/resources/security/wso2carbon.jks</Location>
        <!-- Keystore type (JKS/PKCS12 etc.) -->
        <Type>JKS</Type>
        <!-- Keystore password -->
        <Password>wso2carbon</Password>
        <!-- Private Key alias -->
        <KeyAlias>wso2carbon</KeyAlias>
        <!-- Private Key password -->
        <KeyPassword>wso2carbon</KeyPassword>
    </KeyStore>
</Security>
To add a new key store, use the steps below:
1. Place the key store in the '/repository/resources/security/' folder.
2. Update the 'Security/KeyStore' section of /repository/conf/carbon.xml accordingly.
3. Update the keystore references in data-agent-config.xml accordingly.
4. Import the certificate of the new keystore into the ESB's trust store located at /repository/resources/security/client-truststore.jks.
Finally, once the private key is changed, its relevant certificate should be imported into the API Manager trust store located at /repository/resources/security/client-truststore.jks.
Hope these steps sort out the issues in the given error log.
I've configured an ActiveMQ broker with AMQP over SSL with mutual authentication, and it works well with self-signed certificates. The problem appeared when testing with one of my client's certificates, which contains some critical extensions, causing the handshake to fail.
This is the stacktrace:
DEBUG | Transport Connection to: tcp://127.0.0.1:49318 failed: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: Certificate contains unsupported critical extensions: [2.5.29.32] | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: ssl:///127.0.0.1:49318
javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: Certificate contains unsupported critical extensions: [2.5.29.32]
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)[:1.7.0_75]
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1904)[:1.7.0_75]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:279)[:1.7.0_75]
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:273)[:1.7.0_75]
at sun.security.ssl.ServerHandshaker.clientCertificate(ServerHandshaker.java:1682)[:1.7.0_75]
at sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:176)[:1.7.0_75]
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:901)[:1.7.0_75]
at sun.security.ssl.Handshaker.process_record(Handshaker.java:837)[:1.7.0_75]
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1023)[:1.7.0_75]
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)[:1.7.0_75]
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:889)[:1.7.0_75]
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)[:1.7.0_75]
at org.apache.activemq.transport.tcp.TcpBufferedInputStream.fill(TcpBufferedInputStream.java:50)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpTransport$2.fill(TcpTransport.java:629)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpBufferedInputStream.readStream(TcpBufferedInputStream.java:73)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpBufferedInputStream.read(TcpBufferedInputStream.java:94)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpTransport$2.read(TcpTransport.java:619)[activemq-client-5.13.1.jar:5.13.1]
at java.io.DataInputStream.readFully(DataInputStream.java:195)[:1.7.0_75]
at org.fusesource.hawtbuf.Buffer.readFrom(Buffer.java:412)[hawtbuf-1.11.jar:1.11]
at org.apache.activemq.transport.amqp.AmqpWireFormat.unmarshal(AmqpWireFormat.java:102)[activemq-amqp-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:240)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:232)[activemq-client-5.13.1.jar:5.13.1]
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)[activemq-client-5.13.1.jar:5.13.1]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_75]
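For context, OID 2.5.29.32 in the alert is the X.509 Certificate Policies extension. You can reproduce a certificate that marks it critical and inspect it with openssl (throwaway self-signed cert; the policy OID is a dummy and `-addext` assumes OpenSSL 1.1.1+):

```shell
# self-signed cert with Certificate Policies (2.5.29.32) marked critical
openssl req -x509 -newkey rsa:2048 -nodes -keyout policy.key -out policy.crt \
  -days 1 -subj "/CN=policy-test" \
  -addext "certificatePolicies = critical, 1.2.3.4"

# critical extensions are flagged in the text dump
openssl x509 -in policy.crt -noout -text | grep "critical"
```

The same `openssl x509 -noout -text` dump on the client's real certificate shows exactly which extensions are marked critical.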
My understanding is that this is the default behaviour regarding certificate extensions, and that it should be overridden for particular cases like this.
Does anybody know if my assumption is correct? Does anyone have a solution to this problem?
Thanks.