Quarkus migration to 2.x: failed authentication due to SSL handshake failure, but works in 1.x

We use this configuration to connect to a Kafka topic:
mp.messaging.incoming.ddd.connector=smallrye-kafka
mp.messaging.incoming.ddd.topic=trx
mp.messaging.incoming.ddd.bootstrap.servers=xxx:9095
mp.messaging.incoming.ddd.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.ddd.security.protocol=SASL_SSL
mp.messaging.incoming.ddd.ssl.truststore.location=/truststore.jks
mp.messaging.incoming.ddd.ssl.truststore.password=${KAFKA_PASS}
mp.messaging.incoming.ddd.ssl.enabled.protocols=TLSv1.2
mp.messaging.incoming.ddd.ssl.truststore.type=JKS
mp.messaging.incoming.ddd.ssl.endpoint.identification.algorithm=
mp.messaging.incoming.ddd.sasl.mechanism=SCRAM-SHA-512
mp.messaging.incoming.ddd.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username....
It works perfectly fine in Quarkus 1.x, but not in Quarkus 2.x. Running Quarkus in debug mode shows that the consumers are configured differently.
Quarkus 1.x log:
... ...
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm =
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
... ...
Quarkus 2.x log:
... ...
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
... ...
The exact error I get is this:
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching xxx found.
It is known that hostname verification can be bypassed by setting the ssl.endpoint.identification.algorithm parameter to an empty value, but Quarkus 2 ignores that configuration.
How can I override ssl.endpoint.identification.algorithm parameter in quarkus 2.x?

I fixed the problem by inserting a dummy variable whose default value is at least one space:
mp.messaging.incoming.ddd.ssl.endpoint.identification.algorithm=${MY_VAR: }
This post was very helpful:
Provide empty string to Quarkus as the default value for an environment variable
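For reference, a minimal sketch of why the workaround behaves as it does (MY_VAR is an arbitrary, intentionally undefined variable; that the Kafka client trims string config values during parsing is my reading of its ConfigDef behavior, so treat it as an assumption):
# application.properties
# ${MY_VAR: } expands to the default after the colon (a single space)
# whenever MY_VAR is undefined. A literally empty value gets dropped
# before it reaches the consumer in Quarkus 2.x, but the space survives
# expansion; the Kafka client then trims it to an empty string, which
# disables hostname verification.
mp.messaging.incoming.ddd.ssl.endpoint.identification.algorithm=${MY_VAR: }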

Related

Kafka with SSL failed in producer

I am bringing up a Kafka environment with SSL and, up to this point, without problems: it starts up normally. But when I create a MySQL connector, the producer does not receive the config set by the Docker environment!
Any suggestions?
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: ${ZK_HOST}
    hostname: ${ZK_HOST}
    ports:
      - "${ZK_PORT}:${ZK_PORT}"
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: ${ZK_PORT}
      ZOOKEEPER_CLIENT_SECURE: 'true'
      ZOOKEEPER_SSL_KEYSTORE_LOCATION: /etc/zookeeper/secrets/kafka.keystore.jks
      ZOOKEEPER_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
      ZOOKEEPER_SSL_TRUSTSTORE_LOCATION: /etc/zookeeper/secrets/kafka.truststore.jks
      ZOOKEEPER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
    volumes:
      - ./secrets:/etc/zookeeper/secrets
  kafka-ssl:
    image: confluentinc/cp-kafka:latest
    container_name: ${BROKER_HOST}
    hostname: ${BROKER_HOST}
    ports:
      - "${BROKER_PORT}:${BROKER_PORT}"
    depends_on:
      - ${ZK_HOST}
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: '${ZK_HOST}:${ZK_PORT}'
      KAFKA_ADVERTISED_LISTENERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: cert_creds
      KAFKA_SSL_KEY_CREDENTIALS: cert_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: cert_creds
      KAFKA_SSL_CLIENT_AUTH: 'required'
      KAFKA_SECURITY_PROTOCOL: SSL
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./secrets:/etc/kafka/secrets
  schema-registry:
    image: confluentinc/cp-schema-registry
    container_name: ${SR_HOST}
    hostname: ${SR_HOST}
    depends_on:
      - ${ZK_HOST}
      - ${BROKER_HOST}
    ports:
      - "${SR_PORT}:${SR_PORT}"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: ${SR_HOST}
      SCHEMA_REGISTRY_LISTENERS: 'https://0.0.0.0:${SR_PORT}'
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: '${ZK_HOST}:${ZK_PORT}'
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
      SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/schema-registry/secrets/kafka.keystore.jks
      SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/schema-registry/secrets/kafka.keystore.jks
      SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_SSL_KEY_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/schema-registry/secrets/kafka.truststore.jks
      SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/schema-registry/secrets/kafka.truststore.jks
      SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
      SCHEMA_REGISTRY_SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: https
      SCHEMA_REGISTRY_KAFKASTORE_TOPIC: _schemas
      SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
    volumes:
      - ./secrets:/etc/schema-registry/secrets
  connect:
    build:
      context: .
      dockerfile: Dockerfile
    image: chethanuk/kafka-connect:5.3.1
    hostname: ${SR_CON}
    container_name: ${SR_CON}
    depends_on:
      - ${ZK_HOST}
      - ${BROKER_HOST}
      - ${SR_HOST}
    ports:
      - "${SR_CON_PORT}:${SR_CON_PORT}"
    environment:
      CONNECT_LISTENERS: 'https://0.0.0.0:${SR_CON_PORT}'
      CONNECT_REST_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,DELETE,OPTIONS'
      CONNECT_REST_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
      CONNECT_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
      CONNECT_REST_ADVERTISED_HOST_NAME: ${SR_CON}
      CONNECT_REST_PORT: ${SR_CON_PORT}
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: https://${SR_HOST}:${SR_PORT}
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: '${ZK_HOST}:${ZK_PORT}'
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.2.1.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
      CONNECT_SSL_CLIENT_AUTH: 'true'
      CONNECT_SECURITY_PROTOCOL: SSL
      CONNECT_SSL_KEY_PASSWORD: ${SSL_SECRET}
      CONNECT_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
      CONNECT_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
      CONNECT_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.keystore.jks
      CONNECT_SSL_KEYSTORE_PASSWORD: ${SSL_SECRET}
      CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
      CONNECT_PRODUCER_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
      CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
      CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
      CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
      CONNECT_CONSUMER_BOOTSTRAP_SERVERS: 'SSL://${BROKER_HOST}:${BROKER_PORT}'
      CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.truststore.jks
      CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ${SSL_SECRET}
    volumes:
      - ./secrets:/etc/kafka/secrets
error:
[2021-05-21 05:13:50,157] INFO Requested thread factory for connector MySqlConnector, id = myql named = db-history-config-check (io.debezium.util.Threads)
[2021-05-21 05:13:50,160] INFO ProducerConfig values:
acks = 1
batch.size = 32768
bootstrap.servers = [broker:29092]
buffer.memory = 1048576
client.dns.lookup = default
client.id = myql-dbhistory
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 10000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
(org.apache.kafka.clients.producer.ProducerConfig)
[2021-05-21 05:13:50,162] WARN Couldn't resolve server broker:29092 from bootstrap.servers as DNS resolution failed for broker (org.apache.kafka.clients.ClientUtils)
[2021-05-21 05:13:50,162] INFO [Producer clientId=myql-dbhistory] Closing the Kafka producer with timeoutMillis = 0 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2021-05-21 05:13:50,162] INFO WorkerSourceTask{id=zabbix-hosts-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2021-05-21 05:13:50,162] INFO WorkerSourceTask{id=zabbix-hosts-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask)
[2021-05-21 05:13:50,163] ERROR WorkerSourceTask{id=zabbix-hosts-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at io.debezium.relational.history.KafkaDatabaseHistory.start(KafkaDatabaseHistory.java:235)
at io.debezium.relational.HistorizedRelationalDatabaseSchema.<init>(HistorizedRelationalDatabaseSchema.java:40)
at io.debezium.connector.mysql.MySqlDatabaseSchema.<init>(MySqlDatabaseSchema.java:90)
at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:94)
at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:130)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:208)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:88)
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:407)
... 14 more
[2021-05-21 05:13:50,164] ERROR WorkerSourceTask{id=zabbix-hosts-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
vars
SSL_SECRET=
ZK_HOST=zookeeper
ZK_PORT=2181
BROKER_HOST=kafka-ssl
BROKER_PORT=9092
SR_HOST=schema-registry
SR_PORT=8181
SR_CON=connect
SR_CON_PORT=8083
HOST=localhost
build and image should not be used together. You've not shown your Dockerfile, so it's unclear what you're doing there, but it may explain why no variables are actually loaded
bootstrap.servers = [broker:29092]
Somewhere in your Connect configuration, you're not using kafka-ssl:9092 as the connection string
Notice that your key and value serializers are using String rather than Avro settings; the interceptor list is empty, the SSL settings don't seem to be applied, etc.
To narrow it down, I don't think you need _PRODUCER_BOOTSTRAP_SERVERS or the consumer one.
You should exec into your container and look at the templated connect-distributed.properties file that was created
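For instance, a hedged sketch (the path assumes the Confluent base image, which renders the CONNECT_* variables into connect-distributed.properties at startup; the container name connect comes from the SR_CON variable above):
docker exec -it connect bash
grep -E 'bootstrap.servers|security.protocol|ssl' /etc/kafka/connect-distributed.properties
If the compose variables were not loaded, the output will show defaults such as a wrong bootstrap server rather than SSL://kafka-ssl:9092.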
Note that the Debezium images come with the mysql connector classes, so maybe you don't need your own image?

InfluxDB refuses connection from telegraf when changing from HTTP to HTTPS

On my CentOS 7 server, I have set up Telegraf and InfluxDB. InfluxDB successfully receives data from Telegraf and stores it in the database. But when I reconfigure both services to use HTTPS, I see the following error in Telegraf's logs:
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [outputs.influxdb] When writing to [https://127.0.0.1:8086]: Post "https://127.0.0.1:8086/write?db=GRAFANA": dial tcp 127.0.0.1:8086: connect: connection refused
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [agent] Error writing to outputs.influxdb: could not write any address
InfluxDB doesn't show any errors in its logs.
Below is my telegraf.conf file:
[agent]
hostname = "local"
flush_interval = "15s"
interval = "15s"
# Input Plugins
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
database = "GRAFANA"
urls = [ "https://127.0.0.1:8086" ]
insecure_skip_verify = true
username = "telegrafuser"
password = "metricsmetricsmetricsmetrics"
And this is the uncommented [http] section of the influxdb.conf
# Determines whether HTTP endpoint is enabled.
enabled = false
# Determines whether the Flux query endpoint is enabled.
flux-enabled = true
# The bind address used by the HTTP service.
bind-address = ":8086"
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false
# Determines whether HTTPS is enabled.
https-enabled = true
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/server-cert.pem"
# Use a separate private key location.
https-private-key = "/etc/ssl/server-key.pem"
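Worth noting when reading that section (hedged, based on the InfluxDB 1.x semantics that the config comments themselves describe): enabled controls whether the HTTP service listens at all, while https-enabled only switches that service to TLS, so with enabled = false nothing listens on 8086 and clients see "connection refused". A minimal sketch of an [http] section that would actually accept the HTTPS writes:
[http]
  # The service must be enabled for either HTTP or HTTPS traffic.
  enabled = true
  bind-address = ":8086"
  # Serve the endpoint over TLS.
  https-enabled = true
  https-certificate = "/etc/ssl/server-cert.pem"
  https-private-key = "/etc/ssl/server-key.pem"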

Continuous TLS handshake error logs in vault nodes due to LB health check

I am getting continuous TLS handshake errors because my Kube load balancer pings the Vault nodes every 5 seconds using
nc -vz podip podPort
I have already disabled client cert verification in my config.hcl, but I still see the logs below when I run kubectl logs for Vault:
kubectl logs pod-0 -n mynamespace
2020-09-02T01:13:32.957Z [INFO] http: TLS handshake error from 10.x.x.x:60056: EOF
2020-09-02T01:13:37.957Z [INFO] http: TLS handshake error from 10.x.x.x:23995: EOF
2020-09-02T01:13:42.957Z [INFO] http: TLS handshake error from 10.x.x.x:54165: EOF
Below is my config.hcl which I am loading via kube config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: raft-config
  labels:
    name: raft-config
data:
  config.hcl: |
    storage "raft" {
      path = "/vault-data"
      tls_skip_verify = "true"
      retry_join {
        leader_api_addr = "https://vault-cluster-0:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
      retry_join {
        leader_api_addr = "https://vault-cluster-1:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
      retry_join {
        leader_api_addr = "https://vault-cluster-2:8200"
        leader_ca_cert_file = "/opt/ca/vault.crt"
        leader_client_cert_file = "/opt/ca/vault.crt"
        leader_client_key_file = "/opt/ca/vault.key"
      }
    }
    seal "transit" {
      address = "https://vaulttransit:8200"
      disable_renewal = "false"
      key_name = "autounseal"
      mount_path = "transit/"
      tls_skip_verify = "true"
    }
    listener "tcp" {
      address = "0.0.0.0:8200"
      tls_cert_file = "/opt/ca/vault.crt"
      tls_key_file = "/opt/ca/vault.key"
      tls_skip_verify = "true"
      tls_disable_client_certs = "true"
    }
    ui = true
    disable_mlock = true
Since I am using the external open-source Vault image and my load balancer is an internal LB (which has an internal CA cert), I suspect my Vault pod is not able to recognize the CA cert provided by my load balancer when it tries to ping port 8200 (the port on which Vault starts its TCP listener).
These logs are harmless and not causing any issue, but they are unnecessary noise that I want to avoid. My Vault nodes are working over HTTPS and there seems to be no issue with their functionality.
Can someone please help me understand why the Vault TCP listener is trying to do a TLS handshake even though I have explicitly specified tls_disable_client_certs = "true"?
Again, these logs flood my pods every 5 seconds when my LB does a health check on them using nc -vz podip podPort.
My Vault version is 1.5.3.
The messages are not about client certs or CA certs; a TLS handshake happens whether or not the client presents a certificate.
Rather, a TCP connection is created and established, and the Go library then wants to start a TLS handshake; the other side (the health checker) simply hangs up, so the TLS handshake never happens. Go then logs this message.
You are correct in saying that it is harmless; it is purely a side effect of port-liveness health checking. It is, however, spammy and annoying.
You have two basic options to get around this:
filter the messages out of the logs when persisting them
change to a different type of health check
I would recommend the second option: switch to a different health check. Vault has a /sys/health endpoint that can be used with HTTPS health checks.
In addition to getting rid of the TLS warning messages, the health endpoint also allows you to check for active and unsealed nodes.
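For illustration, a minimal sketch of such a probe in the pod spec (the thresholds are placeholders; /v1/sys/health and query flags like standbyok are part of the Vault HTTP API):
readinessProbe:
  httpGet:
    # HTTPS probe against Vault's health endpoint instead of a bare TCP ping.
    scheme: HTTPS
    path: /v1/sys/health?standbyok=true
    port: 8200
  periodSeconds: 5
  failureThreshold: 2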

RabbitMQ Web-MQTT WSS closes client connection. Insecure WS and other secure protocols work

I have a deployment of RabbitMQ that uses its own certificates for end-to-end encryption. It uses both AMQP and MQTT-over-WSS to connect multiple types of clients. AMQP clients are able to connect securely, so I know that the certificate setup is good.
Clients using WS going to ws://hostname:15675/ws can connect fine, but obviously are not secure. Clients attempting to connect to wss://hostname:15676/ws have the connection closed on them. 15676 is the port I have bound the web-mqtt SSL listener to, as shown below. I've gone through both the networking and TLS help guides from RabbitMQ; I see the port correctly bound and can confirm it is exposed and available to the client.
The relevant rabbit.conf:
listeners.tcp.default = 5671
listeners.ssl.default = 5671
ssl_options.cacertfile = /path/to/fullchain.pem
ssl_options.certfile = /path/to/cert.pem
ssl_options.keyfile = /path/to/privkey.pem
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = /path/to/fullchain.pem
web_mqtt.ssl.certfile = /path/to/cert.pem
web_mqtt.ssl.keyfile = /path/to/privkey.pem
Basically, I'm wondering if I have the connection string wrong (wss://hostname:15675/ws)? Do I need to go to /wss? Is it a problem that my client is a browser running on localhost, not HTTPS? Do I have a configuration set incorrectly, or am I missing one?
If there is a better source of documentation/examples for this plugin beyond the RabbitMQ website, I would also be interested.
Maybe there is a configuration mismatch.
If there is a password for the private key file, you need to add it as well.
Refer to the following sample rabbitmq.conf:
listeners.ssl.default = 5671
ssl_options.cacertfile = <path/ca-bundle (.pem/.cabundle)>
ssl_options.certfile = <path/cert (.pem/.crt)>
ssl_options.keyfile = <path/key (.pem/.key)>
ssl_options.password = <your private key password>
ssl_options.versions.1 = tlsv1.3
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.ciphers.1 = TLS_AES_256_GCM_SHA384
ssl_options.ciphers.2 = TLS_AES_128_GCM_SHA256
ssl_options.ciphers.3 = TLS_CHACHA20_POLY1305_SHA256
ssl_options.ciphers.4 = TLS_AES_128_CCM_SHA256
ssl_options.ciphers.5 = TLS_AES_128_CCM_8_SHA256
ssl_options.honor_cipher_order = true
ssl_options.honor_ecc_order = true
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = <path/ca-bundle (.pem/.cabundle)>
web_mqtt.ssl.certfile = <path/crt (.pem/.crt)>
web_mqtt.ssl.keyfile = <path/key (.pem/.key)>
web_mqtt.ssl.password = <your private key password>
web_mqtt.ssl.honor_cipher_order = true
web_mqtt.ssl.honor_ecc_order = true
web_mqtt.ssl.client_renegotiation = false
web_mqtt.ssl.secure_renegotiate = true
web_mqtt.ssl.versions.1 = tlsv1.2
web_mqtt.ssl.versions.2 = tlsv1.1
web_mqtt.ssl.ciphers.1 = ECDHE-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.2 = ECDHE-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.3 = ECDHE-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.4 = ECDHE-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.5 = ECDH-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.6 = ECDH-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.7 = ECDH-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.8 = ECDH-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.9 = DHE-RSA-AES256-GCM-SHA384
This is a working configuration file for rabbitmq-server on Ubuntu 20.04. After applying it:
restart the rabbitmq-server
list the listener ports and make sure the SSL ports are enabled (rabbitmq-diagnostics listeners)
test the SSL handshake (testssl localhost:15676)
also test with telnet (telnet localhost 15676)
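A raw TLS probe with openssl can also help isolate the layer at fault (port taken from the configuration above):
openssl s_client -connect localhost:15676
If the handshake completes and a certificate chain is printed, the listener is fine and the problem is higher up (for example the /ws path or the browser's mixed-content rules); if it fails immediately, the TLS listener or certificate configuration is at fault.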
Please refer to https://www.rabbitmq.com/ssl.html#erlang-otp-requirements and the troubleshooting guide.
This worked for me :-)

jax-ws with intermediate https proxy and https endpoint does not work

Problem Description
We are having problems with a JAX-WS web service that wants to connect to a server using HTTPS in combination with a proxy server.
The setup is as follows:
- WebSphere 6.0.1.47 running on AIX Version 5300-10-07-1119
- A JAX-WS web service application
What happens is as follows: the JAX-WS application in WAS tries to connect to 'https://target.example.domain/url' while using a proxy server. When the transport chain is started, the following error appears (I have included the corresponding FFDCs below):
java.io.IOException: Async IO operation failed, reason: RC: 76 A socket must be already connected.
When we:
1) use an HTTP destination and DO NOT use a proxy server, the application works
2) use an HTTPS destination and DO NOT use a proxy server, the application works
3) use an HTTP destination and DO use a proxy server, the application works
4) use an HTTPS destination and DO use a proxy server, the application displays the error described above.
ffdc logs
" ------Start of DE processing------ = [1/14/15 13:04:39:913 CET] , key = java.io.IOException com.ibm.ws.websvcs.transport.http.HTTPConnection.connect 213
Exception = java.io.IOException
Source = com.ibm.ws.websvcs.transport.http.HTTPConnection.connect
probeid = 213
Stack Dump = java.io.IOException: Async IO operation failed, reason: RC: 76 A socket must be already connected.
at com.ibm.io.async.ResultHandler.runEventProcessingLoop(ResultHandler.java:679)
at com.ibm.io.async.ResultHandler$CompletionProcessingRunnable.run(ResultHandler.java:910)
at java.lang.Thread.run(Thread.java:813)
Dump of callerThis =
Object type = com.ibm.ws.websvcs.transport.http.HTTPConnection
com.ibm.ws.websvcs.transport.http.HTTPConnection#db30db3
Exception = java.io.IOException
Source = com.ibm.ws.websvcs.transport.http.HTTPConnection.connect
probeid = 213
Dump of callerThis =
Object type = com.ibm.ws.websvcs.transport.http.HTTPConnection
_tc =
defaultMessageFile = com.ibm.ejs.resources.seriousMessages
EXTENSION_NAME_DPID = DiagnosticProvider
ivDumpEnabled = false
ivResourceBundleName = com.ibm.ws.websvcs.resources.websvcsMessages
ivLogger = null
ivDiagnosticProviderID = null
anyTracingEnabled = true
ivLevel = 1
ivName = com.ibm.ws.websvcs.transport.http.HTTPConnection
ivDebugEnabled = true
ivEventEnabled = true
ivEntryEnabled = true
ivDetailEnabled = true
ivConfigEnabled = true
ivInfoEnabled = true
ivServiceEnabled = true
ivWarningEnabled = true
ivErrorEnabled = true
ivFatalEnabled = true
chainname = HttpsOutboundChain:xx-proxy-xxxxx.xxx.xxxx.com:8080:1665256594:10.21.197.161:9443
............."
We have tried setting the properties (https.proxyHost, https.proxyPort) at the system level and also in the SOAP header; nothing works.
We are using BindingProvider.
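For context, a sketch of the attempts described above (host and port are placeholders; https.proxyHost/https.proxyPort are the standard JVM proxy properties, and JAX-WS does not standardize proxy keys in the request context, so the second variant is implementation-dependent):
import javax.xml.ws.BindingProvider;

// JVM-wide proxy settings (the "system level" attempt):
System.setProperty("https.proxyHost", "proxy.example.com"); // placeholder host
System.setProperty("https.proxyPort", "8080");              // placeholder port

// Implementation-dependent attempt via the JAX-WS request context;
// servicePort is the generated port proxy obtained from the Service.
BindingProvider bp = (BindingProvider) servicePort;
bp.getRequestContext().put("https.proxyHost", "proxy.example.com");
bp.getRequestContext().put("https.proxyPort", "8080");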
Any help is much appreciated