Why can I read ksqldb streams but not topics within ksql client? - ssl

I am testing ksqldb on AWS EC2 instances in the latest release (confluent 5.5.1) and have an access problem that I can't solve.
I have a secured Kafka server (SASL_SSL, SASL mechanism PLAIN), an unsecured Schema Registry (another issue with the Avro serializers, but OK for the moment), and a secured KSQL server and client.
Topics are filled properly with AVRO data (value only, no key) from a JDBC source connector.
I can access the KSQL Server with ksql without issues
I can access KSQL REST API without issues
When I list topics within ksql, I get the correct list.
When I select a push stream, I get messages when I push something into the topic (with Kafka Connect, in my case).
BUT: when I call "print topic", the client blocks for about 60 seconds and then fails with 'Timeout expired while fetching topic metadata'.
The ksql-kafka.log goes wild with repeated entries like:
[2020-09-02 18:52:46,246] WARN [Consumer clientId=consumer-2, groupId=null] Bootstrap broker ip-10-1-2-10.eu-central-1.compute.internal:9093 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1037)
The corresponding broker log shows:
Sep 2 18:52:44 ip-10-1-6-11 kafka-server-start: [2020-09-02 18:52:44,704] INFO [SocketServer brokerId=1002] Failed authentication with ip-10-1-2-231.eu-central-1.compute.internal/10.1.2.231 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
This is my ksql-server.properties file:
ksql.service.id=hf_kafka_ksql_001
bootstrap.servers=ip-10-1-11-229.eu-central-1.compute.internal:9093,ip-10-1-6-11.eu-central-1.compute.internal:9093,ip-10-1-2-10.eu-central-1.compute.internal:9093
ksql.streams.state.dir=/var/data/ksqldb
ksql.schema.registry.url=http://ip-10-1-1-22.eu-central-1.compute.internal:8081
ksql.output.topic.name.prefix=ksql-interactive-
ksql.internal.topic.replicas=3
confluent.support.metrics.enable=false
# currently the keystore contains only the ksql server and the certificate chain to the CA
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
ssl.client.auth=true
# Need to set this to empty, otherwise the REST API is not accessible with the client key.
ssl.endpoint.identification.algorithm=
# currently the truststore contains only the CA certificate
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
listeners=https://0.0.0.0:8088
advertised.listener=https://ip-10-1-2-231.eu-central-1.compute.internal:8088
authentication.method=BASIC
authentication.roles=admin,ksql,cli
authentication.realm=KsqlServerProps
# authentication for producers, needed for ksql commands like "Create Stream"
producer.ssl.endpoint.identification.algorithm=HTTPS
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
producer.ssl.truststore.password=ctpassword
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
# authentication for consumers, needed for ksql commands like "Create Stream"
consumer.ssl.endpoint.identification.algorithm=HTTPS
consumer.security.protocol=SASL_SSL
consumer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
consumer.ssl.truststore.password=ctpassword
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
I call ksql with:
ksql --user cli --password test --config-file /var/kafka-ssl/ksql_cli.properties https://ip-10-1-2-231.eu-central-1.compute.internal:8088
This is my ksql client configuration ksql_cli.properties:
security.protocol=SSL
#ssl.client.auth=true
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
JAAS config, passed as a parameter on service start:
KsqlServerProps {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
file="/var/kafka-ssl/cli.password"
debug="false";
};
with cli.password containing the authentication users and passwords for the ksql client.
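For completeness, this is roughly how that JAAS file is passed on service start (the file path ksql_jaas.conf is my own placeholder; the mechanism is the standard java.security.auth.login.config system property):
# placeholder path; point it at the JAAS config shown above
export KSQL_OPTS="-Djava.security.auth.login.config=/var/kafka-ssl/ksql_jaas.conf"
ksql-server-start /etc/ksql/ksql-server.properties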
I have tried just about every permutation of keys and settings, but to no avail. Obviously there is something wrong in my key management. For me, it is surprising that using streams works but accessing the low-level topics does not.
Has someone found a solution for this issue? I am really running out of ideas here. Thanks.

Found it! It was easy to overlook: the client's configuration of course needs a SASL setting as well. Without it, the client connects with plain SSL and skips the SASL handshake the broker expects, so the broker sees a METADATA request mid-handshake and drops the connection (exactly what the broker log showed).
security.protocol=SASL_SSL
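For reference, a minimal sketch of the corrected ksql_cli.properties. The SASL credentials are my assumption (I reuse the ksql principal from the server config above; substitute whatever your brokers accept):
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# assumed principal, same as in ksql-server.properties above
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="ksql" \
  password="ksqlsecret";
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword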

Related

How to use kafka-topics with SASL auth

I have Kafka brokers in a cluster. We use SASL authentication. How can I request, for example, the topic list using kafka-topics.sh?
I assume that I should run:
kafka-topics.sh \
--bootstrap-server kafka.broker:9092 \
--command-config config.properties \
--list
And pass these values in config.properties:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.username=user-name
sasl.password=password
ssl.key.location=/path/to/certs/key.pem
ssl.certificate.location=/path/to/certs/crt.pem
ssl.ca.location=/path/to/certs/ca.pem
When I run it I get
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:553)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:485)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:134)
at kafka.admin.TopicCommand$TopicService$.createAdminClient(TopicCommand.scala:205)
at kafka.admin.TopicCommand$TopicService$.apply(TopicCommand.scala:209)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:50)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:82)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:167)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:524)
We use the same values to connect from a Go service that uses the segmentio Kafka driver. What should the config be?
To pass SASL credentials you need to use the sasl.jaas.config setting. sasl.username and sasl.password are not valid settings with kafka-topics.sh (and the Java client).
For example:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user-name" \
password="password";
Similarly, ssl.key.location, ssl.certificate.location, and ssl.ca.location are not valid settings; you need to use ssl.keystore.location and ssl.truststore.location instead. See the full list of configurations: https://kafka.apache.org/documentation/#adminclientconfigs
See the SCRAM client configuration section in the Kafka docs if you want more details.
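Putting it together, a minimal sketch of a complete config.properties (paths are placeholders; the PEM files from the question would first need to be converted into a JKS or PKCS12 truststore, e.g. with keytool -importcert):
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user-name" \
  password="password";
# placeholder: a truststore built from ca.pem
ssl.truststore.location=/path/to/certs/truststore.jks
ssl.truststore.password=changeit
With that file, the command from the question should work unchanged:
kafka-topics.sh --bootstrap-server kafka.broker:9092 --command-config config.properties --list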

RKE2 Authorized endpoint configuration help required

I have a Rancher 2.6.67 server and an RKE2 downstream cluster. The cluster was created without an authorized cluster endpoint. The article "How to add an authorised cluster endpoint to a RKE2 cluster created by Rancher" describes how to add it to an existing cluster; although the answer looks promising, I must still be missing some detail, because it does not work for me.
Here is what I did:
Created /var/lib/rancher/rke2/kube-api-authn-webhook.yaml file with contents:
apiVersion: v1
kind: Config
clusters:
- name: Default
  cluster:
    insecure-skip-tls-verify: true
    server: http://127.0.0.1:6440/v1/authenticate
users:
- name: Default
  user:
    insecure-skip-tls-verify: true
current-context: webhook
contexts:
- name: webhook
  context:
    user: Default
    cluster: Default
and added
"kube-apiserver-arg": [
    "authentication-token-webhook-config-file=/var/lib/rancher/rke2/kube-api-authn-webhook.yaml"
]
to the /etc/rancher/rke2/config.yaml.d/50-rancher.yaml file.
After restarting rke2-server, I found the network configuration tab in Rancher and was able to enable the authorized endpoint. Here is where my success ends.
I tried creating a ServiceAccount and retrieving its secret to use token authorization, but connecting directly to the API endpoint on the master failed, as shown in the test below.
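For reference, this is roughly what I ran (the SA name test-sa and namespace default are made up for this example; 6443 is the default RKE2 API server port):
# hypothetical test: create an SA and call the RKE2 API server directly
kubectl create serviceaccount test-sa -n default
# on K8s 1.24+; on older versions, read the token from the SA's secret instead
TOKEN=$(kubectl create token test-sa -n default)
curl -k -H "Authorization: Bearer $TOKEN" https://<master-ip>:6443/api/v1/namespaces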
kube-api-auth pod logs this:
time="2022-10-06T08:42:27Z" level=error msg="found 1 parts of token"
time="2022-10-06T08:42:27Z" level=info msg="Processing v1Authenticate request..."
Also the log is full of messages like this:
E1006 09:04:07.868108 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:40.778350 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterAuthToken: failed to list *v3.ClusterAuthToken: the server could not find the requested resource (get clusterauthtokens.meta.k8s.io)
E1006 09:04:45.171554 1 reflector.go:139] pkg/mod/github.com/rancher/client-go#v1.22.3-rancher.1/tools/cache/reflector.go:168: Failed to watch *v3.ClusterUserAttribute: failed to list *v3.ClusterUserAttribute: the server could not find the requested resource (get clusteruserattributes.meta.k8s.io)
I found that SA tokens will not work this way, so I tried to use a Rancher user token, but that fails as well:
time="2022-10-06T08:37:34Z" level=info msg=" ...looking up token for kubeconfig-user-qq9nrc86vv"
time="2022-10-06T08:37:34Z" level=error msg="clusterauthtokens.cluster.cattle.io \"cattle-system/kubeconfig-user-qq9nrc86vv\" not found"
Checking the cattle-system namespace, there are no SA or secret entries corresponding to the users created in Rancher; however, I found related SA and secret entries in cattle-impersonation-system.
I tried creating a new user, but that too only resulted in new entries in the cattle-impersonation-system namespace, so I presume kube-api-auth wrongly assumes the secrets to live in the cattle-system namespace.
Now the questions:
Can I authenticate with a downstream RKE2 cluster using normal SA tokens (not ones created through the Rancher server)? If so, how?
What did I do wrong about adding the webhook authentication configuration? How to make it work?
I noticed that since I made the modifications described above, I cannot download the kubeconfig file from the Rancher UI for this cluster. What went wrong there?
Thanks in advance for any advice.

IBM MQ expects username and password when using SSL certificates (Error 2035)

I am stuck using SSL in IBM WebSphere MQ (9.2).
I am building a client library for MQ, and to get more familiar with MQ on the server side I installed IBM MQ Developer Edition and ran the supplied scripts to create a 'default' MQ server instance.
Created a client connection for the DEV.APP.SVRCONN server connection
Created a personal certificate by using the IBM Key management tool and named it ibmwebspheremq
Enabled SSL on the Queue Manager (QM1) and labelled it ibmwebspheremq
Updated the SSL configuration for the DEV.APP.SVRCONN channel and set the cipherspec property to TLS 1.2, 256-bit Secure Hash Algorithm, 128-bit AES encryption (TLS_RSA_WITH_AES_128_CBC_SHA256) and made SSL required.
Tested my settings with:
amqssslc -l ibmwebspheremq -k C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key -c DEV.APP.SVRCONN -x 127.0.0.1 -s TLS_RSA_WITH_AES_128_CBC_SHA256 -m QM1
And that gave me:
Sample AMQSSSLC start
Connecting to queue manager QM1
Using the server connection channel DEV.APP.SVRCONN
on connection name 127.0.0.1.
Using SSL CipherSpec TLS_RSA_WITH_AES_128_CBC_SHA256
Using SSL key repository stem C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key
Certificate Label: ibmwebspheremq
No OCSP configuration specified.
MQCONNX ended with reason code 2035
Error details (from log):
The active values of the channel were 'MCAUSER(app) CLNTUSER(Wilko)
SSLPEER(SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL)
SSLCERTI(CN=DESKTOP-ROH98N2,C=NL) ADDRESS(DESKTOP-ROH98N2)'. The
MATCH(RUNCHECK) mode of the DISPLAY CHLAUTH MQSC command can be used to
identify the relevant CHLAUTH record.
ACTION:
Ensure that the application provides a valid user ID and password, or change
the queue manager connection authority (CONNAUTH) configuration to OPTIONAL to
allow client applications to connect which have not supplied a user ID and
password.
----- cmqxrmsa.c : 2086 -------------------------------------------------------
22/11/2021 15:51:37 - Process(15880.45) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(DESKTOP-ROH98N2) Installation(Installation1)
VRMF(9.2.3.0) QMgr(QM1)
Time(2021-11-22T14:51:37.594Z)
CommentInsert1(DEV.APP.SVRCONN)
CommentInsert2(15880(1112))
CommentInsert3(127.0.0.1)
AMQ9999E: Channel 'DEV.APP.SVRCONN' to host '127.0.0.1' ended abnormally.
EXPLANATION:
The channel program running under process ID 15880(1112) for channel
'DEV.APP.SVRCONN' ended abnormally. The host name is '127.0.0.1'; in some cases
the host name cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 630 --------------------------------------------------------
I am kind of stuck. I also saw in the log that there is PEER-related info dumped, but I am not using the SSLPEER settings (I just want to let everyone connect with the same certificate).
EDIT 2:
Output from RUNMQSC QM1 and command DISPLAY QMGR CONNAUTH:
1 : DISPLAY QMGR CONNAUTH
AMQ8408I: Display Queue Manager details.
QMNAME(QM1) CONNAUTH(DEV.AUTHINFO)
Output from RUNMQSC QM1 and command DISPLAY AUTHINFO(name-from-previous-command):
3 : DISPLAY AUTHINFO(DEV.AUTHINFO)
AMQ8566I: Display authentication information details.
AUTHINFO(DEV.AUTHINFO) AUTHTYPE(IDPWOS)
ADOPTCTX(YES) DESCR( )
CHCKCLNT(REQDADM) CHCKLOCL(OPTIONAL)
FAILDLAY(1) AUTHENMD(OS)
ALTDATE(2021-11-18) ALTTIME(15.09.20)
Output from DISPLAY CHLAUTH(*):
4 : DISPLAY CHLAUTH(*)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.ADMIN.SVRCONN) TYPE(USERMAP)
CLNTUSER(admin) USERSRC(CHANNEL)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.ADMIN.SVRCONN) TYPE(BLOCKUSER)
USERLIST(nobody)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.APP.SVRCONN) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(CHANNEL)
CHCKCLNT(REQUIRED)
AMQ8878I: Display channel authentication record details.
CHLAUTH(SYSTEM.ADMIN.SVRCONN) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(CHANNEL)
AMQ8878I: Display channel authentication record details.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(NOACCESS)
I was expecting not to have to provide a username and password when using certificates. What am I missing here?
Your queue manager is configured to mandate passwords for any client connections that are trying to run with a resolved MCAUSER that is privileged. That is what CHCKCLNT(REQDADM) on your AUTHINFO(DEV.AUTHINFO) does.
In addition, your CHLAUTH rule for the DEV.APP.SVRCONN channel has upgraded this further to mandate passwords for ALL connections using that channel.
If your intent is to have channels that supply a certificate not be subject to this mandate, then you should add a further, more specific CHLAUTH rule, something along these lines:
SET CHLAUTH(DEV.APP.SVRCONN) TYPE(SSLPEERMAP) +
SSLPEER('SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL') +
SSLCERTI('CN=DESKTOP-ROH98N2,C=NL') CHCKCLNT(ASQMGR) USERSRC(CHANNEL)
Bear in mind that if this connection is asserting a privileged user id, it will still be required to supply a password from the system-wide setting of CHCKCLNT(REQDADM).
Remember, if you are ever unsure which CHLAUTH rule you are matching against, all those details you saw in the error message can be used to form a DISPLAY CHLAUTH command to discover exactly which rule you have matched. Read more about that in I’m being blocked by CHLAUTH – how can I work out why?
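For example, a sketch built from the values in the error message above (enter it in runmqsc QM1; every attribute value is copied from the log excerpt):
DISPLAY CHLAUTH(DEV.APP.SVRCONN) MATCH(RUNCHECK) +
  CLNTUSER('Wilko') ADDRESS('127.0.0.1') +
  SSLPEER('SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL') +
  SSLCERTI('CN=DESKTOP-ROH98N2,C=NL')
This displays the single CHLAUTH record that such a connection would actually match.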

KAFKA and SSL : java.lang.OutOfMemoryError: Java heap space when using kafka-topics command on KAFKA SSL cluster

This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context :
The Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository :
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
which returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
In the server's log (/var/log/kafka/server.log), the following line appears when I query it via kafka-topics:
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using. All functionality works properly (consumers, producers...) except kafka-topics:
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate (it is signed by an internal CA, which I added to the truststore specified in the configuration). OpenSSL shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine with SSL, thanks to the parameter -consumer.config client-ssl.properties:
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now, I'm forced to use --zookeeper, which according to the documentation is deprecated:
--zookeeper <String: hosts> DEPRECATED, The connection string for
the zookeeper connection in the form
host:port. Multiple hosts can be
given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME>  PartitionCount:3  ReplicationFactor:2  Configs:
    Topic: <TOPIC-NAME>  Partition: 0  Leader: <ID-3>  Replicas: <ID-3>,<ID-1>  Isr: <ID-1>,<ID-3>
    Topic: <TOPIC-NAME>  Partition: 1  Leader: <ID-1>  Replicas: <ID-1>,<ID-2>  Isr: <ID-2>,<ID-1>
    Topic: <TOPIC-NAME>  Partition: 2  Leader: <ID-2>  Replicas: <ID-2>,<ID-3>  Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use --bootstrap-server at the moment? Because of the zookeeper deprecation, I'm worried about not being able to consult my topics and their details...
I believe that kafka-topics needs the same kind of option as kafka-console-consumer, i.e. -consumer.config...
Ask if any additional precision is needed.
Thanks a lot, hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. (The OutOfMemoryError is a classic symptom of a plaintext client hitting an SSL port: the client misreads the first bytes of the TLS handshake as a message length and tries to allocate a huge buffer.) The key is to use the following setting:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and of course kafka-topics. See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
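Applied to the failing command from the question, that becomes:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME> --command-config client-ssl.properties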
Here ssl-config was client-ssl.properties; see the initial post for its content.
Beware: if you use an IP address as the bootstrap server, you'll get an error if the machine certificate doesn't have a subject alternative name with that IP address. Try to have correct DNS resolution and use the FQDN if possible.
Hope this solution helps, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
Then start your brokers on all 3 nodes and try again.
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL

mbsync authentication failed

I was able to configure mbsync and mu4e to use my Gmail account (so far everything works fine). I am now in the process of using mu4e-context to control multiple accounts.
I cannot retrieve emails from my openmailbox account; instead I receive this error:
Reading configuration file .mbsyncrc
Channel ombx
Opening master ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave ombx-local...
Connection is now encrypted
Logging in...
IMAP command 'LOGIN <user> <pass>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
In other posts I've seen people suggesting AuthMechs LOGIN or PLAIN, but mbsync doesn't recognize the command. Here is my .mbsyncrc file:
IMAPAccount openmailbox
Host imap.ombx.io
User user#openmailbox.org
UseIMAPS yes
# AuthMechs LOGIN
RequireSSl yes
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"
IMAPStore ombx-remote
Account openmailbox
MaildirStore ombx-local
Path ~/Mail/user#openmailbox.org/
Inbox ~/Mail/user#openmailbox.org/Inbox/
Channel ombx
Master :ombx-remote:
Slave :ombx-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns *
Create Slave
Expunge Both
Sync All
SyncState *
I am using Linux Mint and my isync is version 1.1.2
Thanks in advance for any help
EDIT: I ran with the debug option and upgraded isync to version 1.2.1.
This is what the debug returned:
Reading configuration file .mbsyncrc
Channel ombx
Opening master store ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave store ombx-local...
pattern '*' (effective '*'): Path, no INBOX
got mailbox list from slave:
Connection is now encrypted
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Openmailbox is ready to
handle your requests.
Logging in...
Authenticating with SASL mechanism PLAIN...
>>> 1 AUTHENTICATE PLAIN <authdata>
1 NO [AUTHENTICATIONFAILED] Authentication failed.
IMAP command 'AUTHENTICATE PLAIN <authdata>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
My .mbsyncrc file now contains these options instead:
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
In the end, the solution was to use the correct password. Since openmailbox uses an application password for third-party e-mail clients, I had been using the wrong (original) password instead of the application password.
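For reference, a minimal sketch of the account section that ended up working, combining the isync 1.2.x options from the edit above (Host, User, and PassCmd are taken from the question; the only real change is that ~/.authinfo.gpg now stores the application password):
IMAPAccount openmailbox
Host imap.ombx.io
User user#openmailbox.org
# isync 1.2.x: SSLType/SSLVersions replace the old UseIMAPS/RequireSSl options
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
# the gpg-encrypted file must now contain the application password, not the account password
PassCmd "gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p'"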