Kafka security implementation issue with SASL_SSL and SCRAM

I'm facing an error while starting the Kafka server.
I have set up SSL and it's working fine for the 3 Kafka brokers, and Zookeeper is also set up with SSL.
Now I've tried to set up SCRAM with SASL_SSL for the Kafka broker in the server properties file, but it's not working. I created a user with the following command:
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --alter --add-config 'SCRAM-SHA-512=[password=DEM123]'
and I can see the user is created.
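A quick way to confirm this is a describe call over the same TLS connection (a sketch, reusing the --zk-tls-config-file from above):

kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --describe --entity-type users --entity-name broker-admin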
But when I try to start the Kafka broker with
kafka-server-start.sh -daemon server-0.properties
it fails. Checking the server.log file, I see:
[2021-10-05 16:21:38,369] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /config/users/broker-admin
Can anyone help me?
Let me share my zookeeper.properties file:
dataDir=/var/www/kafka/data/zookeeper
clientPort=2181
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/var/www/kafka/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=zookeepbook
ssl.keyStore.location=/var/www/kafka/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=zookeepbook
ssl.clientAuth=need
maxClientCnxns=0
admin.enableServer=true
admin.serverPort=9090
server.1=localhost:2888:3888
server.properties file content:
broker.id=0
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
zookeeper.connect=localhost:2182
log.dirs=/var/www/kafka/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
zookeeper.ssl.truststore.password=zookeepbookbrk0
zookeeper.ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
zookeeper.ssl.keystore.password=zookeepbookbrk0
zookeeper.set.acl=true
ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
ssl.truststore.password=zookeepbookbrk0
ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
ssl.keystore.password=zookeepbookbrk0
ssl.key.password=zookeepbookbrk0
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=none
ssl.protocol=TLSv1.2
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username='broker-admin' password=DEM123;
super.users=User:broker-admin
authorizer.class.name=kafka.security.authorizer.AclAuthorizer

Can you try setting skipACL=yes in your zookeeper.properties?
If you authenticated to Zookeeper with an SSL client certificate when you created the 'broker-admin' user, the znode's ACL is bound to that certificate, so I think access is denied to any identity other than the one you used when you executed the command (the broker authenticates with its own certificate).
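A minimal sketch of the workaround (note that this disables znode ACL enforcement entirely, so use it with care; depending on your Zookeeper version you may need to pass it as a JVM system property rather than a config entry):

# zookeeper.properties: bypass ACL checks on all znodes
skipACL=yes

# or, as a system property when starting Zookeeper:
export KAFKA_OPTS="-Dzookeeper.skipACL=yes"
zookeeper-server-start.sh -daemon zookeeper.properties

After restarting Zookeeper, the broker should be able to read /config/users/broker-admin.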

Related

Apache Kafka doesn't start after SSL configuration

I have Apache Kafka (v. 2.13-3.0.0) installed on a remote Ubuntu server.
I followed this tutorial to secure my cluster:
https://medium.com/egen/securing-kafka-cluster-using-sasl-acl-and-ssl-dec15b439f9d
but when I try to start Kafka with the JAAS conf file, using the commands:
export KAFKA_OPTS=-Djava.security.auth.login.config=<kafka-binary-dir>/config/kafka_server_jaas.conf
./bin/kafka-server-start.sh ./config/server.properties
I receive the error:
[2021-11-12 10:30:47,864] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-11-12 10:30:48,089] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-11-12 10:30:48,099] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.lang.ClassNotFoundException: kafka.security.auth.SimpleAclAuthorizer
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.utils.Utils.loadClass(Utils.java:417)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
This is the SSL config in the server.properties file:
########### SECURITY using SCRAM-SHA-512 and SSL
listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
security.inter.broker.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
ssl.client.auth=required
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
# Broker security settings
ssl.truststore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/truststore/kafka.truststore.jks
ssl.truststore.password=giuseppe
ssl.keystore.location=/home/kafka/Downloads/kafka_2.13-3.0.0/config/keystore/kafka.keystore.jks
ssl.keystore.password=giuseppe
ssl.key.password=giuseppe
# ACLs
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
#zookeeper SASL
zookeeper.set.acl=false
########### SECURITY using SCRAM-SHA-512 and SSL
If I comment out the two ACL lines, I receive this error:
[2021-11-12 11:05:29,301] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-11-12 11:05:29,331] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to acquire lock on file .lock in /tmp/kafka-logs. A Kafka instance in another process or thread is using this directory.
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:241)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:37)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:236)
at kafka.log.LogManager.<init>(LogManager.scala:112)
at kafka.log.LogManager$.apply(LogManager.scala:1283)
at kafka.server.KafkaServer.startup(KafkaServer.scala:254)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
What is the cause? Could it be a wrong configuration?
Thanks.
Update:
Changing the line to:
authorizer.class.name=org.apache.kafka.server.authorizer.Authorizer
I receive this new error:
[2021-11-12 16:51:57,613] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
org.apache.kafka.common.KafkaException: Could not find a public no-argument constructor for org.apache.kafka.server.authorizer.Authorizer
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:392)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:406)
at kafka.security.authorizer.AuthorizerUtils$.createAuthorizer(AuthorizerUtils.scala:31)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1583)
at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1394)
at kafka.Kafka$.buildServer(Kafka.scala:67)
at kafka.Kafka$.main(Kafka.scala:87)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.NoSuchMethodException:
org.apache.kafka.server.authorizer.Authorizer.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3508)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2711)
at org.apache.kafka.common.utils.Utils.newInstance(Utils.java:390)
... 7 more
It seems that if you change
kafka.security.auth.SimpleAclAuthorizer
to
kafka.security.authorizer.AclAuthorizer
it should work; it worked for me. See the corrected lines below.
Kafka 3.0 removed SimpleAclAuthorizer.
Commit - https://github.com/apache/kafka/commit/976e78e405d57943b989ac487b7f49119b0f4af4#diff-e0ccf1b5c964d2c303b6a69a8b8b67df5a6bfbae8aa514f580d353c4c6bf8e36
The blog post seems to be using version 2.2.0.
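In other words, keeping the super.users line from the question, the ACL section of server.properties should read:

# ACLs (AclAuthorizer replaces the SimpleAclAuthorizer removed in Kafka 3.0)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin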

Securing communication between Kafka client and Zookeeper server

I've configured both the Kafka server and the Zookeeper server to use SSL/TLS with JKS keystores, and I've confirmed this using openssl. I'm using the Bitnami Helm charts for Kafka and Zookeeper. Below is the log output from Kafka. Judging by the Zookeeper logs, I'm pretty sure the Kafka client isn't sending requests to the Zookeeper server securely. How do I ensure the Kafka client uses SSL/TLS? I think the Kafka client needs to use a client.properties file when executing config commands, but I don't know how to pass this file in during configuration. The logs show that the Kafka client is trying to add a user called zookeeperUser to Zookeeper, and this communication is not secure.
Kafka Logs
09:56:31.43
09:56:31.43 Welcome to the Bitnami kafka container
09:56:31.44 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
09:56:31.44 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
09:56:31.44
09:56:31.44 INFO ==> ** Starting Kafka setup **
09:56:31.56 DEBUG ==> Validating settings in KAFKA_* env vars...
09:56:31.65 INFO ==> Initializing Kafka...
09:56:31.66 INFO ==> No injected configuration files found, creating default config files
09:56:32.96 INFO ==> Configuring Kafka for inter-broker communications with SASL_SSL authentication.
09:56:33.13 INFO ==> Configuring Kafka for client communications with SASL_SSL authentication.
09:56:33.43 INFO ==> Custom JAAS authentication file detected. Skipping generation.
09:56:33.43 WARN ==> The following environment variables will be ignored: KAFKA_CLIENT_USERS, KAFKA_CLIENT_PASSWORDS, KAFKA_INTER_BROKER_USER, KAFKA_INTER_BROKER_PASSWORD, KAFKA_ZOOKEEPER_USER and KAFKA_ZOOKEEPER_PASSWORD
09:56:33.44 INFO ==> Creating users in Zookeeper
09:56:33.44 DEBUG ==> Creating user zookeeperUser in zookeeper
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Error while executing config command with args '--zookeeper zookeeper.default.svc.cluster.local:3181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=zookeeperPassword],SCRAM-SHA-512=[password=zookeeperPassword] --entity-type users --entity-name zookeeperUser'
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1881)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:116)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:94)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
client.properties
cat > client.properties <<EOF
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=******
EOF
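For the "how do I pass this file in" part: kafka-configs.sh accepts a --zk-tls-config-file option (available since Kafka 2.5) for securing its Zookeeper connection, but it expects zookeeper.* TLS properties rather than the Kafka client properties above. A sketch, reusing the truststore path from client.properties:

cat > zookeeper-client.properties <<EOF
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.truststore.location=/tmp/kafka.truststore.jks
zookeeper.ssl.truststore.password=******
EOF

kafka-configs.sh --zookeeper zookeeper.default.svc.cluster.local:3181 \
  --zk-tls-config-file zookeeper-client.properties \
  --alter --add-config 'SCRAM-SHA-512=[password=zookeeperPassword]' \
  --entity-type users --entity-name zookeeperUser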
Zookeeper Logs
2021-02-11 09:56:43,055 [myid:1] - ERROR [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CertificateVerifier#434] - Unsuccessful handshake with session 0x0
2021-02-11 09:56:43,055 [myid:1] - WARN [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CnxnChannelHandler#273] - Exception caught
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
... 17 more

Kafka and SSL: java.lang.OutOfMemoryError: Java heap space when using the kafka-topics command on a Kafka SSL cluster

This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context:
The Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository :
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
which returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
In the server's log, the following line appears when I try to query it via "kafka-topics":
/var/log/kafka/server.log:
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using.
All functionality works properly (consumers, producers...) except "kafka-topics":
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate, which is signed by an internal CA that I added to the truststore specified in the configuration. OpenSSL shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine over SSL, thanks to the parameter "-consumer.config client-ssl.properties":
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now, I'm forced to use "--zookeeper", which according to the documentation is deprecated:
--zookeeper <String: hosts>  DEPRECATED, The connection string for the zookeeper connection in the form host:port. Multiple hosts can be given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME>  PartitionCount:3  ReplicationFactor:2  Configs:
Topic: <TOPIC-NAME> Partition: 0 Leader: <ID-3> Replicas: <ID-3>,<ID-1> Isr: <ID-1>,<ID-3>
Topic: <TOPIC-NAME> Partition: 1 Leader: <ID-1> Replicas: <ID-1>,<ID-2> Isr: <ID-2>,<ID-1>
Topic: <TOPIC-NAME> Partition: 2 Leader: <ID-2> Replicas: <ID-2>,<ID-3> Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use "--bootstrap-server" at the moment? Because of the "zookeeper" deprecation, I'm worried about no longer being able to inspect my topics and their details.
I believe kafka-topics needs the same kind of option as kafka-console-consumer, i.e. "-consumer.config".
Ask if any additional detail is needed.
Thanks a lot; I hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. The key is to use the following option:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and of course kafka-topics. See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
ssl-config here was "client-ssl.properties"; see the initial post for its content.
Beware: if you use an IP address as the bootstrap server, you'll get an error if the machine certificate doesn't have a Subject Alternative Name for that IP address. Try to have correct DNS resolution and use the FQDN if possible.
Hope this solution helps, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
then start your brokers on all 3 nodes and try again.
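If Kafka runs under systemd as described at the top of the question, the same limit can be set in the unit file instead; a sketch, assuming the path from the question:

# /etc/systemd/system/kafka.service
[Service]
Environment="KAFKA_HEAP_OPTS=-Xms1G -Xmx1G"

# then reload units and restart the broker:
systemctl daemon-reload
systemctl restart kafka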
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL

Exception while loading Zookeeper JAAS login context 'Client' [duplicate]

When I run Zookeeper from the package in kafka_2.12-2.3.0, I get the following error:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and the zookeeper_jaas.conf is
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret";
};
and the zookeeper.properties file is
server=localhost:9092
#server=localhost:2888:3888
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ibm" password="ibm-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/kafka/apache-zookeeper-3.5.5-bin/zookeeperkeys/client.truststore.jks
ssl.truststore.password=test1234
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
Can anyone suggest what the reason could be?
You seem to have mixed a bunch of Kafka SASL configuration into your Zookeeper configuration files. Zookeeper and Kafka have different SASL support, so it's not going to work.
I'm guessing you want to enable SASL authentication between Kafka and Zookeeper. In that case you need to follow the Zookeeper Server-Client guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
Zookeeper does not support SASL PLAIN, but DIGEST-MD5 is pretty similar. In that case your jaas.conf file should look like:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret"
user_bob="bobsecret";
};
Then you need to configure your Kafka brokers to connect to Zookeeper with SASL. You can do that using another jaas.conf file (this time loading it in Kafka):
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="bob"
password="bobsecret";
};
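Both files are loaded the same way as in the question, through java.security.auth.login.config; a minimal sketch, assuming the file locations:

# On the Zookeeper node (picks up the Server context):
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"
./bin/zookeeper-server-start.sh config/zookeeper.properties

# On each Kafka broker (picks up the Client context):
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.conf"
./bin/kafka-server-start.sh config/server.properties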
Note: you can also enable SASL between the Zookeeper servers. To do so, follow the Server-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication

Correlation Id errors for Kafka console producer

I have started a standalone Kafka server (version 2.11-0.11.0.1) with 1 node and 1 Zookeeper, and I am trying to implement SSL with ACLs, but I am unable to produce.
I performed the following steps:
Started the Kafka node using the following configuration (server.properties):
broker.id=0
listeners=PLAINTEXT://127.0.0.1:9092,SSL://127.0.0.1:9093
advertised.listeners=SSL://127.0.0.1:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.segment.bytes=1073741824
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
ssl.keystore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/server.truststore.jks
ssl.truststore.password=test1234
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:CN=jewel,OU=atos,O=atos,L=mum,ST=maha,C=in
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
inter.broker.listener.name=SSL
Modified producer.properties as follows:
bootstrap.servers=localhost:9093
compression.type=none
ssl.keystore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/prod/server.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
security.protocol=SSL
ssl.truststore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/client.truststore.jks
ssl.truststore.password=test1234
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
Created the ACLs in Zookeeper with the following command:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:CN=jewel,OU=atos,O=atos,L=mum,ST=maha,C=in --producer --topic securing-hey
Tried producing to the topic with the following command:
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic securing-hey --producer.config config/producer.properties
It fails with the following error:
WARN Connection to node -1 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
Can you please suggest what I can do to proceed further? Your help will be appreciated.
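One diagnostic worth trying (a sketch, not a confirmed fix): since ssl.client.auth=required is set, the broker must be able to validate the producer's certificate, so check the TLS handshake against the SSL listener directly:

# Shows the broker certificate and the CAs it accepts for client authentication
openssl s_client -connect 127.0.0.1:9093 -tls1_2

Also double-check that the producer's keystore password (test123 in the producer.properties above) matches the keystore it points to, and that the broker's truststore contains the CA that signed the producer's certificate.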