Correlation Id errors for Kafka console producer - authentication

I have started a standalone Kafka server (version 2.11-0.11.0.1) with one broker and one ZooKeeper node. I am trying to implement SSL with ACLs but am unable to produce.
Performed the following steps:
Started the Kafka node with the following configuration (server.properties):
broker.id=0
listeners=PLAINTEXT://127.0.0.1:9092,SSL://127.0.0.1:9093
advertised.listeners=SSL://127.0.0.1:9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.segment.bytes=1073741824
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
ssl.keystore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/server.truststore.jks
ssl.truststore.password=test1234
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:CN=jewel,OU=atos,O=atos,L=mum,ST=maha,C=in
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
inter.broker.listener.name=SSL
Modified producer.properties as follows:
bootstrap.servers=localhost:9093
compression.type=none
ssl.keystore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/prod/server.keystore.jks
ssl.keystore.password=test123
ssl.key.password=test123
security.protocol=SSL
ssl.truststore.location=/u/jewel/ssl+acl/kafka_2.11-0.11.0.1/kaf-new/client.truststore.jks
ssl.truststore.password=test1234
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
Created the ACLs in ZooKeeper with the following command:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:CN=jewel,OU=atos,O=atos,L=mum,ST=maha,C=in --producer --topic securing-hey
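To double-check that the grant landed, the same tool can list the ACLs for the topic:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic securing-hey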
Tried producing to the topic with the following command:
bin/kafka-console-producer.sh --broker-list localhost:9093 --topic securing-hey --producer.config config/producer.properties
It fails with the following error:
WARN Connection to node -1 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
Can you please suggest what I can do to proceed further? Your help will be appreciated.
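One way to narrow this down is to exercise the TLS layer directly, outside Kafka. A sketch using the paths and passwords from the configs above; note that with ssl.client.auth=required the broker must be able to verify the certificate in the producer's keystore against its own truststore:
# check that the broker's SSL listener completes a handshake and shows its certificate chain
openssl s_client -connect localhost:9093 -tls1_2 </dev/null
# inspect the client certificate DN in the producer keystore (password from producer.properties)
keytool -list -v -keystore /u/jewel/ssl+acl/kafka_2.11-0.11.0.1/prod/server.keystore.jks -storepass test123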

Related

Kafka Security implementation issue SASL SSL and SCRAM

I'm facing an error while starting the Kafka server.
I have set up SSL and it's working fine for 3 Kafka brokers, and ZooKeeper is also set up with SSL.
Now I've tried to set up SCRAM with SASL_SSL for the Kafka broker via the server properties file.
It's not working. I created a user with the following command:
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --alter --add-config 'SCRAM-SHA-512=[password=DEM123]'
and I can see the user is created.
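(Presumably that check is a describe call of the same form, with the same TLS client options; this exact command is an assumption on my part:)
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --describe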
But when trying to start the Kafka broker with
kafka-server-start.sh -daemon server-0.properties
it fails. Checking the server.log file, I found:
[2021-10-05 16:21:38,369] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /config/users/broker-admin
Can anyone help?
Let me share my zookeeper.properties file:
dataDir=/var/www/kafka/data/zookeeper
clientPort=2181
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/var/www/kafka/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=zookeepbook
ssl.keyStore.location=/var/www/kafka/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=zookeepbook
ssl.clientAuth=need
maxClientCnxns=0
admin.enableServer=true
admin.serverPort=9090
server.1=localhost:2888:3888
server.properties file content :
broker.id=0
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
zookeeper.connect=localhost:2182
log.dirs=/var/www/kafka/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
zookeeper.ssl.truststore.password=zookeepbookbrk0
zookeeper.ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
zookeeper.ssl.keystore.password=zookeepbookbrk0
zookeeper.set.acl=true
ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
ssl.truststore.password=zookeepbookbrk0
ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
ssl.keystore.password=zookeepbookbrk0
ssl.key.password=zookeepbookbrk0
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=none
ssl.protocol=TLSv1.2
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username='broker-admin' password=DEM123;
super.users=User:broker-admin
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Can you try setting 'skipACL=yes' in your zookeeper.properties?
If you authenticated to ZooKeeper using an SSL client cert when you created the 'broker-admin' user, I think it's because access from any identity other than the one you used when running that command is denied.
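A minimal sketch of that suggestion. skipACL has historically been a ZooKeeper JVM system property, so both forms are shown; which one applies depends on how ZooKeeper is started (the KAFKA_OPTS route assumes Kafka's bundled zookeeper-server-start.sh):
# in zookeeper.properties, as suggested above
skipACL=yes
# or as a JVM system property before starting ZooKeeper
export KAFKA_OPTS="-Dzookeeper.skipACL=yes"
bin/zookeeper-server-start.sh config/zookeeper.properties
Remember to revert this once the user znode ACLs are sorted out, since it disables ZooKeeper ACL checks entirely.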

Trying to establish Kafka with 2-way SSL but getting an error

I am trying to establish 2 way TLS authentication with Kafka (version 2.4.0).
My server.properties configuration:
listeners=SSL://kafka1:9092
advertised.listeners=SSL://<public_ip>:9092
ssl.keystore.location=path/server.keystore.jks
ssl.keystore.password=pass
ssl.truststore.location=path/server.truststore.jks
ssl.truststore.password=pass
ssl.endpoint.identification.algorithm=HTTPS
security.inter.broker.protocol=SSL
client.properties:
bootstrap.servers=kafka1:9092
security.protocol=SSL
ssl.keystore.location=path/client.keystore.jks
ssl.keystore.password=pass
ssl.truststore.location=path/client.truststore.jks
ssl.truststore.password=pass
The Kafka server and ZooKeeper are running.
But when I try to create a topic:
kafka-topics.sh --bootstrap-server kafka1:9092 --create --replication-factor 1 --partitions 1 --topic sometopic --command-config config/client.properties
I get an error:
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: No name matching kafka1 found
My certificate is tied to the kafka1 name, and kafka1 is mapped to the local IP.
Have you seen such an error before?
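Since ssl.endpoint.identification.algorithm=HTTPS enables hostname verification, the broker certificate must carry kafka1 in its CN or (preferably) in a SubjectAlternativeName entry. A sketch for checking, and for regenerating the key with a SAN; the alias and validity here are illustrative:
# see which names the broker certificate actually contains
keytool -list -v -keystore path/server.keystore.jks -storepass pass
# regenerate the broker key with a SAN for kafka1 (it then needs signing/trusting as before)
keytool -genkeypair -keystore path/server.keystore.jks -storepass pass -alias kafka1 \
  -dname "CN=kafka1" -ext "SAN=DNS:kafka1" -keyalg RSA -validity 365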

KAFKA and SSL : java.lang.OutOfMemoryError: Java heap space when using kafka-topics command on KAFKA SSL cluster

This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context:
The Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository:
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
which returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
In the server's log, the following line appears when I query it via kafka-topics:
/var/log/kafka/server.log :
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using.
All functionality works properly (consumers, producers, ...) except kafka-topics:
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate, which is signed by an internal CA that I added to the truststore specified in the configuration. OpenSSL shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine with SSL, thanks to the parameter "-consumer.config client-ssl.properties":
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now, I'm forced to use "--zookeeper", which, according to the documentation, is deprecated:
--zookeeper <String: hosts> DEPRECATED, The connection string for
the zookeeper connection in the form
host:port. Multiple hosts can be
given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME>  PartitionCount:3  ReplicationFactor:2  Configs:
    Topic: <TOPIC-NAME>  Partition: 0  Leader: <ID-3>  Replicas: <ID-3>,<ID-1>  Isr: <ID-1>,<ID-3>
    Topic: <TOPIC-NAME>  Partition: 1  Leader: <ID-1>  Replicas: <ID-1>,<ID-2>  Isr: <ID-2>,<ID-1>
    Topic: <TOPIC-NAME>  Partition: 2  Leader: <ID-2>  Replicas: <ID-2>,<ID-3>  Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use --bootstrap-server at the moment? Because of the --zookeeper deprecation, I'm worried about no longer being able to inspect my topics and their details...
I believe kafka-topics needs the same kind of option as kafka-console-consumer's -consumer.config...
Ask if any additional detail is needed.
Thanks a lot; I hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. The key is to use the following option:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and, of course, kafka-topics. See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
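Applied to the failing command from the question, that gives:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME> --command-config client-ssl.properties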
The <ssl-config> here was "client-ssl.properties"; see the initial post for its content.
Beware: if you use an IP address for the bootstrap server, you'll get an error if the machine certificate doesn't have a subject alternative name with that IP address. Try to have correct DNS resolution and use the FQDN if possible.
Hope this solution will help, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
then start your brokers on all 3 nodes and try again.
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL

Kafka with SSL - write to a topic - authorization error

I am trying to produce from a command line to a topic that is on a local Kafka cluster with SSL enabled.
The topic was just created with:
kafka-topics --zookeeper localhost:2181 --create --topic simple --replication-factor 1 --partitions 1
The command for producing is:
kafka-avro-console-producer \
--broker-list localhost:9092 --topic simple \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' \
--property schema.registry.url=http://localhost:8080
Typing:
{"f1": "Alyssa"}
ERROR:
Error when sending message to topic simple with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback:52)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [simple]
How can I add access to this topic? What is the correct ACL command? (I am running this on my local machine.)
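For reference, a producer grant of the usual form would look something like this, assuming the SimpleAclAuthorizer backed by ZooKeeper and a principal matching your client certificate's DN (the DN below is a placeholder):
kafka-acls --authorizer-properties zookeeper.connect=localhost:2181 --add \
  --allow-principal "User:CN=myclient,OU=myorg,O=myorg,L=mycity,ST=mystate,C=us" \
  --producer --topic simple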
Since you have an SSL-protected cluster, I suggest creating a config file and using this parameter for the CLI to reference it:
--producer.config <String: config file> Producer config properties file. Note
that [producer-property] takes
precedence over this config
An example file would look like:
bootstrap.servers=localhost:9092
# SSL related properties
security.protocol=SSL
ssl.keystore.location=/path/to/keystore-cert.jks
ssl.truststore.location=/path/to/truststore-cert.jks
ssl.key.password=xxxx
ssl.keystore.password=yyyy
ssl.truststore.password=zzzz
Otherwise, you must pass each of these one-by-one via the --producer-property option.
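For example (a sketch; the paths and passwords are the placeholders from the file above):
kafka-avro-console-producer --broker-list localhost:9092 --topic simple \
  --producer-property security.protocol=SSL \
  --producer-property ssl.keystore.location=/path/to/keystore-cert.jks \
  --producer-property ssl.keystore.password=yyyy \
  --producer-property ssl.key.password=xxxx \
  --producer-property ssl.truststore.location=/path/to/truststore-cert.jks \
  --producer-property ssl.truststore.password=zzzz \
  --property schema.registry.url=http://localhost:8080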
I got the same error due to SSL properties; you need to pass the SSL config.
Below is a sample for an Avro message:
./kafka-avro-console-producer --topic <topic_name> --broker-list <broker:port> \
  --producer.config ssl.properties \
  --property schema.registry.url=<schema_registry_url> \
  --property value.schema="$(< /path/to/avro_schema.avsc)" \
  --property key.schema='{"type":"string"}' \
  --property parse.key=true --property key.separator=":"
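Here ssl.properties holds the client-side SSL settings, along these lines (placeholder values):
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
# plus keystore settings if the cluster requires client authentication
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>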

Kafka Authentication: Producer Unable to Connect

I am trying to replicate the SASL_PLAIN or SASL_SSL authentication described at: http://docs.confluent.io/3.0.0/kafka/sasl.html#sasl-configuration-for-kafka-brokers
In config/server.properties, I added the following 4 lines:
listeners=SASL_SSL://localhost:9092
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
In config/producer.properties, I added the following two lines:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
Then I set the following environment variable in the server terminal:
KAFKA_OPTS=/home/kafka/kafka_server_jaas.conf
This file has the following content:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
And in the producer terminal I define the following env variable:
KAFKA_OPTS=/home/kafka/kafka_client_jaas.conf
And this file has the following content:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-dsecret";
};
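(As an aside, the Kafka docs pass the JAAS file to the JVM as a system property rather than as a bare path, so if the KAFKA_OPTS values above are literal, that is worth checking. A sketch, using the same paths:)
# server terminal
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/kafka/kafka_server_jaas.conf"
# producer terminal
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/kafka/kafka_client_jaas.conf"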
I start the server with the following command:
./bin/kafka-server-start.sh config/server.properties
And the producer with following command:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Both start without problems. But, as soon as I type something on the producer console, I get the following message that keeps scrolling:
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
If I remove the security configuration from the server and the producer configuration, everything works as expected. I am using Kafka 0.10.0.1.
UPDATE:
I did some more investigation; turning log levels to DEBUG on the server reveals something odd. As soon as I specify the listeners field in server.properties, the server gets into a strange state: it establishes connections to itself that it cannot authenticate. The protocol in this case was SASL_PLAINTEXT.
The logs are here:
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2016-09-15 21:43:02 DEBUG NetworkClient:476 - Completed connection to node 0
2016-09-15 21:43:02 DEBUG Acceptor:52 - Accepted connection from /127.0.0.1 on /127.0.0.1:9092. sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
2016-09-15 21:43:02 DEBUG Processor:52 - Processor 2 listening to new connection from /127.0.0.1:42815
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to HANDSHAKE_REQUEST
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:310 - Handle Kafka request SASL_HANDSHAKE
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:354 - Using SASL mechanism 'PLAIN' provided by client
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to AUTHENTICATE
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to INITIAL
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to INTERMEDIATE
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to FAILED
2016-09-15 21:43:02 DEBUG Selector:345 - Connection with /127.0.0.1 disconnected
java.io.IOException: javax.security.sasl.SaslException: Authentication failed: Invalid JAAS configuration [Caused by javax.security.sasl.SaslException: Authentication failed: Invalid username or password]
at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:243)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at kafka.network.Processor.poll(SocketServer.scala:472)
There is absolutely no other client or server running. This is one server talking to itself.
Any thoughts?
Help came from the Kafka forum. See http://mail-archives.apache.org/mod_mbox/kafka-users/201609.mbox/%3CCAHX2Snk11vg7DXNVUr9oE97ikFSQUoT3kBLAxYymEDj7E14XrQ%40mail.gmail.com%3E
I had the credentials wrong. They were:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="alice-secret"
user_alice="alice-secret";
};
Instead of:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
Also, the console consumer needs to be called in a certain way. First, the flag --new-consumer should be provided. Second, the bootstrap server should be specified, leading to this:
bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092