I am trying to replicate the SASL_PLAINTEXT or SASL_SSL authentication described at: http://docs.confluent.io/3.0.0/kafka/sasl.html#sasl-configuration-for-kafka-brokers
In config/server.properties, I added the following 4 lines:
listeners=SASL_SSL://localhost:9092
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
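Note that a SASL_SSL listener also needs the broker's SSL keystore (and usually truststore) settings, which are not shown here; roughly the following, with placeholder paths and passwords:
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=keystore-password
ssl.key.password=key-password
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=truststore-password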
In config/producer.properties, I added the following two lines:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
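With SASL_SSL the producer must also trust the broker's certificate, so producer.properties would additionally need something like this (placeholder path and password), unless the certificate is signed by a CA already in the JVM's default truststore:
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=truststore-password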
Then I set the following environment variable in the server terminal:
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/kafka/kafka_server_jaas.conf"
This file has the following content:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
And in the producer terminal I define the following environment variable:
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/kafka/kafka_client_jaas.conf"
And this file has the following content:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice"
password="alice-dsecret";
};
I start the server with the following command:
./bin/kafka-server-start.sh config/server.properties
And the producer with the following command:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
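Note that the console producer only applies the settings in config/producer.properties if the file is passed explicitly; presumably something like:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties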
Both start without problems. But, as soon as I type something on the producer console, I get the following message that keeps scrolling:
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
WARN Bootstrap broker localhost:9092 disconnected (org.apache.kafka.clients.NetworkClient)
If I remove the security configuration from the server and the producer configuration, everything works as expected. I am using Kafka 0.10.0.1.
UPDATE:
I did some more investigation; turning the log level to DEBUG on the server reveals something odd. As soon as I specify the listeners field in server.properties, the server goes into a strange state: it establishes a connection to itself that it cannot authenticate. The protocol in this case was SASL_PLAINTEXT.
The logs are here:
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
2016-09-15 21:43:02 DEBUG NetworkClient:476 - Completed connection to node 0
2016-09-15 21:43:02 DEBUG Acceptor:52 - Accepted connection from /127.0.0.1 on /127.0.0.1:9092. sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
2016-09-15 21:43:02 DEBUG Processor:52 - Processor 2 listening to new connection from /127.0.0.1:42815
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to HANDSHAKE_REQUEST
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:310 - Handle Kafka request SASL_HANDSHAKE
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:354 - Using SASL mechanism 'PLAIN' provided by client
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to AUTHENTICATE
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to INITIAL
2016-09-15 21:43:02 DEBUG SaslClientAuthenticator:204 - Set SASL client state to INTERMEDIATE
2016-09-15 21:43:02 DEBUG SaslServerAuthenticator:269 - Set SASL server state to FAILED
2016-09-15 21:43:02 DEBUG Selector:345 - Connection with /127.0.0.1 disconnected
java.io.IOException: javax.security.sasl.SaslException: Authentication failed: Invalid JAAS configuration [Caused by javax.security.sasl.SaslException: Authentication failed: Invalid username or password]
at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:243)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at kafka.network.Processor.poll(SocketServer.scala:472)
There is absolutely no other client or server running. This is one server talking to itself.
Any thoughts?
Help came from the Kafka forum. See http://mail-archives.apache.org/mod_mbox/kafka-users/201609.mbox/%3CCAHX2Snk11vg7DXNVUr9oE97ikFSQUoT3kBLAxYymEDj7E14XrQ%40mail.gmail.com%3E
I had the credentials wrong. They were:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="alice-secret"
user_alice="alice-secret";
};
Instead of:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
Also, the console consumer needs to be invoked in a certain way: first, the --new-consumer flag should be provided; second, the bootstrap server should be specified. Leading to this:
bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092
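For reference, config/consumer.properties needs the same SASL settings as the producer (and the consumer JVM needs the same client JAAS file via KAFKA_OPTS); a minimal sketch, assuming the PLAIN setup above:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN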
Related
I'm facing an error while starting the Kafka server.
I have set up SSL and it's working fine for the 3 Kafka brokers, and ZooKeeper is also set up with SSL.
Now I have tried to set up SCRAM with SASL_SSL for the Kafka broker in the server properties file, but it's not working.
I created a user with the following command:
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --alter --add-config 'SCRAM-SHA-512=[password=DEM123]'
and I can see the user is created.
But when I try to start the Kafka broker with
kafka-server-start.sh -daemon server-0.properties
it fails; the server.log file shows the following error:
[2021-10-05 16:21:38,369] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /config/users/broker-admin
Can anyone help me?
Let me share my zookeeper.properties file:
dataDir=/var/www/kafka/data/zookeeper
clientPort=2181
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/var/www/kafka/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=zookeepbook
ssl.keyStore.location=/var/www/kafka/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=zookeepbook
ssl.clientAuth=need
maxClientCnxns=0
admin.enableServer=true
admin.serverPort=9090
server.1=localhost:2888:3888
server.properties file content:
broker.id=0
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
zookeeper.connect=localhost:2182
log.dirs=/var/www/kafka/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
zookeeper.ssl.truststore.password=zookeepbookbrk0
zookeeper.ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
zookeeper.ssl.keystore.password=zookeepbookbrk0
zookeeper.set.acl=true
ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
ssl.truststore.password=zookeepbookbrk0
ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
ssl.keystore.password=zookeepbookbrk0
ssl.key.password=zookeepbookbrk0
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=none
ssl.protocol=TLSv1.2
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username='broker-admin' password=DEM123;
super.users=User:broker-admin
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Can you try setting 'skipACL=yes' in your zookeeper.properties?
If you authenticated with ZooKeeper using SSL client certificates when you created the 'broker-admin' user, I think the error is because access from anywhere other than where you executed that command is denied.
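That is, something along these lines in the zookeeper.properties shown above (or, if the config-file form is not picked up, the equivalent JVM flag -Dzookeeper.skipACL=yes). Note that this disables ACL checks entirely, so treat it as a diagnostic workaround:
skipACL=yes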
I've configured both the Kafka server and the Zookeeper server to use SSL/TLS with JKS keystores, and I've confirmed this using openssl. I'm using the Bitnami Helm charts for Kafka and Zookeeper. Below is the log output from Kafka. Judging by the Zookeeper logs, I'm pretty sure the Kafka client isn't sending requests to the Zookeeper server securely. How do I ensure the Kafka client uses SSL/TLS? I think the Kafka client needs to use a client.properties file when executing config commands with args, but I don't know how to pass this file in during configuration. The logs show that the Kafka client is trying to add a user called zookeeperUser to Zookeeper, and this communication is not secure.
Kafka Logs
09:56:31.43
09:56:31.43 Welcome to the Bitnami kafka container
09:56:31.44 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-kafka
09:56:31.44 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-kafka/issues
09:56:31.44
09:56:31.44 INFO ==> ** Starting Kafka setup **
09:56:31.56 DEBUG ==> Validating settings in KAFKA_* env vars...
09:56:31.65 INFO ==> Initializing Kafka...
09:56:31.66 INFO ==> No injected configuration files found, creating default config files
09:56:32.96 INFO ==> Configuring Kafka for inter-broker communications with SASL_SSL authentication.
09:56:33.13 INFO ==> Configuring Kafka for client communications with SASL_SSL authentication.
09:56:33.43 INFO ==> Custom JAAS authentication file detected. Skipping generation.
09:56:33.43 WARN ==> The following environment variables will be ignored: KAFKA_CLIENT_USERS, KAFKA_CLIENT_PASSWORDS, KAFKA_INTER_BROKER_USER, KAFKA_INTER_BROKER_PASSWORD, KAFKA_ZOOKEEPER_USER and KAFKA_ZOOKEEPER_PASSWORD
09:56:33.44 INFO ==> Creating users in Zookeeper
09:56:33.44 DEBUG ==> Creating user zookeeperUser in zookeeper
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Error while executing config command with args '--zookeeper zookeeper.default.svc.cluster.local:3181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=zookeeperPassword],SCRAM-SHA-512=[password=zookeeperPassword] --entity-type users --entity-name zookeeperUser'
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1881)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:116)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:94)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
client.properties
cat > client.properties <<EOF
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=/tmp/kafka.truststore.jks
ssl.truststore.password=******
EOF
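Note that client.properties above only covers the Kafka protocol (SASL_SSL to the brokers). When kafka-configs.sh talks to ZooKeeper directly, as in the log above, the ZooKeeper TLS settings have to be supplied separately, e.g. in a file passed with --zk-tls-config-file; a sketch with placeholder paths and passwords:
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.truststore.location=/tmp/zookeeper.truststore.jks
zookeeper.ssl.truststore.password=******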
Zookeeper Logs
2021-02-11 09:56:43,055 [myid:1] - ERROR [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CertificateVerifier#434] - Unsuccessful handshake with session 0x0
2021-02-11 09:56:43,055 [myid:1] - WARN [nioEventLoopGroup-7-1:NettyServerCnxnFactory$CnxnChannelHandler#273] - Exception caught
io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:471)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440)
... 17 more
I have a distributed Kafka setup with 3 brokers on ports 9093, 9094 and 9095, and I added SSL listeners on ports 9096, 9097 and 9098.
I am getting the following error when I run the producer client:
[2020-06-15 10:08:07,892] ERROR [Producer clientId=console-producer] Connection to node -1 (/myip-address:9096) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2020-06-15 10:08:07,893] WARN [Producer clientId=console-producer] Bootstrap broker myip-address:9096 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
I have tried setting ssl.endpoint.identification.algorithm= (to an empty value), but that did not solve the problem for me.
I am using Kafka 2.5
I can share my config files if need be.
What else could I try to troubleshoot this issue?
Thank you.
I was able to solve this issue by simply using my
domain-name:9096,domain-name:9097,domain-name:9098
instead of:
my-ip-address:9096,my-ip-address:9097,my-ip-address:9098
So using the actual domain name is important, because the certificate was created with the domain name.
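As a quick sanity check, you can inspect which names the broker's certificate actually presents and confirm they match the bootstrap address; a sketch (substitute your own host and port):
# print the certificate subject and any DNS SANs served on the SSL listener
openssl s_client -connect domain-name:9096 </dev/null 2>/dev/null | openssl x509 -noout -subject -text | grep -E 'Subject:|DNS:'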
In my current project we are trying to set up an ActiveMQ cluster with replicated LevelDB. Our configuration has a ZooKeeper ensemble of three nodes and an ActiveMQ cluster of three nodes.
The following is the configuration used for ActiveMQ (of course the hostname is different for each node in the cluster):
<persistenceAdapter>
<replicatedLevelDB
replicas="3"
bind="tcp://0.0.0.0:0"
hostname="activemq1"
zkAddress="zk1:2181,zk2:2181,zk3:2181"
zkPath="/activemq/leveldb-stores"
/>
</persistenceAdapter>
We start up three instances of ZooKeeper and three instances of ActiveMQ. We observe that the ZooKeeper leader gets elected correctly, but in the ActiveMQ cluster the master election does not happen. Going through the log, it looks like there is an authentication problem with ZooKeeper (at least according to the log; I have limited knowledge of ZooKeeper/ActiveMQ). I have pasted the logs below for reference.
INFO: Loading '/opt/activemq//bin/env'
INFO: Using java '/usr/bin/java'
INFO: Starting in foreground, this is just for debugging purposes (stop process by pressing CTRL+C)
INFO: Creating pidfile /data/activemq/activemq.pid
Java Runtime: Oracle Corporation 1.8.0_91 /usr/lib/jvm/java-8-openjdk-amd64/jre
Heap sizes: current=62976k free=59998k max=932352k
JVM args: -Xms64M -Xmx1G -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/activemq/conf.tmp/login.config -Dcom.sun.management.jmxremote -Djava.awt.headless=true -Djava.io.tmpdir=/opt/activemq//tmp -Dactivemq.classpath=/opt/activemq/conf.tmp:/opt/activemq//../lib/: -Dactivemq.home=/opt/activemq/ -Dactivemq.base=/opt/activemq/ -Dactivemq.conf=/opt/activemq/conf.tmp -Dactivemq.data=/data/activemq
Extensions classpath:[/opt/activemq/lib,/opt/activemq/lib/camel,/opt/activemq/lib/optional,/opt/activemq/lib/web,/opt/activemq/lib/extra]
ACTIVEMQ_HOME: /opt/activemq
ACTIVEMQ_BASE: /opt/activemq
ACTIVEMQ_CONF: /opt/activemq/conf.tmp
ACTIVEMQ_DATA: /data/activemq
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#7823a2f9: startup date [Sat Jun 17 09:15:51 UTC 2017]; root of context hierarchy
INFO | JobScheduler using directory: /data/activemq/localhost/scheduler
INFO | Using Persistence Adapter: Replicated LevelDB[/data/activemq/leveldb, ip-172-20-44-97.ec2.internal:2181,ip-172-20-45-105.ec2.internal:2181,ip-172-20-48-226.ec2.internal:2181//activemq/leveldb-stores]
INFO | Starting StateChangeDispatcher
INFO | Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
INFO | Client environment:host.name=activemq-m1n59
INFO | Client environment:java.version=1.8.0_91
INFO | Client environment:java.vendor=Oracle Corporation
INFO | Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
INFO | Client environment:java.class.path=/opt/activemq//bin/activemq.jar
INFO | Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
INFO | Client environment:java.io.tmpdir=/opt/activemq//tmp
INFO | Client environment:java.compiler=<NA>
INFO | Client environment:os.name=Linux
INFO | Client environment:os.arch=amd64
INFO | Client environment:os.version=4.4.65-k8s
INFO | Client environment:user.name=root
INFO | Client environment:user.home=/root
INFO | Client environment:user.dir=/tmp
INFO | Initiating client connection, connectString=ip-172-20-44-97.ec2.internal:2181,ip-172-20-45-105.ec2.internal:2181,ip-172-20-48-226.ec2.internal:2181 sessionTimeout=2000 watcher=org.apache.activemq.leveldb.replicated.groups.ZKClient#4b41dd5c
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
WARN | unprocessed event state: AuthFailed
INFO | Opening socket connection to server ip-172-20-45-105.ec2.internal/172.20.45.105:2181
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
INFO | Opening socket connection to server ip-172-20-48-226.ec2.internal/172.20.48.226:2181
WARN | unprocessed event state: AuthFailed
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361) [zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
WARN | SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/activemq/conf.tmp/login.config'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
INFO | Opening socket connection to server ip-172-20-44-97.ec2.internal/172.20.44.97:2181
WARN | unprocessed event state: AuthFailed
WARN | Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.8.0_91] at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)[:1.8.0_91] at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965] at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
Please help me get out of this problem.
If anyone has experience deploying a ZooKeeper and ActiveMQ cluster in Kubernetes, please share your ideas, since that is how we are trying to deploy it.
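For reference, the 'Client' section the SASL warning above refers to is the standard ZooKeeper client login section of a JAAS file; it is only needed if the ZooKeeper ensemble actually enforces SASL. A minimal sketch with placeholder credentials:
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="activemq"
password="changeme";
};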
I am using Kafka 0.8.2-beta and have 2 Ubuntu 14 virtual machines:
172.30.141.127 is running Zookeeper
172.30.141.184 is running a Kafka broker
I'm starting the Zookeeper instance and all is fine. Then I try to start the broker and connect it to 172.30.141.127:2181. It seems to be able to connect and establish a session on a specific port, but then it loses connection due to some exception that doesn't seem to be logged.
The broker output:
[2015-01-19 11:03:55,029] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,030] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,031] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,031] INFO Client environment:os.arch=i386 (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,032] INFO Client environment:os.version=3.16.0-23-generic (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,033] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,033] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,037] INFO Client environment:user.dir=/home/osboxes/Desktop/kafka_2.11-0.8.2-beta (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,039] INFO Initiating client connection, connectString=172.30.141.127:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#1ecf473 (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:55,129] INFO Opening socket connection to server 172.30.141.127/172.30.141.127:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:55,186] INFO Socket connection established to 172.30.141.127/172.30.141.127:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:55,203] WARN Session 0x0 for server 172.30.141.127/172.30.141.127:2181, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-01-19 11:03:56,552] INFO Opening socket connection to server 172.30.141.127/172.30.141.127:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:56,555] INFO Socket connection established to 172.30.141.127/172.30.141.127:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:56,567] WARN Session 0x0 for server 172.30.141.127/172.30.141.127:2181, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-01-19 11:03:57,131] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-01-19 11:03:58,075] INFO Opening socket connection to server 172.30.141.127/172.30.141.127:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:58,077] INFO Socket connection established to 172.30.141.127/172.30.141.127:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:58,195] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2015-01-19 11:03:58,196] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2015-01-19 11:03:58,251] FATAL [Kafka Server 1], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 2000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:157)
at kafka.server.KafkaServer.startup(KafkaServer.scala:83)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:46)
at kafka.Kafka.main(Kafka.scala)
[2015-01-19 11:03:58,279] INFO [Kafka Server 1], shutting down (kafka.server.KafkaServer)
[2015-01-19 11:03:58,295] INFO [Kafka Server 1], shut down completed (kafka.server.KafkaServer)
[2015-01-19 11:03:58,308] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 2000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:157)
at kafka.server.KafkaServer.startup(KafkaServer.scala:83)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:28)
at kafka.Kafka$.main(Kafka.scala:46)
at kafka.Kafka.main(Kafka.scala)
[2015-01-19 11:03:58,335] INFO [Kafka Server 1], shutting down (kafka.server.KafkaServer)
Zookeeper outputs:
[2015-01-19 11:03:55,245] INFO Accepted socket connection from /172.30.141.184:54089 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2015-01-19 11:03:55,315] WARN Exception causing close of session 0x0 due to java.io.IOException: Connection reset by peer (org.apache.zookeeper.server.NIOServerCnxn)
[2015-01-19 11:03:55,329] INFO Closed socket connection for client /172.30.141.184:54089 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2015-01-19 11:03:56,613] INFO Accepted socket connection from /172.30.141.184:54090 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2015-01-19 11:03:56,615] WARN Exception causing close of session 0x0 due to java.io.IOException: Connection reset by peer (org.apache.zookeeper.server.NIOServerCnxn)
[2015-01-19 11:03:56,617] INFO Closed socket connection for client /172.30.141.184:54090 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
[2015-01-19 11:03:58,133] INFO Accepted socket connection from /172.30.141.184:54091 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2015-01-19 11:03:58,134] WARN Exception causing close of session 0x0 due to java.io.IOException: Connection reset by peer (org.apache.zookeeper.server.NIOServerCnxn)
[2015-01-19 11:03:58,135] INFO Closed socket connection for client /172.30.141.184:54091 (no session established for client) (org.apache.zookeeper.server.NIOServerCnxn)
Ping works between the 2 machines. Telnet to 2181 kind of works, in that it connects, but gets disconnected from time to time. This leads me to think that the problem is with the Zookeeper instance. Both processes are started as root.
Any ideas why this is happening? Thanks
You are probably hitting the max number of connections per host.
This happens if you have [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn$Factory#247] - Too many connections from /127.0.0.1 - max is 10 in your zookeeper logs.
Fix it by setting maxClientCnxns=[something more than 10; 0 for unlimited] in your conf/zoo.cfg.
See the ZooKeeper administration docs (search for maxClientCnxns).
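For example, in conf/zoo.cfg (60 is an arbitrary value; 0 disables the limit):
maxClientCnxns=60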
No idea about the warning, but I had the same error:
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 2000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
....
In my case, when I was setting up the producer and consumer, I provided the wrong IP/port. When I changed it to the correct one with:
bin/kafka-console-producer.sh --broker-list kafka_ip:kafka_port --topic test
bin/kafka-console-consumer.sh --zookeeper zookeeper_ip:zookeeper_port --topic test --from-beginning
my problem was solved.
As answered by @RickyA:
"You are probably hitting the max number of connections per host."
Try clearing your logs folder, which in my case was F:\tmp\kafka-logs.