Unable to list consumer groups from remote Kafka cluster - SSL

We have a remote Kafka cluster with multiple brokers, secured with SSL and a JAAS configuration for authentication. I have installed Kafka on my local Windows 10 system and am trying to connect to this remote cluster to run the consumer and producer batch commands.
I am able to get the list of topics from the remote cluster and can also create new topics, but listing the consumer groups fails with the error below:
kafka-consumer-groups.bat --bootstrap-server {kafka-broker-1}:9093 --command-config config/consumer.properties --list
Error: Executing consumer group command failed due to org.apache.kafka.common.KafkaException: Failed to find brokers to send ListGroups.
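For reference, when the cluster uses SSL with SASL/JAAS authentication, the consumer.properties passed via --command-config typically needs the client security settings spelled out. A minimal sketch follows; the paths, passwords and SASL mechanism are assumptions for illustration, not taken from the original setup:
# consumer.properties (sketch; adjust mechanism, paths and credentials to your cluster)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="client" password="client-secret";
ssl.truststore.location=C:/kafka/ssl/client.truststore.jks
ssl.truststore.password=changeit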

Related

Filebeat not pushing events to multiple Kafka brokers in a single Kafka cluster

The Filebeat in my setup pushes events to a Kafka cluster with 2 brokers. I added only one node to the host list, but both brokers in the cluster were discovered; I could see this in the Filebeat logs.
However, even though both brokers were discovered, the events are published to only one broker.
Below is the Filebeat output config:
output.kafka:
  hosts: ["host1:port"]
  topic: '%{[fields.document_type]}'
  worker: 4
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  compression_level: 3
The logs also show a connection to only one of the brokers as a registered one.
2019-01-07T06:12:38.789-0800 INFO kafka/log.go:53 client/metadata fetching metadata for all topics from broker host1:port
2019-01-07T06:12:38.799-0800 INFO kafka/log.go:53 Connected to broker at host1:port (unregistered)
2019-01-07T06:12:38.806-0800 INFO kafka/log.go:53 client/brokers registered new broker #1 at host1:port
2019-01-07T06:12:38.806-0800 INFO kafka/log.go:53 client/brokers registered new broker #0 at host2:port
2019-01-07T06:12:38.806-0800 INFO kafka/log.go:53 kafka message: Successfully initialized new client
2019-01-07T06:12:38.806-0800 INFO pipeline/output.go:105 Connection to kafka(host1:port,host2:port) established
2019-01-07T06:12:38.807-0800 INFO kafka/log.go:53 kafka message: Producer.MaxMessageBytes must be smaller than MaxRequestSize; it will be ignored.
2019-01-07T06:12:38.808-0800 INFO kafka/log.go:53 producer/broker/0 starting up
2019-01-07T06:12:38.814-0800 INFO kafka/log.go:53 Connected to broker at host2:port (registered as #0)
The ZooKeeper console lists both brokers in the cluster, so the Kafka cluster itself looks fine.
I am not able to figure out what mistake is causing Filebeat to push to only one Kafka broker.
Thank you
I think I found the reason for this. My Kafka cluster was on the default setting of one partition per topic, and the cluster had only one node when the topic was created, so the topic's single partition was allocated to that node. When the 2nd node was added, Kafka did not automatically rebalance partitions across the nodes, leaving all partitions in one place. Hence Filebeat was sending data to only one node.
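If it helps anyone else, spreading the topic across both brokers after the fact can be done with the standard Kafka tools. A rough sketch follows; the topic name, ZooKeeper address and reassignment plan file are placeholders, and the exact flags depend on the Kafka version (newer releases take --bootstrap-server instead of --zookeeper):
# add partitions so more than one broker can host them (example topic name)
bin/kafka-topics.sh --zookeeper zk1:2181 --alter --topic filebeat-events --partitions 2
# or move existing partitions/replicas onto both brokers with a reassignment plan
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 --reassignment-json-file reassign.json --execute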

Console producer is trying to connect to a non-leader, non-replica/ISR node in Apache Kafka

My Kafka cluster has 9 nodes.
Because I am trying to implement SSL, I am currently setting it up on only 1 node.
I have done the SSL setup for the node with id 1001, and to test it I created a topic 'SSLtest' whose leader and only replica are on 1001.
When I try to push data from the console producer on the same machine to the topic 'SSLtest' (i.e. on 1001), it tries to connect to machine 1006, as the log shows:
Error when handling request {topics=[SSLtest],allow_auto_topic_creation=true} (kafka.server.KafkaApis)
kafka.common.BrokerEndPointNotAvailableException: End point with listener name SSL not found for broker 1006.
Why can't it push data to 1001 directly? What is it trying to do on 1006?
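For context, the setup described above implies a listener configuration roughly like the following on broker 1001, while the other eight brokers (including 1006) have no SSL listener. Hostnames, ports and keystore paths below are made up for illustration:
# server.properties on broker 1001 (sketch)
listeners=PLAINTEXT://broker1001:9092,SSL://broker1001:9093
ssl.keystore.location=/etc/kafka/ssl/broker1001.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/etc/kafka/ssl/broker1001.truststore.jks
ssl.truststore.password=changeit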

MQTT and AWS ELB: How to make ELB _forget_ which node a client was previously connected to?

I have a cluster of 2 RabbitMQ nodes (each running version 3.6.10 of RabbitMQ with MQTT plugin enabled) and an AWS classic load balancer in front of them. Server and clients exchange MQTT messages.
Clients (apps running on mobile devices and using Eclipse Paho client lib) connect to the load balancer which distributes connections in round-robin fashion.
When I bring down one node, say Node1, all clients that were connected to Node1 get a callback indicating connection to the broker is lost.
These clients try to reconnect but the connection attempt fails indicating the broker is not reachable.
From AWS console I can see that AWS ELB detects that Node1 is down and marks it as "OutOfService".
Connection requests from new clients are routed to the "InService" node Node2; however, connection requests from existing clients that were previously connected to Node1 always fail!
ELB is configured with an idle timeout of 180 seconds. Enabling or disabling connection draining in ELB did not make any difference.
Is there any specific configuration to make ELB forget that the existing clients were connected to Node1 and allow them to connect to Node2?
I tried adding the following HA policy:
rabbitmqctl set_policy ha-mqtt "^mqtt" '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'
With this policy in place, all queues created for MQTT clients were mirrored. Now when Node1 goes down, connection attempts from existing client IDs also get routed to the other active node!
This makes me wonder what the relationship is between MQTT client IDs and their connection to broker nodes. I thought mirroring of queues was necessary only to retain and access messages that were not yet acknowledged when the queue's master node goes down. But I see that the clients are not even able to establish a connection!
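For what it's worth, one way to confirm that the policy actually matched the MQTT queues and that they are mirrored is to list the policies and the per-queue mirror processes; the queue names are whatever the MQTT plugin created, typically prefixed with mqtt-subscription-:
rabbitmqctl list_policies
rabbitmqctl list_queues name policy slave_pids synchronised_slave_pids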

ActiveMQ stops working - ActiveMQ/ZooKeeper setup

I've configured 3 ZooKeeper and 3 ActiveMQ instances in 1 cluster.
Scenario
3 ActiveMQ instances, with 1 master and the other two as slaves.
All 3 ActiveMQ instances are running, i.e. sudo service activemq status returns running, but checking the logs, 1 instance (activemq1) is waiting for other cluster members, 1 instance (activemq2) has stopped, and 1 instance (activemq3) has an error. Assuming we only require two instances to elect a master, this setup should be able to run successfully.
Two ActiveMQ instances should be running.
The ZooKeeper instances are running fine.
Issue
Below are the stack traces of the respective ActiveMQ instances. Based on my understanding, the cluster needs at least two properly running ActiveMQ instances to nominate a master. Given that all ActiveMQ instances report running when issued sudo service activemq status, I'm assuming there is an issue inside each instance - see the stack traces below. I noticed in the logs that activemq1 only fails to run properly because the other ActiveMQ instances failed internally. Notice the stack trace of activemq2: it is stuck after it successfully connected to ZooKeeper, and activemq3 has an issue I still need to figure out. The issue is fixed when I restart activemq2 and activemq3. However, I can't be sure this won't happen again, hence this question.
activemq1 shows the stack trace below, which I assume is because the other 2 ActiveMQ instances are running but have errors:
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000c, negotiated timeout = 4000
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
Not enough cluster members connected to elect a master.
activemq2 has the stack trace below, which is the one I don't understand. It stopped after a successful connection to ZooKeeper, which should have been detected by the other ActiveMQ instances in the cluster - activemq1 and activemq3:
Opening socket connection to server 10.5.4.111/10.5.4.111:2181
Socket connection established to 10.5.4.111/10.5.4.111:2181, initiating session
Session establishment complete on server 10.5.4.111/10.5.4.111:2181, sessionid = 0x1582db00708000d, negotiated timeout = 4000
activemq3 has the stack trace below:
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:568)[apache-jsp-8.0.9.M3.jar:2.3]
Configuration for ActiveMQ
The previous config here had a 2s zkSessionTimeout, which is the default. I changed it to 4s, based on what I found online, to give an ActiveMQ instance more time to register itself with ZooKeeper.
<persistenceAdapter>
  <replicatedLevelDB
    directory="${activemq.data}/leveldb"
    replicas="3"
    bind="tcp://0.0.0.0:61619"
    zkAddress="zookeeper_addresses_here"
    hostname="activemq_hostname_here"
    zkSessionTimeout="4s"
  />
</persistenceAdapter>
Configuration for ZooKeeper
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/my/data/dir
clientPort=2181
server.1=activemq1_privateIP:2888:3888
server.2=activemq2_privateIP:2888:3888
server.3=activemq3_privateIP:2888:3888
autopurge.purgeInterval=24
autopurge.snapRetainCount=5
ZooKeeper version 3.4.9
ActiveMQ version 5.13.4
Set up via OpsWorks
The "directory" attribute of the master-slave brokers needs to refer to the same folder.
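In other words, each broker's replicatedLevelDB element should use the same directory value, with only the hostname differing per node; a sketch based on the config above:
<!-- same directory value on every broker in the replicated group; only hostname changes per node -->
<persistenceAdapter>
  <replicatedLevelDB
    directory="${activemq.data}/leveldb"
    replicas="3"
    bind="tcp://0.0.0.0:61619"
    zkAddress="zookeeper_addresses_here"
    hostname="activemq1_hostname_here"
    zkSessionTimeout="4s"
  />
</persistenceAdapter>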

ActiveMQ Durable consumer is in use for client and subscriptionName via STOMP

I have an iOS client that connects to several ActiveMQ topics and queues via STOMP protocol. When I connect to the server, I send the following message:
2012-10-30 10:19:29,757 [MQ NIO Worker 2] TRACE StompIO
CONNECT
passcode:*****
login:system
2012-10-30 10:19:29,758 [MQ NIO Worker 2] DEBUG ProtocolConverter
2012-10-30 10:19:29,775 [MQ NIO Worker 2] TRACE StompIO
CONNECTED
heart-beat:0,0
session:ID:mbp.local-0123456789
server:ActiveMQ/5.6.0
version:1.0
And then, I subscribe to several topics using the following message:
2012-10-30 10:19:31,028 [MQ NIO Worker 2] TRACE StompIO
SUBSCRIBE
activemq.subscriptionName:user#mail.com-/topic/SPOT.SPOTCODE
activemq.prefetchSize:1
activemq.dispatchAsync:true
destination:/topic/SPOT.SPOTCODE
client-id:1234
activemq.retroactive:true
I'm facing two problems with the ActiveMQ server. Each time I connect, the Number of Consumers column in the web interface gets incremented, so I have just one real consumer but the count is around 50 consumers. But the most problematic issue is that when I plug another iOS device into my laptop to test the messaging environment, I get the following error when connecting to ActiveMQ:
WARN | Async error occurred: javax.jms.JMSException: Durable consumer is in use for client: ID:mbp.local-0123456789 and subscriptionName: user#mail.com-/topic/SPOT.SPOTCODE
This seems to indicate that disconnecting from ActiveMQ via STOMP is not working properly, because this login attempt is made when the other device is not running the app. I've tried the following things to solve the issue:
Always logoff when attempting to subscribe to the topics.
Subscribe
I'm currently using v5.6.0, running the server on my laptop.
If you read the STOMP page on the ActiveMQ site you will notice that client-id and activemq.subscriptionName must match in order to use STOMP durable subscribers. These values should be different for each of your clients, otherwise you will see the same errors because of the name clashes.
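For example (per the ActiveMQ STOMP docs, client-id goes on the CONNECT frame and activemq.subscriptionName on the SUBSCRIBE frame; the IDs below are made-up examples, one per device):
Device A:
CONNECT
login:system
passcode:*****
client-id:ios-device-A

SUBSCRIBE
destination:/topic/SPOT.SPOTCODE
activemq.subscriptionName:ios-device-A-SPOT.SPOTCODE
activemq.retroactive:true
Device B would then use client-id:ios-device-B and activemq.subscriptionName:ios-device-B-SPOT.SPOTCODE, so the two durable subscriptions never clash.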