I have Kafka brokers running in a cluster, and we use SASL authentication. How can I request, for example, the topic list using kafka-topics.sh?
I assume that I should run:
kafka-topics.sh \
--bootstrap-server kafka.broker:9092 \
--command-config config.properties \
--list
and pass these values via config.properties:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.username=user-name
sasl.password=password
ssl.key.location=/path/to/certs/key.pem
ssl.certificate.location=/path/to/certs/crt.pem
ssl.ca.location=/path/to/certs/ca.pem
When I run it, I get:
Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to create new KafkaAdminClient
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:553)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:485)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:134)
at kafka.admin.TopicCommand$TopicService$.createAdminClient(TopicCommand.scala:205)
at kafka.admin.TopicCommand$TopicService$.apply(TopicCommand.scala:209)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:50)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:131)
at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
at org.apache.kafka.common.security.JaasContext.loadClientContext(JaasContext.java:82)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:167)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:524)
We use the same values to connect from a Go service that uses the segmentio Kafka driver. What should the config be?
To pass SASL credentials you need to use the sasl.jaas.config setting. sasl.username and sasl.password are not valid settings for kafka-topics.sh (or the Java client).
For example:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="user-name" \
password="password";
Similarly, ssl.key.location, ssl.certificate.location and ssl.ca.location are not valid settings; you need to use ssl.keystore.location and ssl.truststore.location instead. See the full list of configurations: https://kafka.apache.org/documentation/#adminclientconfigs
See the SCRAM client configuration section in the Kafka docs if you want more details.
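Putting both fixes together, config.properties would look roughly like the sketch below. The paths and passwords are placeholders, and it assumes the PEM files have been imported into JKS key and trust stores (for example with keytool):
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user-name" \
  password="password";
# truststore containing ca.pem
ssl.truststore.location=/path/to/certs/truststore.jks
ssl.truststore.password=changeit
# keystore only needed if the brokers require mutual TLS
ssl.keystore.location=/path/to/certs/keystore.jks
ssl.keystore.password=changeit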
I am testing ksqlDB on AWS EC2 instances with the latest release (Confluent 5.5.1) and have an access problem that I can't solve.
I have a secured Kafka server (SASL_SSL, SASL mechanism PLAIN), an unsecured Schema Registry (another issue with Avro serializers, but OK for the moment), and a secured KSQL server and client.
Topics are filled properly with AVRO data (value only, no key) from a JDBC source connector.
I can access the KSQL Server with ksql without issues
I can access KSQL REST API without issues
When I list topics within ksql, I get the correct list.
When I select a push stream, I get messages when I push something into the topic (with Kafka Connect, in my case).
BUT: when I call "print topic" I get a roughly 60-second block in the client, followed by a 'Timeout expired while fetching topic metadata' error.
The ksql-kafka.log goes wild with repeated entries like
[2020-09-02 18:52:46,246] WARN [Consumer clientId=consumer-2, groupId=null] Bootstrap broker ip-10-1-2-10.eu-central-1.compute.internal:9093 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1037)
The corresponding broker log shows
Sep 2 18:52:44 ip-10-1-6-11 kafka-server-start: [2020-09-02 18:52:44,704] INFO [SocketServer brokerId=1002] Failed authentication with ip-10-1-2-231.eu-central-1.compute.internal/10.1.2.231 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
This is my ksql-server.properties file:
ksql.service.id= hf_kafka_ksql_001
bootstrap.servers=ip-10-1-11-229.eu-central-1.compute.internal:9093,ip-10-1-6-11.eu-central-1.compute.internal:9093,ip-10-1-2-10.eu-central-1.compute.internal:9093
ksql.streams.state.dir=/var/data/ksqldb
ksql.schema.registry.url=http://ip-10-1-1-22.eu-central-1.compute.internal:8081
ksql.output.topic.name.prefix=ksql-interactive-
ksql.internal.topic.replicas=3
confluent.support.metrics.enable=false
# currently the keystore contains only the ksql server and the certificate chain to the CA
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
ssl.client.auth=true
# Need to set this to empty, otherwise the REST API is not accessible with the client key.
ssl.endpoint.identification.algorithm=
# currently the truststore contains only the CA certificate
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
listeners=https://0.0.0.0:8088
advertised.listener=https://ip-10-1-2-231.eu-central-1.compute.internal:8088
authentication.method=BASIC
authentication.roles=admin,ksql,cli
authentication.realm=KsqlServerProps
# authentication for producers, needed for ksql commands like "Create Stream"
producer.ssl.endpoint.identification.algorithm=HTTPS
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
producer.ssl.truststore.password=ctpassword
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
# authentication for consumers, needed for ksql commands like "Create Stream"
consumer.ssl.endpoint.identification.algorithm=HTTPS
consumer.security.protocol=SASL_SSL
consumer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
consumer.ssl.truststore.password=ctpassword
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
I call ksql with
ksql --user cli --password test --config-file /var/kafka-ssl/ksql_cli.properties https://ip-10-1-2-231.eu-central-1.compute.internal:8088
This is my ksql client configuration ksql_cli.properties:
security.protocol=SSL
#ssl.client.auth=true
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
JAAS config, included as a parameter on service start:
KsqlServerProps {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
file="/var/kafka-ssl/cli.password"
debug="false";
};
with cli.password containing the authentication users and passwords for the ksql client.
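For reference, cli.password uses Jetty's PropertyFileLoginModule format, i.e. user: password[,role ...]; the cli user used here would look something like this sketch:
cli: test,cli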
I have tried just about every permutation of keys and settings, but to no avail. Obviously something is wrong in my key management. To me it is surprising that using streams works but low-level topic access does not.
Has someone found a solution for this issue? I am really running out of ideas here. Thanks.
Found it! It was easy to overlook: the client's configuration of course needs a SASL setting as well...
security.protocol=SASL_SSL
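With that, ksql_cli.properties might end up looking like the sketch below; it assumes the CLI uses the same PLAIN credentials as the server, which is a guess on my part:
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="ksql" \
  password="ksqlsecret";
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword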
I am trying to produce from a command line to a topic that is on a local Kafka cluster with SSL enabled.
Topic was just created with:
kafka-topics --zookeeper localhost:2181 --create --topic simple --replication-factor 1 --partitions 1
Command for producing is:
kafka-avro-console-producer \
--broker-list localhost:9092 --topic simple \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' \
--property schema.registry.url=http://localhost:8080
Typing:
{"f1": "Alyssa"}
ERROR:
{"f1": "Alyssa"}
Error when sending message to topic simple with key: null, value: 12 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback:52) org.apache.kafka.common.errors.TopicAuthorizationException:
Not authorized to access topics: [simple]
How do I add access to this topic?
What is the correct command to set the ACL (I am running it on my local machine)?
Since you have an SSL-protected cluster, I suggest creating a config file and using this parameter for the CLI to reference it:
--producer.config <String: config file> Producer config properties file. Note
that [producer-property] takes
precedence over this config
An example file would look like
bootstrap.servers=localhost:9092
# SSL related properties
security.protocol=SSL
ssl.keystore.location=/path/to/keystore-cert.jks
ssl.truststore.location=/path/to/truststore-cert.jks
ssl.key.password=xxxx
ssl.keystore.password=yyyy
ssl.truststore.password=zzzz
Otherwise, you must pass each of these one-by-one via the --producer-property option.
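For example, a sketch with the same placeholder paths and passwords as above:
kafka-avro-console-producer \
  --broker-list localhost:9092 --topic simple \
  --property schema.registry.url=http://localhost:8080 \
  --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}' \
  --producer-property security.protocol=SSL \
  --producer-property ssl.keystore.location=/path/to/keystore-cert.jks \
  --producer-property ssl.keystore.password=yyyy \
  --producer-property ssl.key.password=xxxx \
  --producer-property ssl.truststore.location=/path/to/truststore-cert.jks \
  --producer-property ssl.truststore.password=zzzz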
I got the same error due to missing SSL properties; please pass the SSL config.
Below is a sample for an Avro message:
./kafka-avro-console-producer --topic <topic_name> \
  --broker-list <broker:port> \
  --producer.config ssl.properties \
  --property schema.registry.url=<schema_registry_url (for Avro messages)> \
  --property value.schema="$(< /path/to/avro_schema.avsc)" \
  --property key.schema='{"type":"string"}' \
  --property parse.key=true \
  --property key.separator=":"
I am getting the following WARN message when I start my host, which is one of the Host Controllers (HC) attached to the Domain Controller (DC).
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
And my host-slave.xml has the following config:
<server-identities>
<!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
<secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config is the reason... maybe I misunderstood... but I couldn't find a node identifier property; there is only this default secret value, which I guessed could be the cause of this WARN message.
However, I didn't tell the HC to use host-slave.xml; the command I ran to start my HC is:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise what the cause of this WARN message is.
Also, please let me know the implications of this WARN message and advise on the configuration required to overcome it and avoid any consequences that could follow from it.
I'm new to WildFly, and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html).
The fix was to add a node-identifier to the core-environment in the subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136
According to WFLY-10541 if you are using WildFly v14.0.0 or newer you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
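For example (the identifier value here is just an illustration):
./bin/standalone.sh -Djboss.tx.node.id=node-1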
Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
Building on @kaptan's answer, I added the following to the bottom of bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add "-Djboss.tx.node.id=" when starting WildFly by hand.
For this, <server-identities> is not the issue. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, by default there are three servers: server-one, server-two, and server-three. When you run one more HC attached to the DC, the default servers that are in auto-start mode will clash when the HC is started attached to the DC by the following command:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by having this host configuration at the HC (the default host.xml, unless we choose a different one):
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
To solve this, we need to set auto-start to false and create a new server group. To that group we add the DC-created server and the HC-created server; we can choose the same appropriate profile, either full-ha or full, for both created servers across DC and HC.
Then, when we start the group, after configuring the required heap size (including PermGen space), both DC and HC come up, and in the DC you can see that both of your created servers are started in the new server group.
DC- Domain Controller
HC- Host Controller
To deploy, you need to upload the .ear or web archive in the Application Console. You cannot place it in the deployments folder with a .dodeploy marker file the way you do in standalone mode.
If you upload the next version of the same .ear, use the Replace option instead of the Remove & Add options in the upload process.
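Alternatively, a deployment to a server group can be scripted with jboss-cli; a sketch, where the server-group name is an assumption:
$ ./bin/jboss-cli.sh --connect
[domain@localhost:9990 /] deploy /path/to/app.ear --server-groups=main-server-group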
I am getting the exception below:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Configuration: RabbitMQ 3.3.5 on Windows.
In the config file at %APPDATA%\RabbitMQ\rabbit.config,
I have made the change below, as per https://www.rabbitmq.com/access-control.html:
[{rabbit, [{loopback_users, []}]}].
I also tried creating a user/password (test/test); that doesn't seem to make it work.
I also tried the steps from this post.
Other Configuration Details are as below:
Tomcat hosted Spring Application Context:
<!-- Rabbit MQ configuration Start -->
<!-- Connection Factory -->
<rabbit:connection-factory id="rabbitConnFactory" virtual-host="/" username="guest" password="guest" port="5672"/>
<!-- Spring AMQP Template -->
<rabbit:template id="rabbitTemplate" connection-factory="rabbitConnFactory" routing-key="ecl.down.queue" queue="ecl.down.queue" />
<!-- Spring AMQP Admin -->
<rabbit:admin id="admin" connection-factory="rabbitConnFactory"/>
<rabbit:queue id="ecl.down.queue" name="ecl.down.queue" />
<rabbit:direct-exchange name="ecl.down.exchange">
<rabbit:bindings>
<rabbit:binding key="ecl.down.key" queue="ecl.down.queue"/>
</rabbit:bindings>
</rabbit:direct-exchange>
In my Controller Class
@Autowired
RmqMessageSender rmqMessageSender;
//Inside a method
rmqMessageSender.submitToECLDown(orderInSession.getOrderNo());
In my message sender:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component("messageSender")
public class RmqMessageSender {
// LOGGER was not declared in the original snippet; an SLF4J logger is assumed
private static final Logger LOGGER = LoggerFactory.getLogger(RmqMessageSender.class);
@Autowired
AmqpTemplate rabbitTemplate;
public void submitToRMQ(String orderId){
try{
rabbitTemplate.convertAndSend("Hello World");
} catch (Exception e){
LOGGER.error(e.getMessage());
}
}
}
The catch block above logs the following exception:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Error Log
=ERROR REPORT==== 7-Nov-2014::18:04:37 ===
closing AMQP connection <0.489.0> (10.1.XX.2XX:52298 -> 10.1.XX.2XX:5672):
{handshake_error,starting,0,
{amqp_error,access_refused,
"PLAIN login refused: user 'guest' can only connect via localhost",
'connection.start_ok'}}
Please find the pom.xml entries below:
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
<version>1.3.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-amqp</artifactId>
<version>4.0.4.RELEASE</version>
</dependency>
Please let me know if you have any thoughts/suggestions.
What Artem Bilan has explained here may well be one of the reasons for this error:
Caused by: com.rabbitmq.client.AuthenticationFailureException:
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN.
For details see the broker logfile.
but the solution for me was that I logged in to the RabbitMQ admin page (http://localhost:15672/#/users) with the default username and password (guest/guest), added a new user, enabled permission for that new user to access the virtual host, and then used the new username and password instead of the defaults. That cleared the error.
To complete @cpu-100's answer,
in case you don't want to enable/use the web interface, you can create new credentials on the command line as below and use them in your code to connect to RabbitMQ.
$ rabbitmqctl add_user YOUR_USERNAME YOUR_PASSWORD
$ rabbitmqctl set_user_tags YOUR_USERNAME administrator
$ rabbitmqctl set_permissions -p / YOUR_USERNAME ".*" ".*" ".*"
user 'guest' can only connect via localhost
That's been true since RabbitMQ 3.3.x. Hence you should upgrade the client library to the matching version, or just upgrade Spring AMQP to the latest version (if you use a dependency management system).
Previous versions of the client used 127.0.0.1 as the default value for the host option of ConnectionFactory.
The error
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
can occur if the credentials that your application is trying to use to connect to RabbitMQ are incorrect or missing.
I had this happen when the RabbitMQ credentials stored in my ASP.NET application's web.config file had a value of "" for the password instead of the actual password string value.
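For illustration, the broken setting looked something like this (the key names here are hypothetical):
<appSettings>
  <add key="RabbitMqUsername" value="myuser" />
  <!-- the bug: an empty value instead of the real password -->
  <add key="RabbitMqPassword" value="" />
</appSettings>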
To allow guest access remotely, write this
[{rabbit, [{loopback_users, []}]}].
to this file:
c:\Users\[your user name]\AppData\Roaming\RabbitMQ\rabbitmq.config
then restart the RabbitMQ Windows service. (Source: https://www.rabbitmq.com/access-control.html)
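One way to restart the service, from an elevated RabbitMQ command prompt (assuming the sbin directory is on PATH):
rabbitmq-service.bat stop
rabbitmq-service.bat start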
On localhost, by default, use 'amqp://guest:guest@localhost:5672'.
On a remote or hosted RabbitMQ, let's say you have the following credentials:
username: niceboy
password: notnice
host: goxha.com
port : 1597
then the URI you should pass will be:
amqp://niceboy:notnice@goxha.com:1597
following the template amqp://user:pass@host:10000.
If you have a vhost, you can use amqp://user:pass@host:10000/vhost, where the trailing vhost is the name of your vhost.
New solution:
The Node module can't handle : in a password properly. Even URL-encoded, which would normally work, it does not.
Don't use characters that are special in a URL in the password! Like any of the following: : . ? + %
Original, wrong answer:
The error message clearly complains about using PLAIN, but it does not mean the credentials are wrong; it means you must use encrypted data delivery (TLS) instead of plaintext.
Changing amqp:// in the connection string to amqps:// (note the s) solves this.
Just add the login and password to connect to RabbitMQ:
CachingConnectionFactory connectionFactory =
new CachingConnectionFactory("rabbit_host");
connectionFactory.setUsername("login");
connectionFactory.setPassword("password");
For me the solution was simple: the username is case-sensitive. Failing to use the correct capitalization will also lead to this error.
If you use a number as your password, maybe you should try changing your password to a string. I could log in using deltaqin:000000 on the website, but got this error while running the program; after changing the password to the string deltaiqn, it worked.
I did exactly what @grepit did.
But I had to make some changes in my Java code.
In the Producer and Receiver projects I altered:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("your-host-ip");
factory.setUsername("username-you-created");
factory.setPassword("username-password");
Doing that, you are connecting to a specific host as the user you have created.
It works for me!
In my case I had this error because of a wrongly set port (I tried to use 5672, when the actual one on my system was 5676).
Maybe this will help someone to double-check their ports...
I was facing this issue due to a trailing space at the end of the password (spring.rabbitmq.password=rabbit ) in a Spring Boot application.properties; it was resolved by removing the space. Hope this checklist helps someone facing this issue.
For C# coders: I tried the code below and it worked; maybe this can help someone, so I am posting it here.
Scenario: the RabbitMQ queue is running on another system in the local area network, but I was getting the same error.
By default a "guest" user exists, but you cannot access a remote server's queue (RabbitMQ) using the "guest" user, so you need to create a new user. Here I created the user "tester001" to access the data of the remote server's queue.
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = "tester001";
factory.Password = "testing";
factory.VirtualHost = "/";
factory.HostName = "192.168.1.101";
factory.Port = AmqpTcpEndpoint.UseDefaultPort;
If you tried all of these answers for your issue but still get "ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN", maybe you should remove RabbitMQ and install a newer version.
A newer version worked for me.
Add a user and password and connect with them. You can add a user via environment variables (useful, for example, when RabbitMQ initializes in Docker): RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS. See more details here:
https://stackoverflow.com/a/70676040/1200914
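For example (the image tag and port mappings are just one common setup):
docker run -d --name rabbitmq \
  -e RABBITMQ_DEFAULT_USER=myuser \
  -e RABBITMQ_DEFAULT_PASS=mypassword \
  -p 5672:5672 -p 15672:15672 \
  rabbitmq:3-management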
Set the ConnectionFactory or Connection hostname to localhost.
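For example, with the Java client (a minimal sketch):
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost"); // connect via loopback so the default guest user is accepted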