Exception while loading Zookeeper JAAS login context 'Client' [duplicate]

When I run ZooKeeper from the kafka_2.12-2.3.0 package as follows, I get the error in the title:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/zookeeper_jaas.conf"
$ ./bin/zookeeper-server-start.sh config/zookeeper.properties
and zookeeper_jaas.conf is:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
and the zookeeper.properties file is:
server=localhost:9092
#server=localhost:2888:3888
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="ibm" password="ibm-secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/kafka/apache-zookeeper-3.5.5-bin/zookeeperkeys/client.truststore.jks
ssl.truststore.password=test1234
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
Can anyone suggest what the reason could be?

You seem to have mixed a bunch of Kafka SASL configuration into your Zookeeper configuration files. Zookeeper and Kafka have different SASL support, so it's not going to work.
I'm guessing you want to enable SASL authentication between Kafka and Zookeeper. In that case you need to follow the Zookeeper Server-Client guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication
Zookeeper does not support SASL PLAIN, but DIGEST-MD5 is pretty similar. In that case your Zookeeper jaas.conf file should look like:
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_super="adminsecret"
    user_bob="bobsecret";
};
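On the Zookeeper side you also need to enable the SASL authentication provider and load this file via -Djava.security.auth.login.config. The settings below match the ones already present in your zookeeper.properties (jaasLoginRenew is in milliseconds):

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000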
Then you need to configure your Kafka brokers to connect to Zookeeper with SASL. You can do that using another jaas.conf file (this time loading it in Kafka):
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="bob"
    password="bobsecret";
};
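To load this file, export the JVM option before starting the broker, mirroring the export you already use for Zookeeper (the file path here is illustrative):

export KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/kafka_2.12-2.3.0/config/kafka_jaas.conf"
./bin/kafka-server-start.sh config/server.properties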
Note: you can also enable SASL between the Zookeeper servers. To do so, follow the Server-Server guide: https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication

Related

Kafka Security implementation issue SASL SSL and SCRAM

I'm facing an error while starting the Kafka server.
I have set up SSL and it's working fine for the 3 Kafka brokers, and Zookeeper is also set up with SSL.
Now I have tried to set up SCRAM with SASL_SSL for the Kafka broker in the server properties file.
It's not working. I created a user with the following command:
kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zookeeper-client.properties --entity-type users --entity-name broker-admin --alter --add-config 'SCRAM-SHA-512=[password=DEM123]'
and I can see the user is created.
But when I try to start the Kafka broker with
kafka-server-start.sh -daemon server-0.properties
it fails. I checked the server.log file and found this error:
[2021-10-05 16:21:38,369] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /config/users/broker-admin
Can anyone help me?
Let me share my zookeeper.properties file:
dataDir=/var/www/kafka/data/zookeeper
clientPort=2181
secureClientPort=2182
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.trustStore.location=/var/www/kafka/ssl/kafka.zookeeper.truststore.jks
ssl.trustStore.password=zookeepbook
ssl.keyStore.location=/var/www/kafka/ssl/kafka.zookeeper.keystore.jks
ssl.keyStore.password=zookeepbook
ssl.clientAuth=need
maxClientCnxns=0
admin.enableServer=true
admin.serverPort=9090
server.1=localhost:2888:3888
server.properties file content:
broker.id=0
listeners=SASL_SSL://localhost:9092
advertised.listeners=SASL_SSL://localhost:9092
zookeeper.connect=localhost:2182
log.dirs=/var/www/kafka/data/broker-0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
zookeeper.ssl.truststore.password=zookeepbookbrk0
zookeeper.ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
zookeeper.ssl.keystore.password=zookeepbookbrk0
zookeeper.set.acl=true
ssl.truststore.location=/var/www/kafka/ssl/kafka.broker-0.truststore.jks
ssl.truststore.password=zookeepbookbrk0
ssl.keystore.location=/var/www/kafka/ssl/kafka.broker-0.keystore.jks
ssl.keystore.password=zookeepbookbrk0
ssl.key.password=zookeepbookbrk0
security.inter.broker.protocol=SASL_SSL
ssl.client.auth=none
ssl.protocol=TLSv1.2
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username='broker-admin' password=DEM123;
super.users=User:broker-admin
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
Can you try setting 'skipACL=yes' in your zookeeper.properties?
If you authenticated with Zookeeper using SSL client certs when you created the 'broker-admin' user, I think access is being denied because the broker connects from somewhere other than the place where you executed that command.
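A minimal sketch of that change, appended to the zookeeper.properties shown above (note that skipACL disables all Zookeeper ACL checks, so treat it as a diagnostic step rather than a permanent setting):

skipACL=yes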

kafka connect to s3 fails to start

I'm trying to configure Kafka Connect to send my data from Kafka to S3.
I'm a newbie with Kafka, and I'm trying to implement this flow without any SSL encryption just to get the hang of it.
kafka version : 2.12-2.2.0
kafka-connect : 4.1.1 (https://api.hub.confluent.io/api/plugins/confluentinc/kafka-connect-s3/versions/4.1.1/archive)
In the server.properties file, the only change I made was setting advertised.listeners to my EC2 IP:
advertised.listeners=PLAINTEXT://ip:9092
kafka-connect properties:
# Kafka broker IP addresses to connect to
bootstrap.servers=localhost:9092
# Path to directory containing the connector jar and dependencies
plugin.path=/root/kafka_2.12-2.2.0/plugins/
# Converters to use to convert keys and values
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
# The internal converters Kafka Connect uses for storing offset and configuration data
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
security.protocol=SASL_PLAINTEXT
consumer.security.protocol=SASL_PLAINTEXT
my s3-sink.properties file:
name=s3.sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=my_topic
s3.region=us-east-1
s3.bucket.name=my_bucket
s3.part.size=5242880
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
I'm launching kafka-connect with the following command:
connect-standalone.sh kafka-connect.properties s3-sink.properties
At first I got the following error:
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
From other posts I saw that I needed to create a JAAS config file, so that's what I did:
cat config/kafka_server_jass.conf
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="userName"
    serviceName="kafka"
    password="password";
};
and:
export KAFKA_OPTS="-Djava.security.auth.login.config=/root/kafka_2.12-2.2.0/config/kafka_server_jass.conf"
Now I'm getting the following error:
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
help :)
You might also need to define principal and keytab inside your JAAS configuration. Note that the options form a single entry, so the semicolon belongs only at the very end:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="userName"
    serviceName="kafka"
    password="password"
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
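As an alternative to the external file, Kafka clients since 0.10.2 also accept the JAAS entry inline via the sasl.jaas.config property. A sketch for the Connect worker properties, assuming PLAIN really is the mechanism your broker expects (username and password are placeholders):

sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="userName" password="password";
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="userName" password="password";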

KAFKA and SSL: java.lang.OutOfMemoryError: Java heap space when using kafka-topics command on KAFKA SSL cluster

This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context:
The Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository:
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
It returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
In the server's log (/var/log/kafka/server.log), the following line appears when I query it via kafka-topics:
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using.
All functionality works properly (consumers, producers...) except kafka-topics:
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate (it is signed by an internal CA, which I added to the truststore specified in the configuration). OpenSSL shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine with SSL, thanks to the parameter "-consumer.config client-ssl.properties":
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now I'm forced to use "--zookeeper", which, according to the documentation, is deprecated:
--zookeeper <String: hosts>   DEPRECATED, The connection string for the zookeeper connection in the form host:port. Multiple hosts can be given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME>  PartitionCount:3  ReplicationFactor:2  Configs:
Topic: <TOPIC-NAME>  Partition: 0  Leader: <ID-3>  Replicas: <ID-3>,<ID-1>  Isr: <ID-1>,<ID-3>
Topic: <TOPIC-NAME>  Partition: 1  Leader: <ID-1>  Replicas: <ID-1>,<ID-2>  Isr: <ID-2>,<ID-1>
Topic: <TOPIC-NAME>  Partition: 2  Leader: <ID-2>  Replicas: <ID-2>,<ID-3>  Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use "--bootstrap-server" at the moment? With "--zookeeper" deprecated, I'm worried about no longer being able to inspect my topics and their details...
I believe kafka-topics needs the same kind of option as kafka-console-consumer, i.e. "-consumer.config"...
Ask if any additional details are needed.
Thanks a lot; I hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. The key is to use the following setting:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and of course kafka-topics. (Without it, the admin client talks plaintext to the TLS port, and the broker's handshake bytes get read as an implausibly large message size, which is what surfaces as the OutOfMemoryError.) See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
Here "ssl-config" was "client-ssl.properties"; see the initial post for its content.
Beware: if you use an IP address for the bootstrap server, you'll get an error if the machine certificate doesn't have a subject alternative name with that IP address. Try to have correct DNS resolution and use the FQDN if possible.
Hope this solution will help, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
then start your brokers on all 3 nodes and try it again.
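If Kafka runs under systemd as described in the question, the equivalent sketch (assuming the unit file quoted above, with the service named kafka) is to edit the Environment line and reload:

# in the [Service] section of /etc/systemd/system/kafka.service
Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"

sudo systemctl daemon-reload
sudo systemctl restart kafka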
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL

Kafka SASL: OAUTHBEARER and PLAIN simultaneously

What I am trying to do is:
For client-to-broker communication: use OAUTHBEARER authentication
For broker-to-broker communication: use PLAIN authentication
I have the following JAAS configuration:
{
    KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="inter"
        password="inter-secret"
        user_inter="inter-secret"
        user_admin="YvNzcbmqhA0DfxjP";
        org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
    };
    Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="zookeeper"
        password="zookeeper-secret";
    };
}
And I have the following configs in server.properties:
sasl.enabled.mechanisms=PLAIN,OAUTHBEARER
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.server.callback.handler.class=br.com.jairsjunior.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
But when I start the Kafka service I see an error like the one below:
Caused by: java.lang.IllegalArgumentException: Must supply exactly 1 non-null JAAS mechanism configuration (size was 2)
at org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredValidatorCallbackHandler.configure(OAuthBearerUnsecuredValidatorCallbackHandler.java:114)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:122)
... 17 more
which indicates Kafka does not allow specifying multiple JAAS mechanism configurations in a single entry.
So how can I specify multiple JAAS configs and set up the authentication mechanisms like below?
Client to Broker ----> OAUTHBEARER
Broker to Broker ----> PLAIN
Thanks!
I am also currently working on the problem of using PLAIN and OAUTHBEARER simultaneously, which I have not solved yet, but I did solve your specific question in the following way.
This is my JAAS configuration:
internal.KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_test="test";
};
external.KafkaServer {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="username"
    password="pw";
};
Then I set the settings in server.properties the following way:
inter.broker.listener.name: INTERNAL
sasl.mechanism.inter.broker.protocol: PLAIN
listener.security.protocol.map: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_SSL
listeners: "INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092"
sasl.enabled.mechanisms: PLAIN,OAUTHBEARER
listener.name.external.oauthbearer.sasl.server.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateValidatorCallbackHandler
listener.name.external.oauthbearer.sasl.login.callback.handler.class: my.module.kafka.security.oauthbearer.OauthAuthenticateLoginCallbackHandler
When you do it this way you won't get your error. Sadly, I get another error when the broker wants to set up the external connection:
javax.security.auth.callback.UnsupportedCallbackException: Unrecognized SASL Login callback
at org.apache.kafka.common.security.authenticator.AbstractLogin$DefaultLoginCallbackHandler.handle(AbstractLogin.java:105)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
at org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
... 32 more
It seems like the Kafka brokers are ignoring the OAUTHBEARER callback handler. This is a bit strange, because the external listener works perfectly when I configure it as the only listener.
I hope this helps you with your problem!
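For what it's worth, recent broker versions can also carry the JAAS entries inline in server.properties via listener.name-prefixed properties instead of separate KafkaServer sections. A sketch, assuming the INTERNAL/EXTERNAL listeners above:

listener.name.internal.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";
listener.name.external.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;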

Spring AMQP + RabbitMQ 3.3.5 ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN

I am getting the exception below:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Configuration: RabbitMQ 3.3.5 on Windows.
In the config file at %APPDATA%\RabbitMQ\rabbit.config,
I have made the change below, as per https://www.rabbitmq.com/access-control.html:
[{rabbit, [{loopback_users, []}]}].
I also tried creating a user/password (test/test); that doesn't seem to make it work.
I also tried the steps from this post.
Other configuration details are as below:
Tomcat-hosted Spring application context:
<!-- Rabbit MQ configuration Start -->
<!-- Connection Factory -->
<rabbit:connection-factory id="rabbitConnFactory" virtual-host="/" username="guest" password="guest" port="5672"/>
<!-- Spring AMQP Template -->
<rabbit:template id="rabbitTemplate" connection-factory="rabbitConnFactory" routing-key="ecl.down.queue" queue="ecl.down.queue" />
<!-- Spring AMQP Admin -->
<rabbit:admin id="admin" connection-factory="rabbitConnFactory"/>
<rabbit:queue id="ecl.down.queue" name="ecl.down.queue" />
<rabbit:direct-exchange name="ecl.down.exchange">
<rabbit:bindings>
<rabbit:binding key="ecl.down.key" queue="ecl.down.queue"/>
</rabbit:bindings>
</rabbit:direct-exchange>
In my controller class:
@Autowired
RmqMessageSender rmqMessageSender;
//Inside a method
rmqMessageSender.submitToECLDown(orderInSession.getOrderNo());
In my message sender:
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
@Component("messageSender")
public class RmqMessageSender {
@Autowired
AmqpTemplate rabbitTemplate;
public void submitToRMQ(String orderId){
try{
rabbitTemplate.convertAndSend("Hello World");
} catch (Exception e){
LOGGER.error(e.getMessage());
}
}
}
The exception block above gives the exception below:
org.springframework.amqp.AmqpAuthenticationException: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
Error Log
=ERROR REPORT==== 7-Nov-2014::18:04:37 ===
closing AMQP connection <0.489.0> (10.1.XX.2XX:52298 -> 10.1.XX.2XX:5672):
{handshake_error,starting,0,
{amqp_error,access_refused,
"PLAIN login refused: user 'guest' can only connect via localhost",
'connection.start_ok'}}
Please find the pom.xml entries below:
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
<version>1.3.6.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-amqp</artifactId>
<version>4.0.4.RELEASE</version>
</dependency>
Please let me know if you have any thoughts/suggestions.
What Artem Bilan has explained here may well be one of the reasons for this error:
Caused by: com.rabbitmq.client.AuthenticationFailureException:
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN.
For details see the broker logfile.
But the solution for me was this: I logged in to the RabbitMQ admin page (http://localhost:15672/#/users) with the default username and password (guest/guest), added a new user, gave that new user permission to access the virtual host, and then used the new username and password instead of the default guest. That cleared the error.
To complete @cpu-100's answer:
in case you don't want to enable/use the web interface, you can create new credentials on the command line as below and use them in your code to connect to RabbitMQ.
$ rabbitmqctl add_user YOUR_USERNAME YOUR_PASSWORD
$ rabbitmqctl set_user_tags YOUR_USERNAME administrator
$ rabbitmqctl set_permissions -p / YOUR_USERNAME ".*" ".*" ".*"
user 'guest' can only connect via localhost
That's true since RabbitMQ 3.3.x. Hence you should upgrade the client library to the same version, or just upgrade Spring AMQP to the latest version (if you use a dependency management system).
Previous versions of the client used 127.0.0.1 as the default value for the host option of ConnectionFactory.
The error
ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
can occur if the credentials that your application is trying to use to connect to RabbitMQ are incorrect or missing.
I had this happen when the RabbitMQ credentials stored in my ASP.NET application's web.config file had a value of "" for the password instead of the actual password string value.
To allow guest access remotely, write this
[{rabbit, [{loopback_users, []}]}].
to here
c:\Users\[your user name]\AppData\Roaming\RabbitMQ\rabbitmq.config
then restart the RabbitMQ Windows service (source: https://www.rabbitmq.com/access-control.html).
On localhost, by default, use 'amqp://guest:guest@localhost:5672'.
On a remote or hosted RabbitMQ, let's say you have the following credentials:
username: niceboy
password: notnice
host: goxha.com
port: 1597
Then the URI you should pass will be
amqp://niceboy:notnice@goxha.com:1597
following the template amqp://user:pass@host:port.
If you have a vhost, you can do amqp://user:pass@host:port/vhost, where the trailing vhost is the name of your vhost.
New solution:
The Node module can't handle : in a password properly. Even URL-encoded, which would normally work, it does not.
Don't use characters that are special in a URL in the password!
Like one of the following: : . ? + %
Original, wrong answer:
The error message clearly complains about using PLAIN; it does not mean the credentials are wrong, it means you must use encrypted data delivery (TLS) instead of plaintext.
Changing amqp:// in the connection string to amqps:// (note the s) solves this.
Just add the login and password to connect to RabbitMQ:
CachingConnectionFactory connectionFactory =
new CachingConnectionFactory("rabbit_host");
connectionFactory.setUsername("login");
connectionFactory.setPassword("password");
For me the solution was simple: the user name is case sensitive. Failing to use the correct caps will also lead to the error.
If you use a number as your password, maybe you should try changing your password to a string.
I could log in using deltaqin:000000 on the website, but had this error while running the program. I then changed the password to deltaiqn, and it works.
I did exactly what @grepit did,
but I had to make some changes in my Java code:
in the producer and receiver projects I altered:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("your-host-ip");
factory.setUsername("username-you-created");
factory.setPassword("username-password");
Doing that, you are connecting to a specific host as the user you have created.
It works for me!
In my case I had this error because of a wrongly set port (I tried to use 5672, when the actual one on my system was 5676).
Maybe this will help someone to double-check their ports...
I was facing this issue due to a trailing space at the end of the password (spring.rabbitmq.password=rabbit ) in my Spring Boot application.properties; it was resolved by removing the space. Hope this checklist helps someone facing this issue.
For C# coders: I tried the code below and it worked; maybe it can help someone, so I'm posting it here.
Scenario: the RabbitMQ queue is running on another system on the local area network, but I was getting the same error.
By default a "guest" user exists, but you cannot access a remote server's queue (RabbitMQ) using the "guest" user, so you need to create a new user. Here I created the "tester001" user to access data from the remote server's queue.
ConnectionFactory factory = new ConnectionFactory();
factory.UserName = "tester001";
factory.Password = "testing";
factory.VirtualHost = "/";
factory.HostName = "192.168.1.101";
factory.Port = AmqpTcpEndpoint.UseDefaultPort;
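A minimal usage sketch to go with it (standard RabbitMQ.Client API; the queue name is a placeholder):

using (IConnection connection = factory.CreateConnection())
using (IModel channel = connection.CreateModel())
{
    // declare the queue and publish a test message
    channel.QueueDeclare(queue: "test-queue", durable: false, exclusive: false, autoDelete: false, arguments: null);
    channel.BasicPublish(exchange: "", routingKey: "test-queue", basicProperties: null, body: System.Text.Encoding.UTF8.GetBytes("hello"));
}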
If you tried all of these answers for your issue but still get "ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN", maybe you should remove RabbitMQ and install a newer version.
A newer version worked for me.
Add a user and password and connect with them. You can add one user via environment variables (useful, e.g., when RabbitMQ initializes in Docker): RABBITMQ_DEFAULT_USER and RABBITMQ_DEFAULT_PASS. See more details here:
https://stackoverflow.com/a/70676040/1200914
Set the ConnectionFactory or Connection hostname to localhost.