Java RabbitMQ configuration

I am new to RabbitMQ and I am trying to send a 'hello' message over the internet.
I am implementing the example from the RabbitMQ website (the Java RabbitMQ Hello World example), but the example uses localhost. I tried changing it to the IP addresses of the sender and receiver computers as explained on the website, and put the sender code on one machine and the receiver code on another, but it doesn't work.
My questions:
1) Does RabbitMQ work over the internet, or only over a local network?
2) In either case, how should each computer be configured, and what does each one need to have?
3) Do I need to install RabbitMQ on both machines, or only on the one that runs as the server?
If anyone can help me configure them step by step, please give me an answer with details.

This is a permissions problem.
The guest user (the default for RabbitMQ) works only on localhost.
Please read this post:
Can't access RabbitMQ web management interface after fresh install
and also this:
RabbitMQ 3.3.1 can not login with guest/guest
They explain how to enable guest/guest and/or create a new user.
The best practice is to create another user.
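For example, a minimal sketch of creating a dedicated user with full permissions on the default vhost, run with rabbitmqctl on the machine hosting the broker (the user name and password here are placeholders):
rabbitmqctl add_user myuser mypassword
rabbitmqctl set_user_tags myuser administrator
rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"
Your clients then connect with this user instead of guest.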

Let me answer your questions one by one
1) Yes. RabbitMQ works over the internet; you should be able to connect by giving the public IP of the RabbitMQ server. If you are connecting to a server with a username/password configured, the credentials should be provided while creating the connection.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("203.0.113.10"); // public IP or hostname of the broker machine (example value)
factory.setUsername("username");
factory.setPassword("pwd");
2) One of the machines should have the RabbitMQ server (broker) installed and running. You can produce or consume messages from any of the machines using the Java RabbitMQ client. If you had three machines, the RabbitMQ server, the message producer, and the message consumer could each run on a separate machine.
3) You don't need to install RabbitMQ on both machines. Install it only on the machine that runs as the server; see the sketch below.
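To make that concrete, here is a minimal sketch of the tutorial sender adapted to a remote broker, assuming the RabbitMQ Java client is on the classpath; the host, user name, and password are placeholders you replace with your own values:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class RemoteSend {
    private static final String QUEUE_NAME = "hello";

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("203.0.113.10");   // public IP or DNS name of the broker machine (placeholder)
        factory.setUsername("myuser");     // a user created on the broker, not the default guest
        factory.setPassword("mypassword");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.queueDeclare(QUEUE_NAME, false, false, false, null);
            String message = "Hello World!";
            channel.basicPublish("", QUEUE_NAME, null, message.getBytes("UTF-8"));
            System.out.println(" [x] Sent '" + message + "'");
        }
    }
}
The receiver on the other machine uses the same factory settings and consumes from the same queue; only the broker machine needs RabbitMQ installed, the other machines just need the Java client library.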

Mosquitto broker with SSL encryption for bridge connection

Let me first explain what I am trying to achieve, and then I'll get into the details of the things I have tried already.
So, we have a VM that is on our premise and another VM that is on a customer's premise. The access to these VMs are only available to certain IP addresses. So, we could say that they are secure enough for our use-case.
Data from the customer's environment flows through and into our VM through a mosquitto broker that is set up in both these environments. This is done with the help of broker bridging, which works fine. However, since this bridge is over the internet, we want to ensure that the data is encrypted and that no one could intercept it over the internet and use it in a malicious manner.
To achieve this we are making use of SSL broker encryption. The first method I tried is the PSK encryption method.
Here is the broker config at the customer environment.
listener 8883
connection bridgetest
address 147.1.20.1:8883
bridge_identity bridge1
bridge_psk 123456789
topic # both
And here is the broker config at our environment.
listener 8883
psk_hint SAAS Deployments
psk_file c:\DemoCompany\psk_file.txt
The contents of the psk_file.txt are very simple: the same bridge identity and bridge_psk provided in the config of the customer environment.
The problem I am facing here is that even if I change the bridge_identity or the bridge_psk at the customer's environment to something that is not in the psk_file.txt, I am still able to connect the two brokers over the bridge.
My understanding of this was that if I change the bridge_psk to some random hex code, the connection should get rejected. But that doesn't seem to happen. Am I doing something wrong or missing something over here?
The following config files work for me with the v2.0.9 builds shipped from the mosquitto PPA on Ubuntu.
Client broker:
listener 1889
connection bridge
address 127.0.0.1:1890
bridge_identity bridge1
bridge_psk 123456789987654321
topic # both 0
Bridge broker:
listener 1890
psk_hint my test bridge
psk_file /temp/psk/psk_file.txt
use_identity_as_username true
The use_identity_as_username option is required because, from Mosquitto v2 onward, allow_anonymous defaults to false.
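For reference, a sketch of what the psk_file.txt on the bridge broker would contain for the config above; the format is one identity:key pair per line, with the key given as hexadecimal digits (the identity and key here just mirror the example values above):
bridge1:123456789987654321
If the identity or key presented by the bridging broker does not match a line in this file, the TLS-PSK handshake should fail and the bridge connection should be rejected.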

How to find RabbitMQ URL?

A RabbitMQ URL looks like:
BROKER_URL: "amqp://user:password@remote.server.com:port/vhost"
It is not clear where we can find the URL, login, and password of RabbitMQ
when we need to access it from a remote worker (outside of localhost).
Put another way, how do we set the RabbitMQ IP address, login, and password from Celery / RabbitMQ?
You can create a new user for accessing your RabbitMQ broker.
Normally the port used is 5672, but you can change it in your configuration file.
So suppose your IP is 1.1.1.1, you created a user test with password test, and you want to access the vhost "dev" (without quotes); then it will look something like this:
amqp://test:test@1.1.1.1:5672/dev
I recommend enabling the RabbitMQ Management Plugin to play around with RabbitMQ.
https://www.rabbitmq.com/management.html
To add to the accepted answer:
As of 2022, the default username and password are both guest
In my experience, ignoring the vhost is safe while getting started with RabbitMQ
If using RabbitMQ as part of a Docker Compose setup (e.g. for testing), other containers in the same application should be able to access RabbitMQ via its service name. For example, if the name of the service in docker-compose.yml is rabbitmq, passing amqp://guest:guest@rabbitmq:5672/ should work
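If you are using the Java client, a minimal sketch of feeding such a URL to the connection factory (the URI below just reuses the example values from the accepted answer):
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class UriConnect {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Host, port, credentials and vhost are all parsed out of the AMQP URI
        factory.setUri("amqp://test:test@1.1.1.1:5672/dev");
        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected to " + factory.getHost() + ":" + factory.getPort());
        }
    }
}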

Which protocol for two servers where one is behind a firewall?

There are two servers:
Local Server
behind a firewall (DSL Router)
connected to microcontrollers (actors & sensors)
Cloud Server
sends commands to Local Servers
The idea is that the Cloud Server sends commands to the Local Server, e.g. to trigger an actor. If there were no firewall, the best way would IMHO be to have a REST API on the Local Server. Unfortunately, configuring a NAT rule is not an option.
What is the simplest and most common way to solve this?
Your other options are:
a) polling: a web request from the local server to the online server (see the sketch after this list).
b) a service bus: also a polling pattern, but with a queue (e.g. Azure Service Bus or Event Hubs).
c) the manufacturer's server: sometimes there is already an online service ready, like the meethue API for the Philips Hue IoT lamps.
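A minimal sketch of option a), assuming the Local Server can make outbound HTTPS requests and the Cloud Server exposes a hypothetical /commands endpoint that returns pending commands (the URL and query parameter are illustrative only):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CommandPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://cloud.example.com/commands?device=local-server-1")) // hypothetical endpoint
                .GET()
                .build();
        while (true) {
            // Outbound request from behind the firewall; no inbound port or NAT rule is needed
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Pending commands: " + response.body());
            // parse the response here and trigger the matching actors
            Thread.sleep(5_000); // poll every 5 seconds
        }
    }
}
The trade-off compared to the queue-based option b) is latency: a command is only picked up on the next poll.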
Let me know if you need more hints.
Frank

Remote Docker Host Authentication

Hi, I'm currently working on a side project. In this project I'll have a central server that will need to connect to several remote Docker daemons. My problem is with authentication.
Given that the project will be hosted on DigitalOcean, my first thought was to accept only connections from the private networking interface. The problem is that that interface is accessible by all other servers in the same datacenter.
My second thought is to allow only requests from the central server using the DOCKER_HOST config; the problem is that, if I understand correctly, if the private IP of the central server becomes known, the IP can be spoofed.
My third thought is to enable TLS ( https://docs.docker.com/articles/https/ ); I've never dealt with these things before and the tutorial is unclear to me, as I lack knowledge of the terminology, which it uses heavily.
So basically the problem is that I have a central client and multiple remote docker hosts, what is the best way to connect to them? Thank you.
EDIT: I managed to solve the problem using HTTP authentication by running nginx as a proxy in front of the docker daemon.
My understanding is that you are trying to build a Docker cluster which can manage all nodes from one single central server.
This is very much what Docker's Docker Swarm project does; their docs give a simple idea of how this works:
open a TCP port on each node for communication with the swarm manager
install Docker on each node
create and manage TLS certificates to secure your swarm
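For the TLS option mentioned in the question (and in the last bullet above), a rough sketch of how the daemon and the central client are started once the certificates from the linked Docker article have been generated (the file names follow that article's conventions; remote-host is a placeholder):
# On each remote host: expose the daemon over TCP, requiring client certs signed by your CA
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376
# On the central server: connect with a client cert signed by the same CA
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://remote-host:2376 info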
Sorry, this should be posted as a comment, but I do not have enough rep to do that.

RabbitMQ Shovel plugin stuck on "starting" status

RabbitMQ starts up just fine, but the shovel plugin status is listed as "starting".
I'm using the following rabbitmq.config:
Each broker is running on a separate AWS instance. The remote server is Windows Server 2008; the local server is Amazon Linux.
[{rabbitmq_shovel,
  [{shovels,
    [{scrape_request_shovel,
      [{sources,      [{broker, "amqp://test_user:test_password@localhost"}]},
       {destinations, [{broker, "amqp://test_user:test_password@ec2-###-##-###-###.compute-1.amazonaws.com"}]},
       {queue, <<"scp_request">>},
       {ack_mode, on_confirm},
       {publish_properties, [{delivery_mode, 2}]},
       {publish_fields, [{exchange, <<"">>},
                         {routing_key, <<"scp_request">>}]},
       {reconnect_delay, 5}
      ]}
    ]
  }]
}].
Running the following command:
sudo rabbitmqctl eval 'rabbit_shovel_status:status().'
returns:
[{scrape_request_shovel,starting,{{2012,7,11},{23,38,47}}}]
According to this question, this can result if the users haven't been set up correctly on the two brokers. However, I've double-checked that I've set up the users correctly via rabbitmqctl add_user on both machines -- I have even tried it with a different set of users, to be sure.
I also ran an nmap scan of port 5672 on the remote host to verify it was up and listening on that port.
UPDATE: The problem isn't solved, but it does appear to be a result of connection problems with the remote server. I changed "reconnect_delay" to 0 in my config file, to avoid having the shovel infinitely retry the connection. I highly recommend others with this problem do this as well, as it allows you to get error messages out of rabbit_shovel_status. In my case I got the following error:
[{scrape_request_shovel,
{terminated,
{{badmatch,{error,access_refused}},
[{rabbit_shovel_worker,make_conn_and_chan,1},
{rabbit_shovel_worker,handle_cast,2},
{gen_server2,handle_msg,2},
{proc_lib,init_p_do_apply,3}]}},
{{2012,7,12},{0,4,37}}}]
Answering my own question here, in case others encounter this issue. This error (and also a timeout error, {{badmatch,{error,etimedout}}, if you get it) is almost certainly a communications problem between the two machines, most likely due to port access / firewall settings.
There were a couple of dumb things I was doing here:
1) I was using the wrong DNS name for my remote EC2 instance (D'oh! really dumb -- can't tell you how long I spent banging my head against the wall on this one...). Remember that stopping and starting your instance generates a new public DNS name if you don't have an Elastic IP associated with the instance.
2) My remote instance is a Windows server, and I realized you have to open up port 5672 both in the Windows firewall and in the EC2 security group -- there are two overlapping levels of access control here, and opening up the port in the EC2 management console isn't sufficient if your machine is a Windows server on EC2, as you also have to configure the Windows server firewall; see the sketch below.
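For example, a sketch of opening the port on the Windows side with the built-in firewall CLI (the rule name is arbitrary; the EC2 security group still needs its own inbound rule for TCP 5672):
netsh advfirewall firewall add rule name="RabbitMQ 5672" dir=in action=allow protocol=TCP localport=5672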