Running two RabbitMQ instances as Windows Services on a single machine - rabbitmq

I have successfully configured and run two RabbitMQ instances as applications on Windows Server 2012, following the tutorials below:
https://lazareski.com/multiple-rabbitmq-instances-on-1-machine/
https://www.rabbitmq.com/clustering.html
but I have to repeat the above steps every time the server restarts or my session ends.
How do I configure and run two RabbitMQ services on a single Windows machine so that they run permanently in the background?

It's probably best to use the .zip distribution for Windows. Then, using two separate administrative accounts, set the required environment variables for the two nodes in each account's environment. RABBITMQ_SERVICENAME will also have to be set, and must be unique per node. Then extract the .zip and run rabbitmq-service.bat install followed by rabbitmq-service.bat start.
The above is untested. Running multiple Erlang VMs on the same machine is not recommended as it can affect performance.
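To make that concrete, here is a minimal, equally untested sketch for the second node; the node name, service name, ports, and base directory below are invented placeholders, not required values:

    rem Hypothetical second node. Use setx (or the account's Environment
    rem Variables dialog) so the values persist across sessions and reboots.
    setx RABBITMQ_NODENAME rabbit2
    setx RABBITMQ_SERVICENAME RabbitMQ_rabbit2
    setx RABBITMQ_NODE_PORT 5673
    setx RABBITMQ_DIST_PORT 25673
    setx RABBITMQ_BASE C:\rabbitmq\rabbit2
    rem Then, from a NEW elevated prompt in the extracted sbin directory
    rem (setx does not affect the current session):
    rabbitmq-service.bat install
    rabbitmq-service.bat start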
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

Is it possible to reconfigure RabbitMQ, once you have changed machines, to point at the new machine without uninstalling or decommissioning it?

I've run into an issue post-install of RabbitMQ. It was all set up and configured with the web apps on the machine and communicating with local applications; however, the machine had to be moved to a different tranche of machines and was renamed as a result. Now RabbitMQ can no longer serve or handle comms as intended, because its config points to rabbit@PREVIOUS_MACHINE instead of rabbit@CURRENT_MACHINE.
To complicate this, some of the RabbitMQ configuration was done by users on the system and fed into the local apps, which encrypt it into their databases and use it for communicating with all the local apps. The issue here is that if I drop and recreate RabbitMQ and make a new user, it won't align with what the other internal apps are using, and I believe they are not configurable post-install, so a reinstall of everything is the potential impact.
The question is: is it possible to reconfigure or update the current RabbitMQ installation files to point at the local machine name instead of the previous machine name, and, by extension, would this even work? The docs over at rabbitmq.com don't quite deal with this specific scenario, unfortunately, from what I've read through.
So, I want to confirm that RabbitMQ is the absolute dog's tits of a black-magic box.
Anyway, following these steps from here, minus the first two:
How to change RabbitMQ node name without changing my hostname
That is pretty much the inverse of my problem. But for those in the future who have this issue:
I had RabbitMQ installed and running on another machine; the machine's name was changed, and the solution was to uninstall the service, delete the db, and reinstall the service. Somehow RabbitMQ manages to keep the config knowledge of all the queues that were in the db, and when you reinstall the service it brings all the queues back as well. The only issue I had after that was remembering my username and password (they were not the default user setup), and once I did, that solved my issue. I still have no idea how RabbitMQ manages to remember the previous configs despite the local db being deleted. Crazy cool, and I'm very grateful to whoever built that into the tool.
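For future readers, the sequence described above might look roughly like this on Windows; this is a sketch, and the database path shown is only the usual default (check RABBITMQ_BASE on your install):

    rem From an elevated prompt in RabbitMQ's sbin directory:
    rabbitmq-service.bat remove
    rem Delete the node's database (default location shown; yours may differ):
    rmdir /s /q "%APPDATA%\RabbitMQ\db"
    rabbitmq-service.bat install
    rabbitmq-service.bat start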

DC/OS has three roles: master, slave, and slave_public. Why can't they be put on one host?

I've just been investigating DC/OS, and I find that it has three roles: master, slave, and slave_public. I want to deploy a cluster in which a single host can carry the master, slave, or slave_public roles together, but currently I can't do that.
I want to know why it was designed so that they can't be put on one host. If I do it anyway, could I get some suggestions?
If I can't do this, I'll quit using DC/OS and use Mesos and Marathon instead.
Does anyone else have the same idea? I look forward to a reply.
This is by design, and work is actually underway to enforce that a machine is installed with only one role, because things break with more than one.
If you're trying to demo or experiment with DC/OS and you only have one machine, you can use virtual machines or Docker to partition that one machine into multiple machines/parts on which you can install DC/OS. dcos-vagrant and dcos-docker can help you there.
As far as installing goes, though, the configuration for each of the three roles is incompatible with the others. The "master" role causes a whole bunch of pieces of software to be started/installed on a host (Mesos-DNS, the Mesos master, Marathon, Exhibitor, ZooKeeper, 3dt, adminrouter, rexray, spartan, and navstar, among others), which listen on various ports. The "slave" role causes a machine to have a mesos-agent (Mesos renamed mesos-slave to mesos-agent, hence the disconnect) configured and started on it. The mesos-agent is configured to hand control of most ports greater than 1024 to tasks launched by Mesos frameworks on the agent. Several of those ports are used by services that run on masters, resulting in odd conflicts and hard-to-fix bad behavior.
In the case of running "slave" and "slave_public" on the same host, the two conflict more directly, because both cause a mesos-agent to be run on the host, with slightly different configuration. Both mesos-agents (the one configured with the "slave" role and the one with the "slave_public" role) are configured to listen on port 5051. Only one of them can bind it, though, so you end up with one of the agents being non-functional.
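If you want to see that kind of clash for yourself on a Linux node, a quick (hypothetical) check of who already owns the agent port looks like this:

    # Show which process is already bound to the Mesos agent port.
    sudo ss -ltnp 'sport = :5051'
    # A second agent trying to bind 5051 fails with "Address already in use".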
DC/OS only supports running a node as either a master or an agent (slave). You are correct that Mesos does not have this limitation, but DC/OS is more than just Mesos/Marathon. To enable all the additional features of DC/OS, there are various components built around Mesos and Marathon. At times these components behave differently depending on whether they are running on a master or an agent, and at other times the components that exist on a master may or may not exist on an agent, or vice versa. So running a master and an agent on the same node would lead to conflicts/issues.
If you are looking to run a small development setup before scaling the solution out to a bigger distributed system, DC/OS Vagrant might be a good starting point.

How to halt provisioning of one VM until another VM is done?

Using Vagrant+Chef Solo I'm setting up two VMs: #1 is a TeamCity server, #2 is a TeamCity agent. Provisioning is done by first installing the TeamCity server package on VM #1; then the agent VM is booted and requests data from the server, which is used to install the agent. That whole thing works fine.
But now I want to alter the server after the agent is done provisioning. I want to modify the server's database directly, to change an attribute that only becomes available after the agent has spun up. Is there a way for one VM's provisioning to trigger another VM's? Once the agent is done, I'd like to somehow resume provisioning the server so I can make the database edit.
Any thoughts, recommendations, or feedback welcomed. I'm new to Vagrant, Chef, and TeamCity, so there's a chance I'm missing a much easier solution.
* Why do I want to edit the DB directly, you may be wondering? TeamCity agents must be authorized before they can be used, and I want to do this programmatically. The solution I've found is to edit the DB directly, because the authorization functionality is not exposed via the TeamCity REST API (as far as I can tell).
If you can test whether the agent is installed and answering, you can add a ruby_block that loops over this test before continuing the recipe execution (sketched below).
The loop should have a sleep and a counter to avoid looping forever.
I have no knowledge of TeamCity, so I can't tell whether this is the best way.
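A minimal sketch of that ruby_block, assuming the agent can be probed over TCP; the hostname, port, and retry limits here are made-up placeholders:

    # Chef recipe fragment: block until the TeamCity agent answers, then continue.
    ruby_block 'wait_for_teamcity_agent' do
      block do
        require 'socket'
        require 'timeout'
        attempts = 0
        begin
          Timeout.timeout(5) { TCPSocket.new('teamcity-agent.local', 9090).close }
        rescue Timeout::Error, Errno::ECONNREFUSED, Errno::EHOSTUNREACH, SocketError
          attempts += 1
          raise 'TeamCity agent never came up' if attempts >= 30
          sleep 10   # wait before probing again
          retry
        end
      end
    end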
In general, Chef is designed to manage your system, not simply provision it (though this is less true in the modern cloud world with "golden image" strategies). Nonetheless, in your case, your best bet is to just set up chef-client as a service that runs every 15 minutes. Once the client has finished provisioning, the next chef-client run on the server will be able to authorize it.
If you really want to "trigger" the one from the other, you'd need to either do that externally with something like etcd or consul, or set up an SSH key pair between the boxes and add a ruby_block on the client that either does the database modification directly or calls chef-client on the server.
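If you go the SSH route, the trigger itself can be a one-liner; in this sketch the user, key path, and hostname are assumptions, and the remote user needs passwordless sudo rights for chef-client:

    # Run from the agent VM at the end of its own converge.
    ssh -i /etc/chef/trigger_key deploy@teamcity-server 'sudo chef-client'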

mass-restarting httpd on lots of EC2 instances

I am running a variable number of EC2 instances (64-bit CentOS) that contain an Apache web server that caches a bunch of code in production mode.
Every time I make some changes to the code (generally on a weekly basis), I have to log into each one of those instances and do a "su" and then a "service httpd restart".
Is there a way to automate this, so that I can run a single command on one of the instances and it would connect to all the others and restart Apache? It's getting really time-consuming, especially when the application has spawned some 20-30 instances on its own (which happens on days when we get high traffic).
Thanks!
Dancer's shell, dsh, is provided specifically to do this. No scripting required. As @tix3 suggests, you should probably also convince sudo on those machines (configure /etc/sudoers using visudo) to accept your restart command without a password.
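A hedged sketch of what that can look like; the group name and user below are placeholders:

    # /etc/dsh/group/web lists the web instances, one hostname per line.
    dsh -M -g web -c -- 'sudo /sbin/service httpd restart'

    # And in /etc/sudoers (always edited via visudo), something like:
    #   deploy ALL=(root) NOPASSWD: /sbin/service httpd restart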

Celery, rabbitmq: How to install remote worker?

Can I have multiple machines execute the tasks and return messages that are distributed by Django? I looked into Celery/RabbitMQ, but I'm not sure whether I can set up Celery workers on remote computers. Can anyone guide me through this?
If this is not possible or very hard, is there any alternative solution to the problem?
You can do this by installing your Django project on the remote computer and then ensuring that it is configured to use the correct broker, database server, and media directory (assuming your tasks need access to those).
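Starting the worker on the remote machine can then look like this; the project name, broker host, and credentials are placeholders, and it assumes your Django settings point at the shared broker:

    # e.g. in settings.py: CELERY_BROKER_URL = "amqp://user:pass@rabbitmq-host:5672//"
    # Then, on the remote machine, from the project directory:
    celery -A proj worker --loglevel=info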