Is it okay to delete RabbitMQ's `/` VHost?

In our environment, we have several RabbitMQ VHosts defined: one for dev, one for qa, one for staging and so on. The default VHost / is unused and shows no users as having access, nor does it have any exchanges or queues defined.
Is it okay to run rabbitmqctl delete_vhost '/' to remove this VHost? Does rabbitmq-server or any of the clients place any special meaning on it, or break if it is missing?

The special meaning of the `/` vhost is that it is the default vhost: clients connect to it when no other vhost is specified.
It's safe to delete it, if there are no clients connecting to it.
But you should make sure you have configured all plugins (like MQTT or STOMP if you use them) to use your custom vhosts.
Or you can just leave it be, since no users have access to it anyway.
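For the plugin point, a minimal sketch of what that could look like in rabbitmq.conf (new-style config format), assuming a custom vhost named dev; the exact keys can differ between versions, so verify against the MQTT and STOMP plugin documentation:
# rabbitmq.conf -- point the protocol plugins at a custom vhost (keys assumed, check your version)
mqtt.vhost = dev
stomp.default_vhost = dev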

Related

WireGuard with dynamic setup for IoT

At the moment we have multiple Raspberry Pis placed at different locations on different networks.
Our current solution for reaching them if something goes wrong is autossh with a jump host.
Recently I stumbled on WireGuard, which could be another, slimmer way to solve the calling-home problem.
The problem is that we would like the setup phase to be more dynamic: we don't want to do special configuration per node we have out there, we just want them to call home with a key and then be part of the network.
Two questions:
Is WireGuard for us, or are there other problems that I can't foresee here?
Is there a way to set it up dynamically with one key and let the clients get random IPs?
WireGuard always needs a unique keypair per host, so it is not what you are looking for.
If you just want a phone-home option with IP connectivity, I would suggest an OpenVPN server and client. If you use a username/password config (not using certificates), you can reuse the config on multiple clients. OpenVPN will act as a DHCP server.
A howto:
https://openvpn.net/community-resources/how-to/
Search for:
client-cert-not-required
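A hedged sketch of the relevant server-side directives, assuming OpenVPN 2.4+ (the verify script path is hypothetical; on older versions use client-cert-not-required instead of verify-client-cert none):
# server.conf fragment (sketch)
# accept username/password instead of client certificates
verify-client-cert none
# validate credentials with an external script (path is hypothetical)
auth-user-pass-verify /etc/openvpn/check-pass.sh via-env
username-as-common-name
# allow several clients to share the same config/credentials
duplicate-cn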
The option that Maxim Sagaydachny mentions is also valid for command access; an alternative to Salt could be Puppet with mco/bolt.
Whichever option you choose, be sure that the daemon restarts when it crashes, after reboots, on failures...
For systemd services this would be an override with:
[Service]
Restart=always
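To apply it, a sketch assuming the unit is called openvpn-client@home.service (an assumption; substitute your own unit name):
# open an editor on a drop-in override for the unit
systemctl edit openvpn-client@home.service
# after saving the [Service] snippet above, reload and restart
systemctl daemon-reload
systemctl restart openvpn-client@home.service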

Apache force DNS lookups

I've got an Apache that's proxying requests to an external entity:
ProxyPass /something https://external.example.com/somethingelse
This external site likes to switch that domain's DNS records based on where they want their traffic to go. Apache seemingly doesn't pick up the new value until it's restarted. Is there a way to force Apache to do new lookups after a certain amount of time? After some research and even looking at the code, I don't see an obvious answer. If that isn't an option, any other suggestions?
According to Apache documentation:
DNS resolution for origin domains: DNS resolution happens when the socket to the origin domain is created for the first time. When connection reuse is enabled, each backend domain is resolved only once per child process, and cached for all further connections until the child is recycled.
There is a ProxyPass key=value parameter to control this:
disablereuse (default Off): This parameter should be used when you want to force mod_proxy to immediately close a connection to the backend after being used, and thus, disable its persistent connection and pool for that backend. This helps in various situations where a firewall between Apache httpd and the backend server (regardless of protocol) tends to silently drop connections or when backends themselves may be under round-robin DNS. When connection reuse is enabled each backend domain is resolved (with a DNS query) only once per child process and cached for all further connections until the child is recycled. To disable connection reuse, set this property value to On.
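Applied to the ProxyPass line from the question, a sketch (disablereuse is documented in mod_proxy; the hostname comes from the question):
ProxyPass /something https://external.example.com/somethingelse disablereuse=On
ProxyPassReverse /something https://external.example.com/somethingelse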

How to find RabbitMQ URL?

The RabbitMQ URL looks like:
BROKER_URL: "amqp://user:password@remote.server.com:port/vhost"
It is not clear where we can find the URL, login, and password of RabbitMQ
when we need to access it from a remote worker (outside of localhost).
In other words, how do we set the RabbitMQ IP address, login, and password from Celery / RabbitMQ?
You can create a new user for accessing your RabbitMQ broker.
Normally the port used is 5672, but you can change it in your configuration file.
So suppose your IP is 1.1.1.1, you created a user test with password test, and you want to access the vhost "dev" (without quotes); then it will look something like this:
amqp://test:test@1.1.1.1:5672/dev
I would recommend enabling the RabbitMQ Management Plugin to play around with RabbitMQ.
https://www.rabbitmq.com/management.html
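A minimal sketch of creating that user and vhost with rabbitmqctl, assuming the test/test credentials and the dev vhost from the example above:
# create the user and vhost, then grant full permissions on that vhost
rabbitmqctl add_user test test
rabbitmqctl add_vhost dev
rabbitmqctl set_permissions -p dev test ".*" ".*" ".*"
# optional: the management UI mentioned above (default port 15672)
rabbitmq-plugins enable rabbitmq_management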
To add to the accepted answer:
As of 2022, the default username and password are guest
In my experience, ignoring vhost is safe while getting started with RabbitMQ
If using RabbitMQ as part of a Docker Compose setup (e.g. for testing), other containers in the same application should be able to access RabbitMQ via its service name. For example, if the name of the service in docker-compose.yml is rabbitmq, passing amqp://guest:guest@rabbitmq:5672/ should work.
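A hedged docker-compose.yml sketch of that setup (the worker image name is hypothetical):
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"
  worker:
    image: my-celery-worker   # hypothetical application image
    environment:
      BROKER_URL: "amqp://guest:guest@rabbitmq:5672/"
    depends_on:
      - rabbitmq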

Apache HTTP server, one instance per virtual host

I was interested in working with the Apache HTTP server under the following parameters:
On a single server, listening on one single port
Having configured several VirtualHosts, one per domain
Running each VirtualHost as an instance listening on port 80
Being able to reload one domain's configuration without having to restart the rest.
I have doubts about the memory consumption and, if there is an impact, how I should improve it.
I don't think that would be a memory problem (correct me if I'm wrong) as long as there's only one HTTP server running?
Or maybe it is, because each instance consumes independent memory?
Should it be the same memory consumption as running all the VirtualHosts in the main Apache config file?
Many thanks. I mainly want to run one instance per domain because I want to be able to restart each VirtualHost configuration when needed without having to restart the others.
Thanks
First, I don't think you can run several Apache instances if they are all listening on port 80. Only one process can bind the port.
Apache will have several child processes, all children of the process listening on port 80, but each child process can be used for any VirtualHost.
You could achieve it by binding a different IP on port 80 for each instance, i.e. having IP-based VirtualHosts, or by using one Apache as a proxy for other Apache instances bound to other ports.
But the restart problem is not a real problem. Apache can perform a safe restart (reload on some distributions) where each child process is reloaded after the end of its running job. So it's a transparent restart, without any HTTP request killed. Adding or removing a VirtualHost does not need a restart; a simple reload is enough.
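For example (the exact command depends on the distribution and init system):
# graceful restart: re-reads the configuration without dropping requests
apachectl graceful
# or, with systemd: systemctl reload apache2   (Debian/Ubuntu)
#                   systemctl reload httpd     (RHEL/CentOS)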
I have to think there are ways of achieving what you want without individual instances. Seriously large virtual-hosting companies use Apache; I am hard-pressed to believe your needs are more complex than theirs. Example: http://httpd.apache.org/docs/2.0/vhosts/mass.html
Maybe you should run two Apache servers and do a rolling restart when one is really needed, which would also prevent any individual site from being down.

How do I configure Apache to forward some URLs to two resin containers?

I have two Resin servers - r-server-a and r-server-b. I created two because both have web applications that need to be in the root context path '/' (and use the same port, 80).
However, both web applications need to see each other (i.e. access the other application's resources & pages), which is why I thought I'd use an Apache server to handle the two.
How do I do that?
What you need is mod_proxy in Apache. In the Apache config (such as the virtual host config) put:
ProxyPass / http://localhost:8080/<web-app context root>/
ProxyPassReverse / http://localhost:8080/<web-app context root>/
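A fuller sketch using two name-based VirtualHosts, one per Resin backend, so each application stays at the root path on port 80 and can reach the other via its hostname (the hostnames and backend ports 8080/8081 are assumptions):
# VirtualHost for the first application
<VirtualHost *:80>
    ServerName app-a.example.com
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
# VirtualHost for the second application
<VirtualHost *:80>
    ServerName app-b.example.com
    ProxyPass        / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>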
Both using the same port means they are not on the same IP; that might be the same machine with two instances each bound to one NIC, or two separate machines. This is not entirely clear from the question; however, it does not matter that much.
For several reasons I would pick NGINX as a reverse proxy (instead of Apache) and configure it accordingly.
See in Tornado's documentation how they do that for Tornado (in that case, four instances on each server) and copy the concept to your setup. Good luck.
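A minimal nginx sketch of the same idea, to go inside the http block (server names and backend ports are assumptions):
# reverse-proxy each Resin instance behind its own server name
server {
    listen 80;
    server_name app-a.example.com;
    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}
server {
    listen 80;
    server_name app-b.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081/;
    }
}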