consume queue from multiple vhosts with kombu - rabbitmq

I have the following situation: I have a list of vhosts, and in each vhost there is a queue (same name in all vhosts). Is there a way to consume from all the queues simultaneously? (I don't want to create a separate process for each vhost.) I want a single consumer consuming from all the queues.
I'm using kombu and rabbitmq.
Thanks

Based on https://www.rabbitmq.com/uri-spec.html:
amqp_URI = "amqp://" amqp_authority [ "/" vhost ] [ "?" query ]
amqp_authority = [ amqp_userinfo "@" ] host [ ":" port ]
amqp_userinfo = username [ ":" password ]
username = *( unreserved / pct-encoded / sub-delims )
password = *( unreserved / pct-encoded / sub-delims )
vhost = segment
The vhost is fixed when a connection is opened, so you need one connection for each vhost.
So, no: a single connection (and therefore a single consumer) cannot span vhosts. Within one vhost you can consume from several queues over a single connection (kombu's Consumer accepts a list of queues), but across vhosts there is no way around opening one connection per vhost.
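If the goal is just to avoid one process per vhost, a workaround is to open one connection per vhost inside a single process and poll them in turn. A minimal sketch with kombu (broker host, credentials, vhost names and the queue name are all placeholders):

import socket

from kombu import Connection, Queue

VHOSTS = ["vhost_a", "vhost_b"]  # hypothetical vhost names
QUEUE_NAME = "my_queue"          # same queue name in every vhost

def on_message(body, message):
    print("received: %r" % (body,))
    message.ack()

# One connection per vhost, all inside the same process.
connections = [
    Connection("amqp://guest:guest@localhost:5672/%s" % vhost)
    for vhost in VHOSTS
]

consumers = []
for conn in connections:
    consumer = conn.Consumer([Queue(QUEUE_NAME)], callbacks=[on_message])
    consumer.consume()
    consumers.append(consumer)

# Round-robin over the connections; drain_events raises socket.timeout
# when a connection has nothing to deliver within the timeout.
while True:
    for conn in connections:
        try:
            conn.drain_events(timeout=0.1)
        except socket.timeout:
            pass

Polling with a short timeout keeps the loop responsive without busy-waiting; for bigger setups, one thread per connection (e.g. with kombu's ConsumerMixin) would be the more structured option.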

Related

How to read data from a queue instead of a topic using the Telegraf MQTT consumer

I have an ActiveMQ broker, and data is arriving on a queue inside this broker.
I am trying to read the data from that broker, but I am not able to.
Below is my Telegraf configuration; I have provided the topic name.
When I create a topic and send custom data to it, I am able to read that data properly.
[[inputs.mqtt_consumer]]
  servers = ["provided"]
  qos = 0

  ## Topics that will be subscribed to.
  topics = [
    "topic_name",
  ]

  connection_timeout = "30s"

  ## If unset, a random client ID will be generated.
  client_id = "telegraf"

  ## Username and password to connect MQTT server.
  username = "provided"
  password = "provided"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

[[inputs.activemq]]
  ## ActiveMQ WebConsole URL
  url = "provided"

  ## Required ActiveMQ Endpoint
  ## deprecated in 1.11; use the url option
  # server = "192.168.50.10"
  # port = 8161

  ## Credentials for basic HTTP authentication
  username = "provided"
  password = "provided"

[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/etc/telegraf/metrics.out"]
The data coming from the devices goes to a queue, not a topic, and as you can see the data is present inside the queue.
So, coming to my main question: how can I read the data from the queue, rather than from a topic, using Telegraf?
MQTT itself only deals in topics. You either need to change your message flow to publish to topics, or configure your ActiveMQ broker to use the Virtual Topic Subscription Strategy for MQTT (where subscriptions are backed by queues).
ref: https://activemq.apache.org/mqtt
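For the second option, the strategy is selected on the MQTT transport connector in activemq.xml. Per the page above, something along these lines (connector name and port as in a default broker):
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?transport.subscriptionStrategy=mqtt-virtual-topic-subscriptions"/>
With that strategy each MQTT subscription is backed by a VirtualTopic consumer queue, so the messages end up in queues rather than plain topics.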
Note: Please edit your post to hide your broker URL and admin password!

How to configure NServiceBus with RabbitMQ that has LDAP enabled

The RabbitMQ setup in my organization uses LDAP for authentication and authorization.
How can I configure NServiceBus (or RabbitMQ) to use the credentials of the account the service is running under, like integrated security for SQL connections?
RabbitMQ configuration:
[
  {rabbit,
    [{auth_backends, [rabbit_auth_backend_ldap]}]},
  {rabbitmq_auth_backend_ldap,
    [{servers, ["ad.xxxx.xxx"]},
     {dn_lookup_attribute, "userPrincipalName"},
     {dn_lookup_base, "OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx"},
     {log, true},
     {group_lookup_base, "OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx"},
     {tag_queries, [{administrator, {in_group, "CN=GRP_Name,OU=XXXX Users,DC=ad,DC=XXXX,DC=XXX"}},
                    {management, {in_group, "CN=GRP_Name,OU=XXXX Users,DC=ad,DC=XXXX,DC=XXX"}}]}
    ]}
].
NServiceBus Code:
var endpointConfiguration = new EndpointConfiguration("Receiver.Service");
var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
transport.UseConventionalRoutingTopology();
transport.ConnectionString("host=rabbitmq.sb.xxxx.xxx");
RabbitMQ's LDAP support requires that client applications pass a username and password; there is no equivalent of SQL Server's integrated security.
In your case, users must have a DN whose value ends with OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx, so your NServiceBus application will have to pass the username and password of an account with such a DN.
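With the NServiceBus RabbitMQ transport, those credentials can go straight into the connection string; a sketch with a hypothetical account name and password:
transport.ConnectionString("host=rabbitmq.sb.xxxx.xxx;username=svc-nservicebus;password=secret");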
https://www.rabbitmq.com/ldap.html
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

How can I use the Chef JSON to set a Redis and Sidekiq configuration

I'm using AWS OpsWorks for a Rails application with Redis and Sidekiq and would like to do the following:
Override the maxmemory config for redis
Only run Redis & Sidekiq on a selected EC2 instance
My current JSON config only has the database.yml overrides:
{
  "deploy": {
    "appname": {
      "database": {
        "username": "user",
        "password": "password",
        "database": "db_production",
        "host": "db.host.com",
        "adapter": "mysql2"
      }
    }
  }
}
Override the maxmemory config for redis
Take a look and see whether your Redis cookbook of choice gives you an attribute for that / a way to provide custom config values. I know the main redisio one lets you set config values, as I do on my stacks (I set the path to the on-disk cache, I believe).
Only run Redis & Sidekiq on a selected EC2 instance
This part is easy: create a Layer for Redis (or Redis/Sidekiq) and add an instance to that layer.
Now, because Redis is on a different instance than your Rails server, you won't necessarily know the IP address of your Redis server, and you'll probably want to use the internal EC2 address rather than the public one (the internal address keeps you inside the default firewall).
So what you'll probably need to do is write a custom cookbook for your app, if you haven't already. In its attributes/default.rb, write some code like this:
redis_instance_details = nil
redis_stack_name = "REDIS"
redis_instance_name, redis_instance_details = node["opsworks"]["layers"][redis_stack_name]["instances"].first
redis_server_dns = "127.0.0.1"
if redis_instance_details
  redis_server_dns = redis_instance_details["private_dns_name"]
end
Then later in the attributes file, use redis_server_dns to set your Redis config, for example:
default[:deploy][:appname][:environment_variables][:REDIS_URL] = "redis://#{redis_server_dns}:#{redis_port_number}"
Hope this helps!
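One follow-up on the Sidekiq side: Sidekiq picks up REDIS_URL from the environment by default, so the attribute above may be all you need. If you'd rather make the wiring explicit, a minimal initializer (the file path is just the usual Rails convention) could look like:

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end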

RabbitMQ Shovel Frozen 'Running'

I have a RabbitMQ shovel that I have been using for some time.
I have a PC '192.168.7.1' that is shovelling messages from another PC '192.168.7.6'. This works unless '192.168.7.6' reboots; then the shovel on '192.168.7.1' stays in the RUNNING state, never again receives messages, and never reconnects. So messages just buffer up on '192.168.7.6' indefinitely.
Here is an excerpt from my config file showing the shovel config:
[{rabbit, [{disk_free_limit, {mem_relative, 1.0}}]},
 {rabbitmq_shovel,
  [{shovels,
    [{backbone_shovel,
      [{sources,
        [{brokers, ["amqp://guest:guest@192.168.7.6"]},
         {'queue.declare', [{queue, <<"backbone">>},
                            durable]}]},
       {destinations,
        [{broker, "amqp://guest:guest@localhost"},
         {'queue.declare', [{queue, <<"backbone">>},
                            durable]}]},
       {queue, <<"backbone">>},
       {ack_mode, on_confirm},
       {reconnect_delay, 5}
      ]},
Here is an excerpt from the RabbitMQ shovel management plugin while the source (192.168.7.6) is rebooting:
backbone_shovel: running (since 2012-11-26 11:03:51)
  source:      type: network, virtual_host: /, host: 192.168.7.6, username: guest, ssl: false
  destination: type: network, virtual_host: /, host: localhost, username: guest, ssl: false
How can I force a shovel to restart when the source dies?
Answering my own question, in case anyone else has the same problem.
The simple solution is to put the shovel on the client (PC '192.168.7.6') instead of the server (PC '192.168.7.1'). Then if the client restarts, the shovel restarts with it, so the shovel never gets out of sync with the state of the client.
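An alternative, if the shovel has to stay on the server side, is to make the dead connection detectable: the Erlang AMQP client accepts a heartbeat query parameter in the broker URI, so the source could be declared as, say (the interval here is a guess):
{brokers, ["amqp://guest:guest@192.168.7.6?heartbeat=5"]}
With heartbeats enabled, the shovel should notice the dead peer and go through its reconnect_delay cycle instead of sitting in RUNNING forever.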

rabbitmq shovel state always starting

With the following RabbitMQ config:
[{mnesia, [{dump_log_write_threshold, 100}]},
 {rabbit, [{vm_memory_high_watermark, 0.4}]},
 {rabbitmq_shovel,
  [{shovels,
    [{devShovel,
      [{sources, [{broker, "amqp://shoveluser:shoveluser@server2:5672"}]},
       {destinations, [{broker, "amqp://shoveluser:shoveluser@localhost:5672"}]},
       {queue, <<"queue">>},
       {publish_fields, [{exchange, <<"DataExchange">>}]}
      ]}
   ]}
  ]}
].
and all of the relevant queues/exchanges declared, I am able to start my RabbitMQ server. However, when I check the shovel management plugin, it always displays 'starting' as the state of the shovel. What causes this, and is there any way to get more info?
Make sure the user is set up correctly on both brokers; a shovel that is stuck in 'starting' usually cannot establish or authenticate one of its connections.
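For more detail than the management page shows, you can ask the broker for the shovel's internal status, which includes a reason when a shovel cannot get past 'starting' (the function ships with the shovel plugin):
rabbitmqctl eval 'rabbit_shovel_status:status().'
Beyond that, check the RabbitMQ logs on both brokers for authentication or connection errors.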