MassTransit: Using default RabbitMQ exchange?

I'm trying to use the default exchange with MassTransit and RabbitMQ.
With the plain RabbitMQ client library this was fine, since it is the default behaviour, but MassTransit seems to make it difficult.
I found out how to configure a custom exchange name:
...
serviceCollection.AddMassTransit(x =>
{
    x.UsingRabbitMq((rabbitContext, rabbitConfig) =>
    {
        rabbitConfig.Host(config.GetConnectionString("RabbitMq"));

        Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Message<TestMessage>(x => x.SetEntityName(""));
        });

        rabbitConfig.ConfigureEndpoints(rabbitContext);
        rabbitConfig.Durable = true;
    });
});
Then I get:
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Peer, code=403, text='ACCESS_REFUSED - operation not permitted on the default exchange', classId=40, methodId=10
Operation not permitted? I'm a little confused here - why is this so troublesome? Am I doing something wrong? I only want one consumer to ever pick up a given message from the queue, so the fanout exchange MassTransit creates by default doesn't work for me here.
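Note: the 403 comes from RabbitMQ itself - declaring or binding the default ("") exchange is not permitted, which is why an empty entity name fails. A named, non-fanout exchange per message type is the more usual route. A rough sketch of that configuration (exchange name and type are illustrative, and the Message/Publish calls go on the active rabbitConfig rather than on a separate Bus.Factory.CreateUsingRabbitMq):

serviceCollection.AddMassTransit(x =>
{
    x.UsingRabbitMq((rabbitContext, rabbitConfig) =>
    {
        rabbitConfig.Host(config.GetConnectionString("RabbitMq"));

        // Name the exchange for this message type and make it direct instead of fanout.
        // ExchangeType.Direct comes from RabbitMQ.Client.
        rabbitConfig.Message<TestMessage>(m => m.SetEntityName("test-message"));
        rabbitConfig.Publish<TestMessage>(p => p.ExchangeType = ExchangeType.Direct);

        rabbitConfig.ConfigureEndpoints(rabbitContext);
    });
});

Also note that competing consumers on a single queue already ensure only one consumer instance receives each message, whatever exchange type routes into that queue.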

Related

How to set unique queue name for ActiveMQ in MassTransit?

In the StartUp of the project, I configure MassTransit with ActiveMQ as follows. But when I run it, two queues are created for me: one is event-listener and the other has a generated name.
When I publish information, it goes into the queues generated by the system.
But I want the information to be published to the event-listener queue that I configured.
Please guide me.
services.AddMassTransit(x =>
{
    x.AddConsumer<EventConsumer>();

    x.UsingActiveMq((context, cfg) =>
    {
        cfg.Host("localhost", h =>
        {
            h.Username("admin");
            h.Password("admin");
        });

        cfg.ReceiveEndpoint("event-listener", e =>
        {
            e.ConfigureConsumer<EventConsumer>(context);
        });
    });
});
MassTransit will only create queues for configured consumers, or explicitly configured receive endpoints. In the code above, the only queue created would be called event-listener. For each message type consumed by the consumer, a topic is created and a virtual topic consumer is created so that the receive endpoint can consume messages of each type.
When messages are published, a topic is created for each published message type.
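For illustration, a consumer shaped like the one registered in the question is what drives that topic creation (SomeEvent is a hypothetical message type, not from the question):

using System;
using System.Threading.Tasks;
using MassTransit;

// A topic is created for SomeEvent, plus a virtual topic consumer so the
// event-listener endpoint receives messages of this type.
public class SomeEvent
{
    public string Text { get; set; }
}

public class EventConsumer : IConsumer<SomeEvent>
{
    public Task Consume(ConsumeContext<SomeEvent> context)
    {
        Console.WriteLine(context.Message.Text);
        return Task.CompletedTask;
    }
}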
If you want to send a message directly to a queue, instead of publishing:
var provider = serviceProvider.GetRequiredService<ISendEndpointProvider>();
var endpoint = await provider.GetSendEndpoint(new Uri("queue:event-listener"));
await endpoint.Send(...);

Logstash RabbitMQ input plugin - How to define multiple routing_key

I'm using the Logstash RabbitMQ input plugin to pull from RabbitMQ and push to Elasticsearch using this pipeline:
input {
  rabbitmq {
    queue => "Elasticsearch_Queue"
    host => "rabbitmq"
    exchange => "my_event_bus"
    key => "SomeIntegrationEvent"
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
  stdout { codec => rubydebug }
}
It works fine and creates a queue named Elasticsearch_Queue and binds to my_event_bus exchange using routing_key SomeIntegrationEvent.
I need to subscribe to multiple events, but there is no clear solution in the plugin docs.
We have to add multiple entries with the same queue and exchange but a different key, as below:
input {
  rabbitmq {
    queue => "Elasticsearch_Queue"
    host => "rabbitmq"
    exchange => "my_event_bus"
    key => "SomeIntegrationEvent1"
  }
  rabbitmq {
    queue => "Elasticsearch_Queue"
    host => "rabbitmq"
    exchange => "my_event_bus"
    key => "SomeIntegrationEvent2"
  }
}
I got the answer from the Elastic forums.
Since the key must be a string, you can't explicitly bind the queue to the exchange using multiple keys.
The suggestion is to make my_event_bus a fanout exchange, which will route every message delivered to this exchange to the bound queue.
Then define a second direct exchange that you manually bind to the first exchange multiple times, using the routing keys (= events) you are interested in.
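One way to realize that manual binding, outside Logstash, is a short snippet against the RabbitMQ .NET client. This is only a sketch: the logstash_events exchange name is made up, it is declared fanout here so that everything it receives lands in the queue Logstash binds to it, and it assumes the RabbitMQ.Client 6.x API. The Logstash input's exchange setting would then point at the new exchange.

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "rabbitmq" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// The exchange the Logstash queue will bind to (fanout, so no key is needed there).
channel.ExchangeDeclare("logstash_events", ExchangeType.Fanout, true);

// Bind it to the event bus exchange once per event type of interest
// (arguments are: destination, source, routing key).
foreach (var key in new[] { "SomeIntegrationEvent1", "SomeIntegrationEvent2" })
{
    channel.ExchangeBind("logstash_events", "my_event_bus", key);
}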

Keep messages in queue while consumer is offline

I use MassTransit in a C# project.
I have publisher and consumer services, and when both of them are up there are no problems. But if the consumer goes offline, published messages don't go to the queue. They just disappear.
The expected behavior is to keep messages in the queue until the consumer is started, and then deliver them to it. I've found several threads in Google Groups with the same question, but it wasn't clear to me how to solve the problem.
It seems strange to me that this functionality isn't provided out of the box because, in my understanding, it is the main purpose of RabbitMQ and MT.
The way I create the publisher bus:
public static IBusControl CreateBus()
{
    return Bus.Factory.CreateUsingRabbitMq(sbc =>
    {
        var host = sbc.Host(new Uri("rabbitmq://RMQ-TEST"), h =>
        {
            h.Username("test");
            h.Password("test");
        });

        sbc.ReceiveEndpoint(host, "test_queue", ep =>
        {
            ep.Handler<IProductDescriptionChangedEvent>(
                content => content.CompleteTask);
        });
    });
}
And the consumer:
public static void StartRmqBus()
{
    var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri("rabbitmq://RMQ-TEST"), h =>
        {
            h.Username("test");
            h.Password("test");
        });

        cfg.ReceiveEndpoint(host, "test_queue", ep =>
        {
            ep.Consumer<ProductChangedConsumer>();
        });
    });

    bus.Start();
}
EDIT:
Here is one more interesting observation: if I stop both services and manually put a message into the queue via the management interface, the message waits in test_queue. But when I start the publisher or the consumer service, it ends up in the test_queue_error queue.
You use the same queue for the publisher and the consumer, plus the publisher has a consumer for this message type, as you pointed out in your own answer.
If your publisher does not consume messages, it is better to remove the receive endpoint from it altogether; your service will then be send-only.
If you have several services, where each of them needs its own consumer for the same message type - this is how pub-sub works and you must have a different queue per service. This is described in the Common Gotchas section of the documentation. In that scenario, each service gets its own copy of the published message.
If you have one queue - you get competing consumers. That scenario is only valid for horizontal scaling, where you run several instances of the same service to increase throughput when processing is too slow; all these instances consume from the same queue, and only one instance will get each message.
It seems like my publisher was set up incorrectly. After removing this part:
sbc.ReceiveEndpoint(host, "test_queue", ep =>
{
    ep.Handler<IProductDescriptionChangedEvent>(
        content => content.CompleteTask);
});
it started to work as expected. It looks like the publisher consumed its own messages, which is why I didn't see messages in the queue when the consumer was down.
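For reference, the publisher from the question, reduced to a send/publish-only bus, then looks roughly like this:

public static IBusControl CreateBus()
{
    // No receive endpoint: the publisher no longer competes with the
    // consumer service for messages on test_queue.
    return Bus.Factory.CreateUsingRabbitMq(sbc =>
    {
        sbc.Host(new Uri("rabbitmq://RMQ-TEST"), h =>
        {
            h.Username("test");
            h.Password("test");
        });
    });
}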

CreateQueues and Exchanges - MassTransit

I'm using MassTransit with RabbitMQ. Publishing messages with MassTransit will create an exchange for my message type, and a MassTransit consumer will create queues and bindings to an exchange. Great, makes things easy.
Before looking at MassTransit I used RabbitMQ's API to create queues, exchanges and bindings. I would have both the publisher and the consumers run the same setup code, so no matter which part of the application ran first, all queues, exchanges and bindings would be created. This was great when running in a development environment.
I was wondering if something similar could be achieved with massTransit?
With MassTransit it should be the same: consumers will create queues bound to the exchanges of the messages they consume (with names matching the message types).
Publishers will create exchanges named after the types of the messages they publish.
Remember that if the published or consumed messages have superclasses or implement interfaces, MassTransit will create the same hierarchy, creating and binding as many exchanges as your message class hierarchy has.
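As a rough illustration of the hierarchy point (hypothetical types, not from the question):

// Publishing OrderSubmitted creates an OrderSubmitted exchange bound to an
// IOrderEvent exchange, so a message published as OrderSubmitted also reaches
// consumers that subscribe to IOrderEvent.
public interface IOrderEvent
{
    string OrderId { get; }
}

public class OrderSubmitted : IOrderEvent
{
    public string OrderId { get; set; }
}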
You could use HareDu 2 to achieve this with the code below. It works with both Autofac and .NET Core DI. Check the docs here: https://github.com/ahives/HareDu2
// Create a queue
var result = _container.Resolve<IBrokerObjectFactory>()
    .Object<Queue>()
    .Create(x =>
    {
        x.Queue("fake_queue");
        x.Configure(c =>
        {
            c.IsDurable();
            c.AutoDeleteWhenNotInUse();
            c.HasArguments(arg =>
            {
                arg.SetQueueExpiration(1000);
                arg.SetPerQueuedMessageExpiration(2000);
            });
        });
        x.Targeting(t =>
        {
            t.VirtualHost("fake_vhost");
            t.Node("fake_node");
        });
    });

// Create an exchange
var result = _container.Resolve<IBrokerObjectFactory>()
    .Object<Exchange>()
    .Create(x =>
    {
        x.Exchange("fake_exchange");
        x.Configure(c =>
        {
            c.IsDurable();
            c.IsForInternalUse();
            c.HasRoutingType(ExchangeRoutingType.Fanout);
            c.HasArguments(arg =>
            {
                arg.Set("fake_arg", "fake_arg_value");
            });
        });
        x.Targeting(t => t.VirtualHost("fake_vhost"));
    });

// Create a binding
var result = _container.Resolve<IBrokerObjectFactory>()
    .Object<Binding>()
    .Create(x =>
    {
        x.Binding(b =>
        {
            b.Source("fake_exchange");
            b.Destination("fake_queue");
            b.Type(BindingType.Exchange);
        });
        x.Configure(c =>
        {
            c.HasRoutingKey("your_routing_key");
            c.HasArguments(arg =>
            {
                arg.Set("your_arg", "your_arg_value");
            });
        });
        x.Targeting(t => t.VirtualHost("fake_vhost"));
    });

How to tell which amqp message was not routed from basic.return response?

I'm using RabbitMQ with the node-amqp lib. I'm publishing messages with the mandatory flag set, and when there is no route to any queue, RabbitMQ responds with basic.return as in the specification.
My problem is that, as far as I can tell, basic.return is asynchronous and does not contain any information about which message no queue was found for (even when the exchange is in confirm mode). How am I supposed to tell which message was returned?
node-amqp emits a 'basic-return' event on receiving the basic.return from AMQP. The only thing of any use there is the routing key. Since all messages with the same routing key are routed the same way, I assumed that once I get a basic.return for a specific routing key, all messages with that routing key can be considered undelivered:
function deliver(routing_key, message, exchange, resolve, reject) {
    var failed_delivery = function (ret) {
        if (ret.routingKey == routing_key) {
            exchange.removeListener('basic-return', failed_delivery);
            reject(new Error('failed to deliver'));
        }
    };

    exchange.on('basic-return', failed_delivery);
    exchange.publish(
        routing_key,
        message,
        {
            deliveryMode: 1, // non-persistent
            mandatory: true
        },
        function (error_occurred, error) {
            exchange.removeListener('basic-return', failed_delivery);
            if (error_occurred) {
                reject(error);
            } else {
                resolve();
            }
        });
}
I read the AMQP spec, because I've used Basic.Return without a problem before, but I'm also using the .NET client. I looked through the documentation on node-amqp, and I can't even see that it implements Basic.Return.
In any event, the server does respond with the full message when it could not be routed. You may consider switching to a different Node.js library; for example, amqplib does have this feature (exposed as Channel#on('return', function(msg) {...})).
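For comparison, with the .NET client mentioned above the returned message arrives through the BasicReturn event, body and routing key included. A rough sketch against the RabbitMQ.Client 6.x synchronous API (exchange name and routing key are made up):

using System;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Fired when a mandatory message could not be routed to any queue;
// the event args carry the exchange, routing key, reply text and the full body.
channel.BasicReturn += (sender, args) =>
{
    var body = Encoding.UTF8.GetString(args.Body.ToArray());
    Console.WriteLine($"Returned: key={args.RoutingKey}, reason={args.ReplyText}, body={body}");
};

channel.BasicPublish("my_exchange", "no_such_route",
    mandatory: true,
    basicProperties: channel.CreateBasicProperties(),
    body: Encoding.UTF8.GetBytes("hello"));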