I'm looking to bind a queue to multiple exchanges using the @RabbitListener annotation, but so far I have been unsuccessful.
What I have right now is:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "${subscriber.queueInbound}", durable = "true", autoDelete = "false", exclusive = "false"),
        exchange = @Exchange(value = "all", durable = "true")),
    containerFactory = "subscriberRabbitListenerContainerFactory")
public void onMessage(Message message, Channel channel) {
    // do something
}
On startup/reconnect this auto-creates the queue defined by subscriber.queueInbound and binds it to a default all exchange.
I then have a job that runs in the background to properly configure this queue and bind it to the multiple exchanges it needs.
I'm looking for a more elegant way of doing this, either through @RabbitListener or by adjusting it so that upon re-connection the queue is configured appropriately before listening resumes.
Originally I was doing the queue configuration through beans; however, this prevented the application from starting if RabbitMQ was not available. I resolved that, but it then resulted in the application starting up without the queue configuration steps being performed.
@RabbitListener(bindings = {
        @QueueBinding(value = @Queue(value = "foo"), exchange = @Exchange("ex1"), key = "foo"),
        @QueueBinding(value = @Queue(value = "foo"), exchange = @Exchange("ex2"), key = "bar")
})
public void listen(String in) {
}
Originally I was doing the queue configuration through beans; however, this prevented the application from starting if RabbitMQ was not available. I resolved that, but it then resulted in the application starting up without the queue configuration steps being performed.
That implies you were doing something "illegal" during context initialization. You should not try to talk to RabbitMQ until the context is fully built.
Beans are only declared on the broker when the connection is first opened.
I am trying to consume messages from an existing queue that is bound to a direct exchange (created with an exchange and routing key). I have only the exchange name and the routing key, not the queue name. There is support for this in plain Java, but I could not find anything for Spring Boot.
@RabbitListener
@RabbitHandler
public void consumeMessage(Object message) {
    LOGGER.debug("Message Consumed.... : {}", message.toString());
}
How can I consume messages using the routing key and exchange name rather than the queue name, given that @RabbitListener asks for a queue?
Consumers consume from queues, not exchanges. You must bind a queue to the exchange with the routing key.
EDIT
There are several ways to automatically declare a queue on the broker.
@RabbitListener(bindings = @QueueBinding(
        exchange = @Exchange("myExchange"), key = "myRk", value = @Queue("")))
public void listen(String in) {
    System.out.println(in);
}
This will bind an anonymous (auto-delete) queue, which will be deleted when the application is stopped.
@RabbitListener(bindings = @QueueBinding(
        exchange = @Exchange("myExchange"), key = "myRk", value = @Queue("foo")))
public void listen(String in) {
    System.out.println(in);
}
This will bind a permanent queue foo to the exchange with the routing key.
You can also simply declare @Beans for the queue, exchange, and binding.
See Configuring the broker.
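For reference, here is a minimal sketch of that bean-based approach, reusing the names from the examples above (foo, myExchange, myRk); the class name BrokerConfig is illustrative. The admin declares these on the broker when the first connection is opened, not at context initialization:
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BrokerConfig {

    @Bean
    public Queue fooQueue() {
        // durable queue named "foo"
        return new Queue("foo", true);
    }

    @Bean
    public DirectExchange myExchange() {
        return new DirectExchange("myExchange");
    }

    @Bean
    public Binding fooBinding(Queue fooQueue, DirectExchange myExchange) {
        // bind the queue to the exchange with routing key "myRk"
        return BindingBuilder.bind(fooQueue).to(myExchange).with("myRk");
    }
}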
I'm working with 2 .NET Core console applications in a producer/consumer scenario with MassTransit/RabbitMQ. I need to ensure that even if NO consumers are up-and-running, the messages from the producer are still queued up successfully. That didn't seem to work with Publish() - the messages just disappeared, so I'm using Send() instead. The messages at least get queued up, but without any consumers running the messages all end up in the "_skipped" queue.
So that's my first question: is this the right approach based on the requirement (even if NO consumers are up-and-running, the messages from the producer are still queued up successfully)?
With Send(), my consumer does indeed work, but many messages are still falling through the cracks and getting dumped into the "_skipped" queue. The consumer's logic is minimal (just logging the message at the moment), so it's not a long-running process.
So that's my second question: why are so many messages still getting dumped into the "_skipped" queue?
And that leads into my third question: does this mean my consumer needs to listen to the "_skipped" queue as well?
I am unsure what code you need to see for this question, but here's a screenshot from the RabbitMQ management UI:
Producer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
return Host.CreateDefaultBuilder()
.ConfigureServices((hostContext, services) =>
{
services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
services.AddMassTransit(cfg =>
{
cfg.AddBus(ConfigureBus);
});
services.AddHostedService<CardMessageProducer>();
})
.UseConsoleLifetime()
.UseSerilog();
}
static IBusControl ConfigureBus(IServiceProvider provider)
{
var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;
return Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
{
h.Username(options.RabbitMQ_Username);
h.Password(options.RabbitMQ_Password);
});
cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
{
EndpointConvention.Map<CardMessage>(e.InputAddress);
});
});
}
Producer code:
Bus.Send(message);
Consumer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
return Host.CreateDefaultBuilder()
.ConfigureServices((hostContext, services) =>
{
services.AddSingleton<CardMessageConsumer>();
services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
services.AddMassTransit(cfg =>
{
cfg.AddBus(ConfigureBus);
});
services.AddHostedService<MassTransitHostedService>();
})
.UseConsoleLifetime()
.UseSerilog();
}
static IBusControl ConfigureBus(IServiceProvider provider)
{
var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;
return Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
{
h.Username(options.RabbitMQ_Username);
h.Password(options.RabbitMQ_Password);
});
cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
{
e.Consumer<CardMessageConsumer>(provider);
});
//cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName + "_skipped", e =>
//{
// e.Consumer<CardMessageConsumer>(provider);
//});
});
}
Consumer code:
class CardMessageConsumer : IConsumer<CardMessage>
{
private readonly ILogger<CardMessageConsumer> logger;
private readonly ApplicationConfiguration configuration;
private long counter;
public CardMessageConsumer(ILogger<CardMessageConsumer> logger, IOptions<ApplicationConfiguration> options)
{
this.logger = logger;
this.configuration = options.Value;
}
public async Task Consume(ConsumeContext<CardMessage> context)
{
this.counter++;
this.logger.LogTrace($"Message #{this.counter} consumed: {context.Message}");
}
}
In MassTransit, the _skipped queue is the implementation of the dead letter queue concept. Messages get there because they don't get consumed.
MassTransit with RabbitMQ always delivers a message to an exchange, not to a queue. By default, each MassTransit endpoint creates a queue with the endpoint name (if no such queue exists), an exchange with the same name, and binds them together. When the application has a configured consumer (or handler), an exchange for that message type (using the message type as the exchange name) also gets created, and the endpoint exchange gets bound to the message type exchange.
So, when you use Publish, the message is published to the message type exchange and gets delivered accordingly, using the endpoint binding (or bindings). When you use Send, the message type exchange is not used, so the message goes directly to the destination exchange.
And, as @maldworth correctly stated, every MassTransit endpoint only expects to get messages that it can consume. If it doesn't know how to consume a message, the message is moved to the dead letter queue. This, like the poison message queue, is a fundamental messaging pattern.
If you need messages to queue up to be consumed later, the best way is to have the wiring set up, but the endpoint itself (I mean the application) should not be running. As soon as the application starts, it will consume all queued messages.
When the consumer starts the bus (bus.Start()), one of the things it does is create all the exchanges and queues for the transport. If you have a requirement that publish/send happens before the consumer is running, your only option is to run DeployTopologyOnly. Unfortunately this feature is not documented in the official docs, but the unit tests are here: https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport.Tests/BuildTopology_Specs.cs
The skipped queue happens when messages are sent to a consumer that doesn't know how to process them.
For example, say you have a consumer that implements IConsumer<MyMessageA> on a receive endpoint named "my-queue-a", but your message producer does Send<MyMessageB>(Uri("my-queue-a")...). This is a problem: the endpoint only understands A and doesn't know how to process B, so it just moves the message to the skipped queue and continues on.
In my case, the same queue is consumed by multiple consumers at the same time.
I have four exact replicas of a service that, among other things, catch messages from a certain queue using Apache Camel RabbitMQ endpoints. Each route looks like this:
//Start Process from RabbitMQ queue
from("rabbitmq://" +
System.getenv("ADVERTISE_ADDRESS") +
"/" +
System.getenv("RABBITMQ_EXCHANGE_NAME") +
"?routingKey=" +
System.getenv("RABBITMQ_ROUTING_KEY") +
"&autoAck=true")
.process(exchange -> exchange.getIn().setBody(exchange.getIn().getBody()))
.unmarshal().json(JsonLibrary.Jackson, TwitterBean.class)
.transform().method(ResponseTransformer.class, "transformtwitterBean")
.marshal().json(JsonLibrary.Jackson)
.setHeader(Exchange.HTTP_METHOD, constant("POST"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
.to("http4://" + System.getenv("ADVERTISE_ADDRESS") + ":" + System.getenv("CAMUNDA_PORT") + "/rest/process-definition/key/MainProcess/start")
.log("Response: ${body}");
Right now each endpoint processes the message, even though the "concurrent consumers" option is one by default.
I assumed that maybe my messages weren't being acknowledged, so I set the autoAck option to true.
This didn't help. How can I make these services competing consumers?
EDIT:
A code snippet from the configuration of my publisher app:
@Configuration
public class RabbitMqConfig {

    @Bean
    Queue queue() {
        return new Queue(System.getenv("RABBITMQ_QUEUE_NAME"), true);
    }

    @Bean
    DirectExchange exchange() {
        return new DirectExchange(System.getenv("RABBITMQ_EXCHANGE_NAME"), true, true);
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(System.getenv("RABBITMQ_ROUTING_KEY"));
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    public AmqpTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(jsonMessageConverter());
        return rabbitTemplate;
    }
}
The issue you have is that you're not naming your queue on the service side.
Based on the Apache Camel RabbitMQ documentation, this means that a random name is generated for the queue.
So:
you have a publisher that sends a message to an exchange
then each of your services creates a queue with a random name and binds it to the exchange
Each service having its own queue, bound to the same exchange, will get the same messages.
To avoid this you need to provide a queue name, so that each service connects to the same queue and shares message consumption with the other service instances.
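As an illustration, the consumer URI could name the queue via the Camel RabbitMQ component's queue option; this sketch assumes an environment variable RABBITMQ_QUEUE_NAME (which the publisher config above already uses) and is not the only way to set it:
// Naming the queue makes all four replicas consume from the same queue,
// so the broker hands each message to only one of them.
from("rabbitmq://" +
        System.getenv("ADVERTISE_ADDRESS") +
        "/" +
        System.getenv("RABBITMQ_EXCHANGE_NAME") +
        "?routingKey=" + System.getenv("RABBITMQ_ROUTING_KEY") +
        "&queue=" + System.getenv("RABBITMQ_QUEUE_NAME") +
        "&autoAck=true")
    .log("Received: ${body}");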
Sounds like you don't have a Queue, but a Topic. See here for a comparison.
The message broker is responsible for giving a queue message to only one consumer, no matter how many of them are present.
As far as I know, ActiveMQ has a feature called AUTO Acknowledge that informs the broker that a message has been received (it does not acknowledge the producer).
I want to know whether it is possible to send an acknowledgement from the consumer to the producer in ActiveMQ or RabbitMQ. I then want to handle the acknowledgement message in the producer and, if no acknowledgement is received, send the message to the consumer again.
You want to perform a synchronous use case over an asynchronous medium.
In RabbitMQ's case you can use RPC, as described here - https://www.rabbitmq.com/tutorials/tutorial-six-python.html
and
https://www.rabbitmq.com/direct-reply-to.html
Please notice that even the authors advise avoiding it:
When in doubt avoid RPC. If you can, you should use an asynchronous pipeline - instead of RPC-like blocking, results are asynchronously pushed to a next computation stage.
The RabbitMQ Java client provides auto-acking through com.rabbitmq.client.Channel.basicConsume.
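For completeness, a minimal auto-ack consumer sketch with the RabbitMQ Java client; the queue name task_queue and host are illustrative assumptions:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class AutoAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // durable, non-exclusive, non-auto-delete queue
        channel.queueDeclare("task_queue", true, false, false, null);

        DeliverCallback deliverCallback = (consumerTag, delivery) ->
                System.out.println("Received: " + new String(delivery.getBody(), "UTF-8"));

        // autoAck = true: the broker considers the message acknowledged as soon as it is delivered
        channel.basicConsume("task_queue", true, deliverCallback, consumerTag -> { });
    }
}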
At least for ActiveMQ - this is built in. You have to turn it on in activemq.xml
<policyEntry queue=">" advisoryForConsumed="true"/>
Simply listen to the advisory topic for the queue whose consumed messages you want to monitor. Then you can extract message IDs and so on to "tick off" outstanding requests.
For a complete end-to-end acknowledgement, I recommend something more custom, i.e. your producer app should listen to some "response" queue that receives responses about the status of the produced message. For example, if processing failed, you may want to know why.
Anyway, here is some code with a producer that also listens to acknowledgements from ActiveMQ.
private Connection conn;
private Session sess;

public void run() throws Exception {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
    conn = cf.createConnection();
    sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    Destination dest = sess.createQueue("duck");
    // Subscribe to the "message consumed" advisory topic for the queue
    MessageConsumer mc = sess.createConsumer(AdvisorySupport.getMessageConsumedAdvisoryTopic(dest));
    mc.setMessageListener(this);
    conn.start();
    MessageProducer mp = sess.createProducer(sess.createQueue("duck"));
    mp.send(sess.createTextMessage("quack"));
}

public void onMessage(Message msg) {
    try {
        // The advisory message carries the id of the consumed message
        String msgId = msg.getStringProperty("orignalMessageId");
        System.out.println("Msg: " + msgId + " consumed");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
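For the more custom end-to-end route mentioned above, here is a rough sketch of the idea; the class name, queue names ("duck", "duck.response"), and the timeout are illustrative assumptions, not part of the original answer:
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// The producer sends a request and waits for a status reply on a "response" queue,
// which the consuming application is expected to populate after processing.
public class EndToEndAckProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection conn = cf.createConnection();
        Session sess = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        conn.start();

        MessageProducer producer = sess.createProducer(sess.createQueue("duck"));
        MessageConsumer replyConsumer = sess.createConsumer(sess.createQueue("duck.response"));

        TextMessage request = sess.createTextMessage("quack");
        request.setJMSCorrelationID(java.util.UUID.randomUUID().toString());
        producer.send(request);

        // In a real system you would narrow this with a message selector on JMSCorrelationID;
        // here we simply wait up to 5 seconds for any status reply.
        Message reply = replyConsumer.receive(5000);
        if (reply == null) {
            // No acknowledgement received in time: resend the original message or raise an alert.
            producer.send(request);
        }

        conn.close();
    }
}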
I get the following error when deploying my application with a JMS producer and consumer:
com.sun.enterprise.connectors.ConnectorRuntimeException: JMS resource not created : QueueName
I used the annotations below:
Producer
@Resource(name = "jms/EmailNotificationQueue", mappedName = "EmailNotificationQueue")
private Destination destination;

@Resource(name = "jms/QueueConnectionFactory")
private ConnectionFactory connectionFactory;
I then create the connection and start it before sending the message
Consumer
@MessageDriven(name = "EmailNotificationBean",
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationName", propertyValue = "EmailNotificationQueue"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "CLIENT_ACKNOWLEDGE")
    },
    mappedName = "EmailNotificationQueue")
Have you manually created the Destination?
Log into the admin console, expand Resource, JMS Resources, then Destination Resources. You'll probably need to create a connection factory as well.