I have four exact replicas of a service that, among other things, consume messages from a certain queue using Apache Camel RabbitMQ endpoints. Each route looks like this:
// Start the process from a RabbitMQ queue
from("rabbitmq://" +
        System.getenv("ADVERTISE_ADDRESS") +
        "/" +
        System.getenv("RABBITMQ_EXCHANGE_NAME") +
        "?routingKey=" +
        System.getenv("RABBITMQ_ROUTING_KEY") +
        "&autoAck=true")
    .process(exchange -> exchange.getIn().setBody(exchange.getIn().getBody()))
    .unmarshal().json(JsonLibrary.Jackson, TwitterBean.class)
    .transform().method(ResponseTransformer.class, "transformtwitterBean")
    .marshal().json(JsonLibrary.Jackson)
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    .to("http4://" + System.getenv("ADVERTISE_ADDRESS") + ":" + System.getenv("CAMUNDA_PORT") + "/rest/process-definition/key/MainProcess/start")
    .log("Response: ${body}");
Right now every instance processes each message, even though the concurrent-consumers option defaults to one.
I assumed that maybe my messages weren't being acknowledged, so I set the autoAck option to true.
This didn't help. How can I make these services competing consumers?
EDIT:
A code snippet from the configuration of my publisher app:
@Configuration
public class RabbitMqConfig {

    @Bean
    Queue queue() {
        return new Queue(System.getenv("RABBITMQ_QUEUE_NAME"), true);
    }

    @Bean
    DirectExchange exchange() {
        return new DirectExchange(System.getenv("RABBITMQ_EXCHANGE_NAME"), true, true);
    }

    @Bean
    Binding binding(Queue queue, DirectExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(System.getenv("RABBITMQ_ROUTING_KEY"));
    }

    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    public AmqpTemplate rabbitTemplate(ConnectionFactory connectionFactory) {
        final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
        rabbitTemplate.setMessageConverter(jsonMessageConverter());
        return rabbitTemplate;
    }
}
The issue is that you're not naming your queue on the service side.
Based on the Apache Camel RabbitMQ documentation, this means that a random name is generated for the queue.
So:
you have a publisher that sends a message to an exchange
then each of your services creates a queue with a random name and binds it to the exchange
Each service, having its own queue bound to the same exchange, receives the same messages.
To avoid this you need to provide a queue name, so that every service connects to the same queue and shares message consumption with the other service instances.
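For example, something along these lines (a sketch only; it reuses the environment variables from your route and assumes RABBITMQ_QUEUE_NAME, which your publisher already uses when declaring the queue, is set to the same value in all four replicas; the camel-rabbitmq queue option pins the consumer to that named queue):

from("rabbitmq://" + System.getenv("ADVERTISE_ADDRESS")
        + "/" + System.getenv("RABBITMQ_EXCHANGE_NAME")
        + "?routingKey=" + System.getenv("RABBITMQ_ROUTING_KEY")
        + "&queue=" + System.getenv("RABBITMQ_QUEUE_NAME")   // shared, named queue
        + "&autoAck=true")
    // ... rest of the route unchanged

With all four replicas consuming from the same named queue, RabbitMQ delivers each message to only one of them.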
Sounds like you don't have a Queue, but a Topic. See here for a comparison.
The message broker is responsible for giving a queued message to only one consumer, no matter how many of them are present.
I'd like to know what is the canonical way to handle errors in the following situation (code is a minimal working example):
Messages are sent through a messaging gateway which defines its defaultRequestChannel and a #Gateway method:
@MessagingGateway(name = MY_GATEWAY, defaultRequestChannel = INPUT_CHANNEL)
public interface MyGateway
{
    @Gateway
    public void sendMessage(String message);
}
Messages are read from the channel and sent through an AMQP outbound adapter:
@Bean
public IntegrationFlow apiMutuaInputFlow()
{
    return IntegrationFlows
        .from(INPUT_CHANNEL)
        .handle(Amqp.outboundAdapter(rabbitConfig.myTemplate()))
        .get();
}
The RabbitMQ configuration is skeletal:
@Configuration
public class RabbitMqConfiguration
{
    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    public RabbitTemplate myTemplate()
    {
        RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
        r.setExchange(INPUT_QUEUE_NAME);
        r.setConnectionFactory(rabbitConnectionFactory);
        return r;
    }
}
I generally include a bean to define the RabbitMQ configuration I'm relying upon (exchange, queues and bindings), and it actually works fine. But while testing for failure scenarios, I found a situation I don't know how to properly handle using Spring Integration. The steps are:
Remove the beans that configure RabbitMQ
Run the flow against an unconfigured, vanilla RabbitMQ instance.
What I would expect is:
The message cannot be delivered because the exchange cannot be found.
Either I find some way to get an exception from the messaging gateway on the caller thread,
or I find some way to otherwise intercept this error.
What I find:
The message cannot be delivered because the exchange cannot be found, and indeed this error message is logged every time the #Gateway method is called.
2020-02-11 08:18:40.746 ERROR 42778 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'my.exchange' in vhost '/', class-id=60, method-id=40)
The gateway is not failing, nor have I found a way to configure it to do so (e.g.: adding throws clauses to the interface methods, configuring a transactional channel, setting wait-for-confirm and a confirm-timeout).
I haven't found a way to otherwise catch that CachingConnectionFactory error (e.g.: configuring a transactional channel).
I haven't found a way to catch an error message on another channel (specified on the gateway's errorChannel), or in Spring Integration's default errorChannel.
I understand such a failure may not be propagated upstream by the messaging gateway, whose job is isolating callers from the messaging API, but I definitely expect such an error to be interceptable.
Could you point me in the right direction?
Thank you.
RabbitMQ is inherently async, which is one reason that it performs so well.
You can, however, block the caller by enabling confirms and returns and setting this option:
/**
 * Set to true if you want to block the calling thread until a publisher confirm has
 * been received. Requires a template configured for returns. If a confirm is not
 * received within the confirm timeout or a negative acknowledgment or returned
 * message is received, an exception will be thrown. Does not apply to the gateway
 * since it blocks awaiting the reply.
 * @param waitForConfirm true to block until the confirmation or timeout is received.
 * @since 5.2
 * @see #setConfirmTimeout(long)
 * @see #setMultiSend(boolean)
 */
public void setWaitForConfirm(boolean waitForConfirm) {
    this.waitForConfirm = waitForConfirm;
}
(With the DSL .waitForConfirm(true)).
This also requires a confirm correlation expression. Here's an example from one of the test cases:
@Bean
public IntegrationFlow flow(RabbitTemplate template) {
    return f -> f.handle(Amqp.outboundAdapter(template)
            .exchangeName("")
            .routingKeyFunction(msg -> msg.getHeaders().get("rk", String.class))
            .confirmCorrelationFunction(msg -> msg)
            .waitForConfirm(true));
}

@Bean
public CachingConnectionFactory cf() {
    CachingConnectionFactory ccf = new CachingConnectionFactory(
            RabbitAvailableCondition.getBrokerRunning().getConnectionFactory());
    ccf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED);
    ccf.setPublisherReturns(true);
    return ccf;
}

@Bean
public RabbitTemplate template(ConnectionFactory cf) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(cf);
    rabbitTemplate.setMandatory(true); // for returns
    rabbitTemplate.setReceiveTimeout(10_000);
    return rabbitTemplate;
}
Bear in mind this will slow things down considerably (similar to using transactions), so you may want to reconsider whether you want to do this on every send (unless performance is not an issue).
I am trying to consume messages from an existing queue that is bound to a direct exchange (created with an exchange name and routing key). I have only the exchange name and the routing key, not the queue name. There is support for this in plain Java, but I could not find how to do it with Spring Boot.
@RabbitListener
@RabbitHandler
public void consumeMessage(Object message) {
    LOGGER.debug("Message Consumed.... : {}", message.toString());
}
How can I consume messages given only the routing key and the exchange name, not the queue name, since @RabbitListener asks for a queue?
Consumers consume from queues, not exchanges. You must bind a queue to the exchange with the routing key.
EDIT
There are several ways to automatically declare a queue on the broker.
@RabbitListener(bindings =
    @QueueBinding(exchange = @Exchange("myExchange"),
        key = "myRk", value = @Queue("")))
public void listen(String in) {
    System.out.println(in);
}
This will bind an anonymous queue (auto-delete) which will be deleted when the application is stopped.
@RabbitListener(bindings =
    @QueueBinding(exchange = @Exchange("myExchange"),
        key = "myRk", value = @Queue("foo")))
public void listen(String in) {
    System.out.println(in);
}
This will bind a permanent queue foo to the exchange with the routing key.
You can also simply declare @Bean definitions for the queue, exchange and binding (see the sketch below).
See Configuring the broker.
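For instance, a minimal sketch reusing the names from the snippets above (myExchange, myRk and foo); with Spring Boot, the auto-configured RabbitAdmin declares these on the broker at startup:

@Bean
public Queue fooQueue() {
    return new Queue("foo");
}

@Bean
public DirectExchange myExchange() {
    return new DirectExchange("myExchange");
}

@Bean
public Binding fooBinding() {
    // binds queue "foo" to "myExchange" with routing key "myRk"
    return BindingBuilder.bind(fooQueue()).to(myExchange()).with("myRk");
}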
Today our application has a vhost, a ConnectionFactory and a RabbitAdmin, followed by multiple queue and exchange declarations. Now we have a requirement for a new vhost, and hence a new ConnectionFactory and RabbitAdmin.
After creating the new vhost, the problem I'm facing is that all existing queues and exchanges get created in both vhosts. To address this I used the declared-by="rabbitAdminName" attribute, which I can set at both the queue and the exchange level. As my application has many queues and exchanges, I'd prefer not to disturb all the existing config by adding declared-by to each queue definition.
Is there a way (a global config to change the default behavior) to tell Rabbit that only the intended new queues should go to the new vhost/ConnectionFactory/RabbitAdmin, and not the already existing queues? Any help is highly appreciated (I'm looking for the XML way of declaration).
By default, all queues, exchanges, and bindings are declared by all RabbitAdmin instances (that have auto-startup="true") in the application context.
Reference: spring.io
There is currently no global setting for this; you have to configure each queue etc. and set the declared-by property to limit the declaration to an explicit admin.
So you would need to do this for your old queues, to declare them only on the old vhost.
We could add a flag to the admin to exclude any beans that do not explicitly request declaration by this admin.
Please open a new feature issue.
I solved it as follows:
@Bean
public RabbitAdmin admin() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(cf1());
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}

@Bean
public RabbitAdmin admin2() {
    RabbitAdmin rabbitAdmin = new RabbitAdmin(cf2());
    rabbitAdmin.afterPropertiesSet();
    return rabbitAdmin;
}

@Bean
public Queue queue() {
    Queue queue = new Queue("foo");
    queue.setAdminsThatShouldDeclare(admin());
    return queue;
}

@Bean
public Exchange exchange() {
    DirectExchange exchange = new DirectExchange("bar");
    exchange.setAdminsThatShouldDeclare(admin());
    return exchange;
}

@Bean
public Binding binding() {
    Binding binding = new Binding("foo", DestinationType.QUEUE, exchange().getName(), "foo", null);
    binding.setAdminsThatShouldDeclare(admin());
    return binding;
}
Reference: https://docs.spring.io/spring-amqp/docs/1.4.5.RELEASE/reference/html/amqp.html#conditional-declaration
I'm working with 2 .NET Core console applications in a producer/consumer scenario with MassTransit/RabbitMQ. I need to ensure that even if NO consumers are up-and-running, the messages from the producer are still queued up successfully. That didn't seem to work with Publish() - the messages just disappeared, so I'm using Send() instead. The messages at least get queued up, but without any consumers running the messages all end up in the "_skipped" queue.
So that's my first question: is this the right approach based on the requirement (even if NO consumers are up-and-running, the messages from the producer are still queued up successfully)?
With Send(), my consumer does indeed work, but still many messages are falling through the cracks and getting dumped into the "_skipped" queue. The consumer's logic is minimal (just logging the message at the moment), so it's not a long-running process.
So that's my second question: why are so many messages still getting dumped into the "_skipped" queue?
And that leads into my third question: does this mean my consumer needs to listen to the "_skipped" queue as well?
I am unsure what code you need to see for this question, but here's a screenshot from the RabbitMQ management UI:
Producer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder()
        .ConfigureServices((hostContext, services) =>
        {
            services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
            services.AddMassTransit(cfg =>
            {
                cfg.AddBus(ConfigureBus);
            });
            services.AddHostedService<CardMessageProducer>();
        })
        .UseConsoleLifetime()
        .UseSerilog();
}

static IBusControl ConfigureBus(IServiceProvider provider)
{
    var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;
    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
        {
            h.Username(options.RabbitMQ_Username);
            h.Password(options.RabbitMQ_Password);
        });
        cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
        {
            EndpointConvention.Map<CardMessage>(e.InputAddress);
        });
    });
}
Producer code:
Bus.Send(message);
Consumer configuration:
static IHostBuilder CreateHostBuilder(string[] args)
{
    return Host.CreateDefaultBuilder()
        .ConfigureServices((hostContext, services) =>
        {
            services.AddSingleton<CardMessageConsumer>();
            services.Configure<ApplicationConfiguration>(hostContext.Configuration.GetSection(nameof(ApplicationConfiguration)));
            services.AddMassTransit(cfg =>
            {
                cfg.AddBus(ConfigureBus);
            });
            services.AddHostedService<MassTransitHostedService>();
        })
        .UseConsoleLifetime()
        .UseSerilog();
}

static IBusControl ConfigureBus(IServiceProvider provider)
{
    var options = provider.GetRequiredService<IOptions<ApplicationConfiguration>>().Value;
    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        var host = cfg.Host(new Uri(options.RabbitMQ_ConnectionString), h =>
        {
            h.Username(options.RabbitMQ_Username);
            h.Password(options.RabbitMQ_Password);
        });
        cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName, e =>
        {
            e.Consumer<CardMessageConsumer>(provider);
        });
        //cfg.ReceiveEndpoint(host, typeof(CardMessage).FullName + "_skipped", e =>
        //{
        //    e.Consumer<CardMessageConsumer>(provider);
        //});
    });
}
Consumer code:
class CardMessageConsumer : IConsumer<CardMessage>
{
    private readonly ILogger<CardMessageConsumer> logger;
    private readonly ApplicationConfiguration configuration;
    private long counter;

    public CardMessageConsumer(ILogger<CardMessageConsumer> logger, IOptions<ApplicationConfiguration> options)
    {
        this.logger = logger;
        this.configuration = options.Value;
    }

    public async Task Consume(ConsumeContext<CardMessage> context)
    {
        this.counter++;
        this.logger.LogTrace($"Message #{this.counter} consumed: {context.Message}");
    }
}
In MassTransit, the _skipped queue is the implementation of the dead letter queue concept. Messages get there because they don't get consumed.
MassTransit with RMQ always delivers a message to an exchange, not to a queue. By default, each MassTransit endpoint creates (if there's no existing queue) a queue with the endpoint name and an exchange with the same name, and binds them together. When the application has a configured consumer (or handler), an exchange for that message type (using the message type as the exchange name) also gets created, and the endpoint exchange gets bound to the message type exchange.
So, when you use Publish, the message is published to the message type exchange and gets delivered accordingly, using the endpoint binding (or multiple bindings). When you use Send, the message type exchange is not used, so the message goes directly to the destination exchange.
And, as @maldworth correctly stated, every MassTransit endpoint only expects to get messages that it can consume. If it doesn't know how to consume a message, that message is moved to the dead letter queue. This, as well as the poison message queue, is a fundamental pattern of messaging.
If you need messages to queue up to be consumed later, the best way is to have the wiring set up, but the endpoint itself (I mean the application) should not be running. As soon as the application starts, it will consume all queued messages.
When the consumer starts the bus (bus.Start()), one of the things it does is create all the exchanges and queues for the transport. If you have a requirement that publish/send happens before the consumer runs, your only option is to run DeployTopologyOnly. Unfortunately this feature is not documented in the official docs, but the unit tests are here: https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport.Tests/BuildTopology_Specs.cs
The skipped queue happens when messages are sent to a consumer that doesn't know how to process them.
For example, say you have a consumer that can process IConsumer<MyMessageA> on a receive endpoint named "my-queue-a", but your message producer does Send<MyMessageB>(Uri("my-queue-a")...). Well, this is a problem: the consumer only understands A, it doesn't know how to process B, so it just moves the message to a skipped queue and continues on.
In my case, the same queue had multiple consumers listening to it at the same time.
I am trying to use an RPC AMQP RabbitMQ queue to send and receive messages. The problem is that I have set a setReplyTimeout value, and when a reply arrives after that timeout I get an "org.springframework.amqp.AmqpRejectAndDontRequeueException: Reply received after timeout". I have a DLQ set up on the incoming queue, but it appears that the exception occurs when Spring tries to return the reply on the automatically created reply queue. How can I handle exceptions for messages being sent back to a producer? Ideally I would want any message that hits an exception while being sent back to the producer to go to a DLQ.
I am using
@RabbitListener(queues = QueueConfig.QUEUE_ALL, containerFactory = "containerFactoryQueueAll")
It requires a SimpleRabbitListenerContainerFactory, which does not have setQueues. Also, rabbitTemplate does not have a setReplyQueue method.
Thanks,
Brian
Instead of using the default built-in reply listener container with the direct reply-to pseudo queue, use a Reply Listener Container with a named queue that is configured to route undeliverable messages to a DLQ.
The RabbitTemplate is configured as the container's listener:
@Bean
public RabbitTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
    rabbitTemplate.setMessageConverter(msgConv());
    rabbitTemplate.setReplyQueue(replyQueue());
    rabbitTemplate.setReplyTimeout(60000);
    rabbitTemplate.setUseDirectReplyToContainer(false);
    return rabbitTemplate;
}

@Bean
public SimpleMessageListenerContainer replyListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    container.setQueues(replyQueue());
    container.setMessageListener(amqpTemplate());
    return container;
}

@Bean
public Queue replyQueue() {
    return new Queue("my.reply.queue");
}
Note that the documentation needs to be updated, but you also need
rabbitTemplate.setUseDirectReplyToContainer(false);
IMPORTANT: If you have multiple instances of the client, each needs its own reply queue.
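To get undeliverable or rejected replies routed to a DLQ, the named reply queue itself can be declared with dead-letter arguments instead of the plain new Queue("my.reply.queue") above. A minimal sketch, using the standard x-dead-letter-exchange/x-dead-letter-routing-key queue arguments; the names my.reply.dlx and my.reply.queue.dlq are made up here:

@Bean
public Queue replyQueue() {
    // replies that are rejected or expire on the reply queue are dead-lettered
    return QueueBuilder.durable("my.reply.queue")
            .withArgument("x-dead-letter-exchange", "my.reply.dlx")
            .withArgument("x-dead-letter-routing-key", "my.reply.queue.dlq")
            .build();
}

@Bean
public DirectExchange replyDlx() {
    return new DirectExchange("my.reply.dlx");
}

@Bean
public Queue replyDlq() {
    return new Queue("my.reply.queue.dlq");
}

@Bean
public Binding replyDlqBinding() {
    // routes dead-lettered replies into the DLQ
    return BindingBuilder.bind(replyDlq()).to(replyDlx()).with("my.reply.queue.dlq");
}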