Can the Spring RabbitMQ component be configured to guarantee "at least once" delivery when I consume a message from one RabbitMQ cluster and publish it to another one?
For example, the RabbitMQ component has the following set of parameters:
auto-ack: false
mandatory: true
guaranteed-deliveries: true
publisher-acknowledgments: true
reQueue: true
How can I configure the Spring RabbitMQ component to get the same guarantee?
Is the only way to achieve this to add a custom ExceptionHandler that sends a "nack", and to send an "ack" for the successful case in some processor at the end of my route?
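For reference, here is a minimal sketch of roughly equivalent settings in plain Spring AMQP, which the Spring RabbitMQ component builds on (the queue name, exchange and wiring are assumptions, and the Camel component may expose these options under different names):
import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class AtLeastOnceSketch {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        cf.setPublisherConfirmType(CachingConnectionFactory.ConfirmType.CORRELATED); // publisher acknowledgements
        cf.setPublisherReturns(true);                                                // needed for returned messages

        RabbitTemplate template = new RabbitTemplate(cf);
        template.setMandatory(true);       // unroutable messages are returned instead of dropped

        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("inbound.queue");              // assumed queue name
        container.setAcknowledgeMode(AcknowledgeMode.AUTO);    // container acks only after the listener succeeds
        container.setDefaultRequeueRejected(true);             // failed messages are requeued
        container.setMessageListener(msg -> template.send("outbound.exchange", "routing.key", msg)); // assumed target
        container.start();
    }
}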
RabbitMQ version: 3.11.8, MassTransit: 8.0.1.
I have a queue with this config:
x-queue-type:quorum, x-single-active-consumer:true, durable:true
Sometimes I need to use Get Message(s) in the management panel.
But with this queue I now get this exception:
405 RESOURCE_LOCKED - cannot obtain access to locked queue 'myQueue' in vhost 'xxx'. basic.get operations are not supported by quorum queues with single active consumer
Usually I need to read messages from the error queue that MassTransit created.
I've searched for that, and I found only some solutions for exclusive queues - for example issue 1 and issue 2.
But I couldn't find any solution for 'cannot obtain access to locked queue'.
So, you've requested a single active consumer on the queue. And when you try to get messages in the console, it reports that the queue is locked.
Seems like that would be expected behavior, and it's telling you as much in the error message.
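If the goal is just to read messages out of the error queue, attaching a regular consumer works where Get Message(s) does not, because basic.consume is still supported on quorum queues with a single active consumer. A sketch with the RabbitMQ Java client (host, vhost and the error-queue name are assumptions):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ErrorQueuePeek {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // assumed broker host
        factory.setVirtualHost("xxx");  // vhost from the error message
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            DeliverCallback onMessage = (tag, delivery) ->
                    System.out.println(new String(delivery.getBody()));
            // autoAck=false and no explicit ack: messages are redelivered after the channel closes,
            // so this only peeks at the queue contents
            channel.basicConsume("myQueue_error", false, onMessage, tag -> { });
            Thread.sleep(5000); // keep the consumer attached long enough to receive deliveries
        }
    }
}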
I have a server which publishes RabbitMQ messages on an exchange, so I tried to create the following AsyncAPI spec for this:
asyncapi: 2.3.0
info:
  title: Hello World
  version: 1.0.0
  description: Get Hello World Messages
  contact: {}
servers:
  local:
    url: amqp://rabbitmq
    description: RabbitMQ
    protocol: amqp
    protocolVersion: 0.9.1
defaultContentType: application/json
channels:
  hellow_world:
    subscribe:
      operationId: HelloWorldSubscriber
      description:
      message:
        $ref: '#/components/messages/HellowWorldEvent'
      bindings:
        amqp:
          ack: true
          cc: ["hello_world_routing_key"]
          bindingVersion: 0.2.0
    bindings:
      amqp:
        is: routingKey
        exchange:
          name: hello_world_exchange
          type: direct
          durable: true
          vhost: /
        bindingVersion: 0.2.0
components:
  messages:
    HellowWorldEvent:
      payload:
        type: object
        properties: []
Based on my understanding, this means that MyApp will publish a HellowWorldEvent message on the hello_world_exchange exchange using routing key hello_world_routing_key.
Question -
How can a consumer/subscriber define which queue it will use to consume this message?
Do I need to define a new schema for the subscriber and define the queue element there?
I can define queue.* elements in the channel binding, but that can only specify one queue element; what if there is more than one subscriber/consumer, so how can we specify different queues for them?
Reference -
https://github.com/asyncapi/bindings/tree/master/amqp
I see you have not yet approved any of the responses as a solution. Is this still an issue? Are you using the AsyncAPI generator to generate your code stubs?
If so, the generator creates a consumer/subscriber. If you want different processing/business logic, you would generate new stubs and configure the queues they listen on. The queue is an implementation detail. I had an issue with the node.js generator for AMQP and RabbitMQ, so I decided to test the spec against Python to see if it was me or the generator.
Try the generator and you can try my gist: https://gist.github.com/adrianvolpe/27e9f02187c5b31247aaf947fa4a7360. I did do this for version 2.2.0 so hopefully it works for you.
I also did a test with the Python pika library, although I did not assign a binding to the queue.
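For illustration, this is roughly what a consumer declaring and binding its own queue to the exchange from the spec could look like with the RabbitMQ Java client (the queue name is whatever that particular subscriber chooses; a second subscriber would bind a different queue the same way):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HelloWorldConsumerSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUri("amqp://rabbitmq"); // server url from the spec
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // exchange as described in the channel binding
            channel.exchangeDeclare("hello_world_exchange", "direct", true);
            // each subscriber picks its own queue name and binds it with the routing key
            channel.queueDeclare("subscriber-one.hello_world", true, false, false, null);
            channel.queueBind("subscriber-one.hello_world", "hello_world_exchange",
                              "hello_world_routing_key");
        }
    }
}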
I noticed in the above spec you are setting your exchange type to direct. You can have the same binding with multiple consumers with both direct and topic exchanges, but you may want a topic exchange, as quoted from the RabbitMQ docs:
https://www.rabbitmq.com/tutorials/tutorial-five-python.html
Topic exchange is powerful and can behave like other exchanges.
When a queue is bound with "#" (hash) binding key - it will receive all the messages, regardless of the routing key - like in fanout exchange.
When special characters "*" (star) and "#" (hash) aren't used in bindings, the topic exchange will behave just like a direct one.
Best of luck!
When a certain endpoint is not available (a 500 response, for instance) my file is moved to the .error directory. I am using the moveFailed parameter for this.
<from uri="file:inbox?autoCreate=true&readLockTimeout=2000&charset=utf-8&preMove=.processing&delete=true&moveFailed=.error&maxMessagesPerPoll=50&delay=1000"/>
According to: http://camel.apache.org/file2.html
When moving the files to the “fail” location Camel will handle the
error and will not pick up the file again.
What is the best approach to implement a redelivery policy/strategy so that files are picked up again after a failure?
Set up a retry by redelivering to that particular endpoint, not to the whole route.
You can do this by specifying the number of retries, a delay between retries, and a backoff multiplier if you wish, using an error handler:
onException(RestException.class)
    .maximumRedeliveries(3)
    .redeliveryDelay(100L)
    .backOffMultiplier(1.5);
Or setting this in your camel context:
<errorHandler id="errorhandler" redeliveryPolicyRef="redeliveryPolicy"/>
<redeliveryPolicyProfile id="redeliveryPolicy" maximumRedeliveries="3" redeliveryDelay="100" backOffMultiplier="1.5" retryAttemptedLogLevel="WARN"/>
This way, the file is only moved to the error folder once it has run out of redelivery attempts.
You could also look at using the dead letter channel, putting the file into a queue to be processed later.
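A minimal sketch of that dead-letter approach in the Java DSL (the queue endpoint and target HTTP endpoint are just examples):
import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class FileRetryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // exhausted exchanges go to a queue instead of the .error directory,
        // so the payload can be replayed later
        errorHandler(deadLetterChannel("activemq:queue:failed-files")
                .maximumRedeliveries(3)
                .redeliveryDelay(100)
                .useExponentialBackOff()   // enables the backoff multiplier below
                .backOffMultiplier(1.5)
                .retryAttemptedLogLevel(LoggingLevel.WARN));

        from("file:inbox?preMove=.processing&delete=true")
                .to("http://example.com/endpoint"); // the endpoint that sometimes returns 500
    }
}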
Is it possible to make the ActiveMQ broker distribute messages received on one transportConnector to other transportConnectors as well?
The concrete use case is this: I have a Java client sending messages using the openwire transportConnector and I would like to be able to read them on the mqtt transportConnector.
I use the sample jndi.properties file that is on the ActiveMQ page http://activemq.apache.org/jndi-support.html:
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
# use the following property to configure the default connector
java.naming.provider.url = tcp://localhost:61616
# use the following property to specify the JNDI name the connection factory
# should appear as.
#connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry
# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue
# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
I had to replace the default 'vm' transport with the 'tcp' one because it did not run using 'vm'.
The messages are pushed to my Java MessageListener instance but my MQTT client does not show them. I tried different topics, starting with 'example.MyTopic' up to '/example/MyTopic'.
Any help would be much appreciated.
Many thanks,
Roman
The broker does that by default, so you are not doing something right. Check the admin console for producers and consumers registered on the given destinations to see what is going on. You must remember that a topic consumer will not receive messages sent to that topic unless it was online at the time they were sent or you had previously created a durable topic subscription.
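For example, a Paho MQTT subscriber kept online before the OpenWire producer sends (a sketch; it assumes ActiveMQ's default MQTT port 1883 and the usual name mapping where the JMS topic example.MyTopic shows up to MQTT clients as example/MyTopic):
import org.eclipse.paho.client.mqttv3.MqttClient;

public class MqttTopicSubscriber {
    public static void main(String[] args) throws Exception {
        // ActiveMQ's mqtt transportConnector, default port 1883 (assumed)
        MqttClient client = new MqttClient("tcp://localhost:1883", "mqtt-subscriber");
        client.connect();
        // '.' separators in the JMS topic name become '/' on the MQTT side
        client.subscribe("example/MyTopic", (topic, message) ->
                System.out.println(topic + ": " + new String(message.getPayload())));
        Thread.sleep(60_000); // stay connected while the OpenWire producer publishes
        client.disconnect();
    }
}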
I have written a server-client application.
Server Side
The server initialises a queue queue1 with routing key key1 on a direct exchange.
After initialisation and declaration, it consumes data whenever someone writes to it.
Client Side
The client publishes some data on that exchange using routing key key1.
Also, I have set the mandatory flag to true before I publish.
Problem
Everything is fine when I start the server first, but I have a problem when I start the client first and it publishes data with the routing key: when the client publishes data there is no exception from the broker.
Requirement
I want an exception or error when I publish data to a non-existent queue.
If you publish messages with the mandatory flag set to true, then a message will be returned back to you in case it cannot be routed to any queue.
As to non-existent exchanges: it is forbidden to publish messages to a non-existent exchange, so you'll get an error about that, something like NOT_FOUND - no exchange 'nonexistent_exchange' in vhost '/'.
You can declare exchanges and queues and bind them as you need on the client side too. These operations are idempotent.
Note that creating and binding exchanges and queues on every publish may have a negative performance impact, so do that on client start, not on every publish.
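A minimal sketch of that declare-and-bind-on-the-client approach with the RabbitMQ Java client (queue and routing-key names are taken from the question, the exchange name is assumed; the return listener fires when a mandatory message cannot be routed):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class MandatoryPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // declare and bind on the client side too; these operations are idempotent
            channel.exchangeDeclare("my_direct_exchange", "direct", true);
            channel.queueDeclare("queue1", true, false, false, null);
            channel.queueBind("queue1", "my_direct_exchange", "key1");

            // called by the broker when a mandatory message cannot be routed to any queue
            channel.addReturnListener((replyCode, replyText, exchange, routingKey, props, body) ->
                    System.out.println("Returned: " + replyText + " -> " + new String(body)));

            channel.basicPublish("my_direct_exchange", "key1", true /* mandatory */,
                    MessageProperties.PERSISTENT_TEXT_PLAIN, "hello".getBytes());
            Thread.sleep(1000); // give the broker time to deliver a possible basic.return
        }
    }
}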
P.S.: if you use rabbitmq-c, then it is worth citing the basic_publish documentation:
Note that at the AMQ protocol level basic.publish is an async method:
this means error conditions that occur on the broker (such as publishing to a non-existent exchange) will not be reflected in the return value of this function.
I spent a lot of time finding out how to do that. Here is example code in Python, using the pika library, that shows how to publish with delivery confirmation and the mandatory flag, so you can tell whether a message published towards a non-existent queue was routed or returned (otherwise the broker just drops it and you get no response):
import pika

# Open a connection to RabbitMQ on localhost using all default parameters
connection = pika.BlockingConnection()

# Open the channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

# Enabled delivery confirmations
channel.confirm_delivery()

# Send a message
if channel.basic_publish(exchange='test',
                         routing_key='test',
                         body='Hello World!',
                         properties=pika.BasicProperties(content_type='text/plain',
                                                         delivery_mode=1),
                         mandatory=True):
    print('Message was published')
else:
    print('Message was returned')
Reference:
http://pika.readthedocs.org/en/latest/examples/blocking_publish_mandatory.html