Confusion on AsyncAPI AMQP binding for subscribe operation - rabbitmq

I have a server that publishes RabbitMQ messages on an exchange, so I tried to create the following AsyncAPI spec for it:
asyncapi: 2.3.0
info:
  title: Hello World
  version: 1.0.0
  description: Get Hello World Messages
  contact: {}
servers:
  local:
    url: amqp://rabbitmq
    description: RabbitMQ
    protocol: amqp
    protocolVersion: 0.9.1
defaultContentType: application/json
channels:
  hellow_world:
    subscribe:
      operationId: HelloWorldSubscriber
      description: ''
      message:
        $ref: '#/components/messages/HellowWorldEvent'
      bindings:
        amqp:
          ack: true
          cc: ["hello_world_routing_key"]
          bindingVersion: 0.2.0
    bindings:
      amqp:
        is: routingKey
        exchange:
          name: hello_world_exchange
          type: direct
          durable: true
          vhost: /
        bindingVersion: 0.2.0
components:
  messages:
    HellowWorldEvent:
      payload:
        type: object
        properties: {}
Based on my understanding, this means that MyApp will publish a HelloWorldEvent message on the hello_world_exchange exchange using the routing key hello_world_routing_key.
Questions -
How can a consumer/subscriber define which queue it will use for consuming this message?
Do I need to define a new schema for the subscriber and define the queue element there?
I can define queue.** elements in the channel bindings, but those can specify only one queue. What if there is more than one subscriber/consumer? How can we specify different queues for them?
Reference -
https://github.com/asyncapi/bindings/tree/master/amqp

I see you have not yet approved any of the responses as a solution. Is this still an issue? Are you using the AsyncAPI generator to generate your code stubs?
If so, the generator creates a consumer/subscriber. If you want different processing/business logic, you would generate new stubs and configure the queues they listen on. The queue is an implementation detail. I had an issue with the Node.js generator for AMQP and RabbitMQ, so I decided to test the spec against Python to see whether the problem was me or the generator.
Try the generator, and you can try my gist: https://gist.github.com/adrianvolpe/27e9f02187c5b31247aaf947fa4a7360. I did this for version 2.2.0, so hopefully it works for you.
I also did a test with the Python pika library; however, I did not assign a binding to the queue.
I noticed that in the above spec you are setting your exchange type to direct. You can have the same binding with multiple consumers with both direct and topic exchanges; however, you may want topic, as quoted from the RabbitMQ docs:
https://www.rabbitmq.com/tutorials/tutorial-five-python.html
Topic exchange is powerful and can behave like other exchanges.
When a queue is bound with "#" (hash) binding key - it will receive all the messages, regardless of the routing key - like in fanout exchange.
When special characters "*" (star) and "#" (hash) aren't used in bindings, the topic exchange will behave just like a direct one.
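The quoted wildcard rules can be illustrated with a small, self-contained sketch. This is an in-memory model of the matching logic only, not RabbitMQ itself, and the function name is my own:

```python
def topic_matches(binding_key: str, routing_key: str) -> bool:
    """Model of AMQP topic-exchange matching: '*' matches exactly one
    word, '#' matches zero or more words (words are dot-separated)."""
    def match(b, r):
        if not b:
            return not r          # both exhausted -> match
        head, rest = b[0], b[1:]
        if head == "#":
            # '#' may absorb zero or more words of the routing key.
            return any(match(rest, r[i:]) for i in range(len(r) + 1))
        if not r:
            return False          # pattern left over, key exhausted
        if head == "*" or head == r[0]:
            return match(rest, r[1:])
        return False
    return match(binding_key.split("."), routing_key.split("."))

# '#' alone behaves like a fanout exchange:
print(topic_matches("#", "quick.orange.rabbit"))          # True
# Without '*' or '#', it behaves like a direct exchange:
print(topic_matches("hello_world_routing_key", "hello_world_routing_key"))  # True
print(topic_matches("hello_world_routing_key", "another_key"))              # False
```

With a topic exchange, each consumer can declare its own queue and bind it with whatever pattern suits it, which is how multiple consumers end up with independent queues.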
Best of luck!

Related

How to specify JMS username and password within URL

I have an application which connects to ActiveMQ using a "failover" URL string. The admins are adding authentication to the brokers. Is it possible to put jms.userName and jms.password into the URL string? An example with dummy values would be most helpful.
Yes, it works exactly as you specified. The jms. prefix configures any of the setters on the ActiveMQConnectionFactory.
failover:(tcp://127.0.0.1:61616)?jms.userName=admin&jms.password=admin
Log confirmation:
09:41:53.429 INFO [ActiveMQ Task-1] Successfully connected to tcp://127.0.0.1:61616
09:41:53.481 INFO [Blueprint Event Dispatcher: 1] Route: route1 started and consuming from: amq://queue:VQ.ORDER.VT.ORDER.EVENT
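One caveat worth noting: if the credentials contain characters that are not URL-safe, they should be percent-encoded before being placed in the query string. A quick sketch (the password value here is made up):

```python
from urllib.parse import urlencode

# Hypothetical credentials; '@' and '&' would break naive string
# concatenation into the URL query, so percent-encode them.
params = {"jms.userName": "admin", "jms.password": "p@ss&word"}
url = "failover:(tcp://127.0.0.1:61616)?" + urlencode(params)
print(url)
# failover:(tcp://127.0.0.1:61616)?jms.userName=admin&jms.password=p%40ss%26word
```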

Spring AMQP to Spring Cloud Stream migration - existing queue

I'm trying to add Spring Cloud Stream to an existing project that uses Spring AMQP with RabbitMQ.
I have the following rabbit configuration:
Producer exchange name: producer.mail-sent.exchange
Consumer queue name: consumer.mail-sent.queue
On the producer's side I configure like this:
spring:
  cloud:
    stream:
      bindings:
        output:
          contentType: application/json
          destination: producer.mail-sent.exchange
And using the following code:
@Autowired
private Source source;
...
source.output().send(MessageBuilder.withPayload(someStuff).build());
...
On the consumer's side I have the following config:
spring:
  cloud:
    stream:
      bindings:
        input:
          contentType: application/json
          destination: producer.mail-sent.exchange
          group: consumer.mail-sent.queue
With the following code:
@EnableBinding(Sink.class)
...
@StreamListener(Sink.INPUT)
public void handle(String someStuff) {
    log.info("some stuff is received: " + someStuff);
}
And it seems that it works. :)
But! On the RabbitMQ side I now have a new queue named producer.mail-sent.exchange.consumer.mail-sent.queue, whereas I want it to use the existing queue named consumer.mail-sent.queue.
Is there any way to achieve this?
It's not currently supported; while many properties are configurable (routing key etc.), the queue name is always <destination>.<group>.
If you want to consume from an existing queue, consider using a @RabbitListener instead of a @StreamListener.
Feel free to open a GitHub issue referencing this post; many other "opinionated" configuration settings (such as the routing key) are configurable, but not the queue name itself. Perhaps we could add a boolean includeDestInQueueName. Reference this question in the issue.
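As a quick illustration of that naming rule (a sketch in Python rather than Spring code, with the destination and group values taken from the question above):

```python
def stream_queue_name(destination: str, group: str) -> str:
    # Spring Cloud Stream's RabbitMQ binder names the consumer queue
    # '<destination>.<group>' when a consumer group is configured.
    return f"{destination}.{group}"

print(stream_queue_name("producer.mail-sent.exchange", "consumer.mail-sent.queue"))
# producer.mail-sent.exchange.consumer.mail-sent.queue
```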

ActiveMQ; how to make a broker distribute messages among several transportConnectors

Is it possible to make the ActiveMQ broker distribute messages received on one transportConnector to other transportConnectors as well?
The concrete use case is this: I have a Java client sending messages using the openwire transportConnector and I would like to be able to read them on the mqtt transportConnector.
I use the sample jndi.properties file that is on the ActiveMQ page http://activemq.apache.org/jndi-support.html:
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
# use the following property to configure the default connector
java.naming.provider.url = tcp://localhost:61616
# use the following property to specify the JNDI name the connection factory
# should appear as.
#connectionFactoryNames = connectionFactory, queueConnectionFactory, topicConnectionFactry
# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue
# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
I had to replace the default 'vm' transportConnector with the 'tcp' one because it would not run using 'vm'.
The messages are pushed to my Java MessageListener instance, but my MQTT client does not show them. I tried different topics, starting with 'example.MyTopic' and going up to '/example/MyTopic'.
Any help would be much appreciated.
Many thanks,
Roman
The broker does that by default, so you are not doing something right; check the admin console for producers and consumers registered on the given destinations to see what is going on. You must remember that a Topic consumer will not receive messages sent to that Topic unless it was online at the time they were sent or had previously created a durable topic subscription.

How to abort when publishing a message to a non-existent queue in RabbitMQ

I have written a server-client application.
Server side
The server initialises a queue queue1 with routing key key1 on a direct exchange.
After initialisation and declaration, it consumes data whenever someone writes to it.
Client side
The client publishes some data on that exchange using routing key key1.
I have also set the mandatory flag to true before publishing.
Problem
Everything is fine when I start the server first, but there is a problem when I start the client first and it publishes data with the routing key: the broker raises no exception.
Requirement
I want an exception or error when I publish data to a non-existent queue.
If you publish messages with the mandatory flag set to true, the message will be returned to you when it cannot be routed to any queue.
As for non-existent exchanges: publishing to a non-existent exchange is forbidden, so you will get an error about it, something like NOT_FOUND - no exchange 'nonexistent_exchange' in vhost '/'.
You can declare exchanges and queues and bind them as needed on the client side too. These operations are idempotent.
Note that declaring and binding exchanges and queues on every publish may have a negative performance impact, so do it on client start, not on every publish.
P.S.: if you use rabbitmq-c, it is worth citing the basic_publish documentation:
Note that at the AMQ protocol level basic.publish is an async method:
this means error conditions that occur on the broker (such as publishing to a non-existent exchange) will not be reflected in the return value of this function.
I spent a long time figuring this out. Here is example code in Python using the pika library that shows how to publish with delivery confirmations and the mandatory flag, so that a message sent to a non-existent queue is returned by the broker instead of being silently dropped:
import pika

# Open a connection to RabbitMQ on localhost using all default parameters
connection = pika.BlockingConnection()

# Open the channel
channel = connection.channel()

# Declare the queue
channel.queue_declare(queue="test", durable=True, exclusive=False,
                      auto_delete=False)

# Enable delivery confirmations
channel.confirm_delivery()

# Send a message. Note: in pika < 1.0, basic_publish returns False when a
# mandatory message is returned as unroutable; in pika >= 1.0 it raises
# pika.exceptions.UnroutableError instead.
if channel.basic_publish(exchange='test',
                         routing_key='test',
                         body='Hello World!',
                         properties=pika.BasicProperties(
                             content_type='text/plain',
                             delivery_mode=1),
                         mandatory=True):
    print('Message was published')
else:
    print('Message was returned')
Reference:
http://pika.readthedocs.org/en/latest/examples/blocking_publish_mandatory.html

ActiveMQ to Apollo transition, Openwire to Stomp protocol configuration

I'm trying to switch from ActiveMQ 5.6 to Apollo 1.5.
I have two programs that exchange messages using publish/subscribe on topics.
The first one is C++ and uses OpenWire over TCP.
The second one is JavaScript and uses STOMP over WebSockets.
With ActiveMQ everything worked fine, the messages I sent could be read and written by both programs, and I haven't changed the clients since.
Now I send messages from the C++ program (using OpenWire) and try to read them with the JS program, and I get errors. In fact, I receive messages with the header content-type: "protocol/openwire", but I expect STOMP.
This is how I configured the connector section of apollo.xml:
<connector id="tcp" bind="tcp://0.0.0.0:61613">
<openwire max_inactivity_duration="-1" max_inactivity_duration_delay="-1" />
<stomp max_header_length="10000" die_delay="-1" />
</connector>
<connector id="ws" bind="tcp://0.0.0.0:61623">
<stomp max_header_length="10000" die_delay="-1" />
</connector>
I also tried with <detect/> in the tcp and ws connectors, which is supposed to auto-detect the client protocol, but that doesn't work either.
Can someone help me figure this out?
Thank you,
Edit:
I found out that I do receive STOMP protocol messages, but they are very strangely formatted and even contain non-text characters that make stomp.js fail to parse the message and correctly fill the message body.
Here is the same message, received once from ActiveMQ (OpenWire) and once from Apollo (OpenWire), with the same C++ publisher and JS subscriber:
activemq
"MESSAGE
message-id:ID:myID-61443-1352999572576-0:0:0:0:0
class:Message.PointToPoint
destination:/topic/my-topic
timestamp:1352999626186
expires:0
subscription:sub-0
priority:4
<PointToPoint xmlns="Message" ><SourceId>u_23</SourceId><TargetId>u_75</TargetId></PointToPoint>"
apollo
"MESSAGE
subscription:sub-0
destination:
content-length:331
content-type:protocol/openwire
message-id:xps-broker-291
Eç{#ID:myID-61463-1352999939140-0:0emy-topicn{#ID:myID-61463-1352999939140-0:0; Å??<PointToPoint xmlns="Message" ><SourceId>u_23</SourceId><TargetId>u_75</TargetId></PointToPoint>(class Message.PointToPoint
"
Do you think this could be a problem in Apollo?
ActiveMQ 5.6 handles translating logical OpenWire messages into a text representation for STOMP clients. Apollo currently does not support that feature yet! :( See:
https://issues.apache.org/jira/browse/APLO-267
It just takes the full OpenWire message and uses it as the body of the STOMP message. By the way, using binary data in a STOMP message is totally valid as long as the content-length header is properly set.