I am new to RabbitMQ and Spring, and I want to understand how to manage the number of connections and channels.
In my architecture a single producer publishes messages to 2 queues through a direct exchange, selected by routing key. As I understand it, I need a single persistent connection with 2 channels, and messages should be published through those channels. I assumed Spring manages this automatically, but it looks as though a connection with a single channel is created every time a message is published.
- How do I manage the channels and connections? Is it the right approach to create one channel per queue on a single connection? If the number of queues grows to 10, should 10 channels be used on a single connection?
Configuration File:
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="username" value="test"/>
<property name="password" value="test"/>
<property name="host" value="50.16.11.22"/>
<property name="port" value="5672"/>
</bean>
<bean id="publisher" class="com.test.code.Publisher">
<constructor-arg ref="amqpTemplate"></constructor-arg>
</bean>
<bean id="amqpTemplate" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="mandatory" value="true"></property>
<property name="exchange" value="x.direct"></property>
</bean>
<rabbit:admin connection-factory="connectionFactory" />
<rabbit:queue name="q.queue1" />
<rabbit:queue name="q.queue2" />
<rabbit:direct-exchange name="x.direct">
<rabbit:bindings>
<rabbit:binding queue="q.queue1" key="key1" />
<rabbit:binding queue="q.queue2" key="key2" />
</rabbit:bindings>
</rabbit:direct-exchange>
</beans>
This is my Publisher class
public class Publisher {

    private final RabbitTemplate rabbitTemplate;

    public Publisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void messageToQueue1(JSONObject message) {
        rabbitTemplate.convertAndSend("key1", message.toString());
    }

    public void messageToQueue2(JSONObject message) {
        rabbitTemplate.convertAndSend("key2", message.toString());
    }
}
"A connection with a single channel is created every time a message is published."
That is not true, and there is no dedicated channel for each routing key either.
The CachingConnectionFactory maintains a single persistent connection (by default) and channels are cached.
The first publish creates a channel and returns it to the cache when the publish completes; the next publish takes it from the cache again. A new channel is only created when the cache is empty (for example, when two threads publish at the same time), and in that case you end up with two cached channels.
You will only ever get as many channels as you need concurrently; the cache size can be tuned, as sketched below.
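For illustration, here is a minimal, hypothetical Java sketch of the same setup as your XML, with the channel cache size set explicitly; the host, credentials and cache size are taken from your configuration or assumed, not prescribed values.

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PublisherBootstrap {

    public static void main(String[] args) {
        // One connection is kept open by the factory; channels are cached and reused.
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("50.16.11.22", 5672);
        connectionFactory.setUsername("test");
        connectionFactory.setPassword("test");
        // Keep up to 25 idle channels in the cache (assumed value); under higher
        // concurrency extra channels are still created, they are just not cached.
        connectionFactory.setChannelCacheSize(25);

        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setExchange("x.direct");
        template.setMandatory(true);

        // Both sends reuse the same cached channel unless they run concurrently.
        template.convertAndSend("key1", "for q.queue1");
        template.convertAndSend("key2", "for q.queue2");
    }
}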
Related
I want to implement a solution in Spring JMS with ActiveMQ where I create a durable subscription to a topic. The goal is that if a subscriber closes the subscription for a while and later recreates the durable subscription with the same client id and subscription name, it should receive all the messages that were delivered while the subscription was closed.
I want to implement the logic for durable subscriptions described in the Oracle documentation: https://docs.oracle.com/cd/E19798-01/821-1841/bncgd/index.html
But I am unable to do this using spring-jms. According to that page I need to get hold of the MessageConsumer instance and call close() on it to temporarily stop receiving messages from the topic, but I am not sure how to obtain it.
Following is my configuration. Please let me know how to modify it to achieve this.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:jms="http://www.springframework.org/schema/jms"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd">
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory"
p:userName="admin"
p:password="admin"
p:brokerURL="tcp://127.0.0.1:61616"
primary="true"
></bean>
<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer" p:durableSubscriptionName="gxaa-durable1" p:clientId="gxaa-client1">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="destination" ref="adiTopic"/>
<property name="messageListener" ref="adiListener"/>
</bean>
<bean id="configTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="connectionFactory"
p:defaultDestination-ref="adiTopic" primary="true"
p:pubSubDomain="true">
</bean>
<bean id="adiTopic" class="org.apache.activemq.command.ActiveMQTopic" p:physicalName="gcaa.adi.topic"></bean>
<bean id="adiListener" class="com.gcaa.asset.manager.impl.AdiListener"></bean>
Why not call DefaultMessageListenerContainer.stop() to stop the container and its consumers?
You can inject the jmsContainer into another bean, stop it whenever you want, and call start() later (a sketch follows below).
All messages sent to the broker while your durable consumer is offline will be stored until it reconnects.
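As a rough sketch only (the SubscriptionControl class name is made up for illustration), a bean like this could pause and resume the subscription:

import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SubscriptionControl {

    private final DefaultMessageListenerContainer jmsContainer;

    public SubscriptionControl(DefaultMessageListenerContainer jmsContainer) {
        this.jmsContainer = jmsContainer;
    }

    public void pause() {
        // Closes the consumers; with a durable subscription the broker keeps
        // the messages published while we are offline.
        jmsContainer.stop();
    }

    public void resume() {
        // Re-creates the consumers; the stored messages are then delivered.
        jmsContainer.start();
    }
}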
To make the subscription durable you need to add this to the jmsContainer bean:
<property name="subscriptionDurable" value="true" />
<property name="cacheLevel" value="1" />
You can add a subscriptionName; otherwise the class name of the specified message listener will be used.
You can add a clientID to the connectionFactory:
<property name="clientID" value="${jms.clientId}" />
or use
<bean class="org.springframework.jms.connection.SingleConnectionFactory"
id="singleConnectionFactory">
<constructor-arg
ref="connectionFactory" />
<property name="reconnectOnException" value="true" />
<property name="clientId" value="${jms.clientId}" />
</bean>
and update the jmsContainer bean:
<bean id="jmsContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:durableSubscriptionName="gxaa-durable1" p:clientId="gxaa-client1">
<property name="connectionFactory" ref="singleConnectionFactory" />
<property name="destination" ref="adiTopic" />
<property name="messageListener" ref="adiListener" />
<property name="subscriptionDurable" value="true" />
<property name="cacheLevel" value="1" />
</bean>
UPDATE :
If your adiListener implements org.springframework.jms.listener.SessionAwareMessageListener, it has to define the method onMessage(M message, Session session); once you have the Session you can call javax.jms.Session.unsubscribe(String subscriptionName).
The subscriptionName is the one defined above and can be injected into this bean; otherwise the class name of the specified message listener is used. A sketch of such a listener follows.
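A rough sketch of what the listener could look like (assuming text messages and the "gxaa-durable1" subscription name from the container above; adapt to your payload type):

import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.springframework.jms.listener.SessionAwareMessageListener;

public class AdiListener implements SessionAwareMessageListener<TextMessage> {

    @Override
    public void onMessage(TextMessage message, Session session) throws JMSException {
        System.out.println("Received: " + message.getText());
        // Only if the subscription should be removed permanently (and no consumer
        // is still active on it) would you call:
        // session.unsubscribe("gxaa-durable1");
    }
}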
I am facing an issue with a RabbitMQ fanout exchange which, for some unknown reason, is behaving like a direct exchange.
I am using the following binding and queue configuration:
<bean id="testfanout"
class="com.test">
<constructor-arg name="exchange" ref="test" />
<constructor-arg name="routingKey" value="test" />
<constructor-arg name="queue" value="testQ" />
<constructor-arg name="template">
<bean class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<constructor-arg ref="connectionFactory" />
</bean>
</constructor-arg>
<constructor-arg value="true"/>
</bean>
<rabbit:fanout-exchange name="test" id="test">
<rabbit:bindings>
<rabbit:binding queue="test"/>
</rabbit:bindings>
</rabbit:fanout-exchange>
Now we have the same code listening to the same testQ on two different VMs, but each message is delivered to only one VM's listener, in round-robin fashion.
Sender code:
// channel: application-specific holder exposing the template, queue, exchange and routing key
channel = ...
RabbitTemplate template = null;
if (channel != null) {
    template = channel.getTemplate();
    if (template != null) {
        template.setQueue(channel.getQueue());
        template.setExchange(channel.getExchange().getName());
        template.convertAndSend(channel.getRoutingKey(), txtMsg);
    }
}
The routing key is ignored for a fanout exchange.
Are you sure it is actually a fanout exchange in RabbitMQ? I don't see a RabbitAdmin in your configuration (which is what would attempt to declare the exchange and binding).
Look at the exchange in the RabbitMQ management UI and check its type and bindings; a sketch of the declarations a RabbitAdmin would make is below.
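A minimal, hypothetical Java-config sketch of those declarations (assuming the connectionFactory bean from your context; with a RabbitAdmin present, the fanout exchange, queue and binding are declared on the broker at startup):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FanoutDeclarations {

    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        // Declares all Exchange/Queue/Binding beans in the context on startup.
        return new RabbitAdmin(connectionFactory);
    }

    @Bean
    public FanoutExchange testExchange() {
        return new FanoutExchange("test");
    }

    @Bean
    public Queue testQ() {
        return new Queue("testQ", true);
    }

    @Bean
    public Binding testBinding() {
        // The routing key is irrelevant for a fanout exchange.
        return BindingBuilder.bind(testQ()).to(testExchange());
    }
}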
We are using a 2-node active-active RabbitMQ cluster with mirrored queues, with the following mirroring policy:
"policies":[{"vhost":"/","name":"ha-all","pattern":"","apply-to":"all","definition":{"ha-mode":"all","ha-sync-mode":"automatic"},"priority":0}]
Versions: RabbitMQ 3.5.4, Erlang 17.4, spring-amqp/spring-rabbit 1.4.5.RELEASE
Now we are trying to achieve consumer cancellation, as mentioned in Highly Available Queues.
However, since we are not working with the channel directly, we can't use the basicConsume method shown in that link.
How do I set "x-cancel-on-ha-failover" to true in the configuration itself?
With the beans xml being thus :
<rabbit:connection-factory id="connectionFactory"
addresses="localhost:5672"
username="guest"
password="guest"
channel-cache-size="5" />
<!-- CREATE THE JsonMessageConverter BEAN -->
<bean id="jsonMessageConverter" class="org.springframework.amqp.support.converter.JsonMessageConverter" />
<!-- Spring AMQP Template -->
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory" retry-template="retryTemplate" message-converter="jsonMessageConverter" />
<!-- in case connection is broken then Retry based on the below policy -->
<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
<property name="backOffPolicy">
<bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
<property name="initialInterval" value="500" />
<property name="multiplier" value="2" />
<property name="maxInterval" value="30000" />
</bean>
</property>
</bean>
<rabbit:queue name="testQueue" durable="true">
<rabbit:queue-arguments>
<entry key="x-max-priority">
<value type="java.lang.Integer">10</value>
</entry>
</rabbit:queue-arguments>
</rabbit:queue>
<bean id="messsageConsumer" class="consumer.RabbitConsumer">
</bean>
<rabbit:listener-container
connection-factory="connectionFactory" concurrency="5" max-concurrency="5" message-converter="jsonMessageConverter">
<rabbit:listener queues="testQueue" ref="messsageConsumer" />
</rabbit:listener-container>
The <rabbit:listener-container> namespace element actually creates a SimpleMessageListenerContainer bean behind the scenes, and that class exposes public void setConsumerArguments(Map<String, Object> args) for exactly this purpose.
So, to meet your requirement, you just need to define the SimpleMessageListenerContainer as a raw bean for your messsageConsumer; a sketch follows below.
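A rough Java-config equivalent of your namespace container with the consumer argument set (assuming the connectionFactory, jsonMessageConverter and messsageConsumer beans from your XML; this is a sketch, not the only way to wire it):

import java.util.Collections;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter;
import org.springframework.amqp.support.converter.MessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ListenerContainerConfig {

    @Bean
    public SimpleMessageListenerContainer listenerContainer(ConnectionFactory connectionFactory,
            consumer.RabbitConsumer messsageConsumer, MessageConverter jsonMessageConverter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
        container.setQueueNames("testQueue");
        container.setConcurrentConsumers(5);
        container.setMessageListener(new MessageListenerAdapter(messsageConsumer, jsonMessageConverter));
        // Ask the broker to cancel this consumer when the mirrored queue fails over.
        container.setConsumerArguments(
                Collections.<String, Object>singletonMap("x-cancel-on-ha-failover", true));
        return container;
    }
}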
In the meantime, while you work around this in your application, I'd ask you to raise a JIRA issue for adding a <consumer-arguments> sub-element; we may be able to address it within the current GA deadline.
I'm new to RabbitMQ and Spring Integration.
I have a use case where I consume a JSON message from a channel and convert it to an object. One of the fields I need to set on that object is the message id (delivery.getEnvelope().getDeliveryTag()) of the message received from RabbitMQ, which we need for ack handling after all the business logic.
How can I do this using Spring Integration?
Here is my XML configuration.
<bean id="devRabbitmqConnectionFactory" class="com.rabbitmq.client.ConnectionFactory">
<property name="brokerURL" value="#{props[rabbitmq_inputjms_url]}" />
<property name="redeliveryPolicy" ref="redeliveryPolicy" />
</bean>
<bean id="devJMSCachingConnectionFactory"
class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="devRabbitmqConnectionFactory" />
<property name="sessionCacheSize" value="10" />
<property name="cacheProducers" value="false" />
</bean>
<int-jms:channel id="devJMSChannel" acknowledge="transacted"
connection-factory="devJMSCachingConnectionFactory" message-driven="false"
queue-name="devJMSChannel">
</int-jms:channel>
<bean id="redeliveryPolicy" class="org.apache.activemq.RedeliveryPolicy">
<property name="initialRedeliveryDelay" value="5000" />
<property name="maximumRedeliveries" value="5" />
</bean>
<int:transformer id="devObjectTransformer" input-channel="devJMSChannel" ref="devService" method="readEventFromRabbitMQ"
output-channel="devPacketChannel">
<int:poller fixed-rate="10" task-executor="devObjectTransformerExecutor" />
</int:transformer>
The transformer method readEventFromRabbitMQ gets the message String from msg.getPayload(), converts it into an object and sends it to the output channel. But I am not sure how to get the message id in the transformer class. Can somebody help me with this?
public List<DevEventRecord> readEventFromRabbitMQ(Message<EventsDetail> msg){
DevEventRecord[] eventRecords=null;
EventsDetail expEvent = null;
long receivedTime =System.currentTimeMillis();
int packetId = -1;
try{
monitorBean.incrementDeviceExceptionPacketCount();
expEvent = msg.getPayload();
LogUtil.debug("readExceptionEvent :: consumed JMS Q "+expEvent);
eventRecords = dispatchPacket(expEvent);
}
catch(ProcessingException pe){
notifyAck(expEvent.getUniqueId(),,,,);
}
catch(Exception ex){
notifyAck(expEvent.getUniqueId(),,,,);
LogUtil.error("Exception occured while reading object in readEvent , "+ex.toString());
}
return getEventRecordList(eventRecords);
}
The deliveryTag is available as a message header, under the key AmqpHeaders.DELIVERY_TAG, after an <int-amqp:inbound-channel-adapter>.
I don't understand why you mix AMQP and JMS, but in any case those channel implementations don't populate headers from the received message; that is outside their responsibility.
Please use an <int-amqp:inbound-channel-adapter>; a sample of how to ack a message manually using the deliveryTag header is sketched below.
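A rough sketch only (assuming a reasonably recent Spring Integration version, and that the adapter is configured with acknowledge-mode="MANUAL" so the channel is exposed in the AmqpHeaders.CHANNEL header; the class and method names are made up):

import java.io.IOException;
import com.rabbitmq.client.Channel;
import org.springframework.amqp.support.AmqpHeaders;
import org.springframework.messaging.handler.annotation.Header;

public class DevEventHandler {

    public void handle(String payload,
            @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag,
            @Header(AmqpHeaders.CHANNEL) Channel channel) throws IOException {
        // ... convert the JSON payload and run the business logic here ...

        // Acknowledge this single delivery once processing has succeeded.
        channel.basicAck(deliveryTag, false);
    }
}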
We're trying to set up ActiveMQ 5.9.0 as a message broker using JMS topics, but we're having some issues with the consumption of the messages.
For testing purposes we have a simple configuration of 1 topic, 1 event producer, and 1 consumer. We send 10 messages one after the other, but every time we run the application, 1-3 of these messages are not consumed! The other messages are consumed and processed fine.
We can see that all the messages were published to the topic in the ActiveMQ management console, but they never reach the consumer, even if we restart the application (we can see that the numbers in the "Enqueue" and "Dequeue" columns are different).
EDIT: I should also mention that when using queues instead of a topic, this problem does not occur.
Why is this happening? Could it have something to do with Atomikos (which is the transaction manager)? Or maybe something else in the configuration? Any ideas/suggestions are welcome. :)
This is the ActiveMQ/JMS Spring configuration:
<bean id="connectionFactory" class="com.atomikos.jms.AtomikosConnectionFactoryBean"
init-method="init" destroy-method="close">
<property name="uniqueResourceName" value="amq" />
<property name="xaConnectionFactory">
<bean class="org.apache.activemq.spring.ActiveMQXAConnectionFactory"
p:brokerURL="${activemq_url}" />
</property>
<property name="maxPoolSize" value="10" />
<property name="localTransactionMode" value="false" />
</bean>
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
</bean>
<!-- A JmsTemplate instance that uses the cached connection and destination -->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="sessionTransacted" value="true" />
<property name="pubSubDomain" value="true"/>
</bean>
<bean id="testTopic" class="org.apache.activemq.command.ActiveMQTopic">
<constructor-arg value="test.topic" />
</bean>
<!-- The Spring message listener container configuration -->
<jms:listener-container destination-type="topic"
connection-factory="connectionFactory" transaction-manager="transactionManager"
acknowledge="transacted" concurrency="1">
<jms:listener destination="test.topic" ref="testReceiver"
method="receive" />
</jms:listener-container>
The producer:
#Component("producer")
public class EventProducer {
#Autowired
private JmsTemplate jmsTemplate;
#Transactional
public void produceEvent(String message) {
this.jmsTemplate.convertAndSend("test.topic", message);
}
}
The consumer:
#Component("testReceiver")
public class EventListener {
#Transactional
public void receive(String message) {
System.out.println(message);
}
}
The test:
@Autowired
private EventProducer eventProducer;
public void testMessages() {
    for (int i = 1; i <= 10; i++) {
        this.eventProducer.produceEvent("message" + i);
    }
}
That's the nature of JMS topics: by default, only subscribers that are currently connected receive messages. You have a race condition; the messages are sent before the consumer has established its subscription, which only happens some time after the container is started. This is a common mistake in unit/integration tests with topics, where you send and receive in the same application.
With newer versions of Spring there is a method you can poll to wait until the subscriber is established (since 3.1, I think). Alternatively, you can simply wait a little while before starting to send, or you can make your subscriptions durable (a rough sketch of a durable topic container follows).
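A rough sketch of the durable-subscription option in plain Java (assuming the connectionFactory and testReceiver beans from your configuration, and ignoring the Atomikos/XA setup for brevity; note that a durable subscription only retains messages once it has been registered with the broker at least once):

import javax.jms.ConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.jms.listener.adapter.MessageListenerAdapter;

public class DurableTopicConfig {

    public static DefaultMessageListenerContainer durableContainer(
            ConnectionFactory connectionFactory, Object testReceiver) {
        // Adapt the plain POJO listener; its receive(String) method is invoked.
        MessageListenerAdapter adapter = new MessageListenerAdapter(testReceiver);
        adapter.setDefaultListenerMethod("receive");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setPubSubDomain(true);
        container.setDestinationName("test.topic");
        container.setSubscriptionDurable(true);
        container.setClientId("test-client");                 // assumed value
        container.setDurableSubscriptionName("test-durable"); // assumed value
        container.setMessageListener(adapter);
        return container;
    }
}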