I'm using JBoss 7.1.1.Final and I would like to define two different DLQs: one for a specific queue and another for all the remaining queues.
I found this configuration:
<address-settings>
    <address-setting match="jms.queue.exampleQueue">
        <dead-letter-address>jms.queue.deadLetterQueue</dead-letter-address>
        <max-delivery-attempts>3</max-delivery-attempts>
        <redelivery-delay>5000</redelivery-delay>
        <expiry-address>jms.queue.expiryQueue</expiry-address>
        <last-value-queue>true</last-value-queue>
        <max-size-bytes>100000</max-size-bytes>
        <page-size-bytes>20000</page-size-bytes>
        <redistribution-delay>0</redistribution-delay>
        <send-to-dla-on-no-route>true</send-to-dla-on-no-route>
        <address-full-policy>PAGE</address-full-policy>
    </address-setting>
</address-settings>
The match attribute can be used to match a certain queue. I have a couple of questions regarding this configuration:
If I define two address-setting elements, one with a wildcard that matches everything and one that matches only a single queue, does the single-queue definition take precedence? Do I need to put it before the match-all definition, or does the order not matter?
In the example they match a queue named jms.queue.exampleQueue. I have a queue defined as:
<jms-queue name="MissionResult">
<entry name="queue/MissionResult"/>
</jms-queue>
What should I put in the match attribute in order to match it?
Found the answer:
The two definitions can co-exist; JBoss picks the best (most specific) match, so the order of the address-setting elements does not matter.
You need to define a queue like:
<jms-queue name="exampleQueue">
<entry name="queue/exampleQueue" />
</jms-queue>
and then, to match this queue, use jms.queue.exampleQueue in the match attribute. For the MissionResult queue above, that would be jms.queue.MissionResult.
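For illustration, a sketch of how the two address-setting entries could co-exist; the DLQ names (jms.queue.missionDLQ, jms.queue.defaultDLQ) are made up for the example and not part of the original configuration:
<address-settings>
    <!-- specific entry: the best match for jms.queue.MissionResult, so it wins for that queue -->
    <address-setting match="jms.queue.MissionResult">
        <dead-letter-address>jms.queue.missionDLQ</dead-letter-address>
        <max-delivery-attempts>3</max-delivery-attempts>
    </address-setting>
    <!-- wildcard entry (# matches any number of words): applies to every other JMS queue -->
    <address-setting match="jms.queue.#">
        <dead-letter-address>jms.queue.defaultDLQ</dead-letter-address>
        <max-delivery-attempts>3</max-delivery-attempts>
    </address-setting>
</address-settings>
Both dead-letter queues would still have to be declared as jms-queue entries themselves.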
Related
I am trying to understand Spring RabbitMQ code when RabbitMQ is configured in XML files.
In the receiver XML file, I have:
<rabbit:queue id="springQueue" name="spring.queue" auto-delete="true" durable="false"/>
<rabbit:queue name="springQueue" auto-delete="true" durable="false"/>
<rabbit:listener-container connection-factory="connectionFactory">
    <rabbit:listener queues="springQueue" ref="messageListener"/>
</rabbit:listener-container>
<bean id="messageListener" class="com.ndpar.spring.rabbitmq.MessageHandler"/>
<!-- Bindings -->
<rabbit:fanout-exchange name="amq.fanout">
    <rabbit:bindings>
        <rabbit:binding queue="springQueue"/>
    </rabbit:bindings>
</rabbit:fanout-exchange>
My question is: to which queue is the exchange bound, springQueue or spring.queue? In other words, in the <rabbit:binding> tag, does the queue attribute refer to the queue id or the queue name? And in the <rabbit:listener> tag, does the queues attribute refer to the queue id or the queue name? I looked at the schemas (XSD) but couldn't get clarity.
queues (in the listener) and queue (in the binding) should refer to the queue id attribute.
In the listener, you can use the queue name in the queue-names attribute but the binding always needs the id.
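As a sketch, using the declarations from the question: both references resolve through the bean id springQueue, so the broker-side queue that actually gets bound and consumed is the one named spring.queue:
<!-- id = Spring bean id, name = queue name on the broker -->
<rabbit:queue id="springQueue" name="spring.queue" auto-delete="true" durable="false"/>

<rabbit:fanout-exchange name="amq.fanout">
    <rabbit:bindings>
        <!-- refers to the bean id "springQueue", i.e. the broker queue "spring.queue" -->
        <rabbit:binding queue="springQueue"/>
    </rabbit:bindings>
</rabbit:fanout-exchange>

<rabbit:listener-container connection-factory="connectionFactory">
    <!-- queues="..." takes bean ids; queue-names="spring.queue" would take broker queue names instead -->
    <rabbit:listener queues="springQueue" ref="messageListener"/>
</rabbit:listener-container>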
I need to optimize the CQ5 Lucene indexing configuration for my application.
I want to provide a custom search configuration, but I struggle to really understand the default configuration.
Source: https://helpx.adobe.com/experience-manager/kb/SearchIndexingConfig.html
First question:
Are the "include"-tags used in the default configuration correct?
For example:
The default configuration uses the tag "include" to include the Property "jcr:content/jcr:lastModified" for the nt:file-Aggregate
<aggregate primaryType="nt:file">
<include>jcr:content</include>
<include>jcr:content/jcr:lastModified</include>
</aggregate>
Compare this to the Jackrabbit wiki, which uses include-property for the exact same case. Source: http://wiki.apache.org/jackrabbit/IndexingConfiguration
<aggregate primaryType="nt:file">
<include>jcr:content</include>
<include-property>jcr:content/jcr:lastModified</include-property>
</aggregate>
I can only assume it doesn't matter, but I can't find any source to confirm this.
Second question: for the node type cq:PageContent, all properties up to four levels deep are aggregated.
<aggregate primaryType="cq:PageContent">
<include>*</include>
<include>*/*</include>
<include>*/*/*</include>
<include>*/*/*/*</include>
</aggregate>
I assume that, because of the aggregation, all properties contained within these four levels are indexed.
Or do I have to take into account the index rules for the node type nt:base, which basically only include properties matching the pattern .*:.*?
<index-rule nodeType="nt:base">
<property nodeScopeIndex="false">analyticsProvider</property>
<property nodeScopeIndex="false">analyticsSnippet</property>
...
<property isRegexp="true">.*:.*</property>
</index-rule>
Best regards
The default configuration is indeed incorrect, as confirmed by Adobe CQ5 Support.
For the aggregate to work correctly, properties must be included with the include-property tag.
So the default search configuration (or at least the documentation at https://helpx.adobe.com/experience-manager/kb/SearchIndexingConfig.html) is not correct.
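In other words, a corrected nt:file aggregate would follow the form quoted above from the Jackrabbit wiki:
<aggregate primaryType="nt:file">
    <!-- a child node is aggregated with include -->
    <include>jcr:content</include>
    <!-- a property has to be pulled in with include-property -->
    <include-property>jcr:content/jcr:lastModified</include-property>
</aggregate>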
I need to implement two filters that fall into the amq:discardingDLQBrokerPlugin category, and I need one to be executed before the other.
I could implement both filters' logic in one class, but since the business logic is very different, I would prefer two.
I add the filters using two different plugins, com.filter.FilterAPlugin and com.filter.FilterBPlugin. The filter execution order seems to follow a "last defined, first executed" logic.
For example, in this broker configuration:
<amq:broker useJmx="false" persistent="false" schedulerSupport="true">
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0" />
    </amq:transportConnectors>
    <amq:plugins>
        <amq:discardingDLQBrokerPlugin dropAll="true" dropTemporaryTopics="true" dropTemporaryQueues="true" />
        <bean xmlns="http://www.springframework.org/schema/beans" class="com.filter.FilterAPlugin" />
        <bean xmlns="http://www.springframework.org/schema/beans" class="com.filter.FilterBPlugin" />
    </amq:plugins>
</amq:broker>
the filter added by com.filter.FilterBPlugin is executed first.
Does the order in which the beans are declared define the execution order of the filters? I can't find any documentation about this on the ActiveMQ website.
BrokerService uses the Chain of Responsibility pattern, so the execution order is defined by the order in which the plugin objects are initialized.
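For illustration, if FilterA's logic should run first, the behaviour observed in the question ("last defined, first executed") suggests declaring it last; a sketch under that assumption:
<amq:plugins>
    <amq:discardingDLQBrokerPlugin dropAll="true" dropTemporaryTopics="true" dropTemporaryQueues="true" />
    <bean xmlns="http://www.springframework.org/schema/beans" class="com.filter.FilterBPlugin" />
    <!-- FilterAPlugin is declared last, so per the observed initialization order its filter runs first -->
    <bean xmlns="http://www.springframework.org/schema/beans" class="com.filter.FilterAPlugin" />
</amq:plugins>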
I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter rows that have already been processed.
I'm wondering, though, whether the implementation is really scalable. My Camel context is:
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="main">
        <from uri="sql:select * from transactions?dataSource=myDataSource&amp;consumer.delay=10000&amp;consumer.useIterator=true" />
        <transacted ref="PROPAGATION_REQUIRED" />
        <enrich uri="direct:invokeIdempotentTransactions" />
        <!-- Any processors here will be executed on all messages -->
    </route>
    <route id="idempotentTransactions">
        <from uri="direct:invokeIdempotentTransactions" />
        <idempotentConsumer messageIdRepositoryRef="jdbcIdempotentRepository">
            <ognl>#{request.body.ID}</ognl>
            <!-- Anything here will only be executed for non-duplicates -->
            <log message="non-duplicate" />
            <to uri="stream:out" />
        </idempotentConsumer>
    </route>
</camelContext>
It would seem that the full 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern so that the query that feeds the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has the pattern of
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
So, I've since found out that it was my OGNL expression that was causing the odd messageid column value. Changing it to
<el>${in.body.ID}</el>
has fixed it. Now that I have a usable messageid column, I can change my 'from' SQL query to
select * from transactions tr where tr.ID IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm corrupting the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is. But you need to use scalable storage for holding the set of already-processed messages. You can use either Hazelcast (http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html) or Infinispan (http://java.dzone.com/articles/clustered-idempotent-consumer), depending on which solution is already in your stack. Of course, a JDBC repository would also work, but only if it meets your performance criteria.
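As a rough sketch of the Hazelcast option from the linked tutorial (the class and factory-method names should be double-checked against your Camel and Hazelcast versions):
<!-- a Hazelcast instance; cluster members sharing the map discover each other -->
<bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast" factory-method="newHazelcastInstance" />

<!-- idempotent repository backed by a distributed Hazelcast map named "transactionProcessor" -->
<bean id="hazelcastIdempotentRepository"
      class="org.apache.camel.processor.idempotent.hazelcast.HazelcastIdempotentRepository">
    <constructor-arg ref="hazelcastInstance" />
    <constructor-arg value="transactionProcessor" />
</bean>
The route would then reference it via messageIdRepositoryRef="hazelcastIdempotentRepository" instead of the JDBC repository.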
I want to connect all topics that follow a string pattern like testTopics*.raw.
I tried doing the following:
<networkConnectors>
    <networkConnector uri="static:(tcp://localhost:62616)"
                      name="bridge"
                      conduitSubscriptions="true"
                      decreaseNetworkConsumerPriority="false"
                      destinationFilter="NO_DESTINATION">
        <staticallyIncludedDestinations>
            <topic physicalName="testTopics*.raw"/>
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>
But this didn't work. I tried looking online but haven't been able to find a way to do this.
Wildcards in destination names can only be used between separators (the . character). So while you can't do testTopics*.raw, you can do testTopics.*.raw, which will match topics with names like testTopics.foo.raw.
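For illustration, the connector from the question rewritten with a dot-separated wildcard (keeping the other attributes as posted):
<networkConnectors>
    <networkConnector uri="static:(tcp://localhost:62616)"
                      name="bridge"
                      conduitSubscriptions="true"
                      decreaseNetworkConsumerPriority="false"
                      destinationFilter="NO_DESTINATION">
        <staticallyIncludedDestinations>
            <!-- * matches exactly one dot-separated segment, e.g. testTopics.foo.raw -->
            <topic physicalName="testTopics.*.raw"/>
            <!-- alternatively, > matches everything below testTopics at any depth -->
            <!-- <topic physicalName="testTopics.>"/> -->
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>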