I'm trying to apply a TTL to one of my queues via broker configuration, using the following plugin: timeStampingBrokerPlugin.
It's the first time I've had to use a plugin, so I tried adding the following code into a policyEntry section:
<policyEntry queue="test1">
    <plugins>
        <timeStampingBrokerPlugin ttlCeiling="10400" />
    </plugins>
</policyEntry>
But the plugins tag is not accepted there. How can I apply a plugin to only some queues or topics?
It's not possible to do so with the stock timestamp plugin. You can, of course, use the plugin's code as a template for writing your own plugin that looks at each message's destination to see where it's headed, and then applies a TTL (or whatever else) based on that or some other criteria.
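If you go that route, here's a minimal sketch of such a plugin (the class name, setter names, and package are assumptions; the millisecond units mirror the stock TimeStampingBrokerPlugin):

import org.apache.activemq.broker.BrokerPluginSupport;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.ActiveMQDestination;
import org.apache.activemq.command.Message;

// Hypothetical plugin: applies a TTL ceiling to one named queue only,
// leaving every other destination untouched.
public class PerDestinationTtlPlugin extends BrokerPluginSupport {

    private String queueName = "test1"; // the queue to limit (assumed setting)
    private long ttlCeiling = 10400L;   // ceiling in milliseconds, like the stock plugin

    @Override
    public void send(ProducerBrokerExchange producerExchange, Message message) throws Exception {
        ActiveMQDestination dest = message.getDestination();
        if (dest != null && dest.isQueue() && queueName.equals(dest.getPhysicalName())) {
            long cappedExpiration = message.getTimestamp() + ttlCeiling;
            // Only tighten an expiration; never extend one the producer already set.
            if (message.getExpiration() == 0 || message.getExpiration() > cappedExpiration) {
                message.setExpiration(cappedExpiration);
            }
        }
        super.send(producerExchange, message);
    }

    public void setQueueName(String queueName) { this.queueName = queueName; }
    public void setTtlCeiling(long ttlCeiling) { this.ttlCeiling = ttlCeiling; }
}

It would then be registered at the broker level, since a <plugins> element is only valid directly under <broker> (which is why it was rejected inside the policyEntry):

<broker xmlns="http://activemq.apache.org/schema/core">
    <plugins>
        <!-- class/package are illustrative -->
        <bean xmlns="http://www.springframework.org/schema/beans"
              class="com.example.PerDestinationTtlPlugin">
            <property name="queueName" value="test1"/>
            <property name="ttlCeiling" value="10400"/>
        </bean>
    </plugins>
</broker>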
With Spring Cloud Stream, you can avoid redundant properties for each individual channel by specifying "default" properties.
For example, if I have 2 channels bound to the same destination/exchange, I can do:
spring.cloud.stream.default.destination=myExchange
spring.cloud.stream.bindings.myChannel1.group=queue1
spring.cloud.stream.bindings.myChannel2.group=queue2
And queue1 and queue2 will both be bound to myExchange.
That works as documented, and I do it for some properties.
But... I'd like to do the same for RabbitMQ binding properties.
For example, if I want DLQ for all of my consumers/queues, do something like:
spring.cloud.stream.rabbit.default.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.default.consumer.dlq-ttl=10000
spring.cloud.stream.rabbit.default.consumer.dlq-dead-letter-exchange=
Otherwise, I have to specify those same 3 lines for every channel.
Is there any way to do this? I've tried several different permutations to no avail.
BTW, I'm on version 1.2.1.RELEASE of spring-cloud-starter-stream-rabbit.
Thanks.
It is supported. Please see the https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#binding-properties section of the user guide:
To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>
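For example (the contentType property here is just illustrative), a value declared once as a default still yields to any binding-specific setting:

# Applies to every binding that doesn't override it:
spring.cloud.stream.default.contentType=application/json
# A per-binding value still wins over the default:
spring.cloud.stream.bindings.myChannel1.contentType=text/plain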
According to Spring Cloud Stream documentation, it is possible since version 2.1.0.RELEASE.
See 9.2 Binding Properties.
When it comes to avoiding repetitions for extended binding properties,
this format should be used:
spring.cloud.stream.<binder-type>.default.<producer|consumer>.<property>=<value>
Unfortunately, so far I couldn't make it work. Did anyone get it to work?
It is not supported yet.
See 3.2. RabbitMQ Consumer Properties
The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer.
This includes the ttl/dlq* properties.
I have this enabled in my database config, which lets me fetch audit logs through JPA methods; spring-data-envers is used in the POM for this:
@EnableJpaRepositories(
    repositoryFactoryBeanClass = EnversRevisionRepositoryFactoryBean.class)
Now I want to use jQuery DataTables' back-end processing. For this I will be using spring-data-jpa-datatables in my POM:
@EnableJpaRepositories(repositoryFactoryBeanClass = DataTablesRepositoryFactoryBean.class)
How can I use both of them in one single project?
DataTablesRepositoryFactoryBean seems to be rather simple.
It performs a simple check and then either does its own thing or invokes super, i.e. JpaRepositoryFactoryBean.
By reimplementing that check but inheriting from EnversRevisionRepositoryFactoryBean instead, you should be able to use both in one project, as sketched below.
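A sketch of that idea (assuming the Spring Data 2.x-style constructor that receives the repository interface; exact generics and constructors vary across versions of both libraries, so treat this as a starting point rather than a drop-in):

import javax.persistence.EntityManager;

import org.springframework.data.envers.repository.support.EnversRevisionRepositoryFactoryBean;
import org.springframework.data.jpa.datatables.repository.DataTablesRepository;
import org.springframework.data.jpa.datatables.repository.DataTablesRepositoryImpl;
import org.springframework.data.jpa.repository.support.JpaRepositoryFactory;
import org.springframework.data.repository.core.RepositoryMetadata;
import org.springframework.data.repository.core.support.RepositoryFactorySupport;

// One factory bean instance is created per repository interface, so we can
// pick the factory per interface: DataTables repositories get the DataTables
// base class, everything else falls through to the Envers-aware factory.
@SuppressWarnings({ "rawtypes", "unchecked" })
public class EnversDataTablesRepositoryFactoryBean extends EnversRevisionRepositoryFactoryBean {

    private final Class<?> repositoryInterface;

    // Assumes the Spring Data 2.x constructor; older versions used a no-arg bean.
    public EnversDataTablesRepositoryFactoryBean(Class repositoryInterface) {
        super(repositoryInterface);
        this.repositoryInterface = repositoryInterface;
    }

    @Override
    protected RepositoryFactorySupport createRepositoryFactory(EntityManager entityManager) {
        if (DataTablesRepository.class.isAssignableFrom(repositoryInterface)) {
            // The same kind of check DataTablesRepositoryFactoryBean performs internally.
            return new JpaRepositoryFactory(entityManager) {
                @Override
                protected Class<?> getRepositoryBaseClass(RepositoryMetadata metadata) {
                    return DataTablesRepositoryImpl.class;
                }
            };
        }
        return super.createRepositoryFactory(entityManager); // Envers handles the rest
    }
}

Then point the annotation at the combined class: @EnableJpaRepositories(repositoryFactoryBeanClass = EnversDataTablesRepositoryFactoryBean.class).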
I've been playing with Spring Integration, and I can't see how best to solve the following problem.
Say I have XML messages arriving on a channel. These messages may have arbitrary structure, and I want to convert them to my canonical form, so I think I want to write a custom converter for each type of structure, letting me do whatever processing and error-checking I want.
The obvious approach is to wire up a router that looks at each message and routes it to the appropriate converter, but I think this means hard-coding the processing flow with a channel pointing at each converter.
I'd like to avoid hard-configuring the individual converters and routing logic. The alternative that springs to mind is a set of converters that each implement some kind of boolean canHandle(message), so that we show the message to each converter in turn until one 'claims' it or we run out. This way, it seems I could annotate new converters into the configuration without actually modifying the processing flow.
I'm new to Spring Integration and I may well be mis-thinking this. Is there a stock way to do this in Spring Integration? Have I missed something, or am I going about it all wrong?
There are a number of ways to do this. The first one that came to mind is a recipient list router with selector expressions:
<recipient-list-router id="simpleDynamicRouter" input-channel="simpleDynamicInput">
    <recipient selector-expression="@handler1.canHandle(payload)" channel="toHandler1"/>
    <recipient selector-expression="@handler2.canHandle(payload)" channel="toHandler2"/>
    <recipient selector-expression="@handler3.canHandle(payload)" channel="toHandler3"/>
</recipient-list-router>
<transformer ... ref="handler1" />
<transformer ... ref="handler2" />
<transformer ... ref="handler3" />
Where handler1 etc. are <bean/>s with your implementation and its canHandle() method.
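For completeness, each handler might look something like this (the class name and the structure check are illustrative; @Transformer marks the method that <transformer ref="handler1"/> invokes):

import org.springframework.integration.annotation.Transformer;

public class Handler1 {

    // Called by the router's selector-expression to "claim" a message.
    public boolean canHandle(String payload) {
        return payload.startsWith("<typeA"); // hypothetical structure check
    }

    // Runs once the router has sent the message to this handler's channel.
    @Transformer
    public String transform(String payload) {
        return "<canonical>" + payload + "</canonical>"; // stub conversion
    }
}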
Another option is to write your own custom dynamic router; there's an example of how to do that here: https://github.com/SpringSource/spring-integration-samples/tree/master/advanced/dynamic-ftp
I've got an outstanding issue in jasmine-maven-plugin and I can't figure it out.
You're welcome to try this out yourself, but the gist is that when one runs:
mvn jasmine:test
The properties configured in the pom.xml for the plugin are not set on the Mojo bean.
Upon inspection it's pretty clear that each property on the bean is falling back on its default value. However, when you run the test phase itself (which jasmine:test is bound to), like:
mvn test
It works fine.
Any ideas? The preamble at the top of the TestMojo looks like:
/**
 * @component
 * @goal test
 * @phase test
 * @execute lifecycle="jasmine-lifecycle" phase="process-test-resources"
 */
Update: Now I'm even more confused. Upon further reading, this behavior seems really unexpected, since the configuration I'm seeing go missing is done in a <configuration> element directly under the plugin, not under an <execution/>, per this document:
Note: Configurations inside the <executions> tag differ from those that are outside <executions> in that they cannot be used from a direct command line invocation. Instead they are only applied when the lifecycle phase they are bound to is invoked. Alternatively, if you move a configuration section outside of the executions section, it will apply globally to all invocations of the plugin.
And of course I'm an idiot. I was looking at the wrong POM, and sure enough the configuration was inside an <execution> block.
So I'll try to feed Google by answering my own question in big bold letters:
When you invoke a Maven goal from the command line, it will only pick up your pom.xml's configuration element if that configuration was made directly under the <plugin/> element, and not under any <execution/> element.
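In POM terms (comments mark which invocation sees which block):

<plugin>
    <groupId>com.github.searls</groupId>
    <artifactId>jasmine-maven-plugin</artifactId>
    <configuration>
        <!-- picked up by `mvn jasmine:test` AND by the bound phase -->
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>test</goal>
            </goals>
            <configuration>
                <!-- picked up ONLY when the bound lifecycle phase runs, e.g. `mvn test` -->
            </configuration>
        </execution>
    </executions>
</plugin>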
I'm looking for a way to create meta-profiles that just activate sub-profiles in Maven. Let's take a very concrete example. I have the following profiles:
"server-jboss"
"server-tomcat"
"database-hsql"
"database-oracle"
To build the project, you have to choose one profile for the server and one for the database. I want to create two "meta-profiles":
"dev" => "server-tomcat","database-hsql"
"prod" => "server-jboss","database-oracle"
The first idea that comes to mind is to activate the sub-profiles by a property:
<profile>
    <id>database-oracle</id>
    <activation>
        <property>
            <name>prod</name>
        </property>
    </activation>
</profile>
But this way, I cannot share sub-profiles between meta-profiles. For example, I want my profile "database-oracle" to be activated by both the "pre-prod" and "prod" meta-profiles.
Note: my sub-profiles just contain properties. They are used for filtering resources and in the child poms. This is why I think there could be a solution for this particular situation.
The ideal situation for me would be to have them externalized in properties files, but one issue at a time ;)
Activating profiles from another profile is not possible (this has been discussed in this previous question). Your first idea, using identical properties to activate different profiles, is the best thing you can implement, but it does indeed have limitations.
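One way to stretch the property idea a little further is to match on a property value rather than mere presence, so that several "modes" can share a sub-profile (the property names here are illustrative):

<profile>
    <id>database-oracle</id>
    <activation>
        <property>
            <name>database</name>
            <value>oracle</value>
        </property>
    </activation>
</profile>

Both a prod and a pre-prod invocation can then pass the same flag, e.g. mvn install -Ddatabase=oracle -Dserver=jboss, sharing the sub-profile without duplicating it.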
Have you tried a solution using the properties-maven-plugin? Some possibilities are discussed in this question and here.