ActiveMQ: autodiscover all deployed queues and topics using JNDI - activemq

Is it possible to discover all ActiveMQ queues and topics via JNDI? With HornetQ it was possible to fetch them all using the "list" method. I would like to implement a JMS client for multiple brokers and don't want to pre-configure all queues in jndi.properties.
Properties props = new Properties();
props.setProperty("java.naming.factory.initial", "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
props.setProperty("java.naming.provider.url", "tcp://localhost:61616");
Context context = new InitialContext(props);
NamingEnumeration<NameClassPair> names = context.list(jndiPrefix);

The ActiveMQ InitialContext factory is backed by a simple hash map that is unrelated to the broker, because the broker will create any destination on demand unless authorization prevents that behaviour.
You can use the dynamic contexts - dynamicQueues/FOO.BAR or dynamicTopics/FOO.BAR - to access a destination named FOO.BAR without any additional configuration.
See the JNDI Support section of the ActiveMQ documentation for more detail.
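For illustration, a hedged sketch of that dynamic-context lookup, reusing the question's connection settings (FOO.BAR is just a placeholder destination name):
import java.util.Properties;
import javax.jms.Destination;
import javax.naming.Context;
import javax.naming.InitialContext;

Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,
        "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
props.setProperty(Context.PROVIDER_URL, "tcp://localhost:61616");
Context context = new InitialContext(props);

// No per-destination JNDI configuration needed: the name after dynamicQueues/
// (or dynamicTopics/) becomes the destination name on the broker.
Destination queue = (Destination) context.lookup("dynamicQueues/FOO.BAR");
Destination topic = (Destination) context.lookup("dynamicTopics/FOO.BAR");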

Related

Using Multiple Topic Configs in one Producer with Spring Reactive Kafka

I am new to Kafka and we are using Spring WebFlux in the application. We have a requirement to push two different messages to two different topics, say T1 and T2. The Kafka broker is the same.
We are using ReactiveKafkaProducerTemplate and it is working fine.
@Bean
public ReactiveKafkaProducerTemplate<String, Object> reactiveKafkaProducerTemplate(
        KafkaProperties properties) {
    final Map<String, Object> props = properties.buildProducerProperties();
    return new ReactiveKafkaProducerTemplate<String, Object>(SenderOptions.create(props));
}
Now we have a requirement to compress the content of only one topic [T1], as the message size is larger on topic T1.
Do we have something like RoutingKafkaTemplate support in Reactive Kafka or Project Reactor, where we can modify the producer config per topic?
No, there is no equivalent; you need to configure two templates with different producer configs.
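A hedged sketch of that, assuming gzip compression for T1; the bean names are chosen here purely for illustration:
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
import reactor.kafka.sender.SenderOptions;

@Bean
public ReactiveKafkaProducerTemplate<String, Object> defaultProducerTemplate(KafkaProperties properties) {
    // Plain producer config, used when sending to topic T2.
    final Map<String, Object> props = properties.buildProducerProperties();
    return new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));
}

@Bean
public ReactiveKafkaProducerTemplate<String, Object> compressingProducerTemplate(KafkaProperties properties) {
    // Same base config plus compression, used only when sending to topic T1.
    final Map<String, Object> props = properties.buildProducerProperties();
    props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
    return new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));
}
When sending, inject whichever template matches the target topic.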

Sleuth traceId and spanId not logged in ActiveMQ listener

I'm trying to configure a microservice with Sleuth and ActiveMQ.
When starting a request I can properly see appName, traceId and spanId in the producer's logs, but after dequeuing the message in the listener I find only appName, without traceId and spanId.
How can I get these fields filled?
Right now I'm working with spring.sleuth.messaging.jms.enabled=false to avoid this exception at startup:
Bean named 'connectionFactory' is expected to be of type 'org.apache.activemq.ActiveMQConnectionFactory' but was actually of type 'org.springframework.cloud.sleuth.instrument.messaging.LazyConnectionFactory'
My dependencies:
org.springframework.boot:spring-boot-starter-activemq 2.5.1
org.springframework.cloud:spring-cloud-sleuth 3.0.3
Thank you all!
My understanding is that the properties you're looking for are set on the JMS message when the message is sent and then retrieved from the message when it is consumed. Since you're setting spring.sleuth.messaging.jms.enabled=false you're disabling this functionality. See the documentation which states:
We instrument the JmsTemplate so that tracing headers get injected into the message. We also support @JmsListener annotated methods on the consumer side.
To block this feature, set spring.sleuth.messaging.jms.enabled to false.
You'll need to find an alternate solution for the connection factory problem if you want to use Sleuth with Spring JMS. If you're injecting org.apache.activemq.ActiveMQConnectionFactory somewhere then you should almost certainly be using javax.jms.ConnectionFactory instead. Using the concrete type is bad for portability and use-cases like this where wrapper implementations are used dynamically.
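As a concrete illustration of that last point, a hedged sketch (JmsConfig is a hypothetical class; Spring Boot also auto-configures a JmsTemplate for you):
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    // Depend on the javax.jms.ConnectionFactory interface rather than on
    // org.apache.activemq.ActiveMQConnectionFactory, so Sleuth's wrapping
    // LazyConnectionFactory can be injected without the type mismatch above.
    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        return new JmsTemplate(connectionFactory);
    }
}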

How can we use @RabbitListener and @JmsListener alternatively based on env?

Actually, I have an on-premises Spring Boot application which consumes RabbitMQ messages using @RabbitListener, and I have migrated the same application to Azure, where it consumes Azure Service Bus messages using @JmsListener.
We maintain the same code for both on-premises and Azure. So, because of these two listeners, I'm planning to replicate the same consumer code in two different classes with the same content but two different listener annotations.
consumer with JmsListener:
@JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
public Message processMessage(@Payload final String message) {
    //do stuff with same content
}
consumer with RabbitListener:
@RabbitListener(queues = "${app.rabbitmq.queue}")
public Message processMessage(@Payload final String message) {
    //do stuff with same content
}
Is there any possibility of avoiding the duplicate code in two classes? How can we handle the listeners on the fly with only one consumer? Can anyone please suggest?
You can add both annotations to the same method, with the autoStartup property set according to which Spring profile is active.
For @RabbitListener there is an autoStartup property on the annotation itself, but in both cases there are also Spring Boot auto-startup properties to control whether the container starts or not.
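A hedged sketch of that idea; the rabbit.listener.enabled property and the container-factory name are assumptions, while spring.jms.listener.auto-startup and spring.rabbitmq.listener.simple.auto-startup are the standard Spring Boot properties you would set per profile:
// Both annotations on the same method, so the processing code is written once.
// Which container actually runs is decided per environment:
//  - @RabbitListener exposes autoStartup directly on the annotation (bound here to a
//    hypothetical rabbit.listener.enabled property),
//  - the JMS container is switched via the Boot property spring.jms.listener.auto-startup.
@JmsListener(destination = "${queue}", concurrency = "${threads}", containerFactory = "defaultContainer")
@RabbitListener(queues = "${app.rabbitmq.queue}", autoStartup = "${rabbit.listener.enabled:false}")
public void processMessage(@Payload final String message) {
    // same processing logic for both brokers
}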

WebClient instrumentation in spring sleuth

I'm wondering whether Sleuth has reactive WebClient instrumentation support.
I didn't find it in the documentation:
Instruments common ingress and egress points from Spring applications (servlet filter, async endpoints, rest template, scheduled actions, message channels, Zuul filters, and Feign client).
My case:
I may use WebClient in either a WebFilter or my REST resource to produce a Mono.
And I want:
a sub-span automatically created as a child of the root span
trace info propagated via headers
If the instrumentation is not supported at the moment, am I supposed to manually get the span from the context and do it myself, as in:
OpenTracing instrumentation on reactive WebClient
Thanks
Leon
Even though this is an old question, this may help others...
WebClient instrumentation only works if the instance is created via Spring as a bean. Check the Spring Cloud Sleuth reference guide:
You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with the new keyword, the instrumentation does NOT work.
If you go to Sleuth's documentation for the Finchley release train and search for WebClient, you'll find it - https://cloud.spring.io/spring-cloud-static/Finchley.RC2/single/spring-cloud.html#__literal_webclient_literal . In other words, we do support it out of the box.
UPDATE:
New link - https://docs.spring.io/spring-cloud-sleuth/docs/current/reference/html/integrations.html#sleuth-http-client-webclient-integration
Let me paste the contents:
3.2.2. WebClient
This feature is available for all tracer implementations.
We inject an ExchangeFilterFunction implementation that creates a span and, through on-success and on-error callbacks, takes care of closing client-side spans.
To block this feature, set spring.sleuth.web.client.enabled to false.
You have to register WebClient as a bean so that the tracing instrumentation gets applied. If you create a WebClient instance with the new keyword, the instrumentation does NOT work.
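For illustration, a hedged sketch of the bean registration the documentation asks for (the configuration class name and base URL are placeholders):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    // Registering WebClient as a bean (built from the auto-configured builder) lets
    // Sleuth apply its tracing ExchangeFilterFunction. Creating the client yourself
    // with "new" / WebClient.create() inside business code bypasses the instrumentation.
    @Bean
    public WebClient tracedWebClient(WebClient.Builder builder) {
        return builder.baseUrl("http://downstream-service").build();
    }
}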

How to use a dynamic URI in From()

Apache Camel allows a dynamic URI in to(); does it also allow a dynamic URI in from()?
I need to poll multiple FTP locations to download files, based on configuration that I am going to store in a database
(FTPHost, FTPUser, FTPPassword, FTPSourceDir, FTPDestDir).
I will read this configuration from the DB and pass it to the Camel route dynamically at runtime.
Example:
This is the Camel route that I need to build dynamically:
<route>
    <from uri="ftp://${ftpUser}@${ftpHost}:${ftpPort}/${FTPSourceDir}?password=${ftpPassword}&amp;delete=true"/>
    <to uri="${ftpDestinationDir}"/>
</route>
As you see in the example, I need to pass the mentioned parameters dynamically.
So how do I use a dynamic URI in from()?
You can read it from a properties file as follows:
<bean id="bridgePropertyPlaceholder" class="org.apache.camel.spring.spi.BridgePropertyPlaceholderConfigurer">
    <property name="location" value="classpath:/config/Test.properties"/>
</bean>
<route>
    <from uri="ftp://{{ftpUser}}@{{ftpHost}}:{{ftpPort}}/{{FTPSourceDir}}?password={{ftpPassword}}&amp;delete=true"/>
    <to uri="{{ftpDestinationDir}}"/>
</route>
ftpUser, ftpHost, ... are all keys declared in Test.properties.
If you want to get those values from your exchange dynamically, you cannot do it the regular way as in your example. You have to use a consumer template, as follows:
Exchange exchange = consumerTemplate.receive("ftp:" + url);
producerTemplate.send("direct:uploadFileFTP", exchange);
You have to do that from a Spring bean or a Camel processor. The consumer template will consume from the given endpoint, and the producer template will invoke the direct: endpoint declared in your camel-context.xml.
Note: consumer and producer templates are a bit costly; you can inject both via the Spring container and let Spring handle their life cycle.
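A hedged sketch of that approach, assuming camel-spring-boot (or explicitly declared template beans) so the templates can be injected, that the route "direct:uploadFileFTP" exists, and that FtpConfig is a hypothetical holder for the values read from the database:
import org.apache.camel.ConsumerTemplate;
import org.apache.camel.Exchange;
import org.apache.camel.ProducerTemplate;
import org.springframework.stereotype.Component;

@Component
public class FtpPoller {

    private final ConsumerTemplate consumerTemplate;
    private final ProducerTemplate producerTemplate;

    public FtpPoller(ConsumerTemplate consumerTemplate, ProducerTemplate producerTemplate) {
        // Let the container manage the templates' life cycle instead of creating them per call.
        this.consumerTemplate = consumerTemplate;
        this.producerTemplate = producerTemplate;
    }

    public void pollOnce(FtpConfig cfg) {
        // Build the FTP URI from the values read from the database (FtpConfig is hypothetical).
        String uri = "ftp://" + cfg.getUser() + "@" + cfg.getHost() + ":" + cfg.getPort()
                + "/" + cfg.getSourceDir() + "?password=" + cfg.getPassword() + "&delete=true";
        Exchange exchange = consumerTemplate.receive(uri);
        if (exchange != null) {
            producerTemplate.send("direct:uploadFileFTP", exchange);
        }
    }
}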
From Camel 2.16 onwards, we can use pollEnrich to define a polling consumer (file, FTP, etc.) with a dynamic URI/parameter value, like below:
<route>
<from uri="direct:start"/>
<pollEnrich>
<simple>file:inbox?fileName=${body.fileName}</simple>
</pollEnrich>
<to uri="direct:result"/>
</route>
It's awesome!
Refer to: http://camel.apache.org/content-enricher.html
I help a team who operates a message broker switching about a million messages per day. There are over 50 destinations from which we have to poll files over every flavour of file sharing (FTP, SFTP, NFS/file:, ...). Maintaining up to 50 deployments that each listen to a different local/remote directory is indeed an overhead compared with a single FILE connector capable of polling files in the 50 places according to the specific schedule and security settings of each. Same story for getting e-mail from POP3 and IMAP mailboxes.
In Camel, the outline of a solution is as follows:
you have no choice but to use the Java DSL to configure at least the from() part of your routes with a URI that you can indeed read/build from a database or get from an admin request to initiate a new route. The XML DSL only allows injecting properties that are resolved once, when the Camel context is built, and never again afterwards.
the basic idea is to start routes, let them run (listen to or poll a precise resource), and then shut them down and rebuild them on demand, using the Camel context APIs to manage the state of RouteDefinitions, Routes, and possibly Endpoints.
personally, I like to implement such dynamic from() instantiation with minimalist routes holding just the 'from' part, i.e. from(uri).to("direct:inboundQueue").routeId("myRoute"), and then define - in Java or XML - a common route chunk that handles the rest of the process: from("direct:inboundQueue").process(..) ... .to(outUri).
I strongly advise combining Camel with the Spring framework, and in particular Spring MVC (or Spring Integration's HTTP gateway), so that you can quickly build REST, SOAP, HTTP/JSP, or JMX bean interfaces to administer route creation, destruction, and updates within a Spring + Camel container, both nicely integrated.
You can then declare in the Spring application context a bean that extends SpringRouteBuilder, as usual when building Camel routes with the Java DSL in Spring. In the compulsory @Override configure() method implementation, you save the routeDefinition object built by the from(uri) method and assign it a known String route-id with the .routeId(route-id) method; you may, for instance, use the route-id as a key in a Map of the route definition objects already created and started, as well as a key in your DB of URIs.
then you extend the SpringRouteBuilder bean you have declared with new methods createRoute(route-id), updateRoute(route-id), and removeRoute(route-id). The parameters needed for create or update are fetched from the database or another registry, and the relevant method, running within the RouteBuilder bean, uses getContext() to retrieve the current ModelCamelContext, which in turn is used to stopRoute(route-id), removeRoute(route-id), then addRouteDefinition (here is where you need the routeDefinition object), and finally startRoute(route-id). (Note: beware of possible ghost Endpoints that would not be removed, as explained in the removeRoute() javadoc.)
your administrative interface (which typically takes the form of a Spring @Controller component/bean that handles the HTTP/REST/SOAP traffic) will have an easy job getting the previously created SpringRouteBuilder extension bean injected by Spring, and thus access to the createRoute(route-id), updateRoute(route-id), and removeRoute(route-id) methods you added to it.
And that works nicely. The exact implementation, with all the error handling and validation that applies, is a bit too much code to post here, but you have all the links to the relevant how-tos above; a minimal sketch of the core idea follows.
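A hedged sketch of that outline, assuming Camel 2.x APIs as described above; the class name, queue names, and method names are placeholders, and all error handling and validation are omitted:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.ModelCamelContext;
import org.apache.camel.spring.SpringRouteBuilder;

public class DynamicFtpRoutes extends SpringRouteBuilder {

    @Override
    public void configure() throws Exception {
        // Common "rest of the process" chunk; the dynamic FTP routes only feed this queue.
        from("direct:inboundQueue")
            .log("Got ${header.CamelFileName}")
            .to("file:outbox");
    }

    /** Build and start a minimalist from(uri) route for one FTP configuration read from the DB. */
    public void createRoute(final String routeId, final String ftpUri) throws Exception {
        getContext().addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from(ftpUri).routeId(routeId).to("direct:inboundQueue");
            }
        });
    }

    /** Stop and remove a route previously created with createRoute(). */
    public void removeRoute(String routeId) throws Exception {
        ModelCamelContext ctx = (ModelCamelContext) getContext();
        ctx.stopRoute(routeId);   // Camel 2.x API
        ctx.removeRoute(routeId); // beware of ghost Endpoints, per the removeRoute() javadoc
    }
}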
I think you can implement your requirement within a Camel route.
Because you want to poll multiple FTP sites you'll have to somehow trigger this process. Maybe you could do this based on a Quartz2 timer. Once triggered you could read the configured FTP sites from your database.
In order to poll the given FTP sites you can use the Content Enricher pattern to poll (see: pollEnrich) a dynamically evaluated URI.
Your final basic route may look something like this (pseudocode):
from("quarz...")
to("sql...")
pollEnrich("ftp...")
...
Use a Camel endpoint with a Spring SpEL expression.
Set up a Camel endpoint in the context so it can be accessed from any bean:
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
<endpoint id="inventoryQueue" uri="#{config.jms.inventoryQueueFromUri}"/>
</camelContext>
Now you can reference the inventoryQueue endpoint within the `@Consume` annotation as follows:
@org.apache.camel.Consume(ref = "inventoryQueue")
public void updateInventory(Inventory inventory) {
// update
}
Or:
<route>
<from ref="inventoryQueue"/>
<to uri="jms:incomingOrders"/>
</route>