How to create multiple ManagedScheduledExecutorService resources in JAX-RS - jax-rs

I'm developing a service using WebSphere Liberty (wlp) and JAX-RS.
I want to run multiple schedulers in my service to do different tasks periodically.
I have installed the concurrent-1.0 feature and defined an instance of
@Resource(name = "DefaultManagedScheduledExecutorService")
private ManagedScheduledExecutorService myScheduler;
in my init class, which implements ServletContextListener.
How can I create more instances in other classes?
I can find pointers for ManagedExecutorService like:
http://www.adam-bien.com/roller/abien/entry/injecting_an_executorservice_with_java
I have tried the same with ManagedScheduledExecutorService, but it didn't work, and I'm unable to find much info on the ManagedScheduledExecutorService resource.
Please provide any links or pointers that could be useful here.

The injection example you currently have makes use of the default ManagedScheduledExecutorService, which is available once you turn on the concurrent-1.0 feature.
To configure additional ManagedScheduledExecutorService instances, you can simply define more in your server.xml configuration like this:
<managedScheduledExecutorService jndiName="concurrent/exec1"/>
<managedScheduledExecutorService jndiName="concurrent/exec2"/>
<managedScheduledExecutorService jndiName="concurrent/exec3"/>
However, there is really no reason you should need additional ManagedScheduledExecutorService instances unless they are going to have different context service configurations applied for different tasks. For example:
<managedScheduledExecutorService jndiName="concurrent/classloaderExec">
<contextService>
<classloaderContext/>
</contextService>
</managedScheduledExecutorService>
<managedScheduledExecutorService jndiName="concurrent/jeeMetadataExec">
<contextService>
<jeeMetadataContext/>
</contextService>
</managedScheduledExecutorService>
If you simply want to schedule different tasks, say myHourlyTask and myDailyTask, you can still do that with the same ManagedScheduledExecutorService resource:
myScheduler.scheduleAtFixedRate(myHourlyTask, 0, 1, TimeUnit.HOURS);
myScheduler.scheduleAtFixedRate(myDailyTask, 0, 1, TimeUnit.DAYS);
To declare and use the default ManagedScheduledExecutorService resource instance in any other class, you can look it up through JNDI:
/** The scheduler. */
private ManagedScheduledExecutorService monkeyScheduler;

try {
    monkeyScheduler = (ManagedScheduledExecutorService)
            new InitialContext().lookup("java:comp/DefaultManagedScheduledExecutorService");
} catch (NamingException e) {
    e.printStackTrace();
}

Check the page Configuring managed scheduled executors for details on how to configure a managed executor.
You should be able to use one executor; just call multiple executor.schedule* methods for your different tasks.

Related

Execute Spring Integration flows in parallel

If I have a simple IntegrationFlow like this:
@Bean
public IntegrationFlow downloadFlow() {
    return IntegrationFlows.from("rabbitQueue1")
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}
... and if the rabbitQueue1 is filled with messages,
what should I do to handle multiple messages at the same time? Is that possible?
It seems that by default, the handler executes one message at a time.
Yes, that's true: by default endpoints are wired with a DirectChannel. That's like performing plain Java instructions one by one. So, to do some work in parallel in Java, you need an Executor to shift the call to a separate thread.
The same is possible with Spring Integration via an ExecutorChannel. You can define that rabbitQueue1 as an ExecutorChannel bean or use this instead of the plain name:
IntegrationFlows.from(MessageChannels.executor("rabbitQueue1", someExecutorBean))
and all the messages arriving on this channel are going to be handled in parallel on the threads provided by the executor. That longRunningMessageHandler is then going to process your messages in parallel.
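As a rough sketch (assuming someExecutorBean is an existing Executor/TaskExecutor bean; the names are illustrative), the whole flow could look like this:
@Bean
public IntegrationFlow downloadFlow(Executor someExecutorBean) {
    // Messages from "rabbitQueue1" are handed off to the executor's threads,
    // so longRunningMessageHandler processes them in parallel.
    return IntegrationFlows.from(MessageChannels.executor("rabbitQueue1", someExecutorBean))
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}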
See more info in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#channel-implementations

Camel sql component cron schedule customization

As far as I can see, the Camel sql component does not support cron expressions, only fixed delays etc. I have checked the source code of the component, but I could not find an easy way to customize it. Is there any other way to achieve this, or should I extend the whole component (endpoint, consumer and producer) in order to make it work?
Thanks
See the documentation about the polling consumer: http://camel.apache.org/polling-consumer.html, in the section further down about scheduled poll consumers.
You can configure it to use a different scheduler, such as spring or quartz2, which has cron capabilities.
I blogged about how to do this: http://www.davsclaus.com/2013/08/apache-camel-212-even-easier-cron.html, and it should work with the sql component also.
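For example, something like this should drive the poll from a cron expression (a sketch; the SQL and the cron are placeholders, the scheduler options are the generic scheduled-poll-consumer options from that blog post, and camel-quartz2 must be on the classpath):
import org.apache.camel.builder.RouteBuilder;

public class SqlCronRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll the table every 5 minutes using the Quartz2-based cron scheduler.
        from("sql:select * from my_table"
                + "?scheduler=quartz2"
                + "&scheduler.cron=0+0/5+*+*+*+?")
            .to("log:sql-result");
    }
}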
I second @Neron's comment above. I believe this is a bug in the camel-sql component. I am currently using version 2.16.2, but I don't see any changes in a higher version that would have resolved this.
For those interested, you can work around this by creating a subclass of SqlComponent like this:
import java.util.Map;

import org.apache.camel.Endpoint;
import org.apache.camel.component.sql.SqlComponent;

public class SQLComponentPatched extends SqlComponent {

    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        Endpoint endpoint = super.createEndpoint(uri, remaining, parameters);
        // Apply the remaining URI parameters (e.g. the scheduler options) to the endpoint.
        setProperties(endpoint, parameters);
        return endpoint;
    }
}
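You can then register the patched component under a name of your choice (the name "sql-patched" here is just an example) and use it in your endpoint URIs:
CamelContext context = new DefaultCamelContext();
context.addComponent("sql-patched", new SQLComponentPatched());
// routes can now use endpoints such as "sql-patched:select * from my_table?..."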

advice handler on aws outbound channel adapter

In the past I have been able to apply advice chain handlers on different outbound channel adapters. I am trying to do the same on int-aws:s3-outbound-channel-adapter, but it's not allowing that. Does this component not allow this behavior? Basically I am interested in finding out when the adapter completes the upload of a file to S3.
<int-aws:s3-outbound-channel-adapter
        id="s3-outbound" channel="files" bucket="${s3.bucket}"
        multipart-upload-threshold="5192" remote-directory="${s3.remote.dir}"
        accessKey="${accessKey}" secretKey="${secretKey}">
    <!-- THIS DOESN'T WORK - it throws an error! -->
    <int:request-handler-advice-chain>
    </int:request-handler-advice-chain>
</int-aws:s3-outbound-channel-adapter>
Right, that isn't allowed by the XSD. Feel free to raise a JIRA on the matter.
But that doesn't mean it doesn't work at all.
If you are already on Spring Integration 4.x, you can move that <int-aws:s3-outbound-channel-adapter> to Java and annotation configuration, using @Bean and @ServiceActivator for the AmazonS3MessageHandler.
@ServiceActivator has an adviceChain attribute to specify bean references to your advices.
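A minimal sketch of that Java configuration (the advice bean name uploadAdvice and the injected s3Handler are illustrative; the AmazonS3MessageHandler itself is configured as a separate bean with the same bucket, credentials and multipart-upload-threshold as in the XML):
@Bean
@ServiceActivator(inputChannel = "files", adviceChain = "uploadAdvice")
public MessageHandler s3Outbound(AmazonS3MessageHandler s3Handler) {
    // The adviceChain attribute wires the "uploadAdvice" bean around this handler,
    // so your advice is invoked when the upload completes (or fails).
    return s3Handler;
}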
... or you can do that with a generic <int:outbound-channel-adapter> and specify the AmazonS3MessageHandler as a raw <bean> referenced from its ref attribute.
HTH

@ValidateConnection method is failing to be called when using "@Category component"

I have an issue in a new DevKit project where the following @ValidateConnection method is failing to be called (but my @Processor methods are called fine when requested in the flows):
@ValidateConnection
public boolean isConnected() {
    return isConnected;
}
I thought that the above should be called to check whether to call the @Connect method.
I think it is because I am using a non-default category (Components) for the connector:
@Category(name = "org.mule.tooling.category.core", description = "Components")
And the resulting behaviour is different to what I am used to with DevKit in Cloud Connector mode.
I guess I will need to do checks in each @Processor for now to see if the initialization logic is done, as there doesn't seem to be an easy way to run a one-time config.
EDIT:
I actually tried porting it back to the Cloud Connector category and got the same behaviour, so maybe it's an issue with devkit -DarchetypeVersion=3.4.0. I used a 3.2.x version before and things worked a bit better.
The @ValidateConnection annotated method in the @Connector is called at the end of the makeObject() method of the generated *ConnectionFactory class. If you look for references to see who is calling your isConnected(), you should be able to confirm this.
So no, you should not need to perform the checks; it should be done automatically for you.
There must be something else missing... do you have a @ConnectionIdentifier annotated method?
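For reference, a typical DevKit connection-management section looks roughly like this (a sketch; the connection logic and identifier value are placeholders):
@Connect
public void connect(@ConnectionKey String username, String password) throws ConnectionException {
    // establish the session / client here
    isConnected = true;
}

@Disconnect
public void disconnect() {
    isConnected = false;
}

@ValidateConnection
public boolean isConnected() {
    return isConnected;
}

@ConnectionIdentifier
public String connectionId() {
    // any value that uniquely identifies the live connection
    return "001";
}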
PS. The @Category annotation is purely for cosmetic purposes in Studio.

Considerations for making multiple handlers in the same Apache module or having separate modules for each handler

I am writing an application where there are a bunch of handlers.
I am trying to decide whether I should package these handlers within the same Apache module or have a separate module for each handler.
I agree this is a generic question and depends on my app, but I would like to know the general considerations I have to make and the trade-offs of each approach.
It would be really good if somebody could tell me the advantages/disadvantages of both approaches.
You have not specified whether all these handlers need to perform interrelated tasks or whether they are going to work independently of each other.
I would go for keeping the related handlers in the same module, and the rest of them in their own modules. I believe it makes the configuration of the server easy (we can easily load/unload a module as required) and the code base remains well managed too.
For instance, suppose we needed two handlers that share some data; then we could keep them in the same module:
static int my_early_hook(request_rec *r) {
    req_cfg *mycfg = apr_palloc(r->pool, sizeof(req_cfg));
    ap_set_module_config(r->request_config, &my_module, mycfg);
    /* set module data */
    return OK;
}

static int my_later_hook(request_rec *r) {
    req_cfg *mycfg = ap_get_module_config(r->request_config, &my_module);
    /* access data */
    return OK;
}