While migrating a JBoss 5 application to JBoss AS 7 (7.1.1.Final) I have a problem with a new JMS message-driven EJB. Within message processing, some master data fields have to be checked. To enhance performance, this master data shall be preloaded into a cache structure using a @Singleton @Startup EJB, which needs about 30 seconds to load the data.
My problem is that queue message processing starts even if the cache has not been fully initialized, causing message validation errors.
I tried to define a dependency between the MDB and the startup EJB, but as far as I understood, the @DependsOn annotation works only with @Singleton EJBs. So it's clear that my solution does not work ;-)
Startup bean code:
@Singleton
@Startup
public class StartupBean {

    @PostConstruct
    void atStartup() {
        // TODO load master data cache (takes about 30 seconds)
    }

    @PreDestroy
    void atShutdown() {
        // TODO free master data cache
    }
}
Note: I stripped the real code from the example to make it easier to read :-)
Message-driven bean code:
@MessageDriven(name = "SampleMessagingBean", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/SampleQueue"),
    @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")
})
@DependsOn("StartupBean")
public class SampleMessagingBean implements MessageListener {

    public void onMessage(Message message) {
        // TODO validate message using master data cache
    }
}
Question: How can I suspend message processing until the startup bean has finished loading the cache?
Any suggestions greatly appreciated :-)!
First I thought injecting the singleton EJB into the MDB would be enough to delay message consumption.
But no, sometimes it would start consuming messages before the singleton EJB's @PostConstruct had completed. So I added a method invocation as well, and it started working.
This worked on GlassFish, but I don't see a reason why it shouldn't work on JBoss.
Singleton-Ejb:
@Singleton
@Startup
public class SingletonBean {

    private Logger logger = Logger.getLogger(getClass().getName());
    private boolean init = false;

    public boolean isInit() {
        return init;
    }

    @PostConstruct
    public void init() {
        logger.error("singleton init start");
        // Do something that takes time here
        init = true;
        logger.error("singleton init end");
    }
}
and mdb:
@MessageDriven(...)
public class SomeMdb implements MessageListener {

    private Logger logger = Logger.getLogger(getClass().getName());

    @EJB
    SingletonBean sb;

    @PostConstruct
    public void init() {
        logger.error("mdb init start");
        if (!sb.isInit()) {
            logger.error("never happens");
        }
        logger.error("mdb init complete");
    }

    public void onMessage(Message message) {
        logger.error("onMessage start");
    }
}
Now it always waits for SingletonBean to complete its init before the MDB completes its init (as seen in the log):
19:51:51,980 [ad-pool-1; w: 3] ERROR SomeMdb - mdb init start
19:51:52,122 [ad-pool-4848(4)] ERROR SingletonBean - singleton init start
19:51:56,316 [ad-pool-4848(4)] ERROR SingletonBean - singleton init end
19:51:56,317 [ad-pool-1; w: 3] ERROR SomeMdb - mdb init complete
19:51:56,317 [ad-pool-1; w: 3] ERROR SomeMdb - onMessage start
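A note on why the method invocation helps (my reading of the EJB 3.1 rules, so treat it as an assumption rather than something from the logs): the container may not dispatch business method calls to a @Startup singleton until its @PostConstruct has completed, so the sb.isInit() call in the MDB's @PostConstruct blocks until the singleton is fully initialized, whereas merely injecting the proxy never forces that wait.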
I have this flow that I am trying to test but nothing works as expected. The flow itself works well but testing seems a bit tricky.
This is my flow:
@Configuration
@RequiredArgsConstructor
public class FileInboundFlow {

    private final ThreadPoolTaskExecutor threadPoolTaskExecutor;
    private String filePath;

    @Bean
    public IntegrationFlow fileReaderFlow() {
        return IntegrationFlows.from(Files.inboundAdapter(new File(this.filePath))
                        .filterFunction(...)
                        .preventDuplicates(false),
                endpointConfigurer -> endpointConfigurer.poller(
                        Pollers.fixedDelay(500)
                                .taskExecutor(this.threadPoolTaskExecutor)
                                .maxMessagesPerPoll(15)))
                .transform(new UnZipTransformer())
                .enrichHeaders(this::headersEnricher)
                .transform(Message.class, this::modifyMessagePayload)
                .route(Map.class, this::channelsRouter)
                .get();
    }

    private String channelsRouter(Map<String, File> payload) {
        boolean isZip = payload.values()
                .stream()
                .anyMatch(file -> isZipFile(file));
        return isZip ? ZIP_CHANNEL : XML_CHANNEL; // ZIP_CHANNEL and XML_CHANNEL are PublishSubscribeChannels
    }

    @Bean
    public SubscribableChannel xmlChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(XML_CHANNEL);
        return channel;
    }

    @Bean
    public SubscribableChannel zipChannel() {
        var channel = new PublishSubscribeChannel(this.threadPoolTaskExecutor);
        channel.setBeanName(ZIP_CHANNEL);
        return channel;
    }

    // There is a @ServiceActivator on each channel
    @ServiceActivator(inputChannel = XML_CHANNEL)
    public void handleXml(Message<Map<String, File>> message) {
        ...
    }

    @ServiceActivator(inputChannel = ZIP_CHANNEL)
    public void handleZip(Message<Map<String, File>> message) {
        ...
    }

    // Plus a @Transformer on the XML_CHANNEL
    @Transformer(inputChannel = XML_CHANNEL, outputChannel = BUS_CHANNEL)
    private List<BusData> xmlFileToIngestionMessagePayload(Map<String, File> xmlFilesByName) {
        return xmlFilesByName.values()
                .stream()
                .map(...)
                .collect(Collectors.toList());
    }
}
I would like to test multiple cases; the first one is checking the message payload published on each channel after the end of fileReaderFlow.
So I defined this test class:
@SpringBootTest
@SpringIntegrationTest
@ExtendWith(SpringExtension.class)
class FileInboundFlowTest {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @TempDir
    static Path localWorkDir;

    @BeforeEach
    void setUp() {
        copyFileToTheFlowDir(); // here I copy a file to trigger the flow
    }

    @Test
    void checkXmlChannelPayloadTest() throws InterruptedException {
        Thread.sleep(1000); // waiting for the flow execution
        PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class); // I extract the channel to listen to the messages sent to it
        xmlChannel.subscribe(message -> {
            assertThat(message.getPayload()).isInstanceOf(Map.class); // This is never executed
        });
    }
}
As expected, that test does not work because assertThat(message.getPayload()).isInstanceOf(Map.class); is never executed.
After reading the documentation I didn't find any hint to help me solve that issue. Any help would be appreciated! Thanks a lot.
First of all, that channel.setBeanName(XML_CHANNEL); does not affect the target bean. You do this in the bean creation phase, and the dependency injection container knows nothing about this setting: it simply does not consult it. If you really would like to dictate XML_CHANNEL as the bean name, you'd better look into the @Bean(name) attribute.
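For example, a minimal sketch of the @Bean(name) variant (the executor wiring stays as in the question):

@Bean(name = XML_CHANNEL)
public SubscribableChannel xmlChannel() {
    // The bean name now comes from @Bean, so setBeanName() is unnecessary.
    return new PublishSubscribeChannel(this.threadPoolTaskExecutor);
}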
The problem in the test is that you are missing the async nature of the flow. That Files.inboundAdapter() works on a fully different thread and emits messages outside of your test method. So, even if you could subscribe to the channel in time, before any message is emitted to it, that wouldn't make your test work correctly: the assertThat() would be performed on a different thread, so there is no real JUnit report for your test method's context.
So, what I'd suggest to do is (see the sketch after this list):
1. Have the Files.inboundAdapter() endpoint stopped at the beginning of the test, before any setup you'd like to do in the test. Or at least don't place files into that filePath, so the channel adapter doesn't emit messages.
2. Take the channel from the application context and, if you wish, subscribe to it or add a ChannelInterceptor.
3. Have an async barrier, e.g. a CountDownLatch, to pass to that subscriber.
4. Start the channel adapter or put a file into the dir for scanning.
5. Wait for the async barrier before verifying some value or state.
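Putting those steps together, the test could look roughly like this. It is only a sketch: copyFileToTheFlowDir() and getBean() are the helpers from the question, and it assumes no file is present before the subscription happens:

@Test
void checkXmlChannelPayloadTest() throws Exception {
    PublishSubscribeChannel xmlChannel = this.getBean(XML_CHANNEL, PublishSubscribeChannel.class);
    CountDownLatch latch = new CountDownLatch(1);
    AtomicReference<Message<?>> received = new AtomicReference<>();
    xmlChannel.subscribe(message -> {
        received.set(message);  // runs on the flow's executor thread
        latch.countDown();      // releases the test thread below
    });
    copyFileToTheFlowDir();     // only now trigger the flow
    // Wait on the async barrier, then assert on the test thread so a
    // failure is reported in this method's JUnit context.
    assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue();
    assertThat(received.get().getPayload()).isInstanceOf(Map.class);
}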
I want to schedule a task on Hazelcast that runs at a fixed interval and updates the IMap with some data that I get after hitting a rest endpoint. Below is a sample code:
// Main class
IScheduledExecutorService service = hazelcast.getScheduledExecutorService("default");
service.scheduleAtFixedRate(TaskUtils.named("my-task", myTask), 30, 1, TimeUnit.SECONDS); // scheduleAtFixedRate requires a TimeUnit; SECONDS is assumed here
// Task
@Singleton
public class MyTask implements Runnable, Serializable {

    RestClient restClient;
    IMap<String, JSONObject> map;

    @Inject
    MyTask(HazelcastInstance hazelcastInstance, RestClient restClient) { // inject Hazelcast and the REST client
        this.map = hazelcastInstance.getMap("my-map");
        this.restClient = restClient;
    }

    @Override
    public void run() {
        Collection<JSONObject> values = map.values(new MyCustomFilter());
        for (JSONObject obj : values) {
            // query endpoint based on id
            map.submitToKey(key, response);
        }
    }

    private static class MyCustomFilter implements Predicate<String, JSONObject> {
        public boolean apply(Map.Entry<String, JSONObject> entry) {
            // logic to filter relevant keys
        }
    }
}
When I try to execute this on the cluster, I get:
java.io.NotSerializableException: com.hazelcast.map.impl.proxy.MapProxyImpl
Now I need the IMap to selectively query only some keys based on the predicate filter, and this needs to be a background scheduled job, so I'm stuck on how to take this forward. Any help appreciated. TIA.
Try making your task also implement HazelcastInstanceAware.
When you submit your task, it is serialized, sent to the grid to run, deserialized when it is received, and the run() method is called.
If your task implements HazelcastInstanceAware, then between deserialization and run(), Hazelcast will call the method setHazelcastInstance(HazelcastInstance instance) to pass your code a reference to the particular Hazelcast instance it is running in. From there you can just do instance.getMap("my-map") and store the map reference in a transient field that the run() method can use.
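A minimal sketch of that shape, keeping the names from the question (the transient fields are re-populated on the executing member, so the map proxy is never serialized with the task):

public class MyTask implements Runnable, Serializable, HazelcastInstanceAware {

    private transient HazelcastInstance instance;     // not part of the serialized task
    private transient IMap<String, JSONObject> map;

    @Override
    public void setHazelcastInstance(HazelcastInstance instance) {
        // Called by Hazelcast after deserialization and before run().
        this.instance = instance;
        this.map = instance.getMap("my-map");
    }

    @Override
    public void run() {
        // map is now the member-local proxy; filter and update as before
    }
}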
1. Is it possible to put non-POJO class instances as the value of a cache?
For example, I have a QueryThread class which is a subclass of java.lang.Thread and I am trying to put this instance in a cache. It looks like the put operation is failing because this cache is always empty.
Consider the following class:
public class QueryThread extends Thread {

    private IgniteCache<?, ?> cache;
    private String queryId;
    private String query;
    private long timeIntervalinMillis;
    private volatile boolean running = false;

    public QueryThread(IgniteCache<?, ?> dataCache, String queryId, String query, long timeIntervalinMillis) {
        this.queryId = queryId;
        this.cache = dataCache;
        this.query = query;
        this.timeIntervalinMillis = timeIntervalinMillis;
    }

    public void exec() throws Throwable {
        SqlFieldsQuery qry = new SqlFieldsQuery(query, false);
        while (running) {
            List<List<?>> queryResult = cache.query(qry).getAll();
            for (List<?> list : queryResult) {
                System.out.println("result : " + list);
            }
            System.out.println("..... ");
            Thread.sleep(timeIntervalinMillis);
        }
    }
}
This class is not a POJO. How do I store an instance of this class in the cache?
I tried implementing Serializable (didn't help).
I need to be able to do this:
queryCache.put(queryId, queryThread);
Next I tried broadcasting the class using the IgniteCallable interface. But my class takes multiple arguments in the constructor. I feel PeerClassLoading is easy if the class takes a no-arg constructor:
IgniteCompute compute = ignite.compute(ignite.cluster().forServers());
compute.broadcast(new IgniteCallable<MyServiceImpl>() {
    @Override
    public MyServiceImpl call() throws Exception {
        MyServiceImpl myService = new MyServiceImpl();
        return myService;
    }
});
2. How do I do PeerClassLoading in the case of a class with multi-arg constructor?
It's not possible to put Thread instances into the cache: a Thread cannot be serialized because its state lives in native methods. That's why you always get an empty value.
PeerClassLoading is a special distributed ClassLoader in Ignite for inter-node byte-code exchange. So it's only about sharing classes between nodes, and it doesn't matter how many arguments the class's constructor has.
But, on the other hand, the object that you created will be serialized and sent to the other nodes, and for deserialization it will need a default (no-arg) constructor.
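Given those constraints, one option (sketched here as an assumption, not something from the answer above) is to cache a plain Serializable description of the query instead of the Thread, and rebuild the thread locally on each node:

// Hypothetical cache value replacing QueryThread: only data, no native state.
public class QueryDescriptor implements Serializable {

    private String queryId;
    private String query;
    private long timeIntervalInMillis;

    public QueryDescriptor() {
        // default no-arg constructor, needed for deserialization
    }

    public QueryDescriptor(String queryId, String query, long timeIntervalInMillis) {
        this.queryId = queryId;
        this.query = query;
        this.timeIntervalInMillis = timeIntervalInMillis;
    }

    public String getQueryId() { return queryId; }
    public String getQuery() { return query; }
    public long getTimeIntervalInMillis() { return timeIntervalInMillis; }
}

Each node can then do queryCache.put(queryId, descriptor) and construct a local QueryThread from the descriptor when it needs to run the query.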
I have been trying to get my Ignite continuous query code to work without enabling peer class loading, but I find that the code does not work. I tried debugging and realised that the call to cache.query(qry) errors out with the message "Failed to marshal custom event". When I enable peer class loading, the code works as expected. Could someone provide guidance on how I can make this work without peer class loading?
Following is the code snippet that calls the continuous query.
public void subscribeEvent(IgniteCache<String, String> cache, String inKeyStr, ServerWebSocket websocket) {
    System.out.println("in thread " + Thread.currentThread().getId() + "-->" + "subscribe event");
    //ArrayList<String> inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
    ContinuousQuery<String, String> qry = new ContinuousQuery<>();

    /****
     * Continuous Query Impl
     */
    inKeys = "," + inKeyStr + ",";
    qry.setInitialQuery(new ScanQuery<String, String>((k, v) -> inKeys.contains("," + k + ",")));
    qry.setTimeInterval(1000);
    qry.setPageSize(1);

    // Callback that is called locally when update notifications are received.
    // Factory<CacheEntryEventFilter<String, String>> rmtFilterFactory = new com.ccx.ignite.cqfilter.FilterFactory().init(inKeyStr);
    qry.setLocalListener(new CacheEntryUpdatedListener<String, String>() {
        @Override
        public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends String>> evts) {
            for (CacheEntryEvent<? extends String, ? extends String> e : evts) {
                System.out.println("websocket locallsnr data in thread " + Thread.currentThread().getId() + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
                try {
                    websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
                } catch (Exception e1) {
                    System.out.println("exception local listener " + e1.getMessage());
                    qry.setLocalListener(null);
                }
            }
        }
    });

    qry.setRemoteFilterFactory(new com.ccx.ignite.cqfilter.FilterFactory().init(inKeys));

    try {
        cur = cache.query(qry);
        for (Cache.Entry<String, String> e : cur) {
            System.out.println("websocket initialqry data in thread " + Thread.currentThread().getId() + "-->" + "key=" + e.getKey() + ", val=" + e.getValue());
            websocket.writeTextMessage("key=" + e.getKey() + ", val=" + e.getValue());
        }
    } catch (Exception e) {
        System.out.println("exception cache.query " + e.getMessage());
    }
}
Following is the remote filter class, which I have built into a self-contained JAR and pushed into the libs folder of Ignite so that it can be picked up by the server nodes:
public class FilterFactory {

    public Factory<CacheEntryEventFilter<String, String>> init(String inKeyStr) {
        System.out.println("factory init called jun22 ");
        return new Factory<CacheEntryEventFilter<String, String>>() {
            private static final long serialVersionUID = 5906783589263492617L;

            @Override
            public CacheEntryEventFilter<String, String> create() {
                return new CacheEntryEventFilter<String, String>() {
                    @Override
                    public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> e) {
                        //List inKeys = new ArrayList<String>(Arrays.asList(inKeyStr.split(",")));
                        System.out.println("inside remote filter factory ");
                        String inKeys = "," + inKeyStr + ",";
                        return inKeys.contains("," + e.getKey() + ",");
                    }
                };
            }
        };
    }
}
Overall logic that I'm trying to implement is to have a websocket client subscribe to an event by specifying a cache name and key(s) of interest.
The subscribe event code is called which creates a continuous query and registers a local listener callback for any update event on the key(s) of interest.
The remote filter is expected to filter the update event based on the key(s) passed to it as a string and the local listener is invoked if the filter event succeeds. The local listener writes the updated key value to the web socket reference passed to the subscribe event code.
The version of Ignite I'm using is 1.8.0; however, the behaviour is the same in 2.0 as well.
Any help is greatly appreciated!
Here is the log snippet containing the relevant error
factory init called jun22
exception cache.query class org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.configuration.CacheConfiguration$IgniteAllNodesPredicate@269707de, clsName=null, depInfo=null, hnd=CacheContinuousQueryHandlerV2 [rmtFilterFactory=com.ccx.ignite.cqfilter.FilterFactory$1@5dc301ed, rmtFilterFactoryDep=null, types=0], bufSize=1, interval=1000, autoUnsubscribe=true], keepBinary=false, routineId=b40ada9f-552d-41eb-90b5-3384526eb7b9]
From FilterFactory you are returning an instance of an anonymous class, which in turn refers to the enclosing FilterFactory, which is not serializable.
Just replace the returned anonymous CacheEntryEventFilter-based class with a corresponding static nested class.
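A sketch of that change, reusing the names and logic from the question:

public class FilterFactory {

    public Factory<CacheEntryEventFilter<String, String>> init(String inKeyStr) {
        return new KeyFilterFactory("," + inKeyStr + ",");
    }

    // A static nested class keeps no implicit reference to the enclosing
    // FilterFactory, so only its own fields get serialized.
    private static class KeyFilterFactory implements Factory<CacheEntryEventFilter<String, String>> {
        private final String inKeys;

        KeyFilterFactory(String inKeys) {
            this.inKeys = inKeys;
        }

        @Override
        public CacheEntryEventFilter<String, String> create() {
            return new KeyFilter(inKeys);
        }
    }

    private static class KeyFilter implements CacheEntryEventFilter<String, String>, Serializable {
        private final String inKeys;

        KeyFilter(String inKeys) {
            this.inKeys = inKeys;
        }

        @Override
        public boolean evaluate(CacheEntryEvent<? extends String, ? extends String> e) {
            return inKeys.contains("," + e.getKey() + ",");
        }
    }
}

(javax.cache.configuration.Factory already extends Serializable, which is why KeyFilterFactory needs no explicit marker.)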
You need to explicitly deploy your CQ classes (remote filters specifically) on all nodes in the topology. Just create a JAR file with them and put it into the libs folder prior to starting the nodes.
I have the following @Singleton bean that I am using to perform some scheduled tasks:
@Singleton
@Startup
public class SqsScheduler {

    // Logger-------------------------------------------------------------------
    private static final Logger LOG = Logger.getLogger(SqsScheduler.class.getName());

    // Variables----------------------------------------------------------------
    Timer timer;
    StoredDynamoQueries storedDynamoQueries = new StoredDynamoQueries();

    // Constructors-------------------------------------------------------------
    public SqsScheduler() {
        timer = new Timer();
        timer.scheduleAtFixedRate(new ScheduledTask(), 0, 180 * 1000);
    }

    // Methods------------------------------------------------------------------
    class ScheduledTask extends TimerTask {
        @Override
        public void run() {
            // The scheduled tasks to perform
        }
    }
}
Everything works fine EXCEPT when I undeploy/redeploy the application: the TimerTasks don't get removed, and the redeployed application then starts producing errors. If I undeploy the application, restart the server (GlassFish 3.1.2.2) and then deploy the application from scratch, it works perfectly.
How would I go about removing the timers when the application is redeployed?
As per perissf's comment: with EJBs it's recommended to use the Java EE Timer Service.
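A minimal sketch of the same scheduler rewritten on the container-managed Timer Service; the every-3-minutes schedule mirrors the 180-second period above, and persistent = false ties the timer to the deployment so the container cancels it automatically on undeploy:

@Singleton
@Startup
public class SqsScheduler {

    private static final Logger LOG = Logger.getLogger(SqsScheduler.class.getName());

    // Container-managed, non-persistent timer: removed automatically when
    // the application is undeployed, so nothing survives a redeploy.
    @Schedule(hour = "*", minute = "*/3", persistent = false)
    public void scheduledTask() {
        // The scheduled tasks to perform
    }
}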