I need to visit all the steps contained in the transformations (.ktr files) of a Kettle job using Java.
I'm using
KettleEnvironment.init();
JobMeta jobMeta = new JobMeta("file.kjb", null);
Job job = new Job(null, jobMeta);
but neither Job nor JobMeta seems to provide any method for walking the job down to its individual steps.
According to this answer, apparently it is not possible.
So I have managed to do it using a combination of DelegationListener and TransListener; however, I am not sure whether the DelegationListener is required here. Maybe there is another way to add TransListeners to the Transformation job entries:
package testTransformationsInJob;
import java.util.List;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.core.exception.KettleException;
import org.pentaho.di.job.DelegationListener;
import org.pentaho.di.job.Job;
import org.pentaho.di.job.JobExecutionConfiguration;
import org.pentaho.di.job.JobMeta;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransExecutionConfiguration;
import org.pentaho.di.trans.TransListener;
import org.pentaho.di.trans.step.StepMetaDataCombi;
import org.pentaho.di.trans.step.StepInterface;
public class MainClass {
public MainClass() {
// TODO Auto-generated constructor stub
}
private static class MyTransListener implements TransListener {
@Override
public void transActive(Trans arg0) {
// TODO Auto-generated method stub
}
@Override
public void transFinished(Trans arg0) throws KettleException {
// TODO Auto-generated method stub
}
@Override
public void transStarted(Trans delegatedTrans) throws KettleException {
List<StepMetaDataCombi> stepCombis = delegatedTrans.getSteps();
if(stepCombis == null) {
return;
}
for(StepMetaDataCombi stepCombi: stepCombis) {
StepInterface step = stepCombi.step;
//
// Do some useful work here.
//
System.out.println("\t" + step.getStepname());
}
}
}
private static class MyDelegationListener implements DelegationListener {
private TransListener transListener;
MyDelegationListener(TransListener transListener) {
this.transListener = transListener;
}
@Override
public void jobDelegationStarted(Job delegatedJob,
JobExecutionConfiguration jobExecutionConfiguration) {
// TODO Auto-generated method stub
}
@Override
public void transformationDelegationStarted(Trans delegatedTrans,
TransExecutionConfiguration transExecutionConfiguratioStep) {
System.out.println("transformationDelegationStarted");
System.out.println(delegatedTrans.getName());
// transformationDelegationStarted() is called after Trans object is constructed
// but before it is executed.
// However, we can't access steps at this point using delegatedTrans.getSteps()
// since steps are constructed somewhere in execute method.
// However, we can add TransListener here, which will be able to iterate steps.
delegatedTrans.addTransListener(this.transListener);
}
}
public static void main(String[] args) throws KettleException {
KettleEnvironment.init();
JobMeta jobMeta = new JobMeta("d:\\test_job.kjb", null);
Job job = new Job(null, jobMeta);
// Here I add a DelegationListener, which will add a TransListener to every Trans in the job.
// Not sure though if using a DelegationListener is the right way to access Transformation job entries.
// Maybe there is a more elegant way to do it.
MyTransListener myTransListener = new MyTransListener();
DelegationListener delegationListener = new MyDelegationListener(myTransListener);
job.addDelegationListener(delegationListener);
job.start();
job.waitUntilFinished();
}
}
And here is the output I get:
2016/07/07 09:47:37 - test_job - Start of job execution
2016/07/07 09:47:37 - test_job - Starting entry [Transformation]
2016/07/07 09:47:37 - Transformation - Loading transformation from XML file [file:///d://test.ktr]
transformationDelegationStarted
test
2016/07/07 09:47:37 - test - Dispatching started for transformation [test]
Detect empty stream
User Defined Java Class
2016/07/07 09:47:37 - Detect empty stream.0 - Finished processing (I=0, O=0, R=0, W=1, U=0, E=0)
2016/07/07 09:47:38 - User Defined Java Class.0 - Finished processing (I=0, O=0, R=1, W=1, U=0, E=0)
2016/07/07 09:47:38 - test_job - Finished job entry [Transformation] (result=[true])
2016/07/07 09:47:38 - test_job - Job execution finished
I have a processor-like class which internally uses a sink. I have made an extremely simplified version to showcase my question:
import reactor.core.publisher.Sinks;
import reactor.test.StepVerifier;
import java.time.Duration;
public class TestBed {
public static void main(String[] args) {
class StringProcessor {
public final Sinks.Many<String> sink = Sinks.many().multicast().directBestEffort();
public void httpPostWebhookController(String inputData) {
sink.emitNext(
inputData.toLowerCase() + " " + inputData.toUpperCase(),
(signalType, emitResult) -> {
System.out.println("error, signalType=" + signalType + "; emitResult=" + emitResult);
return false;
}
);
}
}
final StringProcessor stringProcessor = new StringProcessor();
final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux())
.expectSubscription()
.expectNext("asdf ASDF")
.expectNext("qw QW")
.thenCancel();
stringProcessor.httpPostWebhookController("asdf");
stringProcessor.httpPostWebhookController("Qw");
stepVerifier.verify(Duration.ofSeconds(2));
}
}
My StepVerifier does not subscribe, and when it does subscribe (upon the verify(Duration) call), it misses the tested signals. I cannot move the verify call before the httpPostWebhookController calls, because verify is blocking and would fail since no signal would arrive.
How do I use StepVerifier in such a scenario?
As I learned from asking on a Udemy course (instructor Vinoth Selvaraj), the solution is to use the verifyLater call. It triggers the subscription and does not block. Fixed test code:
final StringProcessor stringProcessor = new StringProcessor();
final StepVerifier stepVerifier = StepVerifier.create(stringProcessor.sink.asFlux().log())
.expectSubscription()
.expectNext("asdf ASDF")
.expectNext("qw QW")
.thenCancel()
.verifyLater();
stringProcessor.httpPostWebhookController("asdf");
stringProcessor.httpPostWebhookController("Qw");
stepVerifier.verify(Duration.ofSeconds(2));
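As a further illustration, here is a hedged alternative sketch (my addition, not part of the course answer): the emissions can also be driven from inside the verification script itself using then(Runnable). verify(Duration) subscribes first, and each task runs only after the preceding expectations have been set up, so no signal is missed:
final StringProcessor stringProcessor = new StringProcessor();
StepVerifier.create(stringProcessor.sink.asFlux())
    .expectSubscription()
    // then(Runnable) runs after the previous expectation, i.e. once the sink is subscribed
    .then(() -> stringProcessor.httpPostWebhookController("asdf"))
    .expectNext("asdf ASDF")
    .then(() -> stringProcessor.httpPostWebhookController("Qw"))
    .expectNext("qw QW")
    .thenCancel()
    .verify(Duration.ofSeconds(2));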
The following code is based on a combination of Ignite's CacheQueryExample and CacheContinuousQueryExample.
The code starts a fat Ignite client. Three organizations are created in the cache, and we listen for updates to the cache. The remote filter is set to trigger the continuous query if the organization name is "Google". Peer class loading is enabled by the default examples XML config file (example-ignite.xml), so the expectation is that the remote node is aware of the Organization class.
However, the following exceptions are shown in the Ignite server's console (one for each cache entry), and all three records are returned to the client in the continuous query's event handler instead of just the "Google" record. If the filter is changed to check the key instead of the value, the correct behavior is observed and a single record is returned to the local listener.
[08:28:43,302][SEVERE][sys-stripe-1-#2][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:51,819][SEVERE][sys-stripe-2-#3][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
[08:28:52,692][SEVERE][sys-stripe-3-#4][query] CacheEntryEventFilter failed: class o.a.i.binary.BinaryInvalidTypeException: o.a.i.examples.model.Organization
To run the code:
Start an Ignite server using examples/config/example-ignite.xml as the configuration file.
Replace the content of Ignite's CacheContinuousQueryExample.java with the following code. You may have to change the path to the configuration file to an absolute path.
package org.apache.ignite.examples.datagrid;
import javax.cache.Cache;
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryUpdatedListener;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.affinity.AffinityKey;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.examples.ExampleNodeStartup;
import org.apache.ignite.examples.model.Organization;
import org.apache.ignite.examples.model.Person;
import org.apache.ignite.lang.IgniteBiPredicate;
import java.util.Collection;
/**
* This example demonstrates the continuous query API.
* <p>
* Remote nodes should always be started with a special configuration file which
* enables P2P class loading: {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
* <p>
* Alternatively you can run {@link ExampleNodeStartup} in another JVM, which will
* start a node with the {@code examples/config/example-ignite.xml} configuration.
*/
public class CacheContinuousQueryExample {
/** Organizations cache name. */
private static final String ORG_CACHE = CacheQueryExample.class.getSimpleName() + "Organizations";
/**
* Executes example.
*
@param args Command line arguments, none required.
@throws Exception If example execution failed.
*/
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, Organization> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, Organization>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends Organization>> evts) {
for (CacheEntryEvent<? extends Long, ? extends Organization> e : evts)
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + e.getValue() + ']');
}
});
// This filter will be evaluated remotely on all nodes.
// Entries that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, Organization>>() {
@Override public CacheEntryEventFilter<Long, Organization> create() {
return new CacheEntryEventFilter<Long, Organization>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends Organization> e) {
//return e.getKey() == 3;
return e.getValue().name().equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
/**
* Populate cache with test data.
*/
private static void initialize() {
IgniteCache<Long, Organization> orgCache = Ignition.ignite().cache(ORG_CACHE);
// Clear cache before running the example.
orgCache.clear();
// Organizations.
Organization org1 = new Organization("ApacheIgnite");
Organization org2 = new Organization("Apple");
Organization org3 = new Organization("Google");
orgCache.put(org1.id(), org1);
orgCache.put(org2.id(), org2);
orgCache.put(org3.id(), org3);
}
}
Here is an interim workaround that involves using and deserializing binary objects. Hopefully, someone can post a proper solution.
Here is the main() function modified to work with BinaryObjects instead of the Organization object:
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
System.out.println();
System.out.println(">>> Cache continuous query example started.");
CacheConfiguration<Long, Organization> orgCacheCfg = new CacheConfiguration<>(ORG_CACHE);
orgCacheCfg.setCacheMode(CacheMode.PARTITIONED); // Default.
orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
// Auto-close cache at the end of the example.
try {
ignite.getOrCreateCache(orgCacheCfg);
// Create new continuous query.
ContinuousQuery<Long, BinaryObject> qry = new ContinuousQuery<>();
// Callback that is called locally when update notifications are received.
qry.setLocalListener(new CacheEntryUpdatedListener<Long, BinaryObject>() {
@Override public void onUpdated(Iterable<CacheEntryEvent<? extends Long, ? extends BinaryObject>> evts) {
for (CacheEntryEvent<? extends Long, ? extends BinaryObject> e : evts) {
Organization org = e.getValue().deserialize();
System.out.println("Updated entry [key=" + e.getKey() + ", val=" + org + ']');
}
}
});
// This filter will be evaluated remotely on all nodes.
// Entries that pass this filter will be sent to the caller.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Long, BinaryObject>>() {
@Override public CacheEntryEventFilter<Long, BinaryObject> create() {
return new CacheEntryEventFilter<Long, BinaryObject>() {
@Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends BinaryObject> e) {
//return e.getKey() == 3;
//return e.getValue().name().equals("Google");
return e.getValue().field("name").equals("Google");
}
};
}
});
ignite.getOrCreateCache(ORG_CACHE).withKeepBinary().query(qry);
// Populate caches.
initialize();
Thread.sleep(2000);
}
finally {
// Distributed cache could be removed from cluster only by #destroyCache() call.
ignite.destroyCache(ORG_CACHE);
}
}
}
Peer class loading is enabled ... so the expectation is that the remote node is aware of the Organization class.
This is the problem. You can't peer class load "model" objects, i.e., objects used to create the table.
Two solutions:
Deploy the model class(es) to the server ahead of time. The rest of the code -- the filters -- can be peer class loaded (see the sketch after this list).
As @rgb1380 demonstrates, you can use BinaryObjects, which is the underlying data format.
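For the first option, here is a minimal sketch (my addition, with the assumption that the Organization class is already on every server node's classpath, for example packaged in a JAR under the server's libs directory); GoogleOrgFilter is a hypothetical name, and only this filter class then needs to be peer class loaded:
// Hypothetical standalone filter; peer class loading ships this class to the servers,
// while the Organization model class is assumed to be deployed there ahead of time.
public class GoogleOrgFilter implements CacheEntryEventFilter<Long, Organization> {
    @Override public boolean evaluate(CacheEntryEvent<? extends Long, ? extends Organization> e) {
        return "Google".equals(e.getValue().name());
    }
}
The query would then use it via qry.setRemoteFilterFactory(() -> new GoogleOrgFilter()); the lambda should be serializable because javax.cache.configuration.Factory extends Serializable.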
Another small point: to use "auto-close" you need to structure your code like this:
// Auto-close cache at the end of the example.
try (var cache = ignite.getOrCreateCache(orgCacheCfg)) {
// do stuff
}
I am a Hadoop newbie. I ran into a problem when running the code from this tutorial:
https://github.com/hortonworks/hadoop-tutorials/blob/master/Community/T09_Write_And_Run_Your_Own_MapReduce_Java_Program_Poll_Result_Analysis.md
The MapReduce process stops at the step below:
[main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[main] INFO org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
[main] INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
[main] WARN org.apache.hadoop.mapreduce.JobResourceUploader - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
[main] WARN org.apache.hadoop.mapreduce.JobResourceUploader - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
[main] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 4
[main] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:4
[main] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local61587531_0001
[main] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
[Thread-19] INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
[Thread-19] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - File Output Committer Algorithm version is 1
The code of the MapReduce application is:
public class VoteCountApplication extends Configured implements Tool {
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new Configuration(), new VoteCountApplication(), args);
System.exit(res);
}
@Override
public int run(String[] args) throws Exception {
if (args.length != 2) {
System.out.println("usage: [input] [output]");
System.exit(-1);
}
Job job = Job.getInstance(new Configuration());
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setMapperClass(VoteCountMapper.class);
job.setReducerClass(VoteCountReducer.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setJarByClass(VoteCountApplication.class);
job.submit();
return 0;
}
}
But if I use the main method from the WordCount example to run this project:
public class VoteCountApplication extends Configured implements Tool {
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "vote count");
job.setJarByClass(VoteCountApplication.class);
job.setMapperClass(VoteCountMapper.class);
job.setCombinerClass(VoteCountReducer.class);
job.setReducerClass(VoteCountReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
That works perfectly! I don't know what the problem is in the code from the tutorial. Can anyone explain the difference between the two versions? Thanks.
Here is the Map and Reduce code:
public class VoteCountMapper extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
@Override
public void map(Object key, Text value, Context output) throws IOException,
InterruptedException {
//If more than one word is present, split using white space.
String[] words = value.toString().split(" ");
//Only the first word is the candidate name
output.write(new Text(words[0]), one);
}
}
public class VoteCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context output)
throws IOException, InterruptedException {
int voteCount = 0;
for(IntWritable value: values){
voteCount+= value.get();
}
output.write(key, new IntWritable(voteCount));
}
}
I know that there are many posts on this topic, and I've read them and thought I also understood them. But I am still having a problem with aborting a QThread, or rather a worker object running in a QThread.
I have a GUI application and a library. The GUI can ask the library to execute and abort worker objects (it is connected to the WorkerHandler slots). The WorkerHandler can create several worker objects that all inherit from a base class. I tried to reduce the code for this example, but it is still somewhat verbose.
Gui.h
class Gui : public QMainWindow
{
Q_OBJECT
public:
Gui(QWidget *parent = 0);
~Gui();
private:
Ui::GuiClass ui;
QThread *workerHandlerThread;
WorkerHandler *workerHandler;
void connectActions();
signals:
void execWorker(WorkerParams _params);
void abortWorker(WorkerType type);
private slots:
void buttonExecPressed();
void buttonAbortPressed();
};
Gui.cpp
Gui::Gui(QWidget *parent) : QMainWindow(parent)
{
ui.setupUi(this); // set up the generated UI before accessing its widgets
ui.btnExecA->setProperty("type", QVariant::fromValue(WorkerType::A)); //WorkerType is just an enum; bind the type to the button
ui.btnExecB->setProperty("type", QVariant::fromValue(WorkerType::B));
ui.btnAbortA->setProperty("type", QVariant::fromValue(WorkerType::A));
ui.btnAbortB->setProperty("type", QVariant::fromValue(WorkerType::B));
connectActions();
workerHandlerThread = new QThread();
workerHandler = new WorkerHandler();
workerHandler->moveToThread(workerHandlerThread); // move worker execution to another thread
workerHandlerThread->start(); //start will call run and run will run the QEventLoop of QThread by calling exec
}
Gui::~Gui()
{
workerHandlerThread->quit();
workerHandlerThread->wait();
delete workerHandlerThread;
delete workerHandler;
}
void Gui::connectActions()
{
connect(ui.btnExecA, &QPushButton::clicked, this, &Gui::buttonExecPressed);
connect(ui.btnExecB, &QPushButton::clicked, this, &Gui::buttonExecPressed);
connect(ui.btnAbortA, &QPushButton::clicked, this, &Gui::buttonAbortPressed);
connect(ui.btnAbortB, &QPushButton::clicked, this, &Gui::buttonAbortPressed);
connect(this, &Gui::execWorker, workerHandler, &WorkerHandler::execWorker);
connect(this, &Gui::abortWorker, workerHandler, &WorkerHandler::abortWorker);
}
void Gui::buttonExecPressed()
{
QPushButton* button = qobject_cast<QPushButton*>(sender());
if (button)
{
WorkerType type = button->property("type").value<WorkerType>(); //get worker type
WorkerParams params = WorkerParamsFactory::Get()->CreateParams(type); //WorkerParamsFactory creates default parameters based on type
emit execWorker(params); //tell WorkerHandler to create a workerObject based on these parameters
}
}
void Gui::buttonAbortPressed()
{
QPushButton* button = qobject_cast<QPushButton*>(sender());
if (button)
{
WorkerType type = button->property("type").value<WorkerType>();
emit abortWorker(type); //tell WorkerHandler to abort a specific workerObject
}
}
WorkerHandler.h
class WorkerHandler : public QObject {
Q_OBJECT
public:
WorkerHandler(QObject * parent = Q_NULLPTR);
~WorkerHandler();
public slots:
void execWorker(WorkerParams _params);
void abortWorker(WorkerType type);
private:
QMap<WorkerType, WorkerObjectBase*> workerPool; //contains the workerobjects
};
WorkerHandler.cpp
void WorkerHandler::execWorker(WorkerParams _params)
{
QThread *thread = new QThread();
WorkerObjectBase *worker = WorkerObjectFactory::Get()->CreateWorker(_params); //Factory to create specific Worker Object based on given params
worker->moveToThread(thread);
connect(thread, &QThread::started, worker, &WorkerObjectBase::process);
connect(worker, &WorkerObjectBase::workerFinished, thread, &QThread::quit); //quit the QThread when the worker is finished
connect(thread, &QThread::finished, thread, &QThread::deleteLater); //free resources when the thread is finished
connect(thread, &QThread::finished, worker, &WorkerObjectBase::deleteLater); //free resources when the thread is finished
workerPool.insert(_params.type, worker); //_params.type contains WorkerType
thread->start(); //will call run of qthread which will call exec
}
void WorkerHandler::abortWorker(WorkerType type)
{
WorkerObjectBase *worker = workerPool.value(type);
worker->requestAbort();
QThread *workerThread = worker->thread();
if (workerThread)
{
if (!workerThread->wait(10000)) //will always block the 10 seconds and terminate the thread. using just wait() will block forever
{
workerThread->terminate();
}
}
}
WorkerObjectBase.h
class WorkerObjectBase : public QObject {
Q_OBJECT
public:
WorkerObjectBase(QObject * parent = Q_NULLPTR);
~WorkerObjectBase();
void requestAbort();
protected:
//some WorkerObject basic parameters
bool abortRequested();
public slots:
virtual void process();
signals:
void workerFinished();
private:
QMutex abortMutex;
bool abort = false;
};
WorkerObjectBase.cpp
void WorkerObjectBase::requestAbort()
{
abortMutex.lock();
abort = true;
abortMutex.unlock();
}
bool WorkerObjectBase::abortRequested()
{
bool abortRequested;
abortMutex.lock();
abortRequested = abort;
abortMutex.unlock();
return abortRequested;
}
WorkerObjectA.h
class WorkerObjectA : public WorkerObjectBase {
Q_OBJECT
public:
WorkerObjectA(QObject * parent = Q_NULLPTR);
~WorkerObjectA();
protected:
//some WorkerObjectA parameters
public slots:
void process();
};
WorkerObjectA.cpp
void WorkerObjectA::process()
{
while(!abortRequested())
{
//do some stuff
}
emit workerFinished();
}
The problem is, when I use wait, it blocks the signal processing: workerFinished is not handled and the QThread does not quit. But I still don't get why. When I create a new worker object, I move it to a different thread. When this thread is started, it runs its own QEventLoop, as stated in the QThread 5.5 documentation:
void QThread::run()
The starting point for the thread. After calling start(), the newly
created thread calls this function. The default implementation simply
calls exec().
So even if my WorkerHandler thread is blocking because it calls wait, the QThread of the specific worker object should still manage to receive the workerFinished signal and call the quit slot. If I don't use wait at all, everything is fine. But when something unexpected happens in the worker object's process method that keeps it from emitting workerFinished, I want to be able to kill the thread the hard way.
So, what am I doing wrong?
While migrating a JBoss 5 application to JBoss AS 7 (7.1.1.FINAL) I have a problem with a new JMS message-driven EJB. Within message processing, some master data fields have to be checked. To improve performance, this master data is to be preloaded into a cache structure by a @Singleton @Startup EJB, which needs about 30 seconds to load the data.
My problem is that queue message processing starts even though the cache has not been fully initialized, causing message validation errors.
I tried to define a dependency between the MDB and the startup EJB, but as far as I understood, the @DependsOn annotation works only between @Singleton EJBs. So it's clear that my solution does not work ;-)
Startup bean code:
@Singleton
@Startup
public class StartupBean {
@PostConstruct
void atStartup() {
// TODO load master data cache (takes about 30 seconds)
}
@PreDestroy
void atShutdown() {
// TODO free master data cache
}
}
Note: I stripped the real code from the example to make it easier to read :-)
Message driven bean code:
@MessageDriven(name="SampleMessagingBean", activationConfig = {
@ActivationConfigProperty(propertyName="destinationType", propertyValue="javax.jms.Queue"),
@ActivationConfigProperty(propertyName="destination", propertyValue="jms/SampleQueue"),
@ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge")
})
@DependsOn("StartupBean")
public class SampleMessagingBean implements MessageListener {
public void onMessage(Message message) {
// TODO validate message using master data cache
}
}
Question: How can I suspend message processing until the startup bean has finished loading the cache?
Any suggestions greatly appreciated :-)!
First I thought injecting the singleton EJB into the MDB would be enough to delay message consumption.
But no, sometimes it would start consuming messages before the @PostConstruct of the singleton EJB had completed. So I added a method invocation as well (the call cannot complete until the singleton has finished initializing), and it started working.
This worked on GlassFish, but I don't see a reason why it shouldn't work on JBoss.
Singleton EJB:
@Singleton
@Startup
public class SingletonBean {
private Logger logger = Logger.getLogger(getClass().getName());
private boolean init = false;
public boolean isInit() {
return init;
}
@PostConstruct
public void init() {
logger.error("singleton init start");
//Do something that takes time here
init = true;
logger.error("singleton init end ");
}
}
and the MDB:
@MessageDriven(...)
public class SomeMdb implements MessageListener {
private Logger logger = Logger.getLogger(getClass().getName());
@EJB
SingletonBean sb;
@PostConstruct
public void init() {
logger.error("mdb init start");
if (!sb.isInit()) {
logger.error("never happens");
}
logger.error("mdb init complete");
}
public void onMessage(Message message) {
logger.error("onMessage start");
}
}
Now it always waits for SingletonBean to complete its init before the MDB completes its init (as seen in the log):
19:51:51,980 [ad-pool-1; w: 3] ERROR SomeMdb - mdb init start
19:51:52,122 [ad-pool-4848(4)] ERROR SingletonBean - singleton init start
19:51:56,316 [ad-pool-4848(4)] ERROR SingletonBean - singleton init end
19:51:56,317 [ad-pool-1; w: 3] ERROR SomeMdb - mdb init complete
19:51:56,317 [ad-pool-1; w: 3] ERROR SomeMdb - onMessage start