I'm using jBPM 5.4 with MS SQL Server, and it is working fine.
I have a simple workflow: START ----> TASK A ----> TASK B ----> STOP
I'm trying to run this workflow from servlets.
When I execute it, the process advances as far as the start of TASK B, but onExit of TASK B is never called.
Hence the process never reaches the Completed status, even though the task table is updated to completed; no exception is logged.
This is my server log:
[stdout] (http-localhost-127.0.0.1-8080-1) ****** Creating EMF
[stdout] (http-localhost-127.0.0.1-8080-1) ****** Creating env
[stdout] (http-localhost-127.0.0.1-8080-1) ****** Reading Properties
[stdout] (http-localhost-127.0.0.1-8080-1) ****** config section
[stdout] (http-localhost-127.0.0.1-8080-1) OnEntrying the First Task ***
[stdout] (http-localhost-127.0.0.1-8080-1) Started Process Output 14
[stdout] (http-localhost-127.0.0.1-8080-1) Completed Process Output 14
[stdout] (Thread-73) OnExiting the First Task ***
[stdout] (Thread-73) OnEntrying the Second Task ***
[stdout] (http-localhost-127.0.0.1-8080-1) Started Process Output 15
[stdout] (http-localhost-127.0.0.1-8080-1) Completed Process Output 15
It's important that you have a ksession connected to your task service when you are completing a task, so that the session can continue the execution of your process. So:
- How are you using the task service: a local task service, or remote over HornetQ?
- Is the session that started the process instance still active? If not, do you instantiate a new session before completing the task?
- Did you call connect() on your human task handler after creating it? This actually connects the handler to the task service and registers the necessary listeners.
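A hedged sketch of the registration pattern these questions point at (class and method names follow the jBPM 5.x human-task documentation; CommandBasedHornetQWSHumanTaskHandler is just one possible handler variant, and kbase/env are assumed to come from your existing setup — this is not runnable as-is):

```java
// Register the human task handler on the same ksession that runs the process,
// then connect() it so its listeners can resume the process once a task completes.
StatefulKnowledgeSession ksession =
        JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);

CommandBasedHornetQWSHumanTaskHandler handler =
        new CommandBasedHornetQWSHumanTaskHandler(ksession);
ksession.getWorkItemManager().registerWorkItemHandler("Human Task", handler);
handler.connect(); // without this, task-completion events never reach the session
```

If the session that started the process is gone when the task is completed, a session reloaded via JPAKnowledgeService.loadStatefulKnowledgeSession (with the same handler registration) has to be in place instead.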
Since it is successfully running through the first task, it seems it might not be persisting the changes after the completion of that task. Which handler class are you using? Could you turn on SQL output (in your persistence.xml) and check whether you see the expected updates to the process instance info after completing the first task?
Kris
Related
I start Flink (bin/start-cluster.sh) on a single machine and submit a job through the Flink web UI.
If something is wrong with the job, such as the sink MySQL table not existing or a wrong keyBy field, the job fails and I have to cancel it; but after cancelling, the TaskManager seems to be "killed" and disappears from the Flink web UI.
Is there a fault-tolerance solution for this (the TaskManager being killed by a failing job)?
Is the only way to run Flink on YARN?
A task failure should never cause a TaskManager to be killed. Please check the TaskManager logs for any exceptions.
I upgraded my project from Mule 2.2 to Mule 3.8. The project works fine, but during startup of the Mule server I get several exceptions in the logs when the logging level is DEBUG.
[WrapperListener_start_runner] SpringRegistry - No bean named 'quartz:-948818277' is defined
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'quartz:-948818277' is defined
[DEBUG] 2017-04-21 06:25:18.994 [WrapperListener_start_runner] SpringRegistry - No bean named 'endpoint.quartz.contestentsPhotoDelivery.task' is defined
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'endpoint.quartz.contestentsPhotoDelivery.task' is defined
[DEBUG] 2017-04-21 06:25:19.024 [WrapperListener_start_runner] SpringRegistry - No bean named 'endpoint:24812436' is defined
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'endpoint:24812436' is defined
[DEBUG] 2017-04-21 06:25:19.376 [WrapperListener_start_runner] SpringRegistry - No bean named 'vm:3767' is defined
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'vm:3767' is defined
There are many more similar "No bean named" exceptions; please let me know if you need more info. Thanks!
During startup Mule looks up default connectors and beans, clustered components, etc. If they are not defined, the lookup throws an exception, and Mule then knows that the feature is not available or that it should use your values instead. This is normally handled in the background and you never see it. You are seeing these reports only because you stepped Mule's logging up to DEBUG level, which gives you a view into some of the internal workings. This is nothing to worry about, and it is one reason not to run Mule at DEBUG level unless you really need to see what is going on because something is failing. One could argue that Mule should have put this normal execution logging at TRACE level, but that is debatable.
I'm running a Spark Streaming application on YARN in cluster mode, and I'm trying to implement a graceful shutdown so that when the application is killed it finishes the current micro batch before stopping.
Following some tutorials, I have configured spark.streaming.stopGracefullyOnShutdown to true and added the following code to my application:
sys.ShutdownHookThread {
  log.info("Gracefully stopping Spark Streaming Application")
  ssc.stop(true, true)
  log.info("Application stopped")
}
However when I kill the application with
yarn application -kill application_1454432703118_3558
the micro batch executed at that moment is not completed.
In the driver I see the first line of log printed ("Gracefully stopping Spark Streaming Application") but not the last one ("Application stopped").
ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
INFO streaming.MySparkJob: Gracefully stopping Spark Streaming Application
INFO scheduler.JobGenerator: Stopping JobGenerator gracefully
INFO scheduler.JobGenerator: Waiting for all received blocks to be consumed for job generation
INFO scheduler.JobGenerator: Waited for all received blocks to be consumed for job generation
INFO streaming.StreamingContext: Invoking stop(stopGracefully=true) from shutdown hook
In the executors log I see the following error:
ERROR executor.CoarseGrainedExecutorBackend: Driver 192.168.6.21:49767 disassociated! Shutting down.
INFO storage.DiskBlockManager: Shutdown hook called
WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@192.168.6.21:49767] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
INFO util.ShutdownHookManager: Shutdown hook called
I think the problem is related to how YARN sends the kill signal to the application. Any idea how I can make the application stop gracefully?
You should go to the executors page to see where your driver is running (on which node). SSH to that node and do the following:
ps -ef | grep 'app_name'
(replace app_name with your class name/app name). It will list a couple of processes; some will be children of others. Pick the id of the parent-most process and send it a SIGTERM:
kill <pid>
After some time you'll see that your application has terminated gracefully.
Also, with this approach you don't need to add those shutdown hooks yourself; use the spark.streaming.stopGracefullyOnShutdown config to help shut down gracefully.
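The reason a plain kill (SIGTERM) to the driver process can work where a hard kill does not is that SIGTERM gives the process a chance to run its registered shutdown handlers. A minimal, Spark-free illustration of that mechanism in standard-library Python (the handler and flag names are made up for the example):

```python
import os
import signal
import time

shutdown_requested = False

def handle_sigterm(signum, frame):
    """Record that a graceful stop was requested instead of dying immediately."""
    global shutdown_requested
    shutdown_requested = True

# Install the handler, analogous to how the JVM wires up its shutdown hooks.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate `kill <pid>` arriving from another shell.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)  # give the signal a moment to be delivered

if shutdown_requested:
    print("finishing current batch, then stopping")
```

A SIGKILL (kill -9), by contrast, never reaches such a handler, which is why the parent-most process should be sent a plain SIGTERM.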
You can stop a Spark Streaming application by invoking ssc.stop when a customized condition is triggered, instead of using awaitTermination. As the following pseudocode shows:
ssc.start()
while True:
    time.sleep(10)
    if some_file_exist:
        ssc.stop(True, True)
        break
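The same marker-file pattern, stripped of Spark, can be exercised with just the standard library; run_until_marker and the stop_flag file name below are hypothetical names for this sketch:

```python
import tempfile
import time
from pathlib import Path

def run_until_marker(marker: Path, poll_interval: float = 0.01,
                     max_polls: int = 100) -> int:
    """Poll like a streaming driver would, stopping once the marker file appears."""
    polls = 0
    for _ in range(max_polls):
        polls += 1
        if marker.exists():
            break  # in the real job, this is where ssc.stop(True, True) would go
        time.sleep(poll_interval)
    return polls

marker = Path(tempfile.mkdtemp()) / "stop_flag"
marker.touch()                   # simulate an operator requesting shutdown
batches = run_until_marker(marker)
print(batches)                   # stops on the first poll, since the marker exists
```

In production the marker would be created externally (e.g. touch on HDFS or the local disk of the driver node), letting you stop the job without sending it any signal at all.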
How can I view (in the Carbon console or in a file) the services that are consumed by subscribers? Do I need to set a log level? Can I see the total number of calls per service over a given period?
In the Carbon console you can enable Message Tracing. If you enable message tracing AND logging, the carbon.log file is filled with messages:
TID: [0] [AM] [2014-04-18 13:32:48,026] INFO {org.wso2.carbon.bam.message.tracer.handler.util.HandlerUtils} -
Massage Info: Transaction id=62078996748661323020575 Message direction=IN Server name=nxt-fon-app01.nl.rsg Timestamp=1397820768026
Service name=__SynapseService Operation Name=mediate {org.wso2.carbon.bam.message.tracer.handler.util.HandlerUtils}
TID: [0] [AM] [2014-04-18 13:32:48,046] INFO {org.wso2.carbon.bam.message.tracer.handler.util.HandlerUtils} -
Massage Info: Transaction id=62078996748661323020575 Message direction=OUT Server name=nxt-fon-app01.nl.rsg Timestamp=1397820768046
Service name=__SynapseService Operation Name=mediate {org.wso2.carbon.bam.message.tracer.handler.util.HandlerUtils}
This information is limited: you see the incoming and outgoing call events, but not which service was called.
To get runtime statistics you should enable BAM for the API.
I noticed that when I kill a Resque worker while it's processing something, it won't leave a failed job; the job is simply gone.
Thus the job will never be finished, and jobs play an important role in my application.
This only happens when I kill a worker. If my job raises an exception, I can retry it later.
Is it possible to avoid this behavior?
Thanks.