Triggering a non-interrupting boundary event with variables

My question is about the Camunda API method RuntimeService#messageEventReceived(java.lang.String, java.lang.String, java.util.Map<java.lang.String,java.lang.Object>). We use this method to trigger a non-interrupting boundary message event on a receive task that is waiting for a different message, passing some process variables as the third parameter of the call.
As expected, this leaves the receive task active and starts a new execution leaving the boundary event. I would have expected the process variables passed as the third argument of RuntimeService#messageEventReceived to be stored in the newly created execution, but they are in fact stored in the execution of the receive task. This does not make much sense to me, because that is not the execution that "resulted" from the message.
We can work around this by determining which execution is new after the call to RuntimeService#messageEventReceived and attaching the process variables there manually, but that does not seem very elegant. Does anyone know a better solution? Or am I misunderstanding something?

This is expected behavior; see the Javadoc of the method:
void messageEventReceived(String messageName,
                          String executionId,
                          Map<String,Object> processVariables)
Notifies the process engine that a message event with the name 'messageName' has been received and has been correlated to an execution with id 'executionId'. The waiting execution is notified synchronously. Note that you need to provide the exact execution that is waiting for the message if the process instance contains multiple executions.
Parameters:
messageName - the name of the message event
executionId - the id of the process instance or the execution to deliver the message to
processVariables - a map of variables added to the execution
Since the process variables are set on the existing execution, they are also available on the newly created child execution.
As an alternative, you could create a service task after the boundary event that sets the process variables, or you could create another receive task after the boundary event; that receive task could then be completed with your message and the needed variables.

In AHK, what is the difference between Try/Catch and OnError()? When to use one or the other?

I am currently writing an AHK script that reads and writes files.
I would like to handle the possible I/O errors,
but the doc isn't clear to me regarding whether I should use Try/Catch or OnError().
What is the difference between the two? And when to use one or the other?
So, after some more research, here is my understanding:
Try/Catch: Use it to:
Target specific lines of code to which the handling applies.
Then, if you wish, proceed with execution.
A Try/Catch lets execution proceed with the command after the Try block that failed.
(But a Try/Catch does not let you proceed with the command after the one that failed within the Try block. For example, if 5 commands are wrapped and the 2nd one throws, it is not possible to resume execution from the 3rd after doing something in the Catch block.)
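This resumption rule is the same as in most languages with structured exception handling; sketched here in C# purely as an illustration of the control flow (not AHK code):

using System;

class TryCatchResumption
{
    static void Main()
    {
        try
        {
            Step(1);
            Step(2);   // throws
            Step(3);   // never runs: there is no way to resume here
        }
        catch (Exception ex)
        {
            Console.WriteLine("handled: " + ex.Message);
        }
        // Execution always continues here, after the whole try/catch.
        Console.WriteLine("continuing after the try/catch");
    }

    static void Step(int n)
    {
        if (n == 2) throw new Exception("step 2 failed");
        Console.WriteLine("step " + n + " ok");
    }
}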
OnError(): Use it to:
Deal with any unhandled error.
Block (or not) the default error dialog.
In any case, the thread stops after the error has been handled.
There can be multiple OnError() handlers active at a time, and you can decide in which order they run when an error occurs (or stop after any one of them).
If all handlers return 0, the handlers are called one after another, then the default error message is displayed, then the thread exits.
If any handler returns a non-zero integer, the thread exits without calling the remaining handlers and without displaying the default error dialog.
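The handler-chain semantics just described can be sketched language-neutrally; here in C#, purely as an illustration of the dispatch logic (not AHK code):

using System;
using System.Collections.Generic;

class ErrorChain
{
    // Each handler inspects the error and returns 0 (pass on) or non-zero (stop).
    private readonly List<Func<Exception, int>> handlers =
        new List<Func<Exception, int>>();

    public void OnError(Func<Exception, int> handler) => handlers.Add(handler);

    public void Raise(Exception err)
    {
        foreach (var h in handlers)
        {
            if (h(err) != 0)
                return;   // non-zero: skip remaining handlers and the dialog
        }
        // All handlers returned 0: show the default error message.
        Console.WriteLine("Default error dialog: " + err.Message);
        // ...and in AHK's model, the thread then exits either way.
    }
}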

Do I have to check InvokeRequired and use the Invoke function with every control I want to update?

I'm writing a scheduler. It has a single form, frmMain, which shows jobs that are currently running and a history of job steps that have run. It has an object of class Scheduler that manages running new jobs. Scheduler keeps a collection, List<RunningJob>. RunningJob executes each step in turn through a series of sub-classes.
When a job is started, the Scheduler creates a new BackgroundWorker with the DoWork, ProgressChanged and RunWorkerCompleted methods setup with handlers that point back into the instance of RunningJob.
Each time a job/step starts/ends, one of these handlers in RunningJob raises an appropriate event into Scheduler, and Scheduler raises an appropriate event into frmMain, i.e.:
frmMain (1 instance) <---- Scheduler (1 instance) <---- RunningJob.WorkerProgressChanged (many instances)
The RunningJob executes correctly, but the reporting going up to the interface is not working correctly. Any logging to files I do is also suspect (I'm using a single function, LogInfo, to do this). I have a number of questions:
When I use InvokeRequired() and Invoke() within frmMain, do I have to do this with every single control I want to update (there are several)? Can I just check InvokeRequired() on one control and use Invoke on all of them based on that result?
Why bother checking InvokeRequired() at all? Why not just use Invoke() every single time? It would make for simpler code.
There is only one instance of Scheduler, and I am raising events to get execution back into it from each job. I think this is part of the problem. How is multithreading handled when doing this? Is there some sort of InvokeRequired/Invoke check I can do on the events before raising them? Can I raise events at all in this situation? I like events, rather than calling methods on the owner class, because it improves encapsulation. What is best practice here?
In general, if I'm calling a piece of code from many different threads, not necessarily to update a form, but just to perform some function (e.g. add a line of text to a file for logging purposes), how do I block one thread until the other has completed?
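For reference, the standard marshaling pattern for the first question looks roughly like this. This is a hedged sketch: UpdateStatus, lblStatus, and the body of LogInfo are illustrative names and assumptions, not code from the question.

// Inside frmMain: route every cross-thread UI update through one helper.
private void UpdateStatus(string text)
{
    if (InvokeRequired)
    {
        // Called from a worker thread: re-invoke this same method on the UI thread.
        Invoke(new Action<string>(UpdateStatus), text);
        return;
    }
    // Now on the UI thread: safe to touch any control on this form.
    lblStatus.Text = text;
}

// A thread-safe LogInfo: the lock blocks each caller until the previous
// writer has finished, which is one answer to the last question above.
private static readonly object LogLock = new object();

private static void LogInfo(string line)
{
    lock (LogLock)
    {
        System.IO.File.AppendAllText("scheduler.log", line + Environment.NewLine);
    }
}

Checking InvokeRequired on the form itself is enough for all the controls the form owns, as long as they were all created on the same UI thread (the normal case).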

Flow resuming in MuleSoft

The problem I am facing: I am using a For Each component to iterate over records one by one and insert them into an end system.
When the data is correct, it is inserted into the end system, but when there is an exception in the data, the exception handling code executes and the flow does not resume inside the For Each loop, so the remaining records never get processed.
I have tried adding a subflow and calling it from the flow, but adding exception handling in a subflow gives me the error "invalid content in custom or catch or choice exception handling".
How do I resume the flow after the exception/error handling block has executed?
Subflows can't have endpoints or exception strategies; that's why you're getting this error.
Nevertheless, you could just use a normal flow instead of your subflow.
If it has no endpoint it's called a private flow and can only be referenced from inside your application.
HTH.
You can go with two options here:
1. Use batch processing, which gives you lists of elements that were processed successfully and those that failed.
2. Use the Until Successful scope.
First, remember that as soon as a processor in ANY scope (flow, try, etc.) raises an error, then once the error handler has run (regardless of the handler's level: app default, flow scope, or try scope, and regardless of whether the error scope in the error handler is On Error Propagate or On Error Continue), NO, repeat NO, other processors in that scope are executed.
The pattern that gets you what you want is this:
Wrap the processor (or series of processors) in question in a Try scope. (The easiest way to do this is to select the desired processors, right-click, click "Wrap in..." in the pop-up, and then choose "try" in the resulting fly-out.)
Drop an On Error Continue error scope into the Error Handling section of the Try scope.
Handle the error however you want in that error scope. This can include NO processors at all, which treats the error as a no-op.
Thus, when the processor of concern throws the error, no further processors in that Try scope get called, BUT after the Try scope's On Error Continue error scope runs, processing continues in the parent scope (in this case the flow) and you get to keep processing the elements in your collection.
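The control flow described above maps onto ordinary try/catch-per-iteration semantics. Here is a C# analogy of the Try scope plus On Error Continue pattern (an illustration of the logic only, not Mule configuration; the record values are made up):

using System;
using System.Collections.Generic;

class ForEachWithOnErrorContinue
{
    static void Main()
    {
        var records = new List<string> { "good", "bad", "good" };
        foreach (var record in records)
        {
            try
            {
                // Stand-in for the processor that may error (the insert step).
                if (record == "bad") throw new Exception("invalid data");
                Console.WriteLine("inserted " + record);
            }
            catch (Exception ex)
            {
                // "On Error Continue": handle (or ignore) the error and fall
                // through, so the loop moves on to the next record.
                Console.WriteLine("skipped a record: " + ex.Message);
            }
        }
    }
}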
NOTE: It is correct that a subflow cannot have an error handler of its own. HOWEVER, it is important to point out that subflows run in the context of their parent/calling scope (e.g. a flow or private flow), which CAN have error handlers. So, if you have flows A and B with their error handlers A' and B', and subflow C, and flows A and B each call subflow C through a flow reference, and further suppose you have a processor P1 in C that throws an error, THEN:
When P1 runs after being called from flow A, control passes to A'.
When P1 runs after being called from flow B, control passes to B'.
NOTE 2: Until Successful will not help the iteration. It only repeats the attempt to send the data until the processor doing so no longer errors. That does not seem to be what the OP was asking, though my interpretation may be wrong.

Ensure a web server's query will complete even if the client browser page is closed

I am trying to write a control panel to:
Inform about certain KPIs.
Enable the user to initiate certain requests/jobs by pressing a button that then runs a stored proc on the DB, sets a specific setting, etc.
So far, so good, except I would like to run some bigger jobs whose running time is unknown and could exceed both the script timeout period AND the time the user is willing to wait for a response.
What I want is a "fire and forget" process: the user hits the button, and even if they kill the page or turn off their phone, they know the job has been initiated and WILL complete.
I was looking into C#'s SqlCommand.BeginExecuteNonQuery, which issues the query asynchronously, so the proc is fired without the caller having to wait for a response before carrying on. However, I don't know what happens when the page/app that fired it is shut down.
I was also thinking of some sort of Ajax call that fires the code behind the scenes so the user doesn't see it running, but again, I believe that if the user closes the page the script will die and the command will die on the server as well.
The only way I know of for certain is a "queue" table: jobs are inserted into the table, and a SQL Server Agent job comes along every minute or two, checks for new inserts, and runs the code if there are any. That way it is all on the DB, and only a DB crash will destroy it. It won't help when multiple long-running jobs are waiting to run concurrently, but it's the only approach I can be sure will run the code at all.
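For illustration, the enqueue side of that pattern might look like this. This is a hedged sketch: the JobQueue table, its columns, and the job name are hypothetical, and connectionString is assumed to be defined elsewhere.

// Requires System.Data.SqlClient. Inserts a row describing the job; the
// Agent job polls the table and runs whatever is pending.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO JobQueue (JobName, RequestedAt, Status) " +
    "VALUES (@name, GETDATE(), 'Pending')", conn))
{
    cmd.Parameters.AddWithValue("@name", "RebuildKpiAggregates"); // hypothetical job
    conn.Open();
    cmd.ExecuteNonQuery(); // returns immediately; the page never waits for the job
}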
Any ideas?
Any language is okay.
Since web browsers are disconnected from the server once the request is sent, a request always runs for its full duration. The governing factor isn't what the browser does, but how long the web site itself will allow an action to continue.
IIS (and web servers in general) have a timeout period for requests: if the work being done simply takes too long, the request is terminated, abruptly stopping whatever was running, such as a database call or executing code.
Simply making your long-running actions asynchronous may seem like a good idea; however, I would recommend against it. The reason is that in ASP and ASP.NET, asynchronously-called code still consumes a thread in a way that blocks other legitimate requests from getting through (in some cases you can end up consuming two threads!). This can have performance implications in non-obvious ways. It's better to just increase the timeout and allow the synchronously blocking task to complete. There's nothing special you have to do to make such a request finish fully; it will complete even if the sender closes his browser or turns off his phone immediately after (presuming the entire request was received).
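If you take the increase-the-timeout route, in ASP.NET the limit can be raised per request from code-behind. A minimal sketch (600 seconds is an arbitrary value):

// Raise the request timeout (in seconds) before starting the long-running work.
// Applies to the current request only; the default is just under two minutes.
Server.ScriptTimeout = 600;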
If you're still concerned about making certain work finish, no matter what is going on with the web request, then it's probably better to create an out-of-process server/service that does the work and to which such tasks can be handed off. Your web site then invokes a method that, inside the service, starts its own async thread to do the work and then immediately returns. Perhaps it also returns a request ID, so that the web page can check on the status of the requested work later through other methods.
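A minimal sketch of that hand-off, assuming a hypothetical JobService hosted in a separate service process (all names here are illustrative, not from an actual framework):

using System;
using System.Collections.Concurrent;
using System.Threading;

// Runs inside the out-of-process service, not the web application.
public class JobService
{
    private readonly ConcurrentDictionary<Guid, string> _status =
        new ConcurrentDictionary<Guid, string>();

    // Called by the web site (e.g. over WCF or HTTP); returns immediately.
    public Guid StartJob(string jobName)
    {
        var id = Guid.NewGuid();
        _status[id] = "Running";
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                RunStoredProc(jobName);          // the long-running work
                _status[id] = "Completed";
            }
            catch (Exception ex)
            {
                _status[id] = "Failed: " + ex.Message;
            }
        });
        return id;                               // request ID the page can poll with
    }

    // Called later by the web page to check on progress.
    public string GetStatus(Guid id)
    {
        string s;
        return _status.TryGetValue(id, out s) ? s : "Unknown";
    }

    private void RunStoredProc(string jobName)
    {
        // Database call goes here (hypothetical).
    }
}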
You may use an asynchronous method and call the query from it.
A simple synchronous method can be turned into an asynchronous one in the following manner.
Suppose you have a TestMethod to be called asynchronously:
class AsyncDemo
{
    // The method to be called asynchronously: runs your query and
    // reports which thread it executed on.
    public string TestMethod(out int threadId)
    {
        threadId = System.Threading.Thread.CurrentThread.ManagedThreadId;
        // ... call your query here ...
        return "query finished";
    }
}

// An asynchronous handler delegate matching TestMethod's signature.
public delegate string AsyncMethodCaller(out int threadId);

In your main program, or wherever you have to call TestMethod:

public static void Main()
{
    // The asynchronous method puts the thread id here.
    int threadId;

    // Create an instance of the test class.
    AsyncDemo ad = new AsyncDemo();

    // Create the delegate.
    AsyncMethodCaller caller = new AsyncMethodCaller(ad.TestMethod);

    // Initiate the asynchronous call. (Delegate.BeginInvoke is supported
    // on the .NET Framework, but not on .NET Core / .NET 5+.)
    IAsyncResult result = caller.BeginInvoke(out threadId, null, null);

    // Call EndInvoke to wait for the asynchronous call to complete
    // and to retrieve the result.
    string returnValue = caller.EndInvoke(out threadId, result);

    Console.WriteLine("The call executed on thread {0}, with return value \"{1}\".",
        threadId, returnValue);
}
In my experience, a Classic ASP or ASP.NET page will run to completion even if the client disconnects, unless you have something in place that checks whether the client is still connected and acts when it is not, or a timeout is reached.
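In ASP.NET, that client check is available on the response object; a sketch of how it is typically polled inside long-running work:

// Inside a long loop in the page: bail out early if the browser went away.
// Omit the check entirely if you want the work to finish regardless.
if (!Response.IsClientConnected)
{
    return;
}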
However, it would probably be better practice to run these sorts of jobs as scheduled tasks.
On submission, your web page could record in a database that the task needs to be run; when the scheduled task runs, it checks for this and starts the job.
Many web hosts and/or web control panels allow you to create scheduled tasks that call a URL on schedule.
Alternatively, if you have direct access to the web server, you could create a scheduled task on the server to call a URL on a schedule.
Or, in ASP.NET, you can put some code in global.asax to run on a schedule. Be aware, though, that if your website is set to stop after a certain period of inactivity, this will not work unless there is frequent, continuous activity.
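A sketch of the global.asax approach (hedged: the intervals and CheckForPendingJobs are illustrative, and the caveat above applies, since the timer dies if IIS recycles or idles out the app pool):

// In Global.asax.cs.
using System;
using System.Threading;

public class Global : System.Web.HttpApplication
{
    // Kept in a static field so the timer is not garbage-collected.
    private static Timer _schedule;

    protected void Application_Start(object sender, EventArgs e)
    {
        // Fire every five minutes, starting one minute after startup.
        _schedule = new Timer(_ => CheckForPendingJobs(),
                              null,
                              TimeSpan.FromMinutes(1),
                              TimeSpan.FromMinutes(5));
    }

    private static void CheckForPendingJobs()
    {
        // Query the queue table and run anything pending (hypothetical).
    }
}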

Mule returning null from RequestContext.getEvent()

In org.mule.component.BindingInvocationHandler's invoke method, Mule tries to get the current event from RequestContext.getEvent(), but the value comes back as null and we get a NullPointerException.
What could be setting RequestContext's current event to null?
Update: we are using a java.util.concurrent.ExecutorService to invoke a method bound by BindingInvocationHandler.
RequestContext.getEvent() uses a ThreadLocal to find the in-flight event, so maybe you're calling it from a thread that is not the one processing the MuleEvent?
If that's the case, you can try cloning the event, passing it to your thread, and re-establishing it as the current event with RequestContext.setEvent(xxx).
Expect turbulence, as this is no small feat, though Mule does it internally.
Use newThreadCopy() on the event to get a copy that can be processed by another thread without throwing an exception.
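The failure mode here is the generic thread-local one; it can be reproduced in C# as an analogy (illustration only, not Mule code):

using System;
using System.Threading.Tasks;

class ThreadLocalPitfall
{
    // Each thread sees its own copy of this field, like Mule's RequestContext.
    [ThreadStatic] private static string currentEvent;

    static void Main()
    {
        currentEvent = "event-123";          // set on the "request" thread
        Console.WriteLine(currentEvent);     // prints "event-123"

        Task.Run(() =>
        {
            // A pool thread has its own unset copy: prints "null", just as
            // RequestContext.getEvent() returns null on an executor thread.
            Console.WriteLine(currentEvent ?? "null");
        }).Wait();
    }
}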