In the invoke method of org.mule.component.BindingInvocationHandler, Mule tries to get the current event from RequestContext.getEvent(), but the value comes back null and we get a NullPointerException.
What could cause the current event of RequestContext to be set to null?
Update: We are using java.util.concurrent.ExecutorService to invoke a method bound by BindingInvocationHandler.
RequestContext.getEvent() uses a ThreadLocal to look up the in-flight event, so maybe you're calling it from a thread other than the one that processes the MuleEvent?
If that's the case, you can try cloning the event, passing it to your thread, and re-establishing it as the current event with RequestContext.setEvent(xxx).
Expect turbulence, as this is no small feat, though Mule does it internally.
Use the newThreadCopy() method on the event to get a copy that can be processed by another thread without an exception being thrown.
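To illustrate, a minimal sketch of that approach, assuming Mule 3.x APIs (org.mule.RequestContext and MuleEvent#newThreadCopy()); the class name, executor setup, and the placeholder comment for the bound-method invocation are made up for the example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.mule.RequestContext;
import org.mule.api.MuleEvent;

public class EventHandoff
{
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void handOff()
    {
        // Capture the in-flight event while still on the Mule thread...
        MuleEvent event = RequestContext.getEvent();
        // ...and copy it so that another thread may process it safely.
        final MuleEvent copy = event.newThreadCopy();

        executor.submit(new Runnable()
        {
            @Override
            public void run()
            {
                // Re-establish the ThreadLocal so that RequestContext.getEvent()
                // (and hence BindingInvocationHandler) sees a current event.
                RequestContext.setEvent(copy);
                // ... invoke the method bound by BindingInvocationHandler here ...
            }
        });
    }
}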
My question is about how to properly sink COM events from a sub-object without creating a circular reference that would lead to memory leaks.
There is an ActiveX control called CMyControl. This control creates an instance of an embedded web browser (IWebBrowser2) internally to display some HTML-content.
The web browser exposes an event source called DWebBrowserEvents2 that can deliver some interesting progress updates to CMyControl, such as DocumentComplete when the HTML document has been fully loaded, or notifications when an error occurs.
And CMyControl will handle these events with the help of IDispEventSimpleImpl.
The issue I'm facing is that instances of CMyControl do not get destroyed when Release is called.
The direct reason for this is that the reference counter always ends up at 1 instead of 0.
It turns out that IDispEventSimpleImpl is indirectly responsible for this. That makes sense to me, because the web browser needs the control's interface to sink the events, so it keeps a reference until you call IDispEventSimpleImpl::DispEventUnadvise, at which point the interface is released.
But when Release gets called on IMyControl, the event source won't get disconnected.
I understand why: there is no reason it would do that, since Release doesn't even know about the event connection.
I stumbled upon this post, where they advise (pun intended) creating a custom "sink" object:
https://microsoft.public.vc.atl.narkive.com/4MgGRavd/dispeventadvise-dispeventunadvise-problem
The idea is that the sink object would see the events fired by the web browser first, before passing them on to CMyControl.
For this, an instance of this sink object would be stored inside CMyControl.
The sink object then connects to (and gets referenced by) the browser instead of the CMyControl instance itself. This breaks the circular reference.
Furthermore, the sink object gets passed a pointer to the "mothership" (the CMyControl instance) so it can perform a callback whenever an event occurs.
My question is: is this really how it should be done? Isn't there a better/proper way to connect the events?
This question is about the example provided with hiredis.
Can event_base_dispatch(base) be called from a different thread created with pthread_create()?
event_base_dispatch() runs a loop and is a blocking call. My idea here is to send all my Redis commands from the parent thread by invoking redisAsyncCommand(), while the event base runs in the other thread.
You need to enable libevent's thread support before you create the event base, e.g. by calling evthread_use_pthreads() (or evthread_use_windows_threads() on Windows).
My question is about the Camunda API method RuntimeService#messageEventReceived(java.lang.String, java.lang.String, java.util.Map<java.lang.String,java.lang.Object>). We use this method to trigger a non-interrupting boundary message event (on a receive task that is waiting for a different message). As the third parameter of the method call, we pass some process variables.
As expected, this leaves the receive task active and starts a new execution leaving the boundary event. I would have expected the process variables passed as the third argument of RuntimeService#messageEventReceived to be stored in the newly created execution, but they appear to be stored in the execution of the receive task instead. This does not make much sense to me, because that is not the execution that "resulted" from the message.
We can work around this problem by determining which execution is new after the call to RuntimeService#messageEventReceived and attaching the process variables there manually. But this does not seem very elegant. Does anyone know a better solution? Or am I misunderstanding something?
This is expected behavior; see the Javadoc of the method:
void messageEventReceived(String messageName,
                          String executionId,
                          Map<String, Object> processVariables)
Notifies the process engine that a message event with the name 'messageName' has been received and has been correlated to an execution with id 'executionId'. The waiting execution is notified synchronously. Note that you need to provide the exact execution that is waiting for the message if the process instance contains multiple executions.
Parameters:
messageName - the name of the message event
executionId - the id of the process instance or the execution to deliver the message to
processVariables - a map of variables added to the execution
Since the process variables are set on the existing execution, they are also available on the newly created child execution.
As an alternative, you could create a ServiceTask after the boundary event which creates the process variables, OR you could create another ReceiveTask after the boundary event. This receive task could then be completed with your message and the needed variables.
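For reference, a hedged sketch of the correlation plus the workaround described in the question (Camunda 7 Java API; the message name "myBoundaryMessage", the variable "orderId", and the before/after execution diff are illustrative assumptions, not the engine's prescribed approach):

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.runtime.Execution;

public class BoundaryMessageDelivery
{
    public void deliver(RuntimeService runtimeService, String processInstanceId)
    {
        // Find the execution that is subscribed to the boundary message event.
        Execution waiting = runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId)
                .messageEventSubscriptionName("myBoundaryMessage")
                .singleResult();

        // Remember the executions that existed before the message was delivered.
        Set<String> before = new HashSet<>();
        for (Execution e : runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId).list())
        {
            before.add(e.getId());
        }

        // Variables passed here end up on the waiting execution, not on the
        // child execution created for the non-interrupting boundary event.
        runtimeService.messageEventReceived("myBoundaryMessage", waiting.getId(),
                Collections.<String, Object>emptyMap());

        // Workaround: locate the newly created execution and set the
        // variables locally on it.
        for (Execution e : runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId).list())
        {
            if (!before.contains(e.getId()))
            {
                runtimeService.setVariableLocal(e.getId(), "orderId", 42);
            }
        }
    }
}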
I am creating tasks by inheriting from Greenlet. I have a single parent task that calls start() on two children in its _run(). Elsewhere (it happens to be a systemd service) start() and join() are called.
The behavior seems correct; for example, the use of a Queue with timeouts achieves the desired effect. But I haven't found a good way to shut down the children from, say, a KeyboardInterrupt, or by registering a SIGTERM callback on the parent task. In the handler I would call child1.kill() and child2.kill(), but only the first call seemed to raise GreenletExit.
I never call join() on the children and I'm not sure how I would do this properly. Am I misusing the library?
My error was that I was catching gevent.greenlet.GreenletExit in the child tasks. If you need to handle the exit, you can catch and then re-raise this exception.
The problem I am facing: I am using a For Each component to iterate over records one by one and then insert them into an end system.
When a record's data is correct, it is inserted into the end system; but when a record contains bad data, the exception handling code executes and the flow does not resume inside the For Each loop, so the remaining records are not processed.
I have tried adding a subflow and calling it from the flow, but adding exception handling to the subflow gives me the error "invalid content in custom or catch or choice exception handling".
How do I resume the flow after the exception/error handling block has executed?
Subflows can't have endpoints or exception strategies; that's why you're getting this error.
Nevertheless, you could just use a normal flow instead of your subflow.
If it has no inbound endpoint, it is called a private flow and can only be referenced from inside your application.
HTH.
You can go with one of two options here:
1. Use batch processing, which gives you lists of the elements that were processed successfully and of those that failed.
2. Use the Until Successful scope.
First, remember that as soon as a processor in ANY scope (flow, try, etc.) raises an error, then after the error handler runs, NO, repeat NO, other processors in that scope are executed. This holds regardless of the level of the handler (app default, flow scope, or try scope) and regardless of whether the error scope in the error handler is On Error Propagate or On Error Continue.
The pattern that gets you what you want is this:
1. Wrap the processor (or series of processors) in question in a Try scope. (Note: the easiest way to do this is to select the desired processors, right-click, choose "Wrap in..." in the pop-up, and then pick "Try" in the resulting fly-out.)
2. Drop an On Error Continue error scope into the Error Handling section of the Try scope.
3. Handle the error however you want in that error scope. This can include NO processors at all, which treats the error as a no-op.
Thus, when the processor of concern throws the error, no further processors in that Try scope get called, BUT after the Try scope's On Error Continue error scope runs, processing continues in the parent scope (in this case the flow) and you get to keep processing the elements in your collection.
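As a sketch of that shape (Mule 4 style configuration; the flow names and the insert-into-end-system flow reference are made-up placeholders):

<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core
          http://www.mulesoft.org/schema/mule/core/current/mule.xsd">

    <flow name="process-records-flow">
        <foreach collection="#[payload]">
            <try>
                <!-- Hypothetical processor that may fail for bad records. -->
                <flow-ref name="insert-into-end-system"/>
                <error-handler>
                    <on-error-continue>
                        <!-- Log and move on; the foreach proceeds to the next record. -->
                        <logger level="WARN" message="#['Skipping record: ' ++ error.description]"/>
                    </on-error-continue>
                </error-handler>
            </try>
        </foreach>
    </flow>
</mule>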
NOTE: It is correct that a subflow cannot have an error handler of its own. HOWEVER, it is important to point out that subflows run in the context of their parent/calling scope (e.g. a flow or private flow), which CAN have error handlers. So, if you have Flows A and B with their error handlers A' and B', and Subflow C, and Flows A and B each call Subflow C through a flow reference, and further suppose you have a processor P1 in C that throws an error, THEN:
When P1 runs being called from Flow A, control passes to A'.
When P1 runs being called from Flow B, control passes to B'.
NOTE 2: Until Successful will not help the iteration. It will only repeat the attempt to send the data until the processor doing so no longer errors. That does not seem to be what the O.P. was asking, though my interpretation may be wrong.