Flow resuming in MuleSoft - Mule

The problem I am facing: I am using a For Each component to iterate over the records one by one and insert each of them into an end system.
When the data is correct it is inserted into the end system, but when a record raises an exception, the exception handling code runs and the flow does not resume inside the For Each loop, so the remaining records never get processed.
I have tried adding a sub-flow and calling it from the flow, but adding exception handling to the sub-flow gives me the error "invalid content in custom or catch or choice exception handling".
How can I resume the flow after the exception/error handling block has executed?

Sub-flows can't have endpoints or exception strategies; that's why you're getting this error.
Nevertheless, you could just use a normal flow instead of your sub-flow.
If it has no endpoint it's called a private flow and can only be referenced from inside your application.
HTH.

You can go with two options here:
1. Use batch processing, which gives you the lists of elements that were processed successfully and those that failed.
2. Use the Until Successful scope.

First, remember that as soon as a processor in ANY scope (flow, try, etc.) raises an error, NO other processors in that scope are executed once the error handler has run. This holds regardless of the level of the handler (app default, flow level, or try level) and regardless of whether the error handler uses On Error Propagate or On Error Continue.
The pattern that gets you what you want is this:
Wrap the processor (or series of processors) in the given scope into a Try Scope. (Note: the easiest way to do this is to select the desired processors, right-click, click "Wrap in..." in the pop-up, and then choose "Try" in the resulting fly-out.)
Drop an On Error Continue error-scope into the Error Handling section of the Try Scope.
Handle the error however you want in the error-scope. This can include NO processors, which treats the error as a no-op.
Thus, when the processor of concern throws the error, no further processors in that Try Scope get called, BUT after the Try Scope's On Error Continue handler has run, processing continues in the parent scope (in this case the flow), and you get to keep processing the elements in your collection.
NOTE: It is correct that a sub-flow cannot have an error handler of its own. HOWEVER, it is important to point out that sub-flows run in the context of their parent/calling scope (e.g. a flow or private flow), which CAN have error handlers. So, suppose you have Flows A and B with error handlers A' and B', plus Subflow C, and both A and B call C through a flow reference; further suppose a processor P1 in C throws an error. THEN:
When P1 runs after being called from Flow A, control passes to A'.
When P1 runs after being called from Flow B, control passes to B'.
NOTE 2: Until Successful will not help the iteration. It only repeats the attempt to send the data until the processor doing so no longer errors. That does not seem to be what the O.P. was asking, though my interpretation may be wrong.

Related

In AHK, what is the difference between Try/Catch and OnError()? When to use one or the other?

I am currently writing an AHK script that reads and writes files.
I would like to handle the possible I/O errors, but the doc isn't clear to me regarding whether I should use Try/Catch or OnError().
What is the difference between the two? And when to use one or the other?
So, after some more research, here is my understanding:
Try/Catch: Use it to:
Specifically identify certain lines of code over which it will be applied.
Then, if you would like, proceed with the execution.
A Try/Catch lets execution proceed with the command that follows the Try block that failed.
(However, a Try/Catch does not let execution resume at the command after the one that failed within the Try block. For example, if 5 commands are wrapped and the 2nd one throws, it is not possible to resume execution from the 3rd after doing something in the Catch block.)
OnError(): Use it to
Deal with any unhandled error.
Block (or not) the default error dialog.
In any case, thread execution is stopped after you have handled the error.
There can be multiple OnError() handlers active at a time, and you can decide in which order they are executed (or whether to stop execution after any one of them) when an error occurs.
If all handlers return 0, they are called one after the other, then the default error message is displayed, and then the thread exits.
If any handler returns a non-zero integer, the thread exits without calling the following handlers and without displaying the default error dialog.

Why does "On Error Propagate" in MuleSoft rethrow the same error?

I am new to MuleSoft, and while studying it I struggle to understand why the "onErrorPropagate" handler rethrows the error after executing its scope.
Can you explain the benefits?
An on-error-propagate will roll back any transactions, execute its processors, and then rethrow the existing error, meaning its owner will be considered to have failed.
The best use is in a layered system, to allow each layer to do its own small part of an error response.
If you are familiar with Java you can think of it as catching the exception and re-throwing it. For example, sometimes you want to do something with the error yourself, but still want to propagate it upwards for higher levels to deal with.
You could add logging in a specific flow for the error but then leave it to the parent flows to actually deal with the exception.
The "onErrorPropagate" propagates (rethrows) the error to the parent flow (or the global error handler if it's already reached the main flow).
This can be useful in a few cases.
Say you have some flow specific error handling (e.g. if something goes wrong, set a default payload).
Then you propagate this error to the next level where you have your Global error handler that, say, stores some info in a QA database.
You don't want to have that database connector in every single error handler.
In this way you can achieve a "Java inheritance"-like structure for your errors.
Side note: if you want your error to only be handled and do nothing further, you can use "onErrorContinue".

Triggering non-interrupting boundary event with variables

My question is about the Camunda API method RuntimeService#messageEventReceived(java.lang.String, java.lang.String, java.util.Map<java.lang.String,java.lang.Object>). We use this method to trigger a non-interrupting boundary message event (on a receive task that is waiting for a different message), like this. As third parameter in the method call, we are passing some process variables.
As expected, this leaves the receive task active and starts a new execution leaving the boundary event. I would have expected that the process variables passed to the third argument of RuntimeService#messageEventReceived would now be stored in the newly created execution, but it seems to be the case that they are stored in the execution of the receive task. This does not make much sense to me, because this is not the execution that "resulted" from the message.
We can workaround this problem by determining which execution is new after the execution of RuntimeService#messageEventReceived and attaching process variables there manually. But this does not seem very elegant - does anyone know a better solution? Or am I misunderstanding something?
This is expected behavior; see the Javadoc of the method:
void messageEventReceived(String messageName,
                          String executionId,
                          Map<String,Object> processVariables)
Notifies the process engine that a message event with the name 'messageName' has been received and has been correlated to an execution with id 'executionId'. The waiting execution is notified synchronously. Note that you need to provide the exact execution that is waiting for the message if the process instance contains multiple executions.
Parameters:
messageName - the name of the message event
executionId - the id of the process instance or the execution to deliver the message to
processVariables - a map of variables added to the execution
Since the process variables are set on the existing execution, they are also available on the newly created child execution.
As an alternative, you could add a service task after the boundary event that creates the process variables, OR you could add another receive task after the boundary event; that receive task could then be completed with your message and the needed variables.

How can I display exception messages from custom functoid errors in the BizTalk Administration Console?

My goal is to influence the error descriptions that appear in BizTalk Administration Console in the Error Information tab of suspended instance windows, after errors occur in my custom functoids. If possible, I would also like the ErrorReport.Description promoted property to display this error description on the failed message.
I've read everything I can find about custom functoid development, but I can't find much about error handling within them. In particular, whenever my functoids throw exceptions, I see the boilerplate "Exception has been thrown by the target of an invocation" message that appears whenever exceptions occur through reflection, rather than the message on the exception itself.
I had hoped to find something within the BaseFunctoid class framework that would allow me to submit an error string, so as to traverse the reflection boundary. Is there some way to pass error information from within a custom functoid, such that the BizTalk Administration Console will display it?
If I emulate the approach taken by DatabaseLookupFunctoid and DatabaseErrorExtractFunctoid, is there some way I can fail the map with the extracted error, rather than mapping it to a field on the destination schema as is shown in its examples?
The simplest way to do this is using custom C#, writing something like this in your code:
System.Diagnostics.EventLog.WriteEntry("EVENT_LOG_SOURCE", "Error message...", System.Diagnostics.EventLogEntryType.Error);
As Johns-305 mentions, you need to make sure your event source is registered (e.g. System.Diagnostics.EventLog.CreateEventSource("EVENT_LOG_SOURCE", "Application")), but this should really be done as part of your installation steps, with an EventLogInstaller or some kind of script that sets up the environment. It's certainly true that error handling in BizTalk is just .NET error handling, but one thing to keep in mind is that maps actually execute as XSLT, and the context in which they're executing can have a major impact on how exceptions and errors will be handled, particularly unhandled exceptions.
Orchestrations
If you're executing a transform in an orchestration that has exception handling in it, the thrown exception will be handled and may even fall into additional logging you have in the orchestration. In other words, executing a throw from a C# functoid will work the way you'd expect it to work elsewhere in C#. However, I try to avoid this, since you never know whether a map will at some point be used elsewhere, and because exception handling in XSLT doesn't always work the way you'd think (see below).
Send/Receive Ports
Unfortunately, if you're executing a map on a send or receive port and throw an exception within it, you will almost certainly get a very unhelpful error message in the event log and a suspended instance in the group hub. There is no easy, straightforward way to simply "cancel" a transform: XSLT 1.0 doesn't specify any way of doing that (see for example Throwing an exception from XSLT). That leaves you with outputting an error string to a particular node in the output (and/or to the event log), writing lots of completely custom XSLT to try to validate input, or designing your schemas properly and using a validating component where necessary. For example, if you have a node that must match a particular regex, or should never be empty, or should never repeat more than X times, make sure you set those restrictions on the schema and then pass the message through the XmlValidator or similar before attempting to map.
The simplest answer is, you just do.
Keep in mind, there is nothing special at all about error handling in BizTalk apps, it's just regular plain old .Net error handling.
So, what you do is catch the error in your code, write the details to the Windows Event Log (be sure to create a custom Event Source) and...that's it. That is all I do. I don't worry about what appears in BT Admin specifically.
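For illustration, a sketch of what that can look like inside a functoid's external function (the class, method, and event source names below are placeholders, and the BaseFunctoid wiring is omitted; the source is assumed to be registered at install time):

using System;
using System.Diagnostics;

public class LookupFunctoidSketch
{
    // Placeholder event source name; register it during installation
    // (e.g. with an EventLogInstaller), not at run time.
    private const string EventSource = "MyBizTalkApp";

    // The method the map calls (the one registered via SetExternalFunctionName).
    public string Execute(string input)
    {
        try
        {
            // ... the real lookup / transformation work goes here ...
            return input.ToUpperInvariant();
        }
        catch (Exception ex)
        {
            // Write the real cause to the Application event log so it can be
            // found even though the map surfaces the generic reflection message.
            EventLog.WriteEntry(EventSource,
                "Lookup functoid failed: " + ex,
                EventLogEntryType.Error);
            throw; // still let the map fail; don't silently swallow the error
        }
    }
}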

To fault or not to fault

I'm having a discussion with a colleague about when to throw faults and when not to throw faults in a WCF service.
One opinion is that we should only throw faults when the service operation could not do its work due to some error, and something may be in an invalid state because of it. So, some examples:
ValidateMember(string name, string password, string country)
-> would throw a fault if the mandatory parameters are not passed, because the validation itself could not be executed;
-> would throw a fault if some internal error occurred, like the database being down
-> would return a status contract in all other cases, that specifies the result of the validation (MemberValidated, WrongPassword, MemberNotKnown,...)
GetMember(int memberId)
-> would only throw a fault if something is down; in all other cases it would return the member, or null if not found
The other opinion is that we should also throw faults when GetMember does not find the member, or in the case of ValidateMember the password is wrong.
What do you think?
My take on this...
There are three causes of failure:
The service code threw an exception, e.g. database error, logic error in your code. This is your fault.
The client code failed to use your service properly according to your documentation, e.g. it didn't set a required flag value, it failed to pass in an ID. This is the client software developer's fault.
The end user typed in something silly on screen, e.g. missing date of birth, negative salary. This is the end user's fault.
It's up to you how you choose to map actual fault contracts to each cause of failure. For example, we do this:
For causes 1 and 2, all the client code needs to know is that the service failed. We define a very simple "fatal error" fault contract that contains only a unique error ID. The full details of the error are logged on the server.
For cause 3, the end user needs to know exactly what he/she did wrong. We define a "validation errors" fault contract containing a collection of friendly error messages for the client code to display on screen.
We borrow the Microsoft EntLib class for cause 3, and use exception shielding to handle causes 1 and 2 declaratively. It makes for very simple code.
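As a rough sketch (the type and member names below are illustrative, not the actual EntLib types), the two fault contracts and an operation declaring them might look like this:

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Causes 1 and 2: the client only learns that something failed, plus an ID
// it can quote to support; the full details stay in the server-side log.
[DataContract]
public class ServiceFault
{
    [DataMember]
    public Guid ErrorId { get; set; }
}

// Cause 3: friendly, user-displayable validation messages.
[DataContract]
public class ValidationFault
{
    [DataMember]
    public List<string> Messages { get; set; }
}

// Minimal placeholder return type for the example.
[DataContract]
public class MemberDto
{
    [DataMember]
    public int Id { get; set; }
}

[ServiceContract]
public interface IMemberService
{
    [OperationContract]
    [FaultContract(typeof(ServiceFault))]
    [FaultContract(typeof(ValidationFault))]
    MemberDto GetMember(int memberId);
}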
To Clarify:
We handle the three causes like this inside the service:
An unexpected exception is thrown in the service code. We catch it at the top level (actually exception shielding catches it, but the principle is the same). Log full details, then throw a FaultException<ServiceFault> to the client containing only the error ID.
We validate the input data and deliberately throw an exception. It's normally an ArgumentException, but any appropriate type would do. Once it is thrown, it is dealt with in exactly the same way as (1) because we want to make it appear the same to the client.
We validate the input data and deliberately throw an exception. This time, it's a FaultException<ValidationFault>. We configure exception shielding to pass this one through un-wrapped, so it appears on the client as FaultException<ValidationFault> not FaultException<ServiceFault>.
End result:
No catch blocks at all inside the service (nice clean code).
Client only has to catch FaultException<ValidationFault> if it wants to display messages to the user. All other exception types including FaultException<ServiceFault> are dealt with by the client's global error handler as fatal errors, since a fatal error in the service generally means a fatal error in the client as well.
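On the client side, and assuming the illustrative contracts sketched above, that boils down to something like:

using System;
using System.ServiceModel;

public static class MemberClientSketch
{
    // 'client' is assumed to be a generated WCF proxy for IMemberService.
    public static MemberDto TryLoadMember(IMemberService client, int memberId)
    {
        try
        {
            return client.GetMember(memberId);
        }
        catch (FaultException<ValidationFault> fault)
        {
            // The only fault handled locally: show the friendly messages
            // to the user and carry on.
            foreach (var message in fault.Detail.Messages)
                Console.WriteLine(message);
            return null;
        }
        // FaultException<ServiceFault> and anything else is deliberately not
        // caught here; the client's global error handler treats it as fatal.
    }
}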
If it is a common, routine failure, then throwing a fault is a mistake. The software should be written to handle routine items, like a user entering the wrong password. Fault processing is for exceptional failures which are not considered part of the program's normal design.
For example, if your program was written with the idea that it always has access to a database, and the database is not accessible, that's an issue where the "fix" is well outside of the limits of your software. A fault should be thrown.
Fault processing uses a different logical flow through the structure of the program, and by using it only when you've "left" the normal processing of the problem, your solution leverages that language feature in a way that feels more natural.
I believe it is good practice to separate error handling and fault handling. Any error case should be dealt with by your program; fault handling is reserved for exceptional conditions. As a guide to the separation of the two, I found it useful to remember that there are only three types of error (when handling data and messages) and only one type of fault.
The error types are related to different types of validation:
Message validation - you can determine from the message contents that the data is valid or invalid.
Example: content that is intended to be a date of birth - you can tell from the data whether it is valid or not.
Context validation - you can only determine that content is invalid by reference to the message combined with the system state.
Example: a date of joining a company that is valid as a date but earlier than that person's date of birth.
Lies to the system - you can only determine that a message was in error when a later message throws up an anomaly.
Example: a valid date of birth is stored, and inspection of the person's birth certificate later shows it to be incorrect. Correcting lies to the system generally requires action outside of the system, for instance invoking legal or disciplinary remedies.
Your system MUST deal with all classes of error - though in case three this may be limited to issuing an alert.
Faults (exceptions) by contrast only have one cause - data corruption (which includes data truncation). Example: validation parameters are not passed.
Here the appropriate mechanism is fault or exception handling - basically handing off the problem to some other part of the system that is capable of dealing with it (which is why there should be an ultimate destination for unhandled faults).
In the old days we used to have a rule that exceptions were only for exceptional and unexpected things. One of the reasons you did not want to use them too much was that they "cost" a lot of computing power.
But if you use exceptions you can reduce the amount of code: no need for a lot of if/else statements, just let the exception bubble up.
It depends on your project. The most important thing is that there is a project standard and everyone does it the same way.
My opinion is that exceptions/faults should be thrown whenever what the method is supposed to do can't be achieved. So validation logic should never raise an exception unless the validation itself can't be performed (i.e. for technical reasons), and never just because the data is not valid (in that case it returns validation codes/messages or anything that helps the caller correct the data).
Now the GetMember case is an interesting one, because it's all about semantics. The name of the method suggests that a member can be retrieved by passing an id (compare with a TryGetMember method, for example). Of course the method should not throw the same exception when the id is nowhere to be found as when the database does not respond, but a wrong id passed to this method is probably a sign that something is going wrong somewhere before that call. The exception is when the user can directly enter a member id in the interface, in which case validation should occur before calling the method.
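To illustrate the naming point (the types and members here are made up for the example), compare:

using System;

public class MemberRepository
{
    // GetMember: the caller asserts that the id exists; an unknown id points
    // to a bug upstream, so failing loudly is reasonable.
    public Member GetMember(int memberId)
    {
        Member member = Find(memberId);
        if (member == null)
            throw new ArgumentException("Unknown member id " + memberId, "memberId");
        return member;
    }

    // TryGetMember: "not found" is an expected outcome and is reported through
    // the return value rather than an exception.
    public bool TryGetMember(int memberId, out Member member)
    {
        member = Find(memberId);
        return member != null;
    }

    private Member Find(int memberId)
    {
        // Placeholder for the actual data access.
        return null;
    }
}

public class Member
{
    public int Id { get; set; }
}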
I hear a lot about the performance issue. I just made a simple test in C# that throws and catches 1000 exceptions. It took 23 ms for 1K exceptions, i.e. about 23 µs per exception. I think performance is no longer the first argument here, unless you plan to raise more than 2000 exceptions per second, in which case you would see roughly a 5% slowdown, which I would start to consider.
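A rough sketch of that kind of measurement (the exact figures will of course vary with machine and runtime):

using System;
using System.Diagnostics;

class ExceptionTimingSketch
{
    static void Main()
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)
        {
            try
            {
                throw new InvalidOperationException("test");
            }
            catch (InvalidOperationException)
            {
                // Intentionally empty: we only measure the throw/catch cost.
            }
        }
        stopwatch.Stop();
        Console.WriteLine("1000 throw/catch cycles took " +
                          stopwatch.ElapsedMilliseconds + " ms");
    }
}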
My humble opinion...