Loop exit not working - Flowgear

So I have a workflow which is supposed to throw an error once a certain condition is satisfied (a false condition). As you can see in the log directly below, that part works: I first do a Loop Exit for the group 'coms' and an error is thrown. However, Flowgear seems to read only the last executed node and determine the workflow's status from that. Since the loop finishes last and is successful, the second log shows the workflow being evaluated as 'successful' even though an error was thrown inside it.
Any ideas on how to make the loop break? Also, why does Flowgear only consider the last node? There should be an option in the error node to stop all execution.

Iterator nodes (Splitter and Loop) will consume the errors. The only way at this stage to get the workflow to return an error is to cause an error in the AnyError or UnhandledError part of the workflow. I've created a workflow to demonstrate this here: http://flowgear.me/s/UdpGBbd
Hope this helps.

Related

Python: If error occurs anywhere, do specific line of code

I have a script I'm trying to write to process a large amount of data. There is, of course, potential for errors. In the script I need to connect to databases, and if the script encounters an error, the code never reaches the point where the database connections are closed. I'd like to have something in my Python code that will recognize that an error has occurred, no matter where, and at the very least close those databases. Does something like this exist? I know I can use try/except, but that would only work if I knew exactly where the error could occur. I'm basically looking for a catch-all to close my databases in the event an error occurs in a location I didn't anticipate.
To run certain cleanup code even if there is an error, use the finally block:
try:
    ...  # do stuff, possible exception
except:
    ...  # run this if an exception occurs
finally:
    ...  # always run this, even if there was an exception
Reference: https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions

Apache Flink - exception handling in "keyBy"

It may happen that data entering a Flink job triggers an exception, either due to a bug in the code or a lack of validation.
My goal is to provide a consistent way of handling exceptions that our team can use within Flink jobs, without causing any downtime in production.
Restart strategies do not seem to be applicable here because:
a simple restart won't fix the issue and we fall into a restart loop
we cannot simply skip the event
they can be good for OOMEs or some transient issues
we cannot add a custom one
A try/catch block in the "keyBy" function does not fully help because:
there's no way to skip the event in "keyBy" after the exception is handled
Sample code:
env.addSource(kafkaConsumer)
    .keyBy(keySelector)    // must return one result for one entry
    .flatMap(mapFunction)  // we can skip some entries here in case of errors
    .addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
I'd like to have the ability to skip processing of an event that caused an issue in "keyBy" and in similar methods that are supposed to return exactly one result.
Besides the suggestion of #phanhuy152 (which seems totally legit to me), why not filter before keyBy?
env.addSource(kafkaConsumer)
    .filter(invalidKeys)
    .keyBy(keySelector)    // must return one result for one entry
    .flatMap(mapFunction)  // we can skip some entries here in case of errors
    .addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
Can you reserve a special value like "NULL" for the keyBy to return in such a case? Then your flatMap function can skip events that carry such a value.
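For illustration, a rough sketch of that idea, assuming the key type is String and that "INVALID_KEY" is a value real data can never produce (both are assumptions, not part of the original answer; "Event" is again a placeholder element type):
env.addSource(kafkaConsumer)
    .keyBy(new KeySelector<Event, String>() {      // anonymous class keeps the key type visible to Flink
        @Override
        public String getKey(Event value) {
            try {
                return keySelector.getKey(value);  // normal key extraction
            } catch (Exception e) {
                return "INVALID_KEY";              // reserved sentinel key for bad events
            }
        }
    })
    .flatMap(mapFunction)                          // mapFunction can emit nothing for sentinel-keyed events
    .addSink(new PrintSinkFunction<>());
env.execute("Flink Application");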

JMeter stop current iteration

I am facing the following problem:
I have multiple HTTP Requests in my test plan.
I want every request to be repeated 4 times if it fails.
I realized that with a BeanShell Assertion, and it's already working fine.
My problem is that I don't want further requests to be executed if a previous request failed 5 times,
BUT I also don't want the thread to end.
I just want the current thread iteration to end,
so that the next iteration of the thread can start again with the 1st request (if the thread is meant to be repeated).
How do I realize that within the BeanShell Assertion?
Here is just a short extract of my code where I want the solution to go.
badResponseCounter is increased for every failed try of the request; this seems to work so far. Afterwards, the variable gets reset.
if (badResponseCounter == 5) {
    badResponseCounter = 0;
    // Stop current iteration
}
I already checked the API; methods like setStopTest() or setStopThread() are available, but there is nothing for quitting the current iteration. I also need the "continue" setting in the thread group, as otherwise the entire test will stop after a single request fails.
Any ideas of how to do this?
In my opinion the easiest way is using the following combination:
an If Controller to check the ${JMeterThread.last_sample_ok} and badResponseCounter variables
a Test Action sampler as a child of the If Controller, configured to "Go to next loop iteration"
Try this.
ctx.setRestartNextLoop(true);
I tried to skip when the thread number is 2, and I get the result I expected (it does not call b-2). It does not kill the thread either.
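A minimal sketch of what the BeanShell Assertion code could look like when combined with setRestartNextLoop, assuming badResponseCounter is kept in a JMeter variable that is initialised and incremented elsewhere as the question describes (all names are illustrative):
// Read the failure counter kept in a JMeter variable.
int badResponseCounter = Integer.parseInt(vars.get("badResponseCounter"));

if (badResponseCounter >= 5) {
    vars.put("badResponseCounter", "0");   // reset the counter for the next iteration
    ctx.setRestartNextLoop(true);          // end only the current iteration; the thread
                                           // continues with its next loop pass
}
This keeps the thread alive while skipping the remaining samplers of the current iteration.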

Best way to detect a "data loss" publish action when calling SSDT's SQLPackage.exe

When calling SQLPackage.exe (syntax described here) with the publish action /a:Publish, there are cases where data loss would occur and execution is halted; this behaviour is controlled by the parameter /p:BlockOnDataLoss (which defaults to 'true').
I need to know whether my publish action has succeeded or has failed due to 'data loss'.
Currently, when the publish succeeds the returned exit code is 0, and when it fails the exit code is just 1. We cannot tell whether a failure was caused by data loss or not. How can we identify this?
Somewhere in the console output we see a line that contains "... is being dropped, data loss could occur." So I intend to scan every output line as it is printed, but I guess there should be a better way to do this.
Hope to hear what you think.
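For what the line-scanning approach could look like, here is a sketch in Java; the executable location and the publish arguments are placeholders, not taken from the question:
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PublishRunner {
    public static void main(String[] args) throws Exception {
        // Placeholder arguments; substitute your own source file and target settings.
        ProcessBuilder pb = new ProcessBuilder(
                "SqlPackage.exe", "/a:Publish",
                "/SourceFile:MyDatabase.dacpac",
                "/TargetServerName:localhost",
                "/TargetDatabaseName:MyDatabase");
        pb.redirectErrorStream(true);              // merge stderr into stdout
        Process process = pb.start();

        boolean dataLossReported = false;
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);          // keep the normal console log
                if (line.contains("data loss could occur")) {
                    dataLossReported = true;       // remember that the warning appeared
                }
            }
        }

        int exitCode = process.waitFor();
        if (exitCode != 0 && dataLossReported) {
            System.out.println("Publish was blocked because data loss could occur.");
        } else if (exitCode != 0) {
            System.out.println("Publish failed for another reason.");
        }
    }
}
This only distinguishes the two failure cases by combining the exit code with the warning text, which is exactly the scanning the question describes.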

Erlang finish or kill process

I have an Erlang application. In this application I run a process with spawn(?MODULE, my_foo, [my_param1, my_param2, my_param3]).
And my_foo:
my_foo(my_param1, my_param2, my_param3) ->
    ...
    some code here
    ...
    ok.
When I open etop I see that this my_foo/3 function has the status proc_lib:sync_wait/2.
Then I tried to put exit(self(), normal) at the end of my function, but I see the same behaviour: proc_lib:sync_wait/2 in etop.
How can I kill or exit the process correctly?
Thank you.
Note that exit(Pid, Reason) and exit(Reason) do NOT do the same thing if Pid is the process itself. exit/1 tells the current process to exit - from the inside if you like - while exit/2 sends an exit signal to the process, even if that process is the one sending the signal. So when you do exit(self(), normal) you are actually sending the normal exit signal to yourself, which is ignored because a normal exit signal does not terminate the receiving process.
In this case putting the exit call at the end of the function should not make any difference as the process automatically dies (with reason normal) when the function with which it was started ends. It seems like the process is suspended somewhere before that.
proc_lib:sync_wait/2 is called inside proc_lib:start/start_link; it sits and waits for the spawned process to call proc_lib:init_ack/1,2 so it can return the return value for start. It would appear that your process does not call init_ack.
Based on the limited information that you give in the question I would suspect that your process hasn't finished running yet.
Normally you don't need to add exit/2 to your process. It will exit automatically when the function has finished running.
You probably have a long running call in some code here that has not finished running. I recommend that you add logging information and see where you are stuck.