Run a fallback script when a Liquibase script fails in Gradle

I'm using Liquibase with Gradle to apply database changes.
I have three activities in runList:
runList='stop_job, execute_changes, start_job'
It works fine when there is no exception, but if something fails in the second step (execute_changes), execution stops there and the "start_job" activity never runs.
Is it possible to introduce something like a fallback activity or a "finally" block?

You could use failOnError:false. It defines whether the migration should fail if an error occurs while executing the changeset; the default value is true.
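For example, in an XML changelog a changeset inside the execute_changes activity could be marked so that its failure does not abort the run (a minimal sketch; the id, author and file name are illustrative, not from the question):
<changeSet id="execute-changes-1" author="example" failOnError="false">
    <sqlFile path="execute_changes.sql"/>
</changeSet>
Note that failOnError only lets the run continue past a failed changeset; it is not a true "finally" block, so failures you do want to act on still need to be checked elsewhere.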

Related

LoadRunner - exiting login transaction on failure and perform log off

I'm running a LoadRunner test. Upon a failure at login (or at any other transaction), the script has to fail that transaction and then execute the log-off portion of the script.
Note: I have added a text check, and using the text-check count in an if condition I end the transaction with a fail status when it fails. I also need to perform the log off at the point where the if condition fails.
Can anyone share an example of executing log off when the text check fails?
Depends upon your language choice.
Assuming you have the default language of C with your HTTP virtual user, simply implement a logout function which contains your logout code, and call that function upon failure of your condition. A "return 1;" inside that if/then conditional will also start a new iteration immediately; "return 0;" goes to a new iteration with respected pacing; "return -1;" kills the virtual user altogether.
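A minimal sketch in C (the function name do_logoff, the transaction name "login" and the saved parameter "LoginCount" are assumptions, not from the original script):
void do_logoff()
{
    /* the recorded log-off requests of the script go here */
    lr_log_message("logging off after a failed text check");
}

Action()
{
    /* register the text check and save how often the text is found */
    web_reg_find("Text=Welcome", "SaveCount=LoginCount", LAST);

    lr_start_transaction("login");
    /* ... login request goes here ... */

    if (atoi(lr_eval_string("{LoginCount}")) == 0) {
        lr_end_transaction("login", LR_FAIL);
        do_logoff();   /* perform the log off before leaving */
        return 1;      /* start a new iteration immediately */
    }

    lr_end_transaction("login", LR_PASS);
    return 0;          /* next iteration with respected pacing */
}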

Using OptaPlanner for VRPPD

I am trying to run the example "optaplanner-mixedvrp-experiment" developed by Geoffrey De Smet, and when I run it, it throws the following error:
Caused by: java.lang.IllegalStateException: The entity (MY) has a variable (previousStandstill) with value (MUNO) which has a sourceVariableName variable (nextVisit) with a value (WERBOMONT) which is not null. Verify the consistency of your input problem for that sourceVariableName variable.
I have not made any changes; I only cloned and executed it. I import and solve, and it throws this error.
Do you know what could be happening?
I am applying it to the development of a VRP variant with multiple deliveries and collections, but it throws the same error. I have activated FULL_ASSERT mode, and nextVisit, previousStandstill and visitIndex are always null.
It's been a long time since I looked at that code, so it's using an old version of OptaPlanner. Our goal is still to clean it up and offer an out-of-the-box example for VRPPD (and probably remove some boilerplate along the way, using the upcoming @CollectionPlanningVariable etc.). That being said, we have multiple users and customers who used that optaplanner-mixedvrp-experiment to successfully build VRPPD implementations.
Which dataset did you try?
FWIW, that IllegalStateException says that when A.previous = B, B.next is not A. So either the dataset importer didn't import it correctly before solve() was even called (especially likely if it fails before the first CH step in FULL_ASSERT), or one of the custom moves corrupted the model.
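To pinpoint whether the importer is at fault, here is a hedged Java sketch of a consistency check run right after import and before solve() (the Visit/Standstill accessor names follow the experiment's domain, but verify them against your local copy):
// Verify the chain invariant the exception complains about:
// whenever visit.previousStandstill == standstill, that standstill's
// nextVisit shadow must point back at the visit.
for (Visit visit : solution.getVisitList()) {
    Standstill previous = visit.getPreviousStandstill();
    if (previous != null && previous.getNextVisit() != visit) {
        throw new IllegalStateException("Importer bug: " + previous
                + ".nextVisit is (" + previous.getNextVisit()
                + ") but should be (" + visit + ").");
    }
}
If this throws before the first CH step, the dataset importer (not a custom move) is corrupting the model.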

karate.abort() in v0.9.4 results in Failed scenario in cucumber html reports

karate.abort() results in skipped steps. There was a previous fix for this. However, Cucumber reporting treats skipped steps as failed.
Is there any workaround where I can use karate.abort() and not have a failed scenario, as I am using it deliberately to skip some DB checks?
Or is there any alternative to karate.abort()?
Yes, we need some community help to resolve how third-party reports treat skipped steps. Please read this, and maybe you can be the one to find a solution: https://github.com/intuit/karate/issues/755#issuecomment-488710450
A workaround is to split into a second feature and then:
* if (condition) karate.call('second.feature')
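For example (a hedged sketch; the flag name, the property key and the feature file names are illustrative):
# main.feature
Scenario: main flow
* def dbChecksEnabled = karate.properties['db.checks'] == 'true'
* if (dbChecksEnabled) karate.call('db-checks.feature')
Here db-checks.feature contains only the DB checks, so skipping them becomes a matter of not calling the feature rather than aborting mid-scenario.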

Apache Flink - exception handling in "keyBy"

It may happen that data entering a Flink job triggers an exception, either due to a bug in the code or a lack of validation.
My goal is to provide consistent way of exception handling that our team could use within Flink jobs that won't cause any downtime in production.
Restart strategies do not seem to be applicable here because:
- a simple restart won't fix the issue and we fall into a restart loop
- we cannot simply skip the event
- they can be good for OOMEs or some transient issues
- we cannot add a custom one
A try/catch block in the "keyBy" function does not fully help because:
- there's no way to skip the event in "keyBy" after the exception is handled
Sample code:
env.addSource(kafkaConsumer)
.keyBy(keySelector) // must return one result for one entry
.flatMap(mapFunction) // we can skip some entries here in case of errors
.addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
I'd like to have the ability to skip processing of an event that causes an issue in "keyBy" and in similar methods that are supposed to return exactly one result.
Besides the suggestion of @phanhuy152 (which seems totally legit to me), why not filter before keyBy?
env.addSource(kafkaConsumer)
.filter(invalidKeys)
.keyBy(keySelector) // must return one result for one entry
.flatMap(mapFunction) // we can skip some entries here in case of errors
.addSink(new PrintSinkFunction<>());
env.execute("Flink Application");
Can you reserve a special value like "NULL" for the keyBy to return in such a case? Then your flatMap function can skip events that carry that value, as in the sketch below.
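A minimal Java sketch of that sentinel idea (Event, extractKey(...) and the "NULL" constant are illustrative names, not from the original job):
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.functions.KeySelector;

// A key selector that never throws: malformed events get a sentinel key.
KeySelector<Event, String> keySelector = event -> {
    try {
        return extractKey(event);  // may throw on bad input
    } catch (Exception e) {
        return "NULL";             // reserved sentinel key
    }
};

// Downstream, the flatMap drops the sentinel-keyed events.
FlatMapFunction<Event, Event> mapFunction = (event, out) -> {
    if (!"NULL".equals(keySelector.getKey(event))) {
        out.collect(event);        // forward only valid events
    }
};
This keeps keyBy total (exactly one key per event) while still letting you skip the bad events one step later.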

Robot Framework: exception handling

Is it possible to handle exceptions from the test case? I have 2 kinds of failure I want to track: a test failed to run, and a test ran but received the wrong output. If I need to raise an exception to fail my test, how can I distinguish between the two failure types? So say I have the following:
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    Check Log For    this log line
If I can't log in, then the Login Keyword would raise an ExecutionError. If the log file doesn't exist, I would also get an ExecutionError. But if the log file does exist and the line isn't in the log, I should get an OutputError.
I may want to immediately fail the test on an ExecutionError, since it means my test did not run and there is some issue that needs to be fixed in the environment or with the test case. But on an OutputError, I may want to continue the test. It may only refer to a single piece of output and the test may be valuable to continue to check the rest of the output.
How can this be done?
Robot has several keywords for dealing with errors, such as Run Keyword And Ignore Error, which can be used to run another keyword that might fail. From the documentation:
This keyword returns two values, so that the first is either string PASS or FAIL, depending on the status of the executed keyword. The second value is either the return value of the keyword or the received error message. See Run Keyword And Return Status if you are only interested in the execution status.
That being said, it might be easier to write a Python-based keyword which calls your Login keyword, since it will be easier to deal with multiple exceptions, as in the sketch below.
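A hedged Python sketch of such a keyword library (ExecutionError and OutputError are the names used in the question; read_log is an assumed helper):
class ExecutionError(RuntimeError):
    """The test could not run at all: fail it immediately."""

class OutputError(RuntimeError):
    """Wrong output: let the test continue with its remaining checks."""
    ROBOT_CONTINUE_ON_FAILURE = True  # standard Robot Framework flag

def check_log_for(line):
    log = read_log()  # assumed helper returning the log contents
    if log is None:
        raise ExecutionError('log file could not be read')
    if line not in log:
        raise OutputError('expected line not found: %s' % line)
Because OutputError sets ROBOT_CONTINUE_ON_FAILURE, Robot marks the test failed but keeps executing the remaining steps, while an ExecutionError stops the test at that point.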
You can use something like this:
${err_msg}=    Run Keyword And Expect Error    *    <Your keyword>
Should Not Be Empty    ${err_msg}
There are a couple of different variations you could try for the first statement above, such as Run Keyword And Continue On Failure, Run Keyword And Expect Error, and Run Keyword And Ignore Error.
Options for the second statement above are Should Be Equal As Strings, Should Contain, and Should Match.
You can explore more in the Robot Framework keyword documentation.
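For instance, to keep going after an output problem but still record it (a sketch reusing the question's keywords; the WARN log is illustrative):
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    ${status}    ${msg}=    Run Keyword And Ignore Error    Check Log For    this log line
    Run Keyword If    '${status}' == 'FAIL'    Log    Output problem: ${msg}    WARN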