Is it possible to handle exceptions from the test case? I have two kinds of failure I want to track: a test that failed to run, and a test that ran but received the wrong output. If I need to raise an exception to fail my test, how can I distinguish between the two failure types? Say I have the following:
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    Check Log For    this log line
If I can't log in, then the Login Keyword would raise an ExecutionError. If the log file doesn't exist, I would also get an ExecutionError. But if the log file does exist and the line isn't in the log, I should get an OutputError.
I may want to immediately fail the test on an ExecutionError, since it means my test did not run and there is some issue that needs to be fixed in the environment or with the test case. But on an OutputError, I may want to continue the test. It may only refer to a single piece of output and the test may be valuable to continue to check the rest of the output.
How can this be done?
Robot has several keywords for dealing with errors, such as Run Keyword And Ignore Error, which can be used to run another keyword that might fail. From the documentation:
This keyword returns two values, so that the first is either string PASS or FAIL, depending on the status of the executed keyword. The second value is either the return value of the keyword or the received error message. See Run Keyword And Return Status if you are only interested in the execution status.
That being said, it might be easier to write a Python-based keyword which calls your Login keyword, since it will be easier to deal with multiple exceptions.
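For example, here is a minimal sketch of such a library keyword. The exception names, the hard-coded log check, and the keyword signature are assumptions to adapt to your setup; Robot also treats exceptions whose class sets ROBOT_CONTINUE_ON_FAILURE = True as continuable failures, which matches the "keep going on OutputError" case:

import os


class ExecutionError(Exception):
    """Raised when the test could not run at all (environment or setup problem)."""


class OutputError(Exception):
    """Raised when the test ran but the output was wrong."""
    # Lets Robot continue with the rest of the test after reporting this failure.
    ROBOT_CONTINUE_ON_FAILURE = True


def check_log_for(log_path, expected_line):
    """Usable from Robot as the keyword 'Check Log For'."""
    if not os.path.isfile(log_path):
        # Environment problem: fail the test immediately.
        raise ExecutionError("Log file not found: %s" % log_path)
    with open(log_path) as log_file:
        if expected_line not in log_file.read():
            # Wrong output: report it, but let the test keep checking.
            raise OutputError("Line not found in log: %s" % expected_line)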
You can use something like this:
${err_msg}=    Run Keyword And Expect Error    *    <Your keyword>
Should Not Be Empty    ${err_msg}
There are a couple of variations you could try for the first statement above, such as Run Keyword And Continue On Failure, Run Keyword And Expect Error, and Run Keyword And Ignore Error.
Options for the second statement above are Should Be Equal As Strings, Should Contain, and Should Match.
You can explore more of Robot Framework's built-in keywords.
Related
I have a script I'm trying to write to process a large amount of data. There is, of course, potential for errors. In the script I need to connect to databases. If the script encounters an error, the code never reaches the point where the connection to the database is terminated. I'd like to have something in my Python code that will recognize that an error has occurred, no matter where, and at the very least close those databases. Does something like this exist? I know I can use try/except, but that only works if I know exactly where the error could occur. I'm basically looking for a catch-all to close my databases in the event an error occurs in a location I didn't anticipate.
To run certain cleanup code even if there is an error, use the finally block:
try:
    ...  # do stuff, possible exception
except:
    ...  # run this if an exception occurs
finally:
    ...  # always run this, even if an exception occurred
Reference: https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions
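For example, a rough sketch of the pattern with a database connection; the sqlite3 connection here is just a stand-in for whatever driver you actually use:

import sqlite3  # stand-in for your real database driver

conn = sqlite3.connect("example.db")
try:
    # do the data processing; an unexpected error can happen anywhere in here
    conn.execute("CREATE TABLE IF NOT EXISTS results (value TEXT)")
    conn.execute("INSERT INTO results VALUES (?)", ("processed",))
    conn.commit()
finally:
    # always runs, even if an exception was raised above,
    # so the connection is never left open
    conn.close()

You can still add an except block before the finally if you also want to handle specific errors; the cleanup runs either way.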
In Robot Framework, how can I re-run a failed test case immediately after it fails, before moving on to the next test case?
For instance,
*** Test Cases ***
Login User And Create Another User
    Login User ....
    Create Another User ...

Login With New User
    Login User..

Test Function ABC
    .....
    .....
Since one test depends on another, I need to re-run the failed case immediately after it fails, before executing the next test.
In one word, you can't, and you shouldn't; a test case is a case, with a binary outcome. And if you have dependencies between tests, that's a design smell; try to change it into a pre-condition (environment setup) for the second case, so it is atomic.
Disclaimer: this rant is about automatic re-execution within a single run. After a run has finished, RF has baked-in functionality (the --rerunfailed option) to re-execute just the failed tests, so flaky tests get a chance to succeed; but as I understood your question, you are not asking for that.
In two words, if you really need to do it, you can: extract the whole test case into a keyword, and call it inside Wait Until Keyword Succeeds, giving it 2 (or more) attempts:
*** Test Cases ***
Test Function ABC
    Wait Until Keyword Succeeds    2 times    100ms    The Actual Test For Function ABC

*** Keywords ***
The Actual Test For Function ABC
    .....
    .....
I forked Codeception/Codeception and added a Command module check that throws an exception.
When testing, this exception causes the test to fail, but I'm running a test that deliberately forces the exception and failure, to make sure the check catches it.
Here's the edit to src/Codeception/Command/GenerateSuite.php
if ($this->containsInvalidCharacters($suite)) {
    throw new \Exception("Suite name '{$suite}' contains invalid characters. ([A-Za-z0-9_]).");
}
After updating tests/cli/GenerateSuiteCept.php, when I run the command $I->executeCommand('generate:suite invalid-suite'); it fails, because I'm throwing the exception in src/Codeception/Command/GenerateSuite.php in the first place.
Here is the addition to tests/cli/GenerateSuiteCept.php:
$I->executeCommand('generate:suite invalid-dash-suite');
When I run it, it says it failed. However, I want it to pass, because I'm deliberately adding the dash to verify that the check catches invalid characters. What is the best way to do that? I'm wary of using a try/catch block in these tests, and I'm not sure it is the right approach.
After looking at the other Codeception tests, I found this to be a better way than throwing an exception:
if ($this->containsInvalidCharacters($suite)) {
    $output->writeln("<error>Suite name '{$suite}' contains invalid characters. ([A-Za-z0-9_]).</error>");
    return;
}
So I modified the test to:
$I->expect('suite is not created due to dashes');
$I->executeCommand('generate:suite invalid-dash-suite');
$I->seeInShellOutput('contains invalid characters');
When calling SqlPackage.exe (syntax described here) with the publish action /a:Publish, there are cases where data loss occurs and the execution is halted; this is controlled by the parameter /p:BlockOnDataLoss (which defaults to 'true').
I need to know whether my publish action has succeeded or has failed due to 'data loss'.
Currently, when it succeeds the exit code is 0, and when it fails the exit code is just 1. From the exit code alone we cannot tell whether the failure was caused by data loss or not. How can we identify this?
Somewhere in the console output we see a line that contains "... is being dropped, data loss could occur.", so I intend to scan every output line that is printed, but I guess there should be a better way to do this.
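For reference, here is a rough sketch of that line-scanning idea; the executable path and command-line arguments below are placeholders and would need to match your real publish command:

import subprocess

# Placeholder command line; the real arguments depend on your dacpac and target.
cmd = ["SqlPackage.exe", "/a:Publish", "/sf:MyDatabase.dacpac", "/tcs:<target connection string>"]

proc = subprocess.run(cmd, capture_output=True, text=True)
output = proc.stdout + proc.stderr

# Look for the data-loss message in the captured console output.
data_loss = "data loss could occur" in output

if proc.returncode != 0 and data_loss:
    print("Publish failed because of potential data loss.")
elif proc.returncode != 0:
    print("Publish failed for another reason.")
else:
    print("Publish succeeded.")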
Hope to hear what you think.
I have a CDash configured to accept posts for automatic builds and tests. However, when any system attempts to post results to the CDash, the following error is produced. The result is that each submission gets posted four times (presumably the original attempt plus the three retries).
Can anyone give me a hint as to what sets this mysterious build ID? I found some code that seems to produce a similar error, but still no lead on what might be happening.
Build::GetNumberOfErrors(): BuildId not set
Build::GetNumberOfWarnings(): BuildId not set
Submit failed, waiting 5 seconds...
Retry submission: Attempt 1 of 3
Server Response:
The buildid for CDash is computed based on the site name, the build name and the build stamp of the submission. You should have a Build.xml file in a Testing/20110311-* directory in your build tree. Open that up and see if any of those fields (near the top) is empty. If so, you need to set BUILDNAME and SITE with -D args when configuring with CMake. Or, set CTEST_BUILD_NAME and CTEST_SITE in your ctest -S script.
If that's not it, then this is a mystery. I've not seen this error occur before...
I'm having the same issue, though Site and BuildName are present in Test.xml and are visible on CDash (four times). I can see the jobs increment by refreshing between retries, so it seems the submission succeeds but then reports a timeout.
Update: this seems to have started when I added the -j(nprocs) switch to the ctest command. Changing CtestSubmitRetryDelay to 20 (it was 5) allowed a server response through that indicates the CDash version may not be able to handle the multi-proc option; I'll have to look into that for my issue. Perhaps setting CtestSubmitRetryDelay to a larger number will get you back a server response, as it did for me. Good luck!
Out of range value for column 'processorclockfrequency'