Robot Framework - get failing keyword / stack trace of failure

I have a keyword called "Debug Teardown" which prints the test status and then runs the DebugLibrary Debug keyword if the test has failed.
I would like to be able to log to console which keyword has caused the failure, so I can more effectively debug my test.
Is it possible to get the stack trace or most recent test keyword, and log it to the console?
Here is my Debug Teardown keyword:
Debug Teardown
    Run Keyword If Test Failed    Log    ${TEST STATUS}: ${TEST MESSAGE}    ERROR
    Run Keyword If Test Failed    Debug

You can get a bit more information if you create a listener and also set the log level to DEBUG. Inside the listener you can save the results of log commands, and then when a keyword fails you can print it out or do whatever else you want.
For example, here's a listener that will print to stdout the last log message when a keyword fails:
from __future__ import print_function

class my_listener():
    # Use listener API version 2, where messages arrive as dictionaries.
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        # Register the library instance as its own listener.
        self.ROBOT_LIBRARY_LISTENER = self
        self.last_log = None

    def _log_message(self, message):
        # Remember the most recent log message.
        self.last_log = message

    def _end_keyword(self, name, attrs):
        # When a keyword fails, print the last message that was logged.
        if attrs['status'] == 'FAIL':
            print("\n******\n", self.last_log['message'])
You would use it by importing the listener like a normal library, and also setting the log level to DEBUG (otherwise you'll get the error but no stack trace).
Example:
*** Settings ***
Library        my_listener
Suite Setup    Set Log Level    DEBUG

*** Test Cases ***
Example
    some keyword

You might be able to utilize Set Suite Variable to update a "global" variable as you go; the last value set before the failure tells you which step failed.
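A minimal sketch of that idea, assuming a hypothetical Checkpoint helper that each test calls between its steps (the names here are illustrative, not from the original question):

*** Keywords ***
Checkpoint
    [Arguments]    ${step name}
    # Remember the most recently reached step at suite scope.
    Set Suite Variable    ${LAST STEP}    ${step name}

Debug Teardown
    Run Keyword If Test Failed    Log    Last checkpoint before failure: ${LAST STEP}    ERROR
    Run Keyword If Test Failed    Debug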

Related

Why am I getting "java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway" when using quarkusDev task in IntelliJ?

I am using Gradle 7.5, Quarkus 2.12.3 and MockK 1.13.3. When I run the quarkusDev task from the command line and then start continuous testing (by pressing r), all tests pass OK.
However, when I do the same from IntelliJ (as a Gradle run configuration), all tests fail with this error:
java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway
How can I fix that?
Masked thrown exception
After much debugging I found the problem. The thrown exception actually originates in HotSpotVirtualMachine.java and is thrown while Byte Buddy is being attached as a Java agent. Here is the relevant code:
// The tool should be a different VM to the target. This check will
// eventually be enforced by the target VM.
if (!ALLOW_ATTACH_SELF && (pid == 0 || pid == CURRENT_PID)) {
    throw new IOException("Can not attach to current VM");
}
Turning the check off
So the check can be turned off by setting the ALLOW_ATTACH_SELF constant to true. The constant is set from a system property named jdk.attach.allowAttachSelf:
String s = VM.getSavedProperty("jdk.attach.allowAttachSelf");
ALLOW_ATTACH_SELF = "".equals(s) || Boolean.parseBoolean(s);
So, in my case, I simply added the following JVM argument to my Gradle build file and the tests started to pass:
tasks.quarkusDev {
    jvmArgs += "-Djdk.attach.allowAttachSelf"
}
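If the same self-attach failure also shows up when running tests through an ordinary Gradle test task rather than quarkusDev, the analogous change (my assumption, not part of the original answer) would be:

tasks.test {
    // Allow the test JVM to attach to itself so MockK/Byte Buddy can load its agent.
    jvmArgs("-Djdk.attach.allowAttachSelf")
}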

Robot Framework: How to avoid a variable value from being displayed in the console during execution

I have the API key to a server available in clear text in our tests.
To avoid this I came up with a new Python script, Secure.py (this includes 'encrypt' and 'decrypt' functions), and it's working fine.
I have a decrypt keyword in my robot tests.
*** Variables ***
${secret_phrase}    abcdefghij
${encrypted_Key}    ardfvbjgfrtavhdimdbshajakiugbn
I used a secret phrase and the API key to encrypt, and this is the resulting encrypted key.
*** Test Cases ***
decrypt
    ${APIKEY}    Secure.Decrypt    ${secret_phrase}    ${encrypted_Key}
    Should Not Be Empty    ${APIKEY}    shell=no
    Set Suite Variable    ${APIKEY}    shell=no
In this test case I pass in the secret phrase and the encrypted key, and as a result I get the APIKEY.
The APIKEY result is printed in the console; I don't want it printed there. Is there any way to do so?
Console:
${APIKEY}= asdfghjkl
You can use the Set Log Level keyword to disable all logging in the critical sections of your test case. This keyword returns the previous log level, which you can store in a variable and use later to restore the original level. From the documentation:
Sets the log threshold to the specified level and returns the old level. Messages below the level will not be logged. The default logging level is INFO, but it can be overridden with the command line option --loglevel. The available levels: TRACE, DEBUG, INFO (default), WARN, ERROR and NONE (no logging).
An example:
*** Variables ***
${secret_phrase}    abcdefghij
${encrypted_Key}    ardfvbjgfrtavhdimdbshajakiugbn

*** Test Cases ***
decrypt
    ${old log level}=    Set Log Level    NONE    # disable logging
    ${APIKEY}    Set Variable    ${secret_phrase}_${encrypted_Key}
    Should Not Be Empty    ${APIKEY}
    Set Suite Variable    ${APIKEY}
    Set Log Level    ${old log level}    # enable logging
    ${public}=    Set Variable    This variable can be visible.
Here you can see that nothing is logged between the two Set Log Level calls, but once the original log level (INFO) has been restored, logging works again.

How to run an if-else condition based on element visibility in Selenium Robot Framework?

I am automating a user registration form flow, where a successful registration shows a success message and any validation error shows alert text. For this, I am writing an if-else flow based on the visibility or presence of an element on the page, and will pass control to a specific keyword based on that condition.
${SuccessBreadcrumb} is the element which is visible when the registration is successful.
Code Snippet
*** Settings ***
Library    SeleniumLibrary

*** Variables ***
${SuccessBreadcrumb} =    xpath=//a[contains(text(),'Success')]
${SuccessMsgLocator} =    xpath=//p[contains(text(), 'successfully created')]
${AlertMsg} =    xpath=//div[@class='text-danger']

*** Keywords ***
Verify Validation Message
    [Arguments]    ${UserDetails}
    Sleep    2s
    Run Keyword If    ${SuccessBreadcrumb} is visible    Keyword 1
    # ...    ELSE    Keyword 2

Keyword 1
    [Arguments]    ${UserDetails}
    ${AccountCreatedText} =    Get Text    ${SuccessMsgLocator}
    Should Contain    ${AccountCreatedText}    ${UserDetails.ValidateMsg}    ignore_case=true
# Keyword 2

Error log:
Run Keyword If ${SuccessBreadcrumb}, is visible, VerifySuccessText
Documentation: Runs the given keyword with the given arguments, if condition is true.
Start / End / Elapsed: 20200213 12:27:52.882 / 20200213 12:27:52.882 / 00:00:00.000
12:27:52.882 FAIL Evaluating expression 'xpath=//a[contains(text(),'Success')]' failed: SyntaxError: invalid syntax (<string>, line 1)
The documentation for Run Keyword If does not include an example with an object. However, combining Run Keyword If with Run Keyword And Return Status allows you to handle pass and fail situations within the same test case or keyword.
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Check Element Visible
    Open Browser
    ...    url=https://www.google.com
    ...    browser=chrome
    ${passed}    Run Keyword And Return Status
    ...    Element Should Be Visible    xpath://*[@id="hplogo"]
    Run Keyword If    ${passed}    Keyword Passed
    ...    ELSE    Keyword Failed
    [Teardown]    Close Browser

Check Element Not Visible
    Open Browser
    ...    url=https://www.google.com
    ...    browser=chrome
    ${passed}    Run Keyword And Return Status
    ...    Element Should Be Visible    xpath://*[@id="xxxx"]
    Run Keyword If    ${passed}    Keyword Passed
    ...    ELSE    Keyword Failed
    [Teardown]    Close Browser

*** Keywords ***
Keyword Passed
    Log To Console    Passed

Keyword Failed
    Log To Console    Failed

Robot Framework: exception handling

Is it possible to handle exceptions from the test case? I have 2 kinds of failure I want to track: a test failed to run, and a test ran but received the wrong output. If I need to raise an exception to fail my test, how can I distinguish between the two failure types? So say I have the following:
*** Test Cases ***
Case 1
    Login    1.2.3.4    user    pass
    Check Log For    this log line
If I can't log in, then the Login Keyword would raise an ExecutionError. If the log file doesn't exist, I would also get an ExecutionError. But if the log file does exist and the line isn't in the log, I should get an OutputError.
I may want to immediately fail the test on an ExecutionError, since it means my test did not run and there is some issue that needs to be fixed in the environment or with the test case. But on an OutputError, I may want to continue the test. It may only refer to a single piece of output and the test may be valuable to continue to check the rest of the output.
How can this be done?
Robot Framework has several keywords for dealing with errors, such as Run Keyword And Ignore Error, which can be used to run another keyword that might fail. From the documentation:
This keyword returns two values, so that the first is either string PASS or FAIL, depending on the status of the executed keyword. The second value is either the return value of the keyword or the received error message. See Run Keyword And Return Status if you are only interested in the execution status.
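For example, a sketch of how the two return values could drive the scenario from the question (Check Log For is the question's keyword, not a library one):

${status}    ${message} =    Run Keyword And Ignore Error    Check Log For    this log line
Run Keyword If    '${status}' == 'FAIL'    Log    Wrong output: ${message}    WARN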
That being said, it might be easier to write a Python-based keyword which calls your Login keyword, since Python makes it easier to deal with multiple exception types.
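As a sketch of that approach: Robot Framework treats exceptions whose class carries the ROBOT_CONTINUE_ON_FAILURE attribute as continuable failures, so the two error types from the question could be modelled as different Python exceptions. The keyword body below is illustrative, not the asker's actual implementation:

class ExecutionError(Exception):
    """The test could not run at all; this fails the test immediately."""

class OutputError(Exception):
    """The test ran but produced wrong output; record a failure and continue."""
    ROBOT_CONTINUE_ON_FAILURE = True

def check_log_for(logfile, line):
    # A missing or unreadable log file means the test never really ran.
    try:
        with open(logfile) as f:
            contents = f.read()
    except IOError as e:
        raise ExecutionError("could not read log: %s" % e)
    # The log exists but the line is absent: an output problem only.
    if line not in contents:
        raise OutputError("line not found in log: %s" % line)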
You can use something like this:
${err_msg}=    Run Keyword And Expect Error    *    <Your keyword>
Should Not Be Empty    ${err_msg}
There are a couple of variations you could try for the first statement above: Run Keyword And Continue On Failure, Run Keyword And Expect Error, and Run Keyword And Ignore Error.
Options for the second statement are Should Be Equal As Strings, Should Contain, and Should Match.
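For instance, Run Keyword And Continue On Failure matches the OutputError behaviour described in the question: the test is marked failed, but the remaining steps still execute:

Run Keyword And Continue On Failure    Check Log For    this log line
Log    This step still runs even if the check above failed.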
You can explore more in the Robot Framework keyword documentation.

How to have a run script with display_trace() in it for debug purposes

I have a typical sort of "run" script (below).
When I want to debug the scenario, I use VLAB GUI ... and in this instance I want the trace window open, so I have display_trace() at the end of the run script.
However, often I just want to run this scenario in batch as part of a regression test. The problem is that VLAB throws an exception on display_trace() when in batch mode.
I don't really like the idea of
try:
    display_trace()
except:
    pass
(or even catching the specific error that is thrown) ... it just feels "messy". What if there is a valid exception upon display_trace() that I miss?
Is there some way not to call display_trace() at all if I'm in batch mode?
run script:
from vlab import *
import os
image_path = os.path.join('o5e', 'bin','o5e_dbg.elf')
load('ecu.sim', args=['--testbench=testbench_o5e',"--image=%s" % image_path] + __args__)
# First set up MMU
add_sw_execute_breakpoint(get_address("BamExit"))
run(blocking=True)
# Then we can set breakpoints in user code space
add_sw_execute_breakpoint(get_address("init_variables"))
run(blocking=True)
# Trace stuff
vtf_sink = trace.sink.vtf("o5e.vtf")
add_trace("+src:ecu.core_system.Core0.InstrTraceMsg", sink=vtf_sink)
add_trace(get_ports("ecu.core_system.Core0", kind="bus"), sink=vtf_sink)
display_trace(vtf_sink)
The "interface_mode" session property can be used to query whether VLAB was launched in "graphical" mode, where the VLAB IDE is displayed, or in "text" mode.
You could use this property to conditionally call vlab.display_trace():
in_graphical_mode = vlab.get_properties()["interface_mode"] == "graphical"
if in_graphical_mode:
    vlab.display_trace()