RobotFramework / Selenium: How to set screenshot-name to testcase-name on failure

I was wondering if there is a way to make the following happen. Let's say I have three test cases with the following results in RIDE:
Testcase Easter -- PASS
Testcase Christmas -- FAIL
Testcase Foo -- PASS
For the failing test I want to take a screenshot named testcase_christmas.png (or with ' ' instead of '_'; that does not matter). Is there a way to do it dynamically, something like
${testcase}= Get Testcase Name
Capture Page Screenshot ${testcase}
or anything like that? I am using:
Python 2.7.x (latest) 32 bit
wxPython 2.8 32 bit
geckodriver latest 64 bit

Robot Framework automatically sets the variable ${TEST NAME} to contain the name of the currently executing test (see Automatic Variables in the user guide).
The documentation for SeleniumLibrary's Capture Page Screenshot shows that you can give it a filename as the first argument.
Putting those two together, you can do this:
Capture Page Screenshot    ${TEST NAME}.png
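Since you only want the screenshot on failure, you can combine this with the BuiltIn keyword Run Keyword If Test Failed in a test teardown. A minimal sketch (the test body and URL are just placeholders):

*** Settings ***
Library          SeleniumLibrary
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot    ${TEST NAME}.png

*** Test Cases ***
Testcase Christmas
    Open Browser    https://example.com    firefox    # placeholder step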

The way I would go about this is creating a test teardown and using automatic variables from Robot Framework, documented here: http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#automatic-variables
Your keywords page / resource file should have a Load Test Data keyword that reads data based on the test name and sets a test variable you can later assign the screenshot to.
*** Settings ***
Library    OperatingSystem    # provides Get File
Library    SeleniumLibrary

*** Keywords ***
Load Test Data
    ${data} =    Get File    ${TEST NAME}.txt
    Set Test Variable    ${data}    ${data}

Common Test Teardown
    Capture Page Screenshot    ${data}.png
Your test should call whatever test teardown you decide to use.
*** Settings ***
Test Setup    Load Test Data

*** Test Cases ***
Test Case A
    My Keywords
    [Teardown]    Common Test Teardown
Calling the test setup loads data named after each test in your file, and the teardown then takes a screenshot using the test case name you loaded in the setup (wrap the capture in Run Keyword If Test Failed if you only want it on failure).

If you want to save the screenshots per test case, i.e. in a separate folder for all screenshots related to each test case, you can use:
Set Screenshot Directory    ./Screenshots/${SUITE NAME}/${TEST NAME}
Capture Page Screenshot    ABC.png
A Screenshots directory will be created in the project root folder, and all screenshots will be stored there in separate folders per test suite and test case.
For a single suite you can use
Set Screenshot Directory    ./Screenshots/${TEST NAME}
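Putting both ideas together, a sketch of a suite that drops a failure screenshot into a per-test folder (the directory layout is just an example):

*** Settings ***
Library          SeleniumLibrary
Test Setup       Set Screenshot Directory    ${OUTPUTDIR}/Screenshots/${SUITE NAME}/${TEST NAME}
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot    failure.png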

Related

Polarion: xUnitFileImport creates duplicate testcases instead of referencing existing ones

I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
    <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
    <project>myProject</project>
    <userAccountVaultKey>myKey</userAccountVaultKey>
    <maxCreatedDefects>10</maxCreatedDefects>
    <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
    <templateTestRunId>xUnit Build Test</templateTestRunId>
    <idRegex>(.*).xml</idRegex>
    <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works and I get my test results imported into a new test run, with new test cases created. But if I run the import job multiple times (once per test run), it creates duplicate test case work items even though they have the same name. Is there some way to tell the import job to link the newly created test run to the existing test cases instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in Testing > Configuration is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the class names in the report.
Yes, I still get the same behaviour with the class name set.
In the scheduler I changed the job parameter for the group id because it wasn't filled. New value: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build Test" to "xUnit Manual Test Upload"; this did not lead to any visible change.
I changed the template status from draft to active. No change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads to the conclusion that no fields in the test cases interfere with referencing them.
After all the time I have invested researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone proves this functionality is working.
I believe you have to set a custom field that ties the test case to the xUnit file you're importing, so the importer can identify the test case.
Try adding a custom field to the TestCase work item and selecting it under the "Custom Field for Test Case ID" option in the settings.
If you're planning on creating test cases beforehand, note that the ID is formatted from {classname}.{name} for a given case; for example, a result with classname "Login" and name "validCredentials" would get the ID "Login.validCredentials".

How to print from object into same output thread as Capybara/Cucumber

Whenever I use a puts statement inside a class, it gets printed before the step even starts, and then the usual Capybara output shows up below it.
SF-NR-2:work nr$ cucumber --tags @homepage-tests
Using the default profile...
@homepage-tests
Feature: Homepage Tests
TEST PRINT
#### Homepage Test
@bvt
Scenario: Homepage loads in portrait mode # features/web/homepage.feature:7
Given I go to the homepage
TEST PRINT
And the homepage loads
1 scenario (1 passed)
2 steps (2 passed)
0m5.041s
I added puts "TEST PRINT" to each step, but it prints before each step. If I pull the puts "TEST PRINT" out of the class scope, it prints in the same stream as the Capybara/Cucumber output.
The output you're seeing from Cucumber comes from the Cucumber formatter(s) you're using. Calling puts inside a step definition calls puts on the formatter, which then shows it at the correct spot in the output. The issue is that if you call puts from inside a different object (a class in your app, for instance), it's actually calling Kernel#puts, which the Cucumber formatter has no clue about.
You may be able to get what you want by writing to the Cucumber logger instead of Kernel#puts when in non-step-definition code, by calling something like:
Cucumber.logger.info 'text to log'
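For instance, a minimal sketch (HomepageHelper is a made-up stand-in for a class in your app):

class HomepageHelper
  def load
    # Kernel#puts here would bypass the formatter entirely;
    # the Cucumber logger keeps the message inside the formatted output.
    Cucumber.logger.info 'TEST PRINT'
  end
end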

Set variable based on currently active test case

I have the following situation. Test.robot is my robot suite with 4 test cases.
*** Settings ***
Resource          Python resource files
Suite Setup       ${xyz}
Suite Teardown    ${xyz}
Test Setup        ${Xyz}

*** Variables ***
# This is pointing to the folder with my data files
${data}    ${CURDIR}/

*** Test Cases ***
Test_case1
Test_case2
Test_case3
Test_case4
I have a directory with folders
1. Test_Case1
2. Test_Case2
3. Test_Case3
4. Test_Case4
test.robot (my robot suite)
The above four folders contain the data files that my tests (Test_case1 through Test_case4) access while running. When I run test.robot, all four tests run, and I am unsure how to set the ${data} variable so that it dynamically points at the matching test case folder while each specific test is running.
You could make use of the automatic variable ${TEST NAME} to retrieve the currently active test case's name and set your own variable using the data you extracted from this variable.
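A minimal sketch of that idea, assuming each data folder sits next to the suite file and is named exactly like its test case (the keyword name is made up):

*** Settings ***
Test Setup    Point Data At Current Test

*** Keywords ***
Point Data At Current Test
    Set Test Variable    ${data}    ${CURDIR}/${TEST NAME}/

*** Test Cases ***
Test_Case1
    Log    Data files for this test live in ${data}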

How to get disabled test cases count in jenkins result?

Suppose I have 10 test cases in a test suite, of which 2 are disabled. I want those two test cases reflected in the test result of the Jenkins job, e.g. passed = 7, failed = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; refer to the index.html file under the test-output folder. If you click the "Ignored Methods" link, it shows all ignored test cases, their class names, and the count of ignored methods.
All test cases annotated with @Test(enabled = false) will show up under the "Ignored Methods" link.
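For reference, a sketch of what a disabled test looks like (class and method names are made up):

import org.testng.annotations.Test;

public class HolidayTests {
    @Test
    public void easterTest() {
        // runs normally and is counted as passed or failed
    }

    @Test(enabled = false)
    public void christmasTest() {
        // never runs; listed under "Ignored Methods" in the TestNG report
    }
}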
If your tests generate JUnit XML reports, you can use the JUnit plugin to parse these reports after the build (as a post-build action). Then open your build and click 'Test Result'; you should see a breakdown of how the execution went, including passed, failed, and skipped tests.

Integrating RFT Test framework to work with RQM

I designed a framework in RFT where the test cases are written in a spreadsheet specifying the data source, object, and keyword, plus a driver script that processes all this data and routes each test step to the appropriate method. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could implement an algorithm that reads those test cases from the spreadsheet and passes the results to RQM as attachments with logTestResult.
For example:
logTestResult(<your attachment>, true);
If you are already connected to RQM, the adapter will automatically attach the files you indicate. At the end you will see the results step by step, and if the script finishes correctly, RQM will show the script as "passed".
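A rough sketch of that loop (TestStep, readSteps, and runStep are hypothetical helpers from your spreadsheet driver, not RFT API; logTestResult comes from RationalTestScript):

// Inside your RationalTestScript subclass:
public void testMain(Object[] args) throws Exception {
    for (TestStep step : readSteps("testcases.xls")) {  // hypothetical: parse spreadsheet rows
        boolean passed = runStep(step);                  // hypothetical: route the step to its keyword method
        logTestResult(step.getName(), passed);           // logged by RFT and reported to RQM by the adapter
    }
}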
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script, as shown below:
public void testMain(Object[] args) throws Exception
{
    String n = args[0].toString();
    logInfo("Parameter from RQM" + n);
    ModuleDriver d = new ModuleDriver();
    d.execute_main(n);
}
Since I have verification points set up for each step in my test cases, the results get reported in RQM per verification point, which is what I needed.