Polarion: xUnitFileImport creates duplicate test cases instead of referencing existing ones

I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
    <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
    <project>myProject</project>
    <userAccountVaultKey>myKey</userAccountVaultKey>
    <maxCreatedDefects>10</maxCreatedDefects>
    <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
    <templateTestRunId>xUnit Build Test</templateTestRunId>
    <idRegex>(.*).xml</idRegex>
    <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
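For reference, my understanding from the docs is that the importer derives the test run ID and group ID from each dropped file's name via the regex capture groups. With a hypothetical file name matching the pattern used below:

e2e-results-login.xml
  idRegex      (.*).xml       -> test run ID "e2e-results-login"
  groupIdRegex (.*)_.*.xml    -> no match (no '_' in the name), so no group ID is set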
This works: my test results are imported into a new test run and new test cases are created. But if I run the import job multiple times (once for each test run), it creates duplicate test case work items even though they have the same name.
Is there some way to tell the import job to link the existing test cases to the newly created test run, instead of creating new ones?
What I have done so far:
Yes, I checked that the "Custom Field for Test Case ID" in Testing > Configuration is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the class names in the report.
Yes, I still get the same behaviour with the class name set.
In the scheduler I changed the job parameter for the group ID because it wasn't filled. The new value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build Test" to "xUnit Manual Test Upload"; this did not lead to any visible change.
I changed the template status from draft to active. No change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads me to conclude that no fields in the test cases interfere with referencing them.
After all the time I have invested researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone can show this functionality working.

I believe you have to set a custom field that identifies the test case with the xUnit file you're importing, so the importer can find the existing test case.
Try adding a custom field to the TestCase work item and selecting it under the "Custom Field for Test Case ID" option in the Testing settings.
If you're planning on creating test cases beforehand, note that the ID is formed from {classname}.{name} for a given case.
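For example (illustrative values), given a JUnit result entry like the one below, the importer would look for a test case whose ID field is "E2E.Login.testLogin"; with an empty classname the ID becomes ".testLogin", which matches the ".Login"-style values described in the question:

<testsuite name="e2e">
  <testcase classname="E2E.Login" name="testLogin" time="1.23"/>
</testsuite>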

Related

RobotFramework / Selenium: How to set screenshot-name to testcase-name on failure

I was wondering if there is a possibility to make the following happen. Let's say I have 3 Testcases with the following results in RIDE:
Testcase Easter -- PASS
Testcase Christmas -- FAIL
Testcase Foo -- PASS
I want to take a screenshot which should be named testcase_christmas.png (or with ' ' instead of '_', that does not matter). Is there a possibility to do it dynamically, something like
${testcase}= Get Testcase Name
Capture Page Screenshot ${testcase}
or anything like that? I am using:
Python 2.7.x (latest) 32 bit
wxPython 2.8 32 bit
geckodriver latest 64 bit
Robot Framework automatically sets the variable ${TEST NAME} to contain the name of the currently executing test (see Automatic Variables in the user guide).
The documentation for SeleniumLibrary's Capture Page Screenshot shows that you can give it a filename as the first argument.
Putting those two together, you can do this:
Capture Page Screenshot    ${TEST NAME}.png
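If you only want the screenshot when a test fails, a minimal sketch (assuming SeleniumLibrary provides Capture Page Screenshot) is a conditional test teardown using the built-in Run Keyword If Test Failed:

*** Settings ***
Library          SeleniumLibrary
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot    ${TEST NAME}.png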
The way I would go about this is creating a test teardown and using automatic variables from Robot Framework (see http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#automatic-variables).
Your keyword/resource file should have a Load Test Data keyword that gets the test name and sets a test variable you can assign the screenshot to.
*** Settings ***
Library    OperatingSystem    # provides Get File
Library    SeleniumLibrary

*** Keywords ***
Load Test Data
    ${data} =    Get File    ${TEST NAME}.txt
    Set Test Variable    ${data}    # promote to test scope for the teardown

Common Test Teardown
    Capture Page Screenshot    ${data}.png
Your test should call whatever test teardown you decide to use.
*** Settings ***
Test Setup    Load Test Data

*** Test Cases ***
Test Case A
    My Keywords
    [Teardown]    Common Test Teardown
Calling the test setup loads the name of each test in your file, and the teardown then takes a screenshot named after the data you loaded in the setup, so a failing test leaves behind a screenshot carrying its test case name.
If you want to save the screenshots on a per-test-case basis, i.e. a separate folder for all screenshots related to each test case, then you can use:
Set Screenshot Directory    ./Screenshots/${SUITE NAME}/${TEST NAME}
Capture Page Screenshot    ABC.png
A Screenshots directory will be created in the project root folder, with the screenshots stored in separate folders per test suite and test case.
For a single suite you can use:
Set Screenshot Directory    ./Screenshots/${TEST NAME}
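One way to wire this up (a sketch; ${TEST NAME} is only available once a test has started, so a test setup works while a suite setup would not):

*** Settings ***
Library       SeleniumLibrary
Test Setup    Set Screenshot Directory    ./Screenshots/${SUITE NAME}/${TEST NAME}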

How to define run once context steps with Gauge?

Using Gauge we can repeat a set of steps before each scenario using Context Steps right after a test specification heading. For example:
Delete project
==============
* User log in as "mike"
Delete single project
---------------------
* Delete the "example" project
* Ensure "example" project has been deleted
Delete multiple projects
------------------------
* Delete all the projects in the list
* Ensure project list is empty
In the above Delete project test specification, the context step User log in as "mike" is going to be executed twice, once for each of the two delete scenarios.
How to define steps that run once and before all scenarios of a test specification?
Since you cannot have a step run just once for the whole spec file, a workaround could be to use the suite store.
import com.thoughtworks.gauge.datastore.DataStoreFactory;

public void loginAsMike() {
    // Only execute the login steps the first time this runs in the suite;
    // the flag is absent (null) until the first call sets it.
    if (DataStoreFactory.getSuiteDataStore().get("loggedIn") == null) {
        // ...execute login steps...
        DataStoreFactory.getSuiteDataStore().put("loggedIn", true);
    }
}
This way it will only run once. The only issue would be if you were to run multiple specs in parallel. However, if you're only logging in as "mike" in one spec file, this is a good solution.

Jenkins' EnvInject Plugin does not persist values

I have a build that uses EnvInject Plugin to set an environmental value.
A different job needs to scan last good Jenkins build of that job and get the value of that environmental variable.
This all works well, except that sometimes the variable disappears from the build history: after some time passes, the 'Environment Variables' section of the build no longer shows the injected value.
How can I make this persist? Is this a bug, or part of the design?
If it makes any difference, the value of the injected variable is over 1500 characters long and in the following format: 'component1=1.1.2;component2=1.1.3,component3=4.1.2,component4=1.1.1,component4=1.3.2,component4=1.1.4'
Looks like EnvInject and/or JobDSL have a bug.
Steps to reproduce:
Set up a job that runs this JobDSL:
job('deploy_mock') {
    steps {
        environmentVariables {
            env('deployedArtifacts', 'component1=1.0.0.2')
        }
    }
}
Run it and it will create a job called 'deploy_mock'.
Run the 'deploy_mock' job. After build #1 is done, go to the build details and check the 'Environment Variables' section for an entry called 'component1'.
Run the JobDSL job again.
Check the 'Environment Variables' section for 'deploy_mock' build #1. The 'component1' variable is now missing.
If I substitute the '=' for something else, it works as expected.
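For example, a hypothetical workaround that keeps the DSL but avoids '=' inside the value (here using ':' as the separator, to be translated back by whatever consumes the variable):

job('deploy_mock') {
    steps {
        environmentVariables {
            // ':' instead of '=' inside the value survives later seed-job runs
            env('deployedArtifacts', 'component1:1.0.0.2')
        }
    }
}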
I have created a Jenkins JIRA issue for this.

Selenium Timing issue: form fields are not filled correctly

I am using clj-webdriver to fill out a form in my test case.
(quick-fill-submit {"#firstname" "Firstnam"}
                   {"#lastname" "Lastnaem"}
                   {"#username" "Username"}
                   {"#password" "Password"}
                   {"#password2" "Password"}
                   {"#roles" "user"})
(click "button#add-user")
Every time I run this code in my test case, the third value is left blank. I moved the fields around and verified it: it is always the third field.
When I execute my test case step by step in a REPL it works fine, but when running it through lein test it fails.
This seems to be some kind of timing issue. When I stall the execution by adding, for example,
(wait-until #(= 1 2) 10000)
between the two calls, the field gets filled. A simple
(Thread/sleep n)
does not work in this case. Why is Selenium not filling in the form correctly?
WebDriver tests against AJAX-heavy pages usually require tweaking the wait settings. Try setting implicit-wait to something bigger than 0 (the default). Another option is to use wait-until with a predicate that checks for the presence of the elements.
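A sketch of both options using clj-webdriver's taxi API (assuming that is the API in use, as quick-fill-submit suggests; the selector comes from the question):

(require '[clj-webdriver.taxi :as taxi])

;; Option 1: raise the implicit wait so element lookups retry for up to 3 s
(taxi/implicit-wait 3000)

;; Option 2: explicitly wait until the field exists before filling the form
(taxi/wait-until #(taxi/exists? "#username") 10000)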

"VerifyTextPresent" returning incorrect result for Selenium IDE

I am using Selenium IDE to record some scenarios and wanted to check whether a particular text is present on the page. I inserted a "verifyTextPresent" command; however, it always returns true, even when the text is not present.
What can be the probable reason? Do I need to modify anything?
Looking at the source code, it appears you are putting the text you are searching for in the wrong field.
verifyTextPresent (and assertTextPresent, etc.) takes only two parameters, unlike verifyText, which also requires a target.
Unlike with verifyText, the text you are searching for should be entered into the second field, 'Target', not into 'Value'.
Thus the code becomes:
<tr>
  <td>verifyTextPresent</td>
  <td>XYZ</td>
  <td></td>
</tr>
I made the same mistake when learning Selenium, as the field names are misleading!
Selenium assertions have different modes:
All Selenium assertions can be used in three modes: "assert", "verify", and "waitFor". For example, you can "assertText", "verifyText" and "waitForText". When an "assert" fails, the test is aborted. When a "verify" fails, the test will continue execution, logging the failure.
Try assertTextPresent. This should abort the test immediately.
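In table form, mirroring the earlier snippet:

<tr>
  <td>assertTextPresent</td>
  <td>XYZ</td>
  <td></td>
</tr>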
Check this page: http://release.seleniumhq.org/selenium-remote-control/1.0-beta-2/doc/java/com/thoughtworks/selenium/SeleneseTestBase.html#assertTrue%28boolean%29
The assert and verify text commands both build boolean results; the linked source shows how.