(Environment: VSTS2010 using IronPython and the TFS SDK)
Workflow #1
The following workflow behaves as I expect:
Create a new testrun
Add testpoint(s)
Save testrun
My expected behavior: The testrun is saved with the testpoints.
Observed behavior: Matches my expected behavior - verified with a testrun.QueryTestResults() call.
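For reference, here is a rough IronPython sketch of workflow #1. The server URL, project name, and plan id are placeholders from my environment, and the plumbing may differ slightly in yours:

import clr
clr.AddReference("Microsoft.TeamFoundation.Client")
clr.AddReference("Microsoft.TeamFoundation.TestManagement.Client")
from System import Uri
from Microsoft.TeamFoundation.Client import TfsTeamProjectCollectionFactory
from Microsoft.TeamFoundation.TestManagement.Client import ITestManagementService

tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(Uri("http://tfsserver:8080/tfs"))
service = tfs.GetService(ITestManagementService)
project = service.GetTeamProject("MyProject")
plan = project.TestPlans.Find(1)

run = plan.CreateTestRun(False)                # False = not an automated run
for point in plan.QueryTestPoints("SELECT * FROM TestPoint"):
    run.AddTestPoint(point, None)              # None = default run owner
run.Save()
print(len(list(run.QueryTestResults())))       # one result per testpoint, as expected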
Workflow #2
The following workflow does not behave as I expect:
Create a new testrun
Add testpoint
Save testrun
Add another testpoint
Save testrun
My expected behavior: The testrun should be saved with the test points.
Observed behavior: The first testpoint is saved. All other testpoints after the initial save do not get saved. There are no errors thrown or any feedback from the SDK indicating failures.
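Using the same setup as the sketch above, workflow #2 looks like this; the second Save() raises no error, but the second testpoint never shows up:

points = list(plan.QueryTestPoints("SELECT * FROM TestPoint"))
run = plan.CreateTestRun(False)
run.AddTestPoint(points[0], None)
run.Save()                          # the first testpoint is persisted
run.AddTestPoint(points[1], None)
run.Save()                          # no error, but the second testpoint is silently dropped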
Workflow #3
Similarly, the following workflow does not behave as I expect:
Get an existing testrun by id
testManagementService.QueryTestRuns("SELECT * FROM TestRun WHERE TestRunId=%s" % testrunId)
Add testpoint
Save testrun
My expected behavior: The testrun should be saved with the test points.
Observed behavior: The added testpoint is never saved. There are no errors thrown or any feedback from the SDK indicating failures.
Can anyone explain why the observed behavior differs from my expected behavior on workflows #2 and #3?
UPDATE (2012-11-16 12:00 CST)
Answering (editing) my own question since digging for this was not straightforward.
I found the following paragraph at http://blogs.msdn.com/b/nidhithakur/archive/2011/04/08/importing-testcase-results-to-mtm.aspx after an msdn.microsoft.com search (search terms: itestrun add testpoint failure).
To be able to add results to this test plan, create a new test run in the test plan. map the test points which are present in the dictionary and add to this test run. Ideally, you should be able to add the results while adding the test point to the run but the run.Save() API is only working for single Save right now. So, you will need to add all the testpoints, save the test run and then iterate over the run collection to add results individually. For better performance, save the result collection once after all the results have been added/updated.
So this appears to be a limitation with VSTS2010. You cannot add test points after the first testrun save.
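Following the quoted advice, the pattern below is what worked for me (same setup as the sketches above; the outcome values are just examples): add all testpoints, call run.Save() exactly once, then update and save the result collection.

from Microsoft.TeamFoundation.TestManagement.Client import TestOutcome, TestResultState

run = plan.CreateTestRun(False)
for point in plan.QueryTestPoints("SELECT * FROM TestPoint"):
    run.AddTestPoint(point, None)
run.Save()                                     # the one and only run.Save()

results = run.QueryTestResults()
for result in results:
    result.Outcome = TestOutcome.Passed        # example outcome
    result.State = TestResultState.Completed
results.Save()                                 # save the collection once, per the blog post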
I created a script which updates work items in a Polarion document. Right now, however, each work item update is a single save, and since my script updates all work items in the document, this results in a large number of save actions in the API and a lot of clutter in the document history.
If you edit a Polarion document yourself, it updates all work items in one go.
Is it possible to do the same thing with the Polarion API?
I tried
Using the tracker service to update work items; this only allows a single work item to be updated per call.
Using the browser's web development tools to try and get information from the API. This seems to use a UniversalService, for which no documentation is available at the API site https://almdemo.polarion.com/polarion/sdk/index.html.
Update 1:
I am using the python-polarion package to update the work items. Based on the answer by @boaz below, I tried the following code:
import datetime

project = client.getProject(project_name)
doc = project.getDocument(document_location)
workitems = doc.getWorkitems()

session_service = client.getService("Session")
tracker_service = client.getService("Tracker")

session_service.beginTransaction()
for workitem in workitems:
    workitem.description = workitem._polarion.TextType(
        content=str(datetime.datetime.now()), type='text/html', contentLossy=False)
    update_list = {
        "uri": workitem.uri,
        "description": workitem.description
    }
    tracker_service.updateWorkItem(update_list)
session_service.endTransaction(False)
The login step that @boaz indicated is done in the backend (see: https://github.com/jesper-raemaekers/python-polarion/blob/3e61527cf0f1f3c8614a30289a0a3409d2d8712d/polarion/polarion.py#L103).
However, this gives the following Java exception:
java.lang.RuntimeException: java.lang.NullPointerException
Update 2:
There seems to be an issue with the session. If I call the following code:
session_service.logIn(user, password)
print(session_service.hasSubject())
it prints False.
The same thing happens when using the transaction:
session_service.beginTransaction()
print(session_service.transactionExists())
it also prints False.
Try wrapping your changes in a SessionWebService transaction; see the JavaDoc:
sessionService = factory.getSessionService();
sessionService.logIn(prop.getProperty("user"), prop.getProperty("passwd"));
sessionService.beginTransaction();
// ...
// your changes
// ...
sessionService.endTransaction(false);
sessionService.endSession();
As shown in the example in Polarion/polarion/SDK/examples/com.polarion.example.importer.
This will commit all your changes in one single SVN commit.
I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
    <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
    <project>myProject</project>
    <userAccountVaultKey>myKey</userAccountVaultKey>
    <maxCreatedDefects>10</maxCreatedDefects>
    <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
    <templateTestRunId>xUnit Build Test</templateTestRunId>
    <idRegex>(.*).xml</idRegex>
    <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works, and I get my test results imported into a new test run and new test cases created. But if I run the import job multiple times (once for each test run), it creates duplicate test case work items even though they have the same name.
Is there some way to tell the import job to reference the existing test cases in the newly created test run, instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in Testing > Configuration is configured.
Yes, I checked that the field value is really set in the created test cases.
The current value in this field is e.g. ".Login", as I don't want the class names in the report.
YES, I still get the same behaviour with the classname set.
In the scheduler I have changed the job parameter for the group id because it wasn't filled. The new value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build test" to "xUnit Manual Test Upload"; this did not lead to any visible change.
I changed the template status from draft to active. No change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads me to conclude that no fields in the test cases interfere with referencing them.
After all the time I have invested researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone can show that this functionality works.
I believe you have to set a custom field that ties the test case to the entries in the xUnit file you're importing, so the importer can identify existing test cases.
Try adding a custom field to the TestCase work item and selecting it in the "Custom Field for Test Case ID" option in the Testing settings.
If you're planning on creating test cases beforehand, note that the ID is formed from {classname}.{name} for a given case (which is why an empty classname, as above, yields a value like ".Login").
After executing our test automation we are sending the results into Rally. Here are the steps that we are following:
Create new Test Set
Add Test Case to Test Set - the test case already exists in a test folder
Query for the test case within the test set - we added this additional read because we were hitting concurrency issues when trying to post results immediately after adding the test case to the test set
Post the results to the test case within the test set
The issue we are running into is in step 3 above. Sometimes, when a read is done to find the test case in the test set, it is unable to find it and so the results cannot be posted.
We have adjusted the code so that it attempts the read 4 times, with a 20-second delay between the third and fourth attempts, and there are still several cases where results cannot be posted to Rally.
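For illustration, here is a rough sketch of that retry in Python, assuming the pyral REST toolkit; the server, credentials, FormattedIDs, and retry values are placeholders, not our actual code:

import time
from pyral import Rally   # assumption: the pyral REST toolkit for Python

rally = Rally("rally1.rallydev.com", apikey="_secret",
              workspace="MyWorkspace", project="MyProject")

def wait_for_testcase_in_testset(testset_fid, testcase_fid, attempts=4, delay=20):
    # Step 3: poll until the just-added test case is visible in the test set.
    for attempt in range(attempts):
        for ts in rally.get("TestSet", fetch="FormattedID,TestCases",
                            query='FormattedID = "%s"' % testset_fid):
            if any(tc.FormattedID == testcase_fid for tc in ts.TestCases):
                return ts
        time.sleep(delay)   # read-after-write lag: give the server time to catch up
    return None

testset = wait_for_testcase_in_testset("TS123", "TC456")
if testset is None:
    raise RuntimeError("test case never became visible in the test set")
# step 4 (posting the TestCaseResult) only runs after this read succeeds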
If anyone else has run into this issue and could provide guidance on a resolution, that would be great.
I designed a framework in RFT where the test cases are written in a spreadsheet specifying the data source, object, and keyword, and a driver script processes all this data and routes each test step to the appropriate method. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could implement an algorithm that reads those test cases from the spreadsheet and passes the results to RQM as attachments with logTestResult.
For example:
logTestResult(<your attachment>, true);
If you are already connected to RQM, the adapter will automatically attach the files you indicate. So at the end you will see the results step by step, and if the script ends correctly RQM will show the script as "passed".
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script as shown below:
public void testMain(Object[] args) throws Exception
{
    // The test case name is passed in via the RQM Script Argument field
    String name = args[0].toString();
    logInfo("Parameter from RQM: " + name);
    ModuleDriver driver = new ModuleDriver();
    driver.execute_main(name);
}
Since I have verification points set up for each of the steps in my test cases, the results get reported in RQM based on those verification points, which is what I needed.
I have written a custom record reader and am looking for sample test code to test it using MRUnit or any other testing framework. It works fine functionally, but I would like to add test cases before I make an install. Any help would be appreciated.
In my opinion, a custom record reader is like any iterator. For testing my record reader I have been able to work without MRUnit or any other Hadoop JUnit framework; the test executes quickly and the footprint is small too. Initialize the record reader in your test case and keep iterating on it. Here is pseudocode from one of my tests. I can provide more details if you want to proceed in this direction.
// Assumes the new mapreduce API (org.apache.hadoop.mapreduce.*)
Configuration conf = new Configuration();
MyInputFormat myInputFormat = new MyInputFormat();

// Configure the job and provide the input format configuration
Job job = Job.getInstance(conf, "test");
conf = job.getConfiguration();

// Verify split type and count if you want to verify the input format also
List<InputSplit> splits = myInputFormat.getSplits(job);

TaskAttemptContext context = new TaskAttemptContextImpl(conf, new TaskAttemptID());
RecordReader<LongWritable, Text> reader =
        myInputFormat.createRecordReader(splits.get(0), context);
reader.initialize(splits.get(0), context);

// expectedRecordCount / expectedKeys are whatever your test data dictates
for (int i = 0; i < expectedRecordCount; i++) {
    assertTrue(reader.nextKeyValue());
    // verify key and value
    assertEquals(expectedKeys[i], reader.getCurrentKey().get());
}
assertFalse(reader.nextKeyValue());   // nothing left to read
reader.close();