Logging additional custom information in PHPUnit for reports - Selenium

I am using PHPUnit Selenium for functional testing of my project.
I am using JUnit-format logging and using the log file to generate the report. The following is the logging section in my phpunit.xml:
<phpunit>
  <logging>
    <log type="junit" target="reports/logfile.xml" logIncompleteSkipped="false" />
  </logging>
</phpunit>
Then I use the logfile.xml to generate the report.
What I am looking for is the ability to log additional information describing what exactly is being tested by each assertion, in both cases, i.e. whether the assertion passes or fails.
Basically, in the reports I want to state what is being asserted, and that information will be written manually by the test writer along with the assertion.
The assert functions come with a third, optional message parameter, but that message is shown only on failure.
Eg:
<?php
// $accountExists is a dummy variable that would presumably be set by checking the database for the record's existence
$this->assertEquals(true, $accountExists, 'Expecting for accountExists to be true');
?>
The above will show the message on failure, but not when the test passes.

You must use the
--printer command-line argument to point to a custom printer class (a PHPUnit_Framework_TestListener implementation):
http://www.phpunit.de/manual/3.6/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
Whatever you output (e.g. via printf) in your endTest() method will show up in your log file.
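As a rough illustration (not part of the original answer), here is a minimal sketch of such a printer against the PHPUnit 3.6-era API; the class name MyResultPrinter and the exact output format are assumptions:
<?php
// MyResultPrinter.php -- a sketch of a custom printer, run with:
//   phpunit --printer MyResultPrinter ...
class MyResultPrinter extends PHPUnit_TextUI_ResultPrinter
{
    public function endTest(PHPUnit_Framework_Test $test, $time)
    {
        parent::endTest($test, $time);
        if ($test instanceof PHPUnit_Framework_TestCase) {
            // Emit extra information for every test, whether it passed or failed.
            printf("Finished %s (%.3fs)\n", $test->getName(), $time);
        }
    }
}
?>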

Related

Polarion: xUnitFileImport creates duplicate testcases instead of referencing existing ones

I have the xUnitFileImport scheduled job configured in my Polarion project (as described in the Polarion documentation) to import e2e test results (formatted as JUnit test results):
<job cronExpression="0 0/5 * * * ? *" id="xUnitFileImport" name="Import e2e Tests Results" scope="system">
  <path>D:\myProject\data\import-test-results\e2e-gitlab</path>
  <project>myProject</project>
  <userAccountVaultKey>myKey</userAccountVaultKey>
  <maxCreatedDefects>10</maxCreatedDefects>
  <maxCreatedDefectsPercent>5</maxCreatedDefectsPercent>
  <templateTestRunId>xUnit Build Test</templateTestRunId>
  <idRegex>(.*).xml</idRegex>
  <groupIdRegex>(.*)_.*.xml</groupIdRegex>
</job>
This works, and my test results get imported into a new test run with new test cases created. But if I run the import job multiple times (for each test run), it creates duplicate test case work items even though they have the same name.
Is there some way to tell the import job to reference the existing test cases in the newly created test run, instead of creating new ones?
What I have done so far:
Yes, I checked that the "custom field for test case id" in "testing > configuration" is configured.
Yes, I checked that the field value is really set in the created test case.
The current value in this field is e.g. ".Login", as I don't want the class names in the report.
YES, I still get the same behaviour with the class name set.
In the scheduler I have changed the job parameter for the group ID because it wasn't filled. The new value is: <groupIdRegex>e2e-results-(.*).xml</groupIdRegex>
I checked that no other custom fields are interfering; only the standard fields are set.
I checked that no read-only fields are present.
I do use a template for the test cases, as supported by the xUnitFileImport. The test cases are successfully created and I don't see anything that would interfere.
However, I do have a hyperlink set in the template (I'll try removing this soon™).
I changed the test run template from "xUnit Build Test" to "xUnit Manual Test Upload"; this, however, did not lead to any visible change.
I changed the template status from draft to active. No change in behaviour.
I triple-checked all the fields in the created test cases. They are literally the same, which leads to the conclusion that no fields in the test cases interfere with referencing them.
After all the time I have now invested researching on my own and asking on different forums, I am ready to call this a Polarion bug unless someone proves to me that this functionality works.
I believe you have to set a custom field that identifies the test case with the xUnit file you're importing, so that the importer can identify the test case.
Try adding a custom field to the TestCase work item and selecting it under the "Custom Field for Test Case ID" option in the settings.
If you're planning on creating test cases beforehand, note that the ID is formed from {classname}.{name} for a given case.
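As a hedged illustration (the suite and case names here are invented), with the {classname}.{name} convention a JUnit result entry such as
<testsuite name="e2e-results">
  <testcase classname="LoginSuite" name="Login" time="1.23"/>
</testsuite>
would map to the test case ID "LoginSuite.Login"; an empty classname is what produces IDs like ".Login" as mentioned above.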

How to get the disabled test cases count in Jenkins results?

Suppose I have 10 test cases in a test suite, of which 2 are disabled. I want those two test cases reflected in the test result of the Jenkins job, e.g. passed = 7, failed = 1, and disabled/not run = 2.
By default, TestNG generates a report for your test suite; refer to the index.html file under the test-output folder. If you click the "Ignored Methods" hyperlink, it will show you all the ignored test cases with their class names and a count of ignored methods.
All test cases annotated with @Test(enabled = false) will show up under the "Ignored Methods" link.
If your test generates JUnit XML reports, you can use the JUnit plugin to parse these reports after the build (as a post-build action). Then, you can go into your build and click 'Test Result'. You should see a breakdown of how the execution went (including passed, failed, and skipped tests).
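For reference (this is not from the original answer), if the job is a Pipeline rather than a freestyle job, the equivalent of that post-build action is the junit step; the report path below is an assumption based on TestNG's default test-output folder:
post {
    always {
        // Parse TestNG's JUnit-format reports; disabled tests appear as "skipped".
        junit 'test-output/junitreports/*.xml'
    }
}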

Integrating RFT Test framework to work with RQM

I designed a framework in RFT where the test cases are written in a spreadsheet specifying the data source, object, and keyword, and a driver script processes all this data and routes each test step to the appropriate method, all driven from the spreadsheet. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could implement an algorithm that reads those test cases from the spreadsheet and passes the results to RQM as attachments with logTestResult.
For example:
logTestResult( <your attachment> , true );
If you are already connected to RQM, the adapter will automatically attach the files you indicate to RQM. So, at the end you will see the results step by step, and if the script ends correctly RQM will show the script as "passed".
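As a rough sketch (not from the original answer; runKeywordStep and step are hypothetical names for whatever your driver already does, and this assumes the driver has access to the RationalTestScript logging methods), the driver could report each spreadsheet step like this:
// After executing one spreadsheet step inside the keyword driver:
boolean stepPassed = runKeywordStep(step);    // hypothetical helper in your driver
logTestResult("Step: " + step, stepPassed);   // reported to RQM via the RFT adapter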
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script as shown below:
public void testMain(Object[] args) throws Exception
{
    // The test case name is passed in via RQM's Script Argument field.
    String n = args[0].toString();
    logInfo("Parameter from RQM: " + n);

    // Hand the test case name to the keyword driver that walks the spreadsheet.
    ModuleDriver d = new ModuleDriver();
    d.execute_main(n);
}
Since I have verification points set up for each of the steps in my test cases, the results get reported in RQM based on each of those verification points, which is what I needed.

BeanShell PreProcessor updates User Defined Variables

I'm very new to JMeter.
In a test script I have a BeanShell PreProcessor element that updates some variables previously defined in a "User Defined Variables" element.
Later those variables are used in HTTP Requests. However, the value that is used in the HTTP request is the default one.
The script seems to be working, judging by some debug print() statements.
My question is whether it's necessary to delay the script to be sure that the BeanShell code finishes.
Thanks a lot for your attention.
There is no need to add any delay to the BeanShell PreProcessor, as it is executed before the request. I'd recommend checking your jmeter.log file for scripting issues, since the BeanShell PreProcessor does not report errors anywhere, including the View Results Tree listener.
There are at least 2 ways to make sure everything is fine with your BeanShell script:
Put your debug print code after the variable-replacement logic to see whether it fires.
Use the JMeter __BeanShell function right in your HTTP request. If it's OK, View Results Tree will show the BeanShell-generated value; if not, the field will be blank and a relevant error will be displayed in the log.
Example test case:
Given the following Test Plan structure:
Thread Group with 1 user and 1 loop
HTTP GET Request to google.com with path of / and parameter q
If you provide the following BeanShell function as the value of parameter "q":
${__BeanShell(System.currentTimeMillis())}
and look into the View Results Tree "Request" tab, you should see something like:
GET http://www.google.com/?q=1385206045832
and if you change the function to something incorrect like:
${__BeanShell(Something.incorrect())}
you'll see a blank request.
The correct way of changing an existing variable (or creating a new one if it doesn't exist) looks like:
vars.put("variablename", "variablevalue");
Important: JMeter variables are Java Strings. If you're trying to store something else (a date, an integer, whatever) in a JMeter variable, you need to convert it to a String first.
Example:
int i = 5;
vars.put("int_i", String.valueOf(i));
Hope this helps.
You can update the value of a "user defined variable".
You have to create a BeanShell Sampler:
vars.put("user_defined_variable", "newvalue");
#theINtoy got it right.
http://www.blazemeter.com/blog/queen-jmeters-built-componentshow-use-beanshell
I'm new to JMeter too, but as far as I know, variables defined in "User Defined Variables" are constants, so you can't change them. I recommend using "User Parameters" in preprocessors, or the CSV Data Set Config.

Logging program info to a file in Twisted

I have written some code in Twisted. I need to write the log information when we call
d.addErrback(on_failure).
from twisted.python import log
log.startLogging(open('/home/crytek.etl/foo.log', 'w'))

def on_failure(failure):
    log.msg(failure)

d.addErrback(on_failure)
Is this the correct way of implementing this? I don't get any values written to the file. Can someone suggest how this can be implemented?
You probably want to consider opening your log file in append mode. Otherwise, every time your application starts you'll wipe out all your old logs. This could make it appear as though the log messages you're expecting to see aren't being logged.
from twisted.python import log
log.startLogging(open('/home/crytek.etl/foo.log', 'a'))
You should also log failures using log.err instead of log.msg
def on_failure(failure):
    log.err(failure)
And you can do this more easily since on_failure has exactly the same signature as log.err. Just write:
d.addErrback(log.err)
Actually, I lied: log.err doesn't have exactly the same signature as on_failure. It is better; it accepts a second argument which is used to present a header for the failure in the log file. You can use it like this:
d.addErrback(log.err, "Frobbing the widget failed")
This will present "Frobbing the widget failed" together with the failure in the log file.
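Putting the pieces together, a minimal self-contained sketch (the Deferred here is created and fired artificially, just to demonstrate the errback):
from twisted.internet import defer
from twisted.python import log

# Open the log in append mode so earlier runs aren't wiped out.
log.startLogging(open('/home/crytek.etl/foo.log', 'a'))

d = defer.Deferred()
# log.err takes the Failure plus an optional header string.
d.addErrback(log.err, "Frobbing the widget failed")
d.errback(RuntimeError("boom"))  # fire the errback to demonstrate the logging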