Extent report not generated when script abruptly ends - selenium

I have a POM-based Selenium framework and I am using Extent Reports. The reports are generated fine if all of the scripts run. But if one of the scripts fails abruptly (e.g., the browser disappears), my run stops and the report is not generated.
For example: I have 3 scripts to run as part of my driver script. When the 3rd script is running, if something goes wrong (like the browser disappearing), the report is not generated. I want the Extent report generated whenever the run fails/stops. My driver script has extent.flush() as the last thing executed in the run. How can we generate the report on failure? If the failure is due to an object not being found, I am able to get the report.
How do I generate the report whenever execution stops?
Any help is greatly appreciated.
Thanks,
Raju

I'm assuming that you are using TestNG. If that is true, add the parameter alwaysRun = true to your configuration methods, e.g.:

@BeforeMethod(alwaysRun = true)
public void beforeMethod() {
    // your_code
}

@AfterMethod(alwaysRun = true)
public void afterMethod() {
    // your_code
}
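
If the run can die before your driver script reaches its final extent.flush(), you can also move the flush into an @AfterSuite hook with alwaysRun = true, so it executes no matter how the tests ended. A minimal sketch, assuming ExtentReports 5's ExtentSparkReporter; the extent field name and report path are my placeholders, not your setup:

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.annotations.AfterSuite;
import org.testng.annotations.BeforeSuite;

public class BaseTest {
    protected static ExtentReports extent;

    @BeforeSuite(alwaysRun = true)
    public void setUpReport() {
        extent = new ExtentReports();
        extent.attachReporter(new ExtentSparkReporter("test-output/extent.html"));
    }

    // alwaysRun = true makes TestNG run this even when earlier methods failed,
    // so the report is flushed to disk regardless of how the run ended.
    @AfterSuite(alwaysRun = true)
    public void tearDownReport() {
        extent.flush();
    }
}

Note this still won't help if the JVM itself is killed; in that case a JVM shutdown hook that calls extent.flush() is the usual fallback.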

Related

Karate - How to get one cucumber report generated instead of one for each feature file

In my Karate runner I am using .outputCucumberJson(true), as shown below, to generate a Cucumber report (in order to upload it back to our XRAY tests):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class KarateRunnerTest {
    @Test
    void testParallel() {
        Results results = Runner.path("classpath:apiTesting/karateFeatureFiles/")
                .outputCucumberJson(true)
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
However, it is producing one report for each feature file.
Is there a way for it to generate just one report for all feature files?
This currently can't be done in Karate itself.
As a workaround I used a small npm tool called 'cucumber-json-merge'.
It merged the reports into one and seems to work fine.
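
If you'd rather avoid the npm dependency, the same merge can be done by hand: a Cucumber JSON report is a top-level JSON array of features, so merging is just concatenating the arrays. A rough Jackson sketch; the target/karate-reports directory, the assumption that only Cucumber JSON files match *.json there, and the output file name are all mine, not Karate's documented behavior:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;

import java.io.File;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CucumberJsonMerger {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ArrayNode merged = mapper.createArrayNode();
        // each per-feature report is a JSON array; append its elements to one big array
        try (DirectoryStream<Path> reports =
                Files.newDirectoryStream(Paths.get("target/karate-reports"), "*.json")) {
            for (Path report : reports) {
                merged.addAll((ArrayNode) mapper.readTree(report.toFile()));
            }
        }
        mapper.writerWithDefaultPrettyPrinter()
              .writeValue(new File("target/merged-cucumber-report.json"), merged);
    }
}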

Retrieve output log when test fails during setup

I'm running automated unit tests with SpecFlow and Selenium. SpecFlow offers BeforeTestRun, BeforeFeature, and BeforeScenario attributes to execute code between tests at the appropriate time.
I'm also using log4net to log test output.
When a test fails during the test itself or during the BeforeScenario phase, I can see the logged output.
But when a test fails during BeforeTestRun or BeforeFeature, there is no output available.
This makes it difficult to diagnose failures that happen during those early stages on the remote testing server, where all I have are the output logs.
Is there any way to use log4net to get output logs when the test fails before the individual test has begun?
You can implement a custom method that uses the TestResult and TestContext objects, and call it in [TearDown] or somewhere at the end of the test:
// requires: using NUnit.Framework; using NUnit.Framework.Interfaces;
if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
{
    string message = TestContext.CurrentContext.Result.Message;
    string stackTrace = TestContext.CurrentContext.Result.StackTrace;
}
else if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Passed)
{
    // user-defined action
}
else
{
    string otherStatus = TestContext.CurrentContext.Result.Outcome.Status.ToString();
}
Actually, it's weird that it doesn't show you the log when a test fails; it shows up fine for me.
What I suggest you try is to check that log4net pushes all the logs to the console. If you haven't done any special manipulation, your logger should have a console appender by default.
I initiate my logger like this:
private static readonly ILog log =
LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
Another guess: maybe when your test fails in a [OneTimeSetUp] fixture, it simply hasn't produced any log output yet.

@Test(enabled=false) is not shown as a skipped test case

We have a number of TestNG tests that are disabled because the functionality is not yet present (enabled = false), but when the test classes are executed the disabled tests do not show up as skipped in the TestNG report. We'd like to know at execution time how many tests are disabled (i.e. skipped). At the moment we have to count the occurrences of enabled = false in the test classes, which is an overhead.
Is there a different annotation to use, or something else we are missing, so that our test reports can display the number of disabled tests?
For example, the method below still does not show up as skipped:

@Test(enabled = false)
public void ignoreTestCase() {
    System.out.println("Ignore Test case");
}
You can use SkipException:

throw new SkipException("Skip the test");

Check this post: Skip TestNg test in runtime
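
To make the skip actually show up in the counts, the usual pattern is to leave the test enabled and throw SkipException at runtime when the feature isn't ready. A small self-contained sketch; the FEATURE_READY flag is just a stand-in for your own "functionality not yet present" check:

import org.testng.SkipException;
import org.testng.annotations.Test;

public class SkippedTests {

    // stand-in for whatever tells you the functionality isn't implemented yet
    private static final boolean FEATURE_READY = false;

    // the test stays enabled, so TestNG reports it as skipped instead of hiding it
    @Test
    public void ignoreTestCase() {
        if (!FEATURE_READY) {
            throw new SkipException("Functionality not implemented yet");
        }
        System.out.println("Ignore Test case");
    }
}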

Selenium IDE Test Case Pass / Fail

I am new to using Selenium IDE. I'm writing a test case where User A clicks a link, which should then direct the user to the correct page. Unfortunately, the page returns:
An error occured. Message: script 'pp/agensi-list.phtml' not found in path (C:/htdffocs/star/application/views\scripts/**)
But in Selenium, it shows the test case as passed (it should fail).
Can someone tell me why?
It is easy to write test case results (pass/fail) using Selenium with the TestNG framework; please read the link below, where I have given clear examples:
How can I write Test Result (Pass/Fail) in Excel file using TestNG Framework with Selenium WebDriver?
For more real-time examples, you can read about writing test case pass/fail to Excel using TestNG.
You simply have to create an Excel utility with the FileInputStream and FileOutputStream classes, as below, and reuse it from other classes with the extends keyword.
Excel Utility:
public class ExcelUtility {
    @BeforeTest
    public void excelOperation() throws Exception {
        FileInputStream file = new FileInputStream("file path");
        // read the existing workbook through the input stream here
        FileOutputStream newFile = new FileOutputStream("filepath");
        // write through the output stream here: create the workbook, sheets, etc.
    }
}
Another class:
public class Operations extends ExcelUtility {
    @Test
    public void openBrowser() throws Exception {
        // write driver operations here, e.g. read the page title into 'title'
        String testcase;
        if (title.equalsIgnoreCase("HP Loadrunner Tutorial")) {
            testcase = "PASS";
        } else {
            testcase = "FAIL";
        }
        // write the result into the sheet using jxl's Label and WritableSheet
        Label l1 = new Label(1, 2, testcase);
        writableSheet.addCell(l1);
    }
}
The test is likely passing because you aren't verifying anything after the click.
If the test stops right after the link is clicked, it will pass; you have to confirm that something actually happened. Look for a change in the page, or check for the error message you listed above.
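
To make that concrete, here is a hedged WebDriver-style version of the same idea; the URL and locators are made up, and the error text is the one quoted in your question:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VerifyAfterClick {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("http://localhost/star");            // hypothetical URL
            driver.findElement(By.linkText("Agensi")).click();
            // without a check like this, the test "passes" as soon as the click succeeds
            String body = driver.findElement(By.tagName("body")).getText();
            if (body.contains("An error occured")) {        // error text quoted from the question
                throw new AssertionError("Landed on the error page instead of the Agensi list");
            }
        } finally {
            driver.quit();
        }
    }
}

In Selenium IDE itself, the equivalent is to add an assertText or assertElementPresent step after the click.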

Integrating RFT Test framework to work with RQM

I designed a framework in RFT where the test cases are written in a spreadsheet, specifying the data source, object, and keyword, and a driver script processes all of this data and routes each test step to the appropriate method. Now I want to integrate this with RQM so that each of my test cases in the spreadsheet is shown as passed/failed in RQM. Any ideas?
You could implement an algorithm that reads those test cases from the spreadsheet and passes the results to RQM with logTestResult.
For example:
logTestResult( <your attachment> , true );
If you are already connected to RQM, the adapter will automatically attach the files you indicate to RQM. At the end you will see the results step by step, and if the script ends correctly RQM will show the script as "passed".
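
As a hedged sketch of what that could look like inside the driver (SpreadsheetDriver, the step list, and executeStep are hypothetical placeholders for your spreadsheet processing; logTestResult is the real RFT call):

import com.rational.test.ft.script.RationalTestScript;

public class SpreadsheetDriver extends RationalTestScript {

    public void testMain(Object[] args) {
        boolean allPassed = true;
        // in the real framework these steps would come from the spreadsheet reader
        String[] steps = { "Open application", "Login", "Verify home page" };
        for (String step : steps) {
            boolean passed = executeStep(step);  // hypothetical keyword dispatch
            logTestResult(step, passed);         // reported step by step in RQM
            allPassed = allPassed && passed;
        }
        logTestResult("Test case overall", allPassed);
    }

    // stub: route the keyword to the appropriate method of your framework
    private boolean executeStep(String step) {
        return true;
    }
}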
Thanks for the answer, Juan. I solved this by passing the test case name from the Script Argument part of RQM and fetching the arguments in my starter script, as shown below:
public void testMain(Object[] args) throws Exception
{
    // the test case name passed in from RQM's Script Argument field
    String n = args[0].toString();
    logInfo("Parameter from RQM: " + n);
    ModuleDriver d = new ModuleDriver();
    d.execute_main(n);
}
Since I have verification points set up for each of the steps in my test cases, the results get reported in RQM based on each of those verification points, which is what I needed.