Error running multiple tests in SpecFlow/Selenium

I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
I've now moved this into another project (copied the files over, just changed the namespace). The first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they use the same version of SpecFlow and the same .NET version. I can't see any difference between the two projects.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it's using the previous test's driver, which has now been disposed.
But I can't see why this works in the other project.
I am using the SpecFlow example on GitHub here.
UPDATE
Looks like I've found the issue. In the Default.srprofile the testThreadCount was 1 whereas the value in the working solution was 10. I've now updated this to match and it works.
I'm not sure what this value should be, though. I assume it shouldn't be the same as the number of tests, but then how do I get around my original issue of the shared driver context?

TestThreadCount specifies the number of threads used by SpecFlow+ Runner (aka SpecRun) to execute the tests.
The threads are isolated from each other. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the Sauce Labs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed on a different thread with its own AppDomain. Because no thread ever executes a second scenario, you don't get this error in the example.
With only one thread, that thread executes more than one scenario, and you get this issue.
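For reference, the setting mentioned in the question's update sits on the Execution element of Default.srprofile; a minimal sketch (all other elements and attributes omitted):
<TestProfile>
  <!-- testThreadCount: how many scenarios SpecFlow+ Runner executes in parallel,
       each on its own thread (and, by default, in its own AppDomain) -->
  <Execution testThreadCount="10" />
</TestProfile>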
The easiest fix would be to remove the static from the field. You get a new instance of the binding class for every scenario, so you don't have to keep the driver in a static field.
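A minimal sketch of that change, assuming (as in the question) a BeforeScenario hook has already put the driver into the ScenarioContext; the step definition shown is made up for illustration:
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    // No "static" here: SpecFlow creates a new instance of this binding class
    // for every scenario, so the driver is read from the current ScenarioContext
    // each time instead of reusing a disposed one from an earlier test.
    private IWebDriver Driver
    {
        get { return (IWebDriver)ScenarioContext.Current["driver"]; }
    }

    [When(@"I open the home page")]
    public void WhenIOpenTheHomePage()
    {
        Driver.Navigate().GoToUrl("https://saucelabs.com/");
    }
}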
For a better example of how to use Selenium with SpecFlow and SpecFlow+, have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use Sauce Labs via the RemoteWebDriver.
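As a rough sketch of that adjustment (the capability names, hub URL and credentials here are placeholders in the older DesiredCapabilities style, not the example project's exact code):
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    [BeforeScenario]
    public void CreateWebDriver()
    {
        // Capabilities telling Sauce Labs which browser/OS to start.
        var capabilities = new DesiredCapabilities("chrome", string.Empty, new Platform(PlatformType.Windows));
        capabilities.SetCapability("username", "SAUCE_USERNAME");
        capabilities.SetCapability("accessKey", "SAUCE_ACCESS_KEY");

        // RemoteWebDriver pointed at the Sauce Labs hub instead of a local browser.
        var driver = new RemoteWebDriver(new Uri("http://ondemand.saucelabs.com/wd/hub"), capabilities);
        ScenarioContext.Current["driver"] = driver;
    }

    [AfterScenario]
    public void DisposeWebDriver()
    {
        // Quit the session so the next scenario gets a fresh driver from its own hook.
        ((IWebDriver)ScenarioContext.Current["driver"]).Quit();
    }
}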

Related

How to restrict a test data method call to its respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. To get data (from Excel), I am using a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level as a TestCaseSource. I am facing a challenge: as soon as I start executing a single test method, it invokes all the static source methods in the project.
The code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder
        + ConfigurationManager.AppSettings.Get("Environment").ToString()
        + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(
        BaseEntity.TestDataPath,
        ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something
}
Can someone help me restrict my data call method to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times, first before the tests are initially displayed and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done in test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that the variable parameters are recorded during discovery. The actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
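A rough sketch of that structure, reusing names from the question where possible (LoadRows is a hypothetical helper standing in for the real Excel read):
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class PolicySearchTests
{
    // Discovery-time source: only cheap, constant keys are produced here.
    private static IEnumerable<TestCaseData> BasicSearch()
    {
        yield return new TestCaseData("999580");
    }

    private Dictionary<string, Dictionary<string, string>> testData;

    [OneTimeSetUp]
    public void LoadTestData()
    {
        // The expensive Excel access happens once, at execution time,
        // instead of inside every TestCaseSource during discovery.
        // LoadRows is a hypothetical helper; substitute your ExcelTestDataHelper call.
        testData = ExcelTestDataHelper.LoadRows(BaseEntity.TestDataPath);
    }

    [Test, TestCaseSource(nameof(BasicSearch)), Category("Smoke")]
    public void SampleCase(string rowKey)
    {
        Dictionary<string, string> data = testData[rowKey];
        // do something with data
    }
}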
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that only runs if the test you select is about to be executed. Unfortunately, that's a feature NUnit doesn't yet have.

SpecFlow - How to refer to browser objects across scenarios under the same feature

I am new to SpecFlow and I am using it to test a website.
I have just one feature with 2 scenarios.
In the first scenario, I just invoke the browser and navigate to the home page of the application under test. I am using the Selenium Chrome driver for this.
In the second scenario, I need to refer to the instance of the Chrome driver to access the objects on the web page.
However, it seems the page is not identified; I am getting the message '..object reference not set..'.
I am creating the instance of the driver in the main class as public static.
Please advise how I can refer to the driver instance across methods belonging to all the scenarios under the same feature.
Thanks
SK
After a bit of research, I identified that the issue was related to the sequence of execution of scenarios under the feature.
I had 3 scenarios which should be executed in sequential order. (There might be people who argue that this is not the ideal approach, though.) The issue was that the scenario I expected to be executed third in the sequence was being executed first.
By renaming the scenarios alphabetically, I was able to control the execution flow (I believe that is the way to control execution order in NUnit), and this resolved my bug.
Thanks
SK

Page Factory - how does it work

I've tried to implement one of our app modules using PageFactory (for iOS).
Most of the elements are located by name and others by class name.
In general everything works (more or less), but the Appium server produces tons of logs. It seems that each time I try to use a page control, all the declared controls within that page are being updated (?), which leads to longer execution times.
When I'm trying to debug my test, it takes a lot of time to move step by step (the Appium server works overtime...).
I do use "CacheLookup" wherever possible...
Where is my mistake, or should it just be like this?
Thanks
Updated
Not enough info provided to say for sure. If you have a bunch of Cucumber steps and each step is creating a new page instance, then yes, you could create a class variable to communicate between Cucumber steps.
Class variables get thrown out at the end of each scenario, so there is no cross-scenario contamination. However, if a single scenario leaves a page and comes back, you would need to explicitly set the class-level page handle to nil/null so that it is reinitialized upon re-entry to that page. You want to avoid stale element errors.
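The question doesn't say which client language is in use, but the idea behind CacheLookup is the same across the bindings; a minimal C# sketch using Selenium's PageFactory support classes (the element locators here are made up):
using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;

public class LoginPage
{
    // Without [CacheLookup], this proxy re-locates the element on every access,
    // which shows up as repeated find-element traffic in the server logs.
    [FindsBy(How = How.Name, Using = "username")]
    public IWebElement UserNameField { get; set; }

    // With [CacheLookup], the element is located once and the result is reused,
    // at the risk of a stale element if the page is reloaded.
    [FindsBy(How = How.ClassName, Using = "login-button")]
    [CacheLookup]
    public IWebElement LoginButton { get; set; }

    public LoginPage(IWebDriver driver)
    {
        // Wires the attributed properties up to lazy-locating proxies.
        PageFactory.InitElements(driver, this);
    }
}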

Create screenshot after failed selenium command

The PHPUnit Selenium base class has an option to take a screenshot on failure, which is a huge help in finding out why a test failed. The Selenium server, however, returns an error instead of a failure on any error condition other than explicit assert* calls (such as trying to do something with a non-existent element). If I try to take a screenshot after the server reports the error, I get another error saying that the server has already discarded the session. Is there any way to change that behavior?
Update: this is because PHPUnit breaks the connection when it receives an error. I was able to change it by some (rather ugly) manipulation of the PHPUnit code.
Make those interactions test cases.
For example, in Perl:
If it is written as below and fails due to a non-existent element, the script will error out:
$sel->type("email-id","trial\#trial.com");
Whereas if the above step is made a test case by writing it as follows:
$sel->type_ok("email-id","trial\#trial.com");
If there is a non-existent element, the test case will only fail, and the script will continue.
So, using TAP (Test Anything Protocol) via the Test::More module, if _ok is appended to a function, the function's return value is used to determine the fate of the test case:
a return of '0' means the test failed, and
a return of '1' means the test passed.
It is not the Selenium server but the SeleniumTestCase class for PHPUnit 3.4 that automatically sends a stop command when it detects an error (Driver.php line 921). PHPUnit 3.6 seems to handle errors better.
I think you can override the onNotSuccessfulTest() method and do something like this:
public function onNotSuccessfulTest(Exception $e)
{
    // Save a screenshot before handing the failure back to PHPUnit.
    file_put_contents('/xxx/xxx.jpg', $this->currentScreenshot());
    parent::onNotSuccessfulTest($e);
}

Issue with the Karate parallel runner

I wanted to see if anyone else might have observed the same issue. I looked in the project for any open/closed issues that might be like this but did not notice any.
I noticed that when I use the Karate parallel runner (which we have been using for a while now), every GET, POST, and DELETE request gets called twice, as observed in the Karate logs in the console.
When I do not use the Karate parallel runner, only a single request is made.
I noticed this when performing a POST to create a data source in our application. When I went to the application's UI to verify that the new data source had been created, I saw 2 of them. This led me down the path of researching further what might be happening.
Using Karate v0.9.5 with JUnit 5.
Minimal example:
https://drive.google.com/file/d/1UWnNtxGO7gr-_Z80MLJbFkaAmuaVGlAD/view?usp=sharing
Steps to run the code:
Extract the ZIP
cd GenericModel
mvn clean test -Dtest=UsersRunner
Check the console logs; the API scenario gets executed 2x.
Note: it works fine for me with Karate v0.9.4 and JUnit 5.
You mixed up the parallel runner and the JUnit runner and ended up having both in one test method. Please read the documentation: https://github.com/intuit/karate#junit-5-parallel-execution
Note that you use the normal @Test annotation, not the @Karate.Test one.