I am new to SpecFlow and I am using it to test a website.
I have just one feature with 2 scenarios.
In the first scenario, I just launch the browser and navigate to the home page of the application under test. I am using the Selenium Chrome driver for this.
In the second scenario, I need to reference the same Chrome driver instance to access the objects on the web page.
However, it seems the page is not identified; I am getting the message '..object reference not set..'.
I am creating the instance of the driver in the main class as a public static field.
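For illustration, it is roughly like this (a simplified sketch, not my exact code):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class MySteps
{
    // Shared driver instance, created by the first scenario.
    public static IWebDriver Driver;

    [Given(@"I navigate to the home page")]
    public void GivenINavigateToTheHomePage()
    {
        Driver = new ChromeDriver();
        Driver.Navigate().GoToUrl("http://myapp.example"); // placeholder URL
    }
}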
Please advise on how I can reference the driver instance across the methods that belong to all the scenarios under the same feature.
Thanks
SK
After a bit of research, I identified that the issue was related to the sequence of execution of scenarios under the feature.
I had 3 scenarios which had to be executed in a specific order. (There might be people who argue that this is not ideal, though.) The issue was that the scenario I expected to run third in the sequence was being executed first.
By renaming the scenarios so that they sort alphabetically, I was able to control the execution flow (I believe that is the way to control execution order in NUnit), and this resolved my bug.
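For example, something like this (a sketch, not my actual feature) makes the scenarios sort, and therefore run, in the intended order:

Feature: Website smoke test

Scenario: A - Open the browser and navigate to the home page
    Given I navigate to the home page

Scenario: B - Log in to the application
    When I log in as a standard user

Scenario: C - Verify the dashboard
    Then the dashboard is displayed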
Thanks
SK
I have an existing project that uses SpecFlow and SpecRun to run some tests against Sauce Labs. I have a BeforeScenario hook that creates a RemoteWebDriver and an AfterScenario hook that closes it down.
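The hooks look roughly like this (a sketch; the Sauce Labs capability details are omitted and the names are not my exact ones):

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    [BeforeScenario]
    public void CreateWebDriver()
    {
        // Browser, platform, user name, and access key go in here as capabilities.
        var capabilities = new DesiredCapabilities();
        ScenarioContext.Current["driver"] = new RemoteWebDriver(
            new Uri("http://ondemand.saucelabs.com:80/wd/hub"), capabilities);
    }

    [AfterScenario]
    public void CloseWebDriver()
    {
        var driver = ScenarioContext.Current["driver"] as IWebDriver;
        if (driver != null)
        {
            driver.Quit();
        }
    }
}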
I've now moved this into another project (copied the files over, just changed the namespace), and the first test runs fine, but then I get the following error:
An exception of type 'OpenQA.Selenium.WebDriverException' occurred in WebDriver.dll but was not handled in user code
Additional information: Unexpected error. The command you just sent (POST element) has no session ID.
This is generally caused by testing frameworks trying to run commands after the conclusion of a test.
For example, you may be trying to capture a screenshot or retrieve server logs
after selenium.stop() or driver.quit() was called in a tearDown method.
Please make sure this process happens before the session is ended.
I've compared the projects and they are using the same SpecFlow version and the same .NET version. I can't see any difference between the two projects.
In my steps I have the following line:
public static IWebDriver driver = (IWebDriver)ScenarioContext.Current["driver"];
which I think is the issue: instead of getting a new instance from the ScenarioContext, it is using the previous test's instance, which has now been disposed.
But I can't see why this works in the other project.
I am using the SpecFlow example on GitHub here
UPDATE
Looks like I've found the issue. In the Default.srprofile the testThreadCount was 1, whereas the value in the working solution was 10. I've now updated this to match and it works.
Not sure what this value should be though. I assume it shouldn't simply equal the number of tests, but then how do I get around my original issue of the shared driver context?
TestThreadCount specifies the number of threads used by SpecFlow+ Runner (aka SpecRun) to execute the tests.
The threads are isolated from each other. The default is AppDomain isolation, so every thread runs in a separate AppDomain.
In the Sauce Labs example there are 7 scenarios and the runner is configured to use 10 threads. This means every scenario is executed on a different thread with its own AppDomain. Because no thread ever executes a second scenario, you do not get this error in the example.
With only one thread, that thread executes more than one scenario, and you get this issue.
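In the .srprofile this is the testThreadCount setting on the Execution element, something like (values illustrative, not a recommendation):

<Execution testThreadCount="10" testThreadIsolation="AppDomain"/>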
The easiest fix would be to remove the static modifier from the field. You get a new instance of the binding class for every scenario, so you do not have to keep the driver in a static field.
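A minimal sketch of that fix (the class name is made up):

using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class SearchSteps
{
    // Instance field: SpecFlow creates a new instance of this binding
    // class for every scenario, so this always holds the driver that the
    // BeforeScenario hook created for the current scenario.
    private readonly IWebDriver driver;

    public SearchSteps()
    {
        driver = (IWebDriver)ScenarioContext.Current["driver"];
    }
}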
For a better example of how to use Selenium with SpecFlow & SpecFlow+, have a look here: https://github.com/techtalk/SpecFlow.Plus.Examples/tree/master/SeleniumWebTest
You have to adjust the WebDriver class to use Sauce Labs via the RemoteWebDriver.
I've tried to implement one of our app modules using PageFactory (for iOS).
Most of the elements are located by name and others by class name.
In general everything works (more or less), but the Appium server produces tons of logs; it seems that each time I use some page control, all the declared controls within that page are re-located (?), which leads to longer execution times.
When I try to debug my test, it takes a lot of time to move step by step (the Appium server works extra hours...).
I do use "CacheLookup" wherever possible...
Where is my mistake, or should it just be like that?
Thanks
Updated
Not enough info provided to say for sure. If you have a bunch of Cucumber steps and each step is creating a new page instance, then yes, you could use a class variable to communicate between Cucumber steps.
Class variables get thrown out at the end of each scenario, so there is no cross-scenario contamination. However, if a single scenario leaves a page and comes back, you would need to explicitly set the cached page handle to nil/null so that it is reinitialized upon re-entry to that page. You want to avoid stale element errors.
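Transposed to C#/SpecFlow terms, the caching idea is something like this (LoginPage and the method names are hypothetical):

using OpenQA.Selenium;
using TechTalk.SpecFlow;

public class LoginPage
{
    // Hypothetical page object; a real one would declare its controls and
    // initialize them via PageFactory or similar.
    public LoginPage(IWebDriver driver) { }
}

[Binding]
public class LoginSteps
{
    private LoginPage page; // cached between the steps of one scenario

    private LoginPage GetPage(IWebDriver driver)
    {
        // Initialize once and reuse across steps, instead of re-creating
        // (and re-locating) the whole page on every step.
        return page ?? (page = new LoginPage(driver));
    }

    private void OnLeavingLoginPage()
    {
        // Reset so the page is re-initialized on re-entry, avoiding stale
        // element errors.
        page = null;
    }
}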
I have some test scenarios and cases written in SpecFlow/Selenium in Visual Studio, using MSTest. I just want to associate them with Microsoft Test Manager, so that a test case written there is associated with an automated test.
Is that possible? How?
More data: the tests were created using a Scenario Outline with some rows of examples.
You can associate test cases with a work item in TFS/MTM, but we found it too cumbersome to do: it is a manual action in MTM that references the TestMethod by name. But because the TestMethod is generated by SpecFlow by combining the title of the Scenario Outline and the first column of your Examples table, it is difficult to maintain:
Whenever a Scenario Outline title is changed, or the term in the first column of the Examples table is changed, you have to re-couple the TestMethods to the work items
When you add new Examples or Scenarios to your feature, you have to remember to link them to the work item, one by one
Finding the correct TestMethod in the DLL becomes nearly impossible when you approach a thousand scenarios
What we did was use the WorkItem tag in the feature to connect (parts of) the feature to a work item, like @workitem:42. This is a little-noticed feature in SpecFlow:
MsTest: Support for MSTest's [Owner] and [WorkItem] attributes with tags like @owner:foo @workitem:123 (Issue 162, Pull 161)
and it creates a WorkItemAttribute attached to the method that is generated for the tagged Scenario (Outline) or Feature.
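For example, a scenario tagged @workitem:42 is generated roughly like this (a sketch; the actual method name is derived from your scenario title):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public partial class MyFeature
{
    [TestMethod]
    [WorkItem(42)] // generated from the @workitem:42 tag
    public void MyScenario()
    {
        // generated code that executes the scenario steps
    }
}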
Then we imported all test cases into MTM with the Test Case Management tool and ran a custom-made tool (making use of the TeamFoundation namespace and the TestManagement and WorkItemTracking clients) that connected each imported test case to the correct work item.
Whenever a test ran, we could see the results in MTM, and also from the perspective of the connected work item.
I have a web application which needs to be tested in multiple browsers in multiple environments (i.e. Chrome, Firefox, and Internet Explorer in both Windows and Linux* (*with the obvious exception of Internet Explorer)).
Tests have been written in Java using JBehave, Selenium, and Serenity BDD (Thucydides). These tests exercise an underlying REST API, testing whether objects can be successfully created and deleted using the UI.
I am using Selenium Grid, and would like to run the tests on parallel nodes; however, the concern is that as the tests exercise an underlying REST API, there could be conflicts.
Is it possible to pass in parameters to the tests as a parameter within the Jenkins job configuration which runs the tests, so that there is a slight difference in the tests dependent on the node in which they are executing? (e.g. An object named 'MYOBJECT-CHROME' is created on Chrome, versus an object named 'MYOBJECT-FIREFOX' on Firefox, meaning any REST API conflicts can be avoided?)
If the software under test (SUT) allows concurrent REST API requests, there is no need for you to worry about:
"meaning any REST API conflicts can be avoided?"
The tests' concurrent requests should be set up as fixtures, meaning every atomic test should set up and tear down the test data it requires, or restore the SUT's state afterwards. A good candidate here is a Prebuilt Fixture. It will allow you to add it as a step in Jenkins and can reduce the overhead of creating all those test objects.
If you still need to parameterize the build, you can use your suite's #tags from the BDD framework to define which set of tests will be executed.
In a continuous integration build environment, when running several Selenium tests in parallel (using the Firefox driver) for different applications, where each test records screenshots after every "action" (e.g. navigating to a page, submitting a form, etc.), it seems that whichever application window pops up last ends up on top of the z-order and has the focus.
So using the method getScreenshotAs() from the Selenium API to record images results in mixed-up screenshots, sometimes showing one application and sometimes the other.
Recording the HTML responses with getPageSource(), on the other hand, seems to work correctly using the Firefox driver instance "bound" to the test.
Is there any solution for dealing with the mixed-up screenshots? Is there a way to ensure that getScreenshotAs() only considers its own Firefox driver instance? Thanks for any hints!
Peter
I don't know what flavor of Selenium you are using, but here is a reference to the API that looks like it would fix your problem, though I have never tested it.
http://selenium.googlecode.com/svn/trunk/docs/api/dotnet/index.html
What that link shows is the IWrapsDriver interface which, according to the documentation, "Gets the IWebDriver used to find this element."
So from my understanding, you could take the element in your method, wrap it with IWrapsDriver to get back that element's own IWebDriver, and then use that instance for your getScreenshotAs() call.
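Untested, but a sketch of that idea with the .NET bindings could look like this:

using System.IO;
using OpenQA.Selenium;
using OpenQA.Selenium.Internal; // IWrapsDriver lives here in older Selenium versions

public static class ScreenshotHelper
{
    public static void SaveScreenshotOfOwningDriver(IWebElement element, string path)
    {
        // Recover the exact driver instance that located this element...
        IWebDriver driver = ((IWrapsDriver)element).WrappedDriver;

        // ...and take the screenshot from that instance only.
        Screenshot shot = ((ITakesScreenshot)driver).GetScreenshot();
        File.WriteAllBytes(path, shot.AsByteArray);
    }
}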