Parallel execution in NUnit for BDD test cases - Selenium

I am working on a BDD framework (written in Selenium WebDriver with C#).
For sequential execution we were using NUnit, but now the client requires parallel execution.
I have gone through a lot of documentation but only found PNUnit.
Steps executed so far:
downloaded PNUnit
changed the setup method to use PNUnit
created the agent.conf file
ran "agent agent.conf" to start the agent
created the app.conf file for parallel execution
ran "launcher app.conf" for execution
But it is still not working.
It says that the class is not found in the DLL.
Please provide any suggestions.
-Neeraj
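
A note for anyone hitting the same error: a "class not found" failure from the PNUnit launcher usually means that the TestToRun entry in app.conf does not match the fully qualified name of the test (Namespace.Class.Method), or that the test DLL is not in the directory the agent loads assemblies from. As a rough sketch based on the sample configs that ship with PNUnit (the assembly, namespace, and test names below are placeholders, and element names may differ by version, so check the bundled examples), app.conf looks something like this:

    <TestGroup>
      <ParallelTests>
        <ParallelTest>
          <Name>BddParallelRun</Name>
          <Tests>
            <TestConf>
              <Name>LoginScenario</Name>
              <!-- the DLL must be reachable from the agent's assembly path -->
              <Assembly>MyBddTests.dll</Assembly>
              <!-- fully qualified: Namespace.Class.Method -->
              <TestToRun>MyBddTests.LoginFeature.LoginTest</TestToRun>
              <Machine>localhost:8080</Machine>
            </TestConf>
          </Tests>
        </ParallelTest>
      </ParallelTests>
    </TestGroup>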

I've developed a method of running Selenium tests in parallel that I've written about here: http://blog.dmbcllc.com/running-selenium-in-parallel-with-any-net-unit-testing-tool/

Concurrent execution is not supported by SpecFlow with the standard test runners, as the SpecFlow engine itself is not thread-safe. This issue has been addressed and is currently being tested, and the fixed code should be merged in the next few weeks. Please see the discussion here and here.
It is possible to use app-domain isolation to run tests in parallel; SpecFlow+ and NCrunch use this technique.
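To illustrate the technique (a minimal sketch only, not the actual SpecFlow+ or NCrunch implementation; the runner class, assembly, and fixture names here are invented): each fixture runs in its own AppDomain, so static state inside the test engine is never shared between concurrently executing tests.

    using System;
    using System.Reflection;

    // Runs inside the isolated AppDomain. MarshalByRefObject lets the
    // parent domain call into it across the domain boundary.
    public class IsolatedFixtureRunner : MarshalByRefObject
    {
        public void Run(string assemblyPath, string fixtureTypeName)
        {
            // Loading the assembly here keeps its static state
            // (e.g. a non-thread-safe engine) private to this domain.
            Assembly testAssembly = Assembly.LoadFrom(assemblyPath);
            Type fixture = testAssembly.GetType(fixtureTypeName, throwOnError: true);
            // ... reflect over the fixture's test methods and invoke them ...
        }
    }

    public static class ParallelHost
    {
        public static void RunIsolated(string assemblyPath, string fixtureTypeName)
        {
            // One AppDomain per fixture; spin several of these up on
            // separate threads to get parallelism with isolation.
            AppDomain domain = AppDomain.CreateDomain("fixture-" + fixtureTypeName);
            try
            {
                var runner = (IsolatedFixtureRunner)domain.CreateInstanceAndUnwrap(
                    typeof(IsolatedFixtureRunner).Assembly.FullName,
                    typeof(IsolatedFixtureRunner).FullName);
                runner.Run(assemblyPath, fixtureTypeName);
            }
            finally
            {
                AppDomain.Unload(domain);
            }
        }
    }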

You can try this tool: https://github.com/qakit/ParallelTestRunner. I developed it for running NUnit tests in parallel (strictly speaking, it runs the test fixtures in your test library in parallel, not the individual tests). It works fine for me =). If you face any problems, report them to me and I will try to solve them.
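For context, the general pattern behind fixture-level parallelism (a sketch only, not this tool's actual code; the fixture names are invented, and the /fixture option is an assumption based on the NUnit 2 console runner) is to launch one runner process per fixture and wait for them all, since separate processes give full isolation:

    using System.Diagnostics;
    using System.Threading.Tasks;

    public static class FixtureLevelParallelism
    {
        public static void Main()
        {
            // Hypothetical fixture names from the test assembly.
            string[] fixtures =
            {
                "MyBddTests.LoginFeature",
                "MyBddTests.CheckoutFeature"
            };

            // One runner process per fixture; fixtures never share
            // static state because each runs in its own process.
            Parallel.ForEach(fixtures, fixture =>
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "nunit-console.exe",
                    Arguments = "/fixture:" + fixture + " MyBddTests.dll",
                    UseShellExecute = false
                };
                using (Process p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
            });
        }
    }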

Related

NUnit3-console: Console.WriteLine won't show up

I'm running NUnit tests externally via nunit3-console.
I'm not able to see any console logs / Console.WriteLine output.
I need to be able to track every step of the test in real time.
I've read that the NUnit 3 framework introduced a parallel test run feature, which is why the real-time test output logs were removed.
But what if I want the best of both worlds?
How can I trigger console logs during a test run?
Thanks in advance.
The NUnit 3 framework doesn't support live, hard-coded console output when running tests from nunit3-console. Because NUnit 3 was designed around parallel test execution, its authors considered such output pointless when more than one test is running at a time.
I solved this by downgrading to NUnit 2.6.4, which doesn't support parallel testing and lets me write console output from my tests.
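If downgrading is not an option: NUnit 3 does route output through TestContext, and TestContext.Progress.WriteLine in particular is flushed to the console immediately rather than held until the test finishes (though with tests running in parallel, lines from different tests can interleave). A minimal example:

    using NUnit.Framework;

    [TestFixture]
    public class LiveOutputTests
    {
        [Test]
        public void StepsAreVisibleWhileRunning()
        {
            // Buffered: shown only after the test completes.
            TestContext.WriteLine("queued until the test finishes");

            // Flushed immediately, even under nunit3-console,
            // so individual steps can be tracked in real time.
            TestContext.Progress.WriteLine("visible while the test runs");
        }
    }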

Grails functional test - DB setup/teardown, running as JUnit in Eclipse

I am running Geb functional tests in my Grails app through Eclipse "Run As JUnit..."
This normally works great and allows me to keep my test server running with grails run-app, and I get fast test execution times.
However, it doesn't allow me to use GORM domain objects in my setup/teardown methods. Those only work if I run with grails test-app, which requires a much longer cycle time.
Is there another way I can access the DB from my functional tests without GORM? I would be perfectly OK accessing the DB directly through the groovy.sql.Sql class, as long as I don't have to duplicate configuration.
The question you linked to in your comment actually does contain a solution in this answer: you should use the Grails Remote Control plugin to change the state of your application under test from your functional tests. Some reasons why are outlined in this answer to another question.

Istanbul-generated code coverage analysis for Intern tests running in browser?

I know there is an existing post asking whether it's possible to generate an HTML report of code coverage analysis for tests written and run with Intern, and it's been answered:
Generate HTML code coverage reports with intern
However, the post doesn't mention what type of environment the OP runs in; i.e., are the tests running in a Node.js client? I ask because I am running my unit tests with the Intern framework in a browser [edit: invoking tests comparably to http://path/to/intern-tutorial/node_modules/intern/client.html?config=tests/intern]. The article here:
https://github.com/theintern/intern/wiki/Using-and-Writing-Reporters#custom-reporters
outlines that HTML is the only reporter available for the browser platform; LCOV and LCOVHTML are not. Has that changed at all? This limited array of reporters for browsers isn't very convenient, and I was hoping to take advantage of the Istanbul support built into Intern rather than try to plug in another code coverage analysis tool (or hack my own thing :( ).
Code coverage information will be correctly retrieved from code running in browsers if you run your tests with intern-runner. The actual collation and output of the coverage results occurs on the server (Node.js) side.

Siesta generate screenshot on fail

I'm running tests using Siesta from the command line.
Is it possible to generate a screenshot on each test failure while running the Siesta test runner?
Edit:
Need to wait for the feature
http://www.bryntum.com/products/siesta/changelog/
It's not possible in the current version (2.0.8), but they are planning to add it in the future.
More info can be found on their forum http://www.bryntum.com/forum/
Siesta can also be run in the cloud on http://www.browserstack.com/ or https://saucelabs.com/. These services even offer whole screencasts of the tests.

Grails: local tests pass, Test environment tests fail

I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux, but I am not sure. I am using mocks in my Grails app and am wondering if that may be causing confusion in the values returned.
Does anyone have any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
The place where this element is set is from an in-memory database that is populated from a DB script. It is the same DB script that is used locally and on my Test environment.
The app does not read any config files that would be different in different environments.
As others have stated, there really isn't enough information here to give a solid answer. A couple of things that I would look at:
If it's integration tests that are failing, maybe you've got some "bad tests" that depend on certain data that does not exist in the test environment Jenkins is running against.
There is no guaranteed consistency in test execution order across machines/platforms. So it's entirely possible that the tests pass locally only because they run in a certain order, leaving mocks in place or data set up by one test that another test depends on. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7, so it may not work with 2.0+ Grails apps.
If the steps above don't identify the problem, it would help to know about any differences in how you are invoking the tests on Jenkins vs. locally; for example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and how it differs from the Grails environment used locally.