I'm running NUnit tests externally via nunit3-console.
I'm not able to see any of my Console.WriteLine output.
I need to be able to track every step of the test in real time.
I've read that the NUnit 3 framework introduced a parallel test run feature, and that real-time test output was dropped because of it.
But what if I want the best of both worlds?
How can I trigger console logs during a test run?
Thanks in advance.
The NUnit 3 framework doesn't support live, hard-coded console output when running tests from nunit3-console. Because NUnit 3 was built around parallel test execution, its developers considered such output pointless when more than one test is running at a time.
I solved this by downgrading to NUnit 2.6.4, which doesn't support parallel testing and lets me write console output from my tests.
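For reference, this is the shape of what works again after the downgrade - a minimal sketch under NUnit 2.6.x (the fixture name and messages are made up for illustration):

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class CheckoutTests  // hypothetical fixture
    {
        [Test]
        public void AddItem_UpdatesTotal()
        {
            // Under NUnit 2.6.x this shows up in the console runner's output.
            Console.WriteLine("Step 1: opening the cart page");
            Console.WriteLine("Step 2: adding the item");
            Assert.Pass();
        }
    }

Run it with the 2.x runner (nunit-console.exe MyTests.dll) and the lines appear as the test runs.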
I really don't know how to ask Google about this, so excuse me if the question is naive.
Our team is developing an SPA in ReactJS. We also do back-end programming for NodeJS. Our project recently gained more e2e tests. They are written using webdriver.io packages. Everything works as expected, but around 30 tests take about 50 minutes to run. That is too long to pause a developer's work and make them run the tests.
We came up with the idea that, now that we have so many tests, we need to run them on a separate computer (other than a developer's laptop; below I call it the e2e-laptop).
So I wrote a bash script and installed Ubuntu on the e2e-laptop. My idea is that a developer who wants to run the e2e tests logs in to the e2e-laptop with ssh, runs the script with arguments (e.g. --rev= the specific git revision the tests should run on, --email= where to send the Allure report) and logs out. After the tests are done, he gets the Allure report in his mailbox.
This all sounds OK to me, but not great. It works - it is like a dirty MVP. But what I would really like to give my team is a web-browser-based UI that offers the features my script has. I imagine this software hosted on the e2e-laptop, so every developer can open its web page in a local browser. Then, after authorization, there are options: run all specs, run chosen specs, send a report, and more. It would be best if the software also allowed simultaneous test runs commissioned by multiple developers.
What software do I need?
You need a continuous integration tool. https://stackify.com/top-continuous-integration-tools/
I recommend Jenkins.
I would first try to run your Selenium tests headless in a Docker container on your laptop. Once you are able to do that, use the same configuration in your Docker container running in Bitbucket Pipelines; it could actually be the same container and the same scripts. Then developers can just make a branch and work with the tests on that branch. If only a certain subset of tests needs to run, a developer can make the necessary changes on a local branch to run those tests and push it up to Bitbucket. This should help with the configuration: https://github.com/SeleniumHQ/docker-selenium.
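As a starting point, here is a minimal C# sketch that drives a browser inside such a container. The hub port, image, and target URL are assumptions (it presumes you started, for example, the selenium/standalone-chrome image from that repo with port 4444 published):

    using System;
    using OpenQA.Selenium.Chrome;
    using OpenQA.Selenium.Remote;

    public class ContainerSmokeTest
    {
        public static void Main()
        {
            // Assumes: docker run -d -p 4444:4444 selenium/standalone-chrome
            var options = new ChromeOptions();
            var driver = new RemoteWebDriver(
                new Uri("http://localhost:4444/wd/hub"),
                options.ToCapabilities());
            try
            {
                driver.Navigate().GoToUrl("http://example.com");  // replace with your app's URL
                Console.WriteLine("Page title: " + driver.Title);
            }
            finally
            {
                driver.Quit();  // release the browser session in the container
            }
        }
    }

The same code then runs unchanged in the pipeline; only the hub address might differ.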
I am working on BDD tests (written in Selenium WebDriver with C#).
For sequential execution we were using NUnit, but now the client requires parallel execution.
I've gone through a lot of documentation but only found PNUnit.
Steps executed so far:
1. downloaded PNUnit
2. changed the setup method to use PNUnit
3. created the agent.conf file
4. ran "agent agent.conf" to start the agent
5. created the app.conf file for parallel execution
6. ran "launcher app.conf" to execute
But it's still not working.
It says the class is not found in the DLL.
Please provide any suggestions.
-Neeraj
I've developed a method of running Selenium tests in parallel that I've written about here: http://blog.dmbcllc.com/running-selenium-in-parallel-with-any-net-unit-testing-tool/
Concurrent execution is not supported by SpecFlow with the standard test runners, as the SpecFlow engine itself is not thread-safe. This issue has been addressed and is currently being tested; the fixed code should be merged in the next few weeks. Please see the discussion here and here.
It is possible to use app domain isolation to run tests in parallel; SpecFlow+ and NCrunch use this technique.
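To make the app-domain idea concrete, here is a rough C# sketch; the assembly name and arguments are placeholders, and real runners such as SpecFlow+ and NCrunch do considerably more:

    using System;
    using System.Threading.Tasks;

    public static class AppDomainRunner
    {
        public static void Main()
        {
            // Each worker gets its own AppDomain, so static state
            // (what makes the SpecFlow engine non-thread-safe) is not shared.
            Parallel.For(0, 4, i =>
            {
                var domain = AppDomain.CreateDomain("tests-" + i);
                try
                {
                    // Runs the assembly's entry point inside the isolated domain;
                    // a real runner would load the test framework there and
                    // dispatch one fixture per domain.
                    domain.ExecuteAssembly("MyTests.exe", new[] { "--slice=" + i });  // placeholder
                }
                finally
                {
                    AppDomain.Unload(domain);
                }
            });
        }
    }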
You can try this tool: https://github.com/qakit/ParallelTestRunner. I developed it for running NUnit tests in parallel (strictly speaking, it runs the test fixtures in your test library in parallel, not individual tests). Works fine for me =). If you face any problems, report them to me and I will try to solve them.
I'm running tests using Siesta from the command line.
Is it possible to generate screenshots on each test failure while running the Siesta test runner?
Edit:
We need to wait for the feature; see http://www.bryntum.com/products/siesta/changelog/
It's not possible in the current version (2.0.8), but they are planning to add it in the future.
More info can be found on their forum: http://www.bryntum.com/forum/
Also, Siesta can be run in the cloud on http://www.browserstack.com/ or https://saucelabs.com/. These services even offer whole screencasts of the test runs.
I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux, but I am not sure. I am using mocks in my Grails app and am wondering if that may be confusing the values returned.
Has anyone any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
This element is set from an in-memory database that is populated by a DB script. The same DB script is used locally and in my Test environment.
The app does not read any config files that would be different in different environments.
As others have stated, there really isn't enough information here to give a solid answer. A couple of things that I would look at:
If it's the integration tests that are failing, maybe you've got some "bad tests" that depend on data that does not exist in the test environment Jenkins runs against.
There is no guaranteed consistency of test execution order across machines/platforms. So it's entirely possible that the tests pass for you locally only because they run in a certain order, leaving mocks in place or data set up by one test that another test needs (see the sketch after this list). I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7, so it may not work with 2.0+ Grails apps.
If the steps above don't identify the problem, knowing any differences in how you invoke the tests on Jenkins vs. locally would be helpful - for example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and how it differs from the Grails environment used locally.
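To make the ordering point concrete, the classic shape of an order-dependent pair of tests looks like this (illustrated in NUnit/C# since that's the stack elsewhere in this thread; the same pattern bites Grails integration tests, and the names here are invented):

    using NUnit.Framework;

    [TestFixture]
    public class OrderDependentTests
    {
        private static string sharedState;  // leaks between tests

        [Test]
        public void TestA_SetsUpData()
        {
            sharedState = "PRODUCT";
            Assert.AreEqual("PRODUCT", sharedState);
        }

        [Test]
        public void TestB_ReliesOnTestA()
        {
            // Passes only if TestA happened to run first; on a machine that
            // orders tests differently, sharedState is still null here.
            Assert.AreEqual("PRODUCT", sharedState);
        }
    }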
I'm a big fan of Firebug - I use it all the time for my web development needs. That said, one of the things I noticed with Firebug is that it significantly slows down the page. In particular, if Firebug is on when a (local) Selenium script is running, the script takes 2-3 times as long to execute, and I sometimes even see timeout errors. Their per-site activation model doesn't help here at all - I'm developing and testing that same site.
I'd like to be able to turn Firebug OFF right before my Selenium script starts, and turn it back on when Selenium is done (or, in the worst case, just keep it off - the biggest annoyance is launching Selenium only to find out that some tests failed for no apparent reason).
My favored solution for this is to create a new, separate Firefox profile (run firefox -ProfileManager) and launch your Selenium scripts using that profile instead. It will be clean of everything except what you put into it. That way, as little as possible of your personal environment will taint your development environment, and you'll maintain a clean separation.
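If you are driving Firefox through Selenium WebDriver, you can point the driver at that clean profile explicitly. A small sketch, assuming a profile saved at the example path below (with older Selenium RC setups the profile is handed to the server instead):

    using OpenQA.Selenium.Firefox;

    public class CleanProfileLaunch
    {
        public static void Main()
        {
            // Profile created beforehand via: firefox -ProfileManager
            var profile = new FirefoxProfile(@"C:\SeleniumProfiles\clean");  // example path
            var options = new FirefoxOptions { Profile = profile };
            using (var driver = new FirefoxDriver(options))
            {
                driver.Navigate().GoToUrl("http://localhost:8080");  // your site under test
            }
        }
    }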
I typically don't run tests from the same machine I develop on. If you can setup a separate test machine where you deploy and run the tests, you can keep Firefox, IE, etc, free of plugins/add-ons like firebug that might get in the way of your tests and avoid this problem completely.
Running your tests on a separate machine also frees your dev machine so that you can continue working while your tests are running. I'm not sure about your situation specifically, but think about when you have hundreds or thousands of test cases running, you don't want to be sitting there waiting for them to finish. You want to be able to work while it runs, view the report it generates, and investigate if necessary.
You could try the alpha builds of Firebug 1.4. The activation/suspend model has changed in this version to a simpler one: Firebug is active when its panel is visible and suspended otherwise. See http://blog.getfirebug.com/?p=124 for more information.