A test run fails on the build server but passes locally, due to a combination of issues in the tests themselves and differences in the order of files returned by the OS.
What is the best way to run tests locally in the same order as the build server so I can debug the issue? (The order of whole test classes matters much more than the order of test methods. A complete hack is also acceptable.)
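One low-tech way to reconstruct the server's order is to compare the test list on both machines. Assuming a VSTest-based setup (the assembly name below is a placeholder), `vstest.console.exe /ListTests` prints tests in local discovery order, and the build server's `.trx` result file records the order they actually ran in:

```
# Print the tests in the order the local runner discovers them
# (assumes vstest.console.exe is on PATH; MyTests.dll is a placeholder)
vstest.console.exe MyTests.dll /ListTests

# On the build server, download the .trx result file from the run;
# it records the executed order. Diff the two lists to see where they diverge.
```

Once you know the server's order, a playlist or an explicit test filter can replay it locally.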
I have a lot of automated tests. I can run them in any order from Visual Studio 2017 and they work. I'm using WebAppDriver, C#, and Selenium.
When we run the tests from DevOps, the tests in the first class pass, but as soon as it starts running tests from another class, they fail.
My tests are meant to be independent and able to run in any order, but apparently there is a problem.
I don't want to control the order the tests run in; they should be independent. But I do want to know what has gone wrong.
Each test class has a ClassInitialize with a setup that launches the system under test and attaches my driver to it. There is also a teardown that closes the system under test.
My question is: how can I debug or find out what is going wrong when the tests are run in DevOps? What is the difference between running the tests from Visual Studio 2017, where they all work, and running them from DevOps, where they seem to stop working?
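One way to see what differs between the two runs is to log environment details from each ClassInitialize, so the VS2017 output and the DevOps output can be diffed side by side. A minimal sketch, assuming MSTest (the TestContext parameter is the standard overload):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyTestClass
{
    [ClassInitialize]
    public static void Setup(TestContext context)
    {
        // Log facts that commonly differ between a local run and a build agent:
        context.WriteLine($"Machine:     {Environment.MachineName}");
        context.WriteLine($"User:        {Environment.UserName}");
        context.WriteLine($"Interactive: {Environment.UserInteractive}"); // often false on agents
        context.WriteLine($"CurrentDir:  {Environment.CurrentDirectory}");
        context.WriteLine($"DeployDir:   {context.DeploymentDirectory}");
        // ... then launch the system under test and attach the driver as before
    }
}
```

In particular, UI-driving tests generally require the agent to run as an interactive process rather than a Windows service, so `UserInteractive` is worth checking first.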
So far I have set my tests to run from VS2017 in the same order that DevOps happens to run them in. I did this by creating a playlist and editing the playlist file to set the order. It's not my intention to control test order, only to have VS2017 mimic the order that DevOps runs the tests in. What I have found is that the tests run in the chosen order and pass from VS2017, but still fail in that same order when run from DevOps.
What is DevOps doing differently?
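For reference, a `.playlist` file is just a small XML document listing fully qualified test names (the names below are placeholders); the approach described here of editing the file relies on the runner honoring the file's order:

```xml
<Playlist Version="1.0">
  <!-- Listed in the order observed on the build server -->
  <Add Test="MyTests.ClassB.SomeTest" />
  <Add Test="MyTests.ClassA.AnotherTest" />
</Playlist>
```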
The solution I arrived at was:
In DevOps -> Release Pipeline -> Tasks -> VS Test - Auto Test -> Visual Studio Test (the purple beaker icon), set "Batch tests" to "Based on test assemblies".
In Visual Studio, ensure that each test class is in its own project, then set the assembly name of that project to something unique and relevant to that test class (your solution may need many projects).
Remove and re-link your automated tests from Visual Studio to the DevOps test case(s); this is done one test at a time.
Then, and only then, will DevOps run the tests from class A followed by the tests from class B, and so on, sequentially.
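For pipelines defined in YAML rather than the classic editor, the same batching setting is the `distributionBatchType` input of the `VSTest@2` task (the assembly patterns below are placeholders):

```yaml
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\ClassA.Tests.dll
      **\ClassB.Tests.dll
    distributionBatchType: 'basedOnAssembly'  # one batch per test assembly
```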
Should end-to-end tests be run at build time (running the application on the build server) or after deployment? I have not yet found a solid answer on which is the standard.
Edit
I mean after deploying to QA/SIT/UAT etc., versus just running the application on a build server without fully deploying it.
The whole point of having a build server is to create a single build of the current source code, against which you run tests to make sure that things work before you deploy. I don't know why anyone would want to run tests after the code has been deployed. What happens if you find a bug? Are you going to roll back the deployment? Always test before deployment.
Ideally, you would have a build environment that mimics your production environment, allowing you to run tests in a "deployed" setting. It's the reason you have development, staging, and production servers.
I'm running NUnit tests externally via nunit3-console.
I'm not able to see any Console.WriteLine output.
I find it essential to be able to track every step of the test in real time.
I've read that the NUnit 3 framework introduced parallel test execution, and that this is why real-time test output was removed.
But what if I want the best of both worlds?
How can I produce console output during a test run?
Thanks in advance.
The NUnit 3 framework doesn't support live console output when running tests from nunit3-console. Because NUnit 3 was built around parallel test execution, the developers considered such output pointless when more than one test is running at a time.
I solved this by downgrading to NUnit 2.6.4, which doesn't support parallel testing and lets me write console output from my tests.
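As an alternative to downgrading, NUnit 3 does provide a writer that reaches the console immediately: `TestContext.Progress` flushes to the runner's output as the test executes, whereas `TestContext.WriteLine` is buffered until the test finishes. A minimal sketch:

```csharp
using NUnit.Framework;

[TestFixture]
public class ProgressOutputDemo
{
    [Test]
    public void LongRunningStep()
    {
        // Buffered: shown only after the test completes
        TestContext.WriteLine("buffered message");

        // Unbuffered: written to nunit3-console's output in real time,
        // even when other tests are running in parallel
        TestContext.Progress.WriteLine("real-time message");
    }
}
```

Note that with parallel execution enabled, interleaved progress lines from concurrent tests are expected.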
I have a Grails application that, when run on my local Windows machine, passes every test in my integration test suite. When I deploy the app to my Test environment in Jenkins and run the same suite, a few tests fail for no apparent reason.
I think the Test box is Linux, but I am not sure. I am using mocks in my Grails app and am wondering if they may be causing confusion in the values returned.
Does anyone have any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned document is supposed to be PRODUCT but comes back as product.
The element's value comes from an in-memory database that is populated from a DB script, and the same DB script is used both locally and in my Test environment.
The app does not read any config files that would differ between environments.
As others have stated, there really isn't enough information here to give a solid answer. A couple of things I would look at:
If it's integration tests that are failing, maybe you've got some "bad tests" that depend on certain data that does not exist in the test environment Jenkins runs against.
There is no guaranteed consistency of test execution order across machines/platforms, so it's entirely possible the tests pass for you locally only because they run in a certain order and leave mocks in place, or data set up by one test, that another test needs. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated it since Grails 1.3.7, so it may not work with Grails 2.0+ apps.
If the steps above don't identify the problem, knowing any differences between how you invoke the tests on Jenkins and locally would help. For example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and how it differs from the Grails environment used locally.
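To rule out invocation differences, it can help to run the same test phase locally with an explicitly named environment, mirroring whatever the Jenkins job does (a sketch; the custom environment name is an assumption):

```
# Run only the integration tests, in the default 'test' environment
grails test-app integration:

# Or force a specific environment to mirror the Jenkins job
grails -Dgrails.env=jenkins test-app integration:
```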
I'm trying to set up an automated build process together with some Coded UI tests. I think I've managed to get pretty much everything set up and working; the last missing piece of the puzzle is being able to run the Coded UI tests on the test agent machine.
So basically, I have a CI build that also runs unit tests and, if successful, deploys the binaries to a shared location. My goal is then to trigger the other process that runs the Coded UI tests. I got the Coded UI tests working on my dev computer by hard-coding the location to start the application from. However, I am at a loss as to how to configure this to work on the test agent. I used the LabDefaultTemplate11 build process template and configured it to use the latest build completed by the CI build. But how do I specify which executable the test agent should use?
At first I thought it was enough to specify the build definition and build configuration, but then I realized there might be multiple executables, so the test agent would have to guess. That doesn't sound good.
So in the end, I guess my question is: how do I (robustly) add the startup of the application to my Coded UI tests in a way that works both on my local dev machine and on the machine running the test agent?
Oh, and I'm using TFS 2012 (with VS 2012 Premium).
The lab template expects you to create test cases in MTM and then associate Coded UI tests with them in Visual Studio by opening the test case, selecting the Associated Automation tab, and clicking the "..." button. You need to have the project containing the Coded UI tests open at the time.
Then, in the lab build, you select one or more test suites (from MTM) that contain the test cases for those Coded UI tests.
When you create your tests in the first place, make sure you run your program/website in a way the test agent will also be able to, e.g. using a standard installation directory or domain.
It is best practice to open the program being tested at the start of every test and close it at the end. However, you could get around that by launching the program as part of the deploy instructions in the lab build.
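One way to make the per-test startup work both locally and on the agent is to resolve the executable's path from an environment variable that the deploy step sets, with a local fallback. A sketch (the variable name and paths are assumptions, not part of the lab template):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;

// APP_UNDER_TEST is set by the deploy step on the test agent;
// the fallback is the local dev build output.
string appPath = Environment.GetEnvironmentVariable("APP_UNDER_TEST")
                 ?? @"C:\src\MyApp\bin\Debug\MyApp.exe";

ApplicationUnderTest app = ApplicationUnderTest.Launch(appPath);
```

The same pattern works for a URL if the system under test is a website: read it from an environment variable or test settings rather than hard-coding it.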