Coded UI Tests in a lab environment

I'm trying to set up an automated build process together with some coded UI tests. I think I've managed to get pretty much everything up and working; the last missing piece of the puzzle is being able to run the coded UI tests on the test agent machine.
So basically, I have a CI build that also runs unit tests, and if successful, deploys the binaries to a shared location. My goal is to then trigger the other process that runs the coded UI tests. I got the coded UI tests working on my dev computer by hard-coding the location the application is started from. However, I am at a loss as to how to configure this to work on the test agent. I used the LabDefaultTemplate11 build process template and configured it to use the latest build completed by the CI build. But how do I specify what executable the test agent should use?
At first I thought it was enough to specify the build definition and build configuration, but then I realized there might be multiple executables, so the test agent would have to guess. Doesn't sound too good.
So in the end I guess my question is: how do I (robustly) add the startup of the application to my coded UI tests in a manner that works both on my local dev machine and on the machine running the test agent?
Oh, and I'm using TFS 2012 (with VS 2012 Premium).

The lab template expects you to create Test Cases in MTM and then associate coded UI tests with them in Visual Studio by opening the test case, selecting the Associated Automation tab, and clicking the "..." button. You need to have the project containing the coded UI tests open at the time.
Then, in the lab build, you select one or more Test Suites (from MTM) that contain the Test Cases for those coded UI tests.
When you create your tests in the first place, make sure you're running your program/website in a way that the test agent will also be able to, e.g. using a standard installation directory or domain.
It is best practice to open the program being tested at the start of every test and close it at the end. However, you could get around that by executing the program as part of the deploy instructions in the lab build.
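To make the launch itself robust across machines, here is a minimal sketch of one option: resolve the executable path from an environment variable, with a hard-coded fallback for the dev machine. The variable name and paths below are assumptions for illustration, not something the lab template provides.

```csharp
// Rough sketch: launch the application under test from a configurable location
// so the same Coded UI test works on a dev machine and on the test agent.
// APP_UNDER_TEST_PATH and the fallback path are illustrative assumptions.
using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class SmokeTests
{
    private ApplicationUnderTest app;

    [TestInitialize]
    public void LaunchApplication()
    {
        // Prefer a machine-specific override (e.g. set on the test agent),
        // otherwise fall back to the standard install location used locally.
        var exePath = Environment.GetEnvironmentVariable("APP_UNDER_TEST_PATH")
                      ?? @"C:\Program Files\Contoso\Client\Client.exe";

        Assert.IsTrue(File.Exists(exePath), "Application not found at " + exePath);

        // Launch the application under test; by default Coded UI closes it again
        // when playback for the test finishes (CloseOnPlaybackCleanup = true).
        app = ApplicationUnderTest.Launch(exePath);
    }

    [TestMethod]
    public void ApplicationStarts()
    {
        // ... UI map actions and assertions elided ...
    }
}
```

On the test agent, the variable (or a similar mechanism such as a test settings file) could be set by the lab deployment steps so the tests pick up the binaries dropped by the CI build.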

Running auto tests from different classes in DevOps does not work

I have a lot of auto tests. I can run them in any order from Visual Studio 2017 and they work. I'm using WebAppDriver, C#, and Selenium.
When we run the tests from DevOps, the tests in the first class work, but as soon as it starts running tests from another class, they don't.
My tests are meant to be independent and able to run in any order, but seemingly there is a problem.
I don't want to control the order that the tests run in; they should be independent. But I do want to know what has gone wrong.
Each Test Class has a ClassInitialize with a setup that launches the system under test and attaches my driver to it. There is also a Teardown that closes the system under test.
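For reference, the per-class structure looks roughly like the sketch below (MSTest attributes; SutLauncher and its methods are hypothetical stand-ins for whatever the real suite does to launch the system under test and attach the driver).

```csharp
// Hedged sketch of the described lifecycle: each test class launches the system
// under test in ClassInitialize and closes it in ClassCleanup. SutLauncher is a
// hypothetical helper, not part of the real code.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using OpenQA.Selenium;

[TestClass]
public class OrdersTests
{
    private static IWebDriver driver;

    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        SutLauncher.Launch();            // start the system under test
        driver = SutLauncher.Attach();   // attach the driver to the running app
    }

    [ClassCleanup]
    public static void ClassTeardown()
    {
        driver?.Quit();                  // release the driver session
        SutLauncher.Close();             // close the system under test
    }

    [TestMethod]
    public void CanPlaceOrder()
    {
        // ... assertions against the driver elided ...
    }
}
```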
My question is: how can I debug or find out what is going wrong when the tests are run in DevOps? What is the difference between running the tests from Visual Studio 2017, where they all work, and running them from DevOps, where they seem to stop working?
So far I have set my tests to run from VS2017 in the same order that DevOps happens to run them in, by creating a Playlist and editing the playlist file to set the order. It's not my intention to control test order, only to have VS2017 mimic the order DevOps runs the tests in. What I have found is that the tests run in the chosen order work from VS2017 but do not work in the same order when run from DevOps.
What is DevOps doing differently?
The solution I arrived at was:
In DevOps -> Release Pipeline -> Tasks -> VS Test - Auto Test -> Visual Studio Test (the purple beaker icon), set "Batch Tests = Based on Test Assemblies".
In Visual Studio, ensure that each Test Class is in its own project, and set the assembly name of that project to something unique and relevant to that test class (your solution may need many projects).
Remove and re-link your auto tests from Visual Studio to the DevOps test case(s); this is done one test at a time.
Then, and only then, will DevOps be able to run tests from Class A followed by tests from Class B, and so on, sequentially.

How to run e2e tests automatically?

I really don't know how to phrase this question for Google, so excuse me if it is naive.
Our team is developing an SPA application in ReactJS, and we also do the back-end programming in NodeJS. Our project recently gained more e2e tests, written using webdriver.io packages. Everything works as expected, but roughly 30 tests take about 50 minutes to run. That is too long to pause a developer's work and force them to run the tests.
We came up with the idea that, now that we have so many tests, we need to run them on a separate computer (other than a developer's laptop; I'll call it the e2e-laptop from here on).
So I installed Ubuntu on the e2e-laptop and wrote a bash script. My idea is that a developer who wants to run the e2e tests logs in to the e2e-laptop over ssh, runs the script with arguments (e.g. --rev= for the specific git revision the tests should run against, --email= for where to send the Allure report), and logs out. After the tests are done, the developer gets the Allure report in their mailbox.
This all sounds OK to me, but not great. It works; it is like a rough MVP. What I would really like to give my team is a browser-based UI that offers the features my script has. I imagine this software hosted on the e2e-laptop, so every developer can open its web page in a local browser. Then, after authorization, there are options: run all specs, run chosen specs, send a report, and more. Ideally, it could also allow tests commissioned by multiple developers to run simultaneously.
What software do I need?
You need a continuous integration tool. https://stackify.com/top-continuous-integration-tools/
I recommend Jenkins.
I would first try to run your selenium tests headless in a docker container on your laptop. Once you are able to do that, use that same configuration in your docker container running in Bitbucket pipelines. It could actually be the same container and the same scripts. Then, developers can just make a branch and work with the tests on that branch. If only a certain subset of tests need to run, then the developer can make the necessary changes on his or her local branch to run those tests and push it up to Bitbucket. This should help with the configuration https://github.com/SeleniumHQ/docker-selenium.
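The tests above are webdriver.io, but the core idea in any Selenium binding is simply pointing the driver at the containerized Selenium endpoint. A rough C# sketch, assuming the standard docker-selenium standalone image listening on port 4444 and an app URL of http://localhost:3000 (both are assumptions):

```csharp
// Rough sketch: run a check against headless Chrome inside a docker-selenium
// container instead of a local browser. The endpoint and site URL are assumptions.
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Remote;

class HeadlessSmokeTest
{
    static void Main()
    {
        var options = new ChromeOptions();
        options.AddArgument("--headless");               // no visible browser window
        options.AddArgument("--window-size=1920,1080");

        // Selenium server exposed by the docker-selenium container (assumed address).
        var gridUrl = new Uri("http://localhost:4444/wd/hub");

        using (IWebDriver driver = new RemoteWebDriver(gridUrl, options.ToCapabilities()))
        {
            driver.Navigate().GoToUrl("http://localhost:3000"); // the SPA under test (assumption)
            Console.WriteLine(driver.Title);
        }
    }
}
```

Because the endpoint is just a URL, the same container and scripts can be used on a laptop and inside Bitbucket Pipelines, which is the point of the suggestion above.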

Should UI tests be run on a build server or after deployment?

Should end-to-end tests be run at build time (running the application on the build server) or after deployment? I have not yet found a solid answer as to which one is the standard.
Edit
I mean running them after deploying to QA/SIT/UAT etc., vs. just running them on a build server without fully deploying.
The whole point of having a build server is to create a single build of the current source code, run tests against it, and make sure that things work before you deploy. I don't know why anyone would want to run tests after they have been deployed. What happens if you find a bug? Are you going to roll back the deployment? Always test before deployment.
Ideally, you would have a build environment that mimics your production environment, allowing you to run tests in a "deployed" environment. That's the reason you have development/staging/production servers.

Microsoft Access automated build hangs at creation of .ACCDE

I'm attempting to automate the build of a source-controlled MS Access application (it's only the front end; the back end is SQL Server). The Access client is published to the users through a simple C# console app via ClickOnce... It's in that console project that I'm also building the MS Access application, via custom MSBuild tasks from this CodePlex library: https://buildmsaccessdb.codeplex.com/ (which is also mentioned in another StackOverflow post on the subject). On my machine, it all works fine. The Access source code is compiled into an ACCDB, which is then converted into an ACCDE, which is what gets included in the published app.
However, when I make it an automated build in TFS, it always stalls at the step where it converts the ACCDB to an ACCDE. I've tried a variety of ways of executing the "Make ACCDE" (SysCmd 603) command: in PowerShell scripts, in VBA, etc., but it always seems to stall. Is that because the automated build process is not an interactive process, and the SysCmd 603 command perhaps needs to be run interactively? If I stop the build and take a look at the ACCDB, everything is good: it compiles and can be manually converted into an ACCDE, so it's not that the ACCDB isn't compilable.
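For illustration, the conversion step being automated boils down to something like the following sketch (late-bound COM automation of Access from C#; the paths are placeholders, and SysCmd 603 is an undocumented command):

```csharp
// Sketch only: drive Access via late-bound COM and invoke the undocumented
// "Make ACCDE" command (SysCmd 603). Paths are placeholders.
using System;

class MakeAccde
{
    static void Main()
    {
        var source = @"C:\Build\FrontEnd.accdb";   // compiled ACCDB produced by the build
        var target = @"C:\Build\FrontEnd.accde";   // ACCDE to include in the published app

        var accessType = Type.GetTypeFromProgID("Access.Application");
        dynamic access = Activator.CreateInstance(accessType);
        try
        {
            // SysCmd 603 converts the ACCDB to an ACCDE; it can raise dialogs,
            // which may be why it hangs when the build service is non-interactive.
            access.SysCmd(603, source, target);
        }
        finally
        {
            access.Quit();
        }
    }
}
```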
I'd like to test running the TFS build service interactively, but I don't control the service account it runs under.
Any tips or suggestions are welcome, and thanks in advance! We have this whole automated build and release process up and nearly working, except for this one piece!
I don't know much about the MSBuild task library, but from a quick look at the source it appears to open Access to run the tasks and to interact with a dialog box at one point. If that's the case, you'll definitely need to run the build in interactive mode.
The fact that your build is hanging and not erroring out would also indicate this is the case.
Even though you don't control the service account, I would presume there's someone else in your organisation that does. I'd suggest you work with them to try the build in interactive mode and ensure it works. If needed, you could always set up a second build machine that runs in interactive mode, with the current build server remaining in "run as a service" mode.

Grails: local tests pass, Test environment tests fail

I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux but I am not sure. I am using mocks in my Grails app and am wondering if that may be causing confusion in values returned.
Has anyone any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
The place where this element is set is from an in-memory database that is populated from a DB script. It is the same DB script that is used locally and on my Test environment.
The app does not read any config files that would be different in different environments.
As others have stated, there really isn't enough information here to give a solid answer. A couple of things that I would look at are:
If it's integration tests that are failing, maybe you've got some "bad tests" that depend on certain data that does not exist in the test environment Jenkins is running against.
There is no guaranteed consistency of test execution order across machines/platforms. So it's entirely possible that the tests pass for you locally just because they run in a certain order, leaving things mocked out or data set up by one test that is needed in another. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7, so it may not work with 2.0+ Grails apps.
If the steps above don't identify the problem, it would be helpful to know about any differences in how you are invoking the tests on Jenkins vs. locally. For example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and what the differences are between that and the Grails environment used locally.