simple_cov: how to include an engine's tests in the report? - ruby-on-rails-3

Our Rails application uses an engine and is built upon customer projects. The engine itself is not executable as a standalone program. Using SimpleCov works just fine for the customer projects. However, launching all tests (customer and engine) causes rake to abort after performing all of them and writing the coverage report for the customer project. Basically, everything works except writing the coverage report for the engine.
The engine is stored in an external folder next to the customer project. The testing suite used is RSpec.
So, is there a way to include the engine in the SimpleCov config?

Where are you requiring SimpleCov? I had a similar problem, not with an engine, but rather trying to cover all tests in a "non-standard" Rails application with a services architecture. Most of the files (even files in lib/ and models/) were not loaded. The way I solved that was to require SimpleCov in "config/environments/test.rb". Hope that helps!

Related

Grails: local tests pass, Test environment tests fail

I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux but I am not sure. I am using mocks in my Grails app and am wondering if that may be causing the discrepancy in the returned values.
Has anyone any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
The place where this element is set is from an in-memory database that is populated from a DB script. It is the same DB script that is used locally and on my Test environment.
The app does not read any config files that would be different in different environments.
As the others have stated, there really isn't enough information here to give a solid answer. A couple of things that I would look at are:
If it's integration tests that are failing maybe you've got some "bad tests" that are dependent on certain data that does not exist in your test environment that Jenkins is running against.
There is no guaranteed consistency for test execution order across machines/platforms. So it's entirely possible that the tests pass for you locally just because they run in a certain order and leave things mocked out or data setup from one test that is needed in another. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7 so it may not work with 2.0+ grails apps.
If the steps above don't identify the problem, knowing any differences in how you are invoking the tests on Jenkins vs. locally would be beneficial; for example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and what the differences are between that and the Grails environment used locally.

Coded UI Tests on a lab environment

I'm trying to set up an automated build process together with some coded UI tests. I think I've managed to get pretty much everything up and working; the last missing piece of the puzzle is being able to run the coded UI tests on the test agent machine.
So basically, I have a CI build that also runs unit tests, and if successful, deploys the binaries on a shared location. My goal is to then trigger the other process that runs the coded UI tests. I got the coded UI tests working on my dev computer by hard coding the location to start the application from. However, I am at a loss on how to configure this to work on the test agent. I used the LabDefaultTemplate11 build process template, and configured it to use the latest build completed by the CI build. But how do I specify what executable the test agent should use?
At first I thought it was enough to specify the build definition and build configuration, but then I realized there might be multiple executables, so the test agent would have to guess. Doesn't sound too good.
So in the end I guess my question is, how to (robustly) add the startup of the application to my coded UI tests in a manner that works both on my local dev machine, and the machine running the test agent?
Oh and I'm using TFS 2012 (with VS 2012 premium).
The lab template expects you to create Test Cases in MTM and then associate coded UI tests with them in Visual Studio by opening the test case, selecting the Associated Automation tab and clicking the "..." button. You need to have the project with the coded UI tests open at the time.
Then in the lab build you select one or more Test Suites (from MTM) that contain the Test Cases for those coded UI tests.
When you make your tests in the first place, make sure you're running your program/website in a way that the test agent will also be able to - e.g. use a standard installation directory or domain.
It is best practice to open the program being tested at the start of every test and close it at the end. However, you could get around that by executing the program as part of the deploy instructions in the lab build.
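If you do launch the application from the test itself, one way to make the startup work on both your dev machine and the test agent is to resolve the executable path at runtime rather than hard-coding it - for example from an environment variable that your lab/deployment step sets, with a fallback for local runs. A rough sketch, assuming a hypothetical APP_UNDER_TEST variable and made-up paths:

    using System;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class MyAppUITests
    {
        private ApplicationUnderTest app;

        [TestInitialize]
        public void LaunchApplication()
        {
            // Hypothetical variable set on the test agent (e.g. by the lab/deployment
            // scripts); falls back to the local dev output path for local runs.
            string exePath = Environment.GetEnvironmentVariable("APP_UNDER_TEST")
                             ?? @"C:\Dev\MyApp\bin\Debug\MyApp.exe";

            Assert.IsTrue(File.Exists(exePath), "Application not found at: " + exePath);

            app = ApplicationUnderTest.Launch(exePath);
            // Have the app closed automatically when UI playback is cleaned up
            // at the end of each test.
            app.CloseOnPlaybackCleanup = true;
        }
    }

The same idea works with a test settings property or deployment item instead of an environment variable; the point is that nothing machine-specific lives in the test code itself.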

How to test code locally when using a build server?

I've never worked on tremendously huge projects, and the workflow we use at work is check out / code / compile locally to test / commit. I was wondering how a build server would change this process. How do developers test their code when the application is too huge to compile locally? Do they just code, commit and pray?
Absolutely not.
The developer usually has a build file which can build the project for him or her, which has some "targets" defined which do the testing. If you have a really big project, you may have certain portions of it precompiled for you, so you don't have to build the whole thing in one big chunk. You usually do your testing locally before you commit to your repository. Breaking the build in big projects can mark you as an object of ridicule and scorn. Breaking the build in really important, really big projects can be career limiting... ;-)
The build SERVER itself doesn't change this. The build server only runs your build file and the targets you tell it to.
There are also build components (I've just started using TeamCity - no affiliation) that allow "personal builds".
I haven't used it yet as we haven't got it set up properly, but my understanding is that TeamCity allows running a build (and tests, if any are run on the server) with your changes before committing (and optionally the server will commit your changes if the build is successful). In TeamCity this is called a Pre-Tested Commit.

Need a .NET database versioning script runner

I'm looking at versioning databases and came across the usual articles regarding how to do this (Coding Horror, Ode to Code, etc). This all makes perfect sense to me; however, I'm trying to find a script runner that will run the SQL scripts for me. All these articles mention having something to run them automatically, but none of them make any recommendations.
Does anybody know of any utilities for running these scripts? Ideally something that works in the following way:
Runs everything in a transaction so if any single update fails, the whole thing fails
I have control over the name of the schema version database table
Ability to have a series of scripts that are always run if an upgrade takes place
Can be run as part of an automated task
EDIT
Open Source
We use DbUp as a script runner in our web project. It's a simple and nice open-source tool that helps you write your own script runner in a console-application fashion.
DbUp is a .NET library that helps you to deploy changes to SQL Server
databases. It tracks which SQL scripts have been run already, and runs
the change scripts that are needed to get your database up to date.
You can run scripts from a folder in the filesystem, or you can embed them in your assembly and run them as embedded scripts.
You can find more information and samples in their code repository on GitHub.
http://dbup.github.com
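To give an idea of what that looks like, here is a minimal console-runner sketch along the lines of the DbUp samples (the connection string is a placeholder, and the exact builder methods available depend on your DbUp version):

    using System;
    using System.Reflection;
    using DbUp;

    class Program
    {
        static int Main(string[] args)
        {
            // Placeholder connection string - supply your own (e.g. from args or config).
            var connectionString = "Server=(local);Database=MyApp;Trusted_Connection=True;";

            var upgrader = DeployChanges.To
                .SqlDatabase(connectionString)
                // Scripts embedded in this assembly; DbUp can also read them from a
                // folder on disk via WithScriptsFromFileSystem(path).
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
                .WithTransaction()   // run the whole upgrade inside a single transaction
                .LogToConsole()
                .Build();

            var result = upgrader.PerformUpgrade();

            if (!result.Successful)
            {
                Console.Error.WriteLine(result.Error);
                return -1;   // non-zero exit code so an automated build task sees the failure
            }
            return 0;
        }
    }

DbUp journals executed scripts in a table (SchemaVersions by default), and the journal table can be customised, which covers most of the requirements listed above.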
Check out SSW SQL Deploy - it would appear to do just about all you're asking for. It keeps track of already executed scripts, it'll run a whole batch of scripts at once and on multiple servers (if required), and so forth.
It's a pretty simple, but nifty tool - highly recommended!

How to validate deployment packages created by msbuild? (preferably using mstest or nunit)

Our MSBuild process creates a variety of zip packages for deployment (mostly web sites, but other things as well). We have a variety of recurring problems that keep sneaking back - files included that shouldn't be, missing resources. This screams for automated validation. The criteria to test for are simple:
Validation of foosite package:
Resource files are present.
No test result files, obj files, or other build artifacts
And so forth.
Ideally, I could use NUnit or MSTest, which everyone is familiar with. MSBuild knows where the packages are. We have a lot of packages, and possibly concurrent builds on different branches. Ergo, the locations and names of the packages are not deterministic - so the tests don't know where the packages are.
What is the simplest way to feed MSBuild information to MSTest or NUnit? The answer to this question would be one possible answer; however, that question got architectural advice instead of an answer. I know this isn't a unit test, but the test framework is handy anyway. I could create an exe to validate the build - but why add a couple of hours to the project?
Or, do you have a better suggestion for automatically validating build packages? (MSIs, zips, whatever)?
What I've ended up doing is having a bunch of custom MSBuild tasks which spin up a virtual machine on Virtual Server, copy the MSI onto the machine, silently deploy it and then validate against it. I used PsExec to start the MSI. It could then use the MSTest command-line runner to run your test bits.
This is probably overkill for you, but using a VM allows you to start clean and not be affected by any previous installs on your dev box.
If you want a fast fail, like a unit test, then I suggest you create unit tests against your packages. Such a test would unzip the .ZIP packages, and run some asserts against the contents.
You could even use some TDD techniques against the packages. For instance, if you have a deployment fail because a particular file is missing, then write a unit test that fails because the file is missing; change the build so that the file is present; then make sure the unit test succeeds.
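As a rough illustration of that kind of test, the sketch below opens a package and asserts on its contents. The environment variable name and the file patterns are invented for the example; the build would have to hand the package path to the tests somehow (an environment variable, a settings file, etc.). On .NET 4.5 the zip can be read with System.IO.Compression:

    using System;
    using System.IO.Compression;   // requires a reference to System.IO.Compression.FileSystem
    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class FoositePackageTests
    {
        // Hypothetical variable the build would set before invoking the test runner,
        // since the package location is not deterministic.
        private static readonly string PackagePath =
            Environment.GetEnvironmentVariable("FOOSITE_PACKAGE_PATH");

        [TestMethod]
        public void Package_contains_resources_and_no_build_artifacts()
        {
            using (var zip = ZipFile.OpenRead(PackagePath))
            {
                var entries = zip.Entries.Select(e => e.FullName).ToList();

                // Required resources are present (pattern is an example only).
                Assert.IsTrue(entries.Any(e => e.StartsWith("Resources/")),
                    "No resource files found in package");

                // No build artifacts sneaked in.
                Assert.IsFalse(entries.Any(e => e.EndsWith(".obj")
                    || e.EndsWith(".trx") || e.Contains("/obj/")),
                    "Package contains build artifacts");
            }
        }
    }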
But in general, deployment issues are broader than that, and I echo the suggestion from blowdart. Deploy into one or more virtual machines, then run automated tests over the deployed environments. These tests would not only check for simple things like whether an error was returned during the installation itself; they would also check things like whether the IIS virtual directories were set up correctly, with the correct properties and contents, and whether the web site basically runs.
I'd use several different virtual machines to test different deployment scenarios: one for a clean deploy; one for an upgrade from version N-1, etc. It's possible that the same, or similar, IVT tests could be run for each environment.
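For the "does the web site basically run" part, even a trivial HTTP smoke test run against the deployed VM adds value. A minimal sketch (the environment variable holding the deployed URL is an assumption for the example):

    using System;
    using System.Net;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DeploymentSmokeTests
    {
        [TestMethod]
        public void Home_page_responds_with_200()
        {
            // Hypothetical variable pointing at the freshly deployed environment.
            var baseUrl = Environment.GetEnvironmentVariable("DEPLOYED_SITE_URL")
                          ?? "http://localhost/";

            var request = (HttpWebRequest)WebRequest.Create(baseUrl);
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                // GetResponse throws for error status codes, which also fails the test.
                Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
            }
        }
    }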
Even if you can't do this all at once, the thought process involved in this exercise should lead to a more formal definition of what your deployment environment really is. This will be helpful when you get a chance to embody that formal definition in actual tests.