Running both 32-bit and 64-bit unit tests in TFS 2010 - MSBuild

My project has both 32-bit and 64-bit components, with both managed and unmanaged parts. I need to run unit tests for both configurations. I also have a separate set of test files to deploy for each configuration, so I've been using deployment items via .testrunconfig. I saw you can force tests to run in 32-bit, or in 64-bit if the machine is 64-bit. I suppose I could create two build definitions, one for 32-bit and one for 64-bit, but if possible I'd rather have just one.
So is there a way to accomplish this with one build definition? How do you conditionally set the deployment items based on the configuration?

Since you already have two different .testrunconfig files that specify the deployment items as well as whether tests should run in a 32-bit or 64-bit environment, you can add a second test run to your build by editing your build definition from Visual Studio, choosing the Process tab and selecting the little "..." button to edit your tests (assuming you're using the Default Template). This opens the Automated Tests dialog, where you can add your tests a second time and specify your second testrunconfig.
IIRC, if you're building multiple configurations/platforms in your Items to Build specification, this method will run all tests for all configurations, which may or may not be what you want. To run your x86 binaries in a 32-bit test environment and your x64 binaries in a 64-bit one, you will have to edit the build process template accordingly.
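For illustration, the two settings files might look roughly like the sketch below. This is an assumption-laden sketch, not a verbatim schema: the file names and deployment paths are made up, and in VS 2010 the .testrunconfig format lives on as .testsettings, with the 32/64-bit choice on the Hosts page of the settings editor.

    <!-- x86.testsettings: deploys the 32-bit test files and forces a 32-bit test host -->
    <TestSettings name="x86" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Deployment>
        <DeploymentItem filename="TestData\x86\" />
      </Deployment>
      <Execution hostProcessPlatform="x86" />
    </TestSettings>

    <!-- x64.testsettings: deploys the 64-bit test files; MSIL runs 64-bit on a 64-bit OS -->
    <TestSettings name="x64" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Deployment>
        <DeploymentItem filename="TestData\x64\" />
      </Deployment>
      <Execution hostProcessPlatform="MSIL" />
    </TestSettings>

Pointing each of the two test runs in the Automated Tests dialog at one of these files gives you per-platform deployment items from a single build definition.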

Cross-compiling vs virtualization

I want my app to run on Windows and Ubuntu, in both 32- and 64-bit modes, so I must compile four times and also test four times. The question is whether it's better to cross-compile or to compile in a virtual machine (VM) such as VirtualBox.
I know cross-compiling is hard the first time, but this way I can keep the VM used for testing "clean", with no development tooling that might mask files missing on the end user's PC. On the other hand, compiling directly in a VM is quite a bit simpler.
So I ask:
What are other pros/cons for each method?
Which is the right way?
Which is the most used way and why?
TL;DR: In this case, skip cross-compilation. Build and test on each target platform directly.
Details: If you need to ship your software on these 4 platforms, you will need either physical or virtual manifestations of them, regardless of whether you cross-compile or compile natively on the target platform.
Why? Because you will want to run tests on every target platform, not just one.
Why? Because your cross-compiler could have bugs on one platform but not another, and because 32-bit vs. 64-bit, as well as Linux vs. Windows, are sufficiently different environments. For example, if you only run tests on Ubuntu 32-bit but cross-compile to Windows 64-bit and ship the software, you might find a problem only once it reaches the customer.
Cross-compiling is hard to set up and hard to maintain. Given that you're going to want to test the code, the installation, etc. on every one of these platforms, you might as well skip the cross-compilation and just run builds and tests on each target platform directly.
Speaking of keeping VM state "clean": don't set the VM up manually; create it from scratch every time. Use tools like Packer and Vagrant to automate the builds, and use clean VMs each time to keep the process reproducible and hermetic.
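As a minimal sketch of that idea (the box name and package list are assumptions), a Vagrantfile like this recreates a clean 32-bit Ubuntu build VM on every "vagrant destroy -f && vagrant up" cycle:

    # Vagrantfile: a disposable 32-bit Ubuntu builder, rebuilt from scratch each run
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty32"   # assumed box; pick one per target platform
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y build-essential   # toolchain only, no stray dev clutter
      SHELL
    end

Running the build and tests inside such a VM and then destroying it guarantees that nothing from a previous build leaks into the next one.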

Coded UI Tests in a lab environment

I'm trying to set up an automated build process together with some coded UI tests. I think I've managed to get pretty much everything set up and working; the last missing piece of the puzzle is running the coded UI tests on the test agent machine.
So basically, I have a CI build that also runs unit tests and, if successful, deploys the binaries to a shared location. My goal is to then trigger another process that runs the coded UI tests. I got the coded UI tests working on my dev computer by hard-coding the location to start the application from. However, I am at a loss as to how to configure this to work on the test agent. I used the LabDefaultTemplate11 build process template and configured it to use the latest build completed by the CI build. But how do I specify which executable the test agent should use?
At first I thought it was enough to specify the build definition and build configuration, but then I realized there might be multiple executables, so the test agent would have to guess. That doesn't sound too good.
So in the end I guess my question is, how to (robustly) add the startup of the application to my coded UI tests in a manner that works both on my local dev machine, and the machine running the test agent?
Oh, and I'm using TFS 2012 (with VS 2012 Premium).
The lab template expects you to create Test Cases in MTM, then associate coded UI tests with them in Visual Studio by opening the test case, selecting the Associated Automation tab and clicking the "..." button. You need to have the project containing the coded UI tests open at the time.
Then, in the lab build, you select one or more Test Suites (from MTM) that contain the Test Cases for those coded UI tests.
When you create your tests in the first place, make sure you're running your program/website in a way that the test agent will also be able to - e.g. use a standard installation directory or domain.
It is best practice to open the program being tested at the start of every test and close it at the end. However, you could get around that by starting the program as part of the deploy instructions in the lab build.
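One way to make the startup location work on both your dev machine and the agent is to resolve the executable path at test initialization. This is a sketch, not the only option: the environment variable name and the fallback path are assumptions you would adapt to your deployment.

    using System;
    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class MyCodedUITests
    {
        private ApplicationUnderTest app;

        [TestInitialize]
        public void StartApp()
        {
            // Let the lab/agent environment override the location; fall back to the dev path.
            string path = Environment.GetEnvironmentVariable("APP_UNDER_TEST")
                          ?? @"C:\Program Files\MyApp\MyApp.exe";
            app = ApplicationUnderTest.Launch(path);
        }

        [TestCleanup]
        public void CloseApp()
        {
            // Close the application after every test, as recommended above.
            if (app != null) app.Close();
        }
    }

On the test agent you (or the deploy step) set APP_UNDER_TEST to wherever the binaries were deployed; locally you leave it unset and the dev path is used.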

Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. a build machine)?
The requirements for this are:
A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information not only about the build itself but also about which unit tests failed and which succeeded.
The summary must list the changesets that are in this build but were not yet in the previous successful (!) build.
The system must be configurable so that it can build from multiple branches (or repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure what specifically it means to "build your product", but you can set up everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover that you're waiting too long for builds to complete because a lot of builds are queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines that execute the actual build configurations.
It comes in two flavors, the Professional version and the Enterprise version. The Professional version is free and allows up to 20 build configurations, 20 users, and 3 build agents. The Enterprise version has unlimited users and build configurations, and you can also use LDAP-based security (think domain-verified users). There are also some other bonuses in the Enterprise version. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design, and there is a free, somewhat limited edition (e.g. only a limited number of issue tracking providers can integrate with it, the database change script packaging tool isn't included, etc.) for 5 users or fewer.
What you've described are the basics of a CI tool, so every CI tool should be OK.
I use CruiseControl.NET. It has bugs in its Mercurial support and is not very straightforward at first glance, but I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS) and TeamCity.
I have not tried other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
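For reference, a trimmed ccnet.config for this Mercurial + MSBuild + MSTest combination might look roughly like the sketch below. The repository URL, the local paths, and the exact property names of the hg block are assumptions; check them against the 1.6 documentation.

    <cruisecontrol xmlns:cb="urn:ccnet.config.builder">
      <project name="Product">
        <triggers>
          <!-- Poll the repository and only build when something changed -->
          <intervalTrigger seconds="300" />
        </triggers>
        <sourcecontrol type="hg">
          <repo>https://example.kilnhg.com/Code/Product</repo>
          <workingDirectory>C:\ccnet\Product\src</workingDirectory>
        </sourcecontrol>
        <tasks>
          <msbuild>
            <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
            <projectFile>Product.sln</projectFile>
            <buildArgs>/p:Configuration=Release</buildArgs>
          </msbuild>
          <exec>
            <!-- Run the MSTest unit tests the solution just produced -->
            <executable>C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe</executable>
            <buildArgs>/testcontainer:Tests\bin\Release\UnitTests.dll</buildArgs>
          </exec>
        </tasks>
        <publishers>
          <xmllogger />
        </publishers>
      </project>
    </cruisecontrol>

The xmllogger output feeds the web dashboard and the email publishers, which is where the success/failure summary and the list of modifications (changesets) since the last build come from.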
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, handles simple tasks like the ones you describe, and is remarkably flexible if you need it to be.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, amongst others.

PowerBuilder run

I'm using PowerBuilder to call an external function from a DLL created in C#.
If I generate an executable it works fine and calls the web service perfectly well, but when I try to run it in "development" mode it doesn't use the "application_name.exe.config" file.
I tried hard-coding the "app.config" settings in the DLL, but was unsuccessful.
Any clues to resolve this issue?
I think you described it yourself: you're looking for it to use something tied to the EXE while you're running from development mode. When you run from development mode, no EXE is generated or used, so Windows won't apply functionality linked to the EXE. (PB starts your application so quickly because it only loads the application into the virtual machine and runs its Open event.) If you need this, it sounds like you'll have to include deploying the EXE and running it as part of your testing cycle.
Good luck,
Terry.
When you compile and run from the EXE, you're using your EXE. But when you run from the dev environment, you're actually using pbxxx.exe (pb115.exe, pb110.exe, etc.). You may be able to copy "application_name.exe.config" into your PB directory and rename it something like pbxxx.exe.config. At least that's the way it works with manifest files - I had two, one called appname.exe.manifest and one called pb115.exe.manifest.
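For example, with a default PowerBuilder 11.5 install (both paths are assumptions):

    rem Let the IDE-hosted run pick up the same config the compiled EXE would use
    copy "application_name.exe.config" ^
         "C:\Program Files\Sybase\PowerBuilder 11.5\pb115.exe.config"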
Just curious, but how many libraries/objects are in your application?
I have some very large applications, and the longest any of them takes to do a full build is about 30 minutes. There's something odd about your application if it takes 2 hours to do a full build.
DLLs don't have config files; only EXEs do.

Local Build Automation?

Working in a team environment, we have a Team Foundation Server that also contains a Team Build component. It is configured to automatically build all projects and solutions at specific times or on request.
We develop a product that is built from several solutions that depend on each other. When something has changed in one solution, it has to be rebuilt locally, manually, in both Debug and Release mode so that the changes take effect in the solutions that depend on it.
Also, when a developer retrieves all sources for the first time, he has to build all solutions manually, in the correct order, to get a working environment.
What is the best way to automate things like this? Create .cmd files that trigger the correct msbuild files? Using a program such as CruiseControl.NET?
What do you people do to maintain a clean local development environment?
What I did for our team was to provide a Visual Studio solution that contains all projects. Then I created a simple .cmd file which uses the command-line tools of Visual Studio to build this solution in its respective debug/release/profile configurations. This is a one-step build solution that can be used from every engineering machine.
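A minimal version of such a one-step build script (the solution name and configuration list are assumptions) could look like this:

    @echo off
    rem One-step build: compile the all-projects solution in every configuration.
    call "%VS100COMNTOOLS%\vsvars32.bat"
    for %%C in (Debug Release Profile) do (
        msbuild AllProjects.sln /m /p:Configuration=%%C
        if errorlevel 1 exit /b 1
    )
    echo All configurations built.

Failing fast on the first broken configuration keeps the script usable both interactively and from a CI job.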
The next level is the continuous integration system, which is set up to check for changes every 15 minutes and start a build if there are changes in the VCS. I'm using Hudson as our CI system. The CI system builds the native projects, the Java projects, and the Flex stuff. Since everything can be built from the command line, it's pretty easy to use with Hudson or CruiseControl.NET.