Integration testing two Grails web applications at once?

I've got two Grails applications, A and B.
A makes explicit REST calls to B, and I'd like an automated way of knowing this works. In the traditional Grails integration-test model, only a single instance is brought up at a time. I'm using Jenkins as my build server, but it almost seems like I would need to deploy both systems and run tests locally, which I'm not sure Jenkins supports.
What's the best way to do full integration functional testing of A using B?

Use Jenkins to deploy your apps to a container after successful builds, and have an environment property or config setting in app A that points to the URL of app B.
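A minimal sketch of what that might look like in app A's grails-app/conf/Config.groovy; the property name serviceB.baseUrl and all URLs are made up for illustration:

```groovy
// grails-app/conf/Config.groovy in app A
// serviceB.baseUrl is a hypothetical property name; the URLs are placeholders.
environments {
    development {
        serviceB.baseUrl = "http://localhost:8081/appB"
    }
    test {
        // the functional-test deployment of app B that Jenkins keeps current
        serviceB.baseUrl = "http://test-server-b:8080/appB"
    }
    production {
        serviceB.baseUrl = "https://b.example.com/appB"
    }
}
```

App A can then read grailsApplication.config.serviceB.baseUrl wherever it builds its REST calls to B.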

This isn't unique to Grails or even Jenkins, but here's what you can do.
Set up a functional test server where you can deploy your applications. Create one Jenkins job to poll project A for SCM changes and deploy to server A, and one job to poll project B for SCM changes and deploy to server B.
Create a downstream build, C, which runs any functional tests against the combined system. Note that you probably don't want to bootstrap either database here, so don't use the integration test phase. This is where you would use something like Selenium. Make sure to block both upstream builds A and B on downstream builds, and block C on all upstream builds.
That way, no server gets re-deployed in the middle of a test-run.
As for integration testing, treat B like any other external web service, such as a database or an LDAP server. If you wanted a full integration test against a database, you'd just set up a server for your test run and run against it, right? Do the same here: using your B build or another build, create an integration test job which explicitly knows about the B server.
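For example, a functional test in A could hit one of B's endpoints through that configured URL. A minimal Groovy sketch, where the serviceB.baseUrl system property and the /api/status endpoint are assumptions:

```groovy
// Functional test in app A that exercises a live, deployed instance of app B.
// The CI job supplies B's location, e.g. -DserviceB.baseUrl=http://test-server-b:8080/appB
class ServiceBFunctionalTests extends GroovyTestCase {

    String baseUrl = System.getProperty('serviceB.baseUrl', 'http://localhost:8081/appB')

    void testServiceBRespondsToRestCalls() {
        // /api/status is a hypothetical endpoint; point this at one of B's real REST URLs
        def connection = new URL("${baseUrl}/api/status").openConnection()
        assert connection.responseCode == 200
    }
}
```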

Related

Which Environments Should Integration Test be Run In?

Given a development pipeline with playground, staging, and production environments, which environment is most appropriate for integration tests? What is the best practice around this?
My thinking is that it should be in the playground environment, to get the earliest results (i.e. shift left). However, I have also seen some examples of re-running integration tests for each environment.
Is there value in running integration tests multiple times, or does it make more sense to run them once in an appropriate environment?
There might not be a single standard best practice; it also depends on the application and the testing setup you have.
You can skip running tests in the production environment, as they will affect performance for your users; it is also not a good idea to put test data into your production environment. To check whether functionality works as it would in production, you can create an environment that mimics production.
Since different environments like QA and staging can have different configuration and different CPU/memory settings, it is a good idea to run the integration tests in multiple environments.
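If you do re-run the suite per environment, the CI server can do it mechanically. A minimal Jenkins pipeline sketch; the environment names and the run-integration-tests.sh script are assumptions:

```groovy
// Jenkinsfile sketch: run the same integration-test suite against several environments.
pipeline {
    agent any
    stages {
        stage('Integration tests') {
            matrix {
                axes {
                    axis {
                        name 'TARGET_ENV'
                        values 'playground', 'staging'
                    }
                }
                stages {
                    stage('Run suite') {
                        steps {
                            // hypothetical test runner that takes the target environment
                            sh './run-integration-tests.sh --env "$TARGET_ENV"'
                        }
                    }
                }
            }
        }
    }
}
```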

Deployment through Jenkins

I have two jobs in Jenkins: one for build and the other for deployment.
Once the build job is successful I create a build tag and publish it on GitHub.
Next I take that tag and deploy those artifacts using the Publish Over SSH plugin, selecting the option "Send files or execute commands over SSH" as my post-build step. I also add the already-configured server at this step.
Now my concern is that in some cases the server details (username/password) are not known well in advance.
Is there a feature in Jenkins which can ask me to enter the server name/username/password for deploying? Can I have a parameterized build with these three fields as inputs, so that when I click "Build Now" on the deployment job it asks for these fields?
The Publish Over SSH plugin is designed to use credentials previously set up and managed by Jenkins. This is necessary because Jenkins manages the distribution of credentials when you run builds on slave nodes.
An alternative solution you could consider is the Rundeck plugin. Rundeck is a general-purpose automation tool, similar to Jenkins but focused on run-time operations. The advantage is that you can use dedicated tools for build and deployment (useful when you have separate Dev and Ops teams), and Rundeck is better suited to managing large numbers of run-time servers.
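To the parameterized-build part of the question: yes, Jenkins can prompt for values when you click "Build Now". A minimal sketch; the parameter names, artifact path, and the sshpass/scp command are assumptions, and Jenkins-managed credentials remain the more secure option:

```groovy
// Jenkinsfile sketch of a parameterized deployment job.
pipeline {
    agent any
    parameters {
        string(name: 'SERVER_NAME', description: 'Target server to deploy to')
        string(name: 'SSH_USER', description: 'SSH username')
        password(name: 'SSH_PASSWORD', description: 'SSH password')
    }
    stages {
        stage('Deploy') {
            steps {
                // sshpass is assumed to be installed on the build agent;
                // prefer Jenkins-managed credentials where you can.
                sh '''
                    sshpass -p "$SSH_PASSWORD" \
                        scp build/artifact.war "$SSH_USER@$SERVER_NAME:/opt/app/"
                '''
            }
        }
    }
}
```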

Launch/Deploy the latest version of a WCF service before running unit tests in VS2012

I want VS to deploy the latest version of the WCF service every time I run my tests.
Currently I have to deploy manually, or run the WCF service, for the latest version to be deployed.
I'm looking for functionality similar to starting multiple projects when debugging.
To clarify the behaviour I am missing right now: I cannot set breakpoints inside the web service, as it is not running in debug mode.
@codespike Using standard unit testing in VS2012.
@codespike By "deploy" I mean "copy the latest version to local IIS, so it can respond to calls over the web."
@mayo Yes! That is a brilliant suggestion: bypass the encoding and decoding (web transport) phase and go straight to the classes that manipulate the data.
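The pattern mayo suggests is language-agnostic: instantiate the class that sits behind the service endpoint and test its logic directly, skipping serialization and HTTP entirely. A minimal Groovy sketch of the idea (the class and method are invented; in a WCF project the equivalent is constructing the service implementation class directly in an MSTest or NUnit test):

```groovy
// The class that actually manipulates the data, normally hosted behind
// the service endpoint. (Hypothetical example.)
class OrderProcessor {
    BigDecimal totalFor(List<BigDecimal> lineItems) {
        lineItems.sum() ?: BigDecimal.ZERO
    }
}

class OrderProcessorTests extends GroovyTestCase {
    void testTotalIsSumOfLineItems() {
        // No service host, no transport: exercise the logic directly.
        def processor = new OrderProcessor()
        assert processor.totalFor([10.0G, 2.5G]) == 12.5G
    }
}
```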

Two projects, one WCF service

Hey everyone.
OK, a little background to the project... I've released a program to a couple of clients that uses a WCF service to connect to our servers in the office. Since I used the ClickOnce setup utility in VS2010, when I am testing on a VM I publish to a different spot on the server so as not to give untested code to clients. However, the WCF service only gets published to one place for both versions (development and release).
What I'm working on now requires a change to the WCF service, as a couple of additional things get transferred between client and server. If I publish the modified WCF service, will it affect the current clients, or will I be able to test my development version with no worries? I'm afraid I already know the answer: yes, it will affect them.
thanks!
dave k.
Isolation of test, dev and production:
Whenever you need to test something or put it in production, you need a separate environment. So you need a separate machine to develop and do local testing, another to test the checked-in code (for use by a tester, a customer etc.), and another to run the production code -- at the very least.
If your service interacts with other software, especially with software that gets updated a lot, this is an important way to make sure that you don't introduce side-effects and that what you build will be compatible with what is running on your production server.
So: isolate and make your test environment a "clone" of your production environment.
Two versions in parallel:
If you update your own code for customer X, you can still host a previous version of your production code for customer Y on another (virtual?) server. Customer Y can then choose when to switch to your new version, after which you can take the old code out of production.
You should create a separate VM for your test environment.

Publish a web application on build with NAnt, MSBuild, or any other tool

I have a scenario where I have to set up a test environment, and I want to be able to tell NAnt or another build tool to create a new IIS web application, put the latest binaries in it, and send me an email with the address and port where the new application can be reached. Is this possible, and how? Which tool?
There are several ways to approach this:
Set up a continuous integration (CI) server on the test environment. This is a viable option if your test environment machine doesn't change often and it's a single machine.
Push the installation from your development machine using tools like PsExec
Combination of the two: you have a build CI server which pushes the installation to (multiple) test environments.
Of course, you also need a good build script which will set up the IIS application (NAnt offers tasks for this). Emailing you can be done by the CI server (CruiseControl.NET's Email Publisher, Hudson...).
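To make that concrete, a minimal NAnt sketch is below; all paths, names, and hosts are made up, and <mkiisdir> comes from the NAntContrib task library while <copy> and <mail> are core NAnt tasks:

```xml
<project name="deploy-test-site" default="deploy">
  <target name="deploy">
    <!-- copy the latest binaries into the web application directory -->
    <copy todir="C:\inetpub\wwwroot\TestApp\bin">
      <fileset basedir="build\output">
        <include name="**/*.dll" />
      </fileset>
    </copy>
    <!-- create (or update) the IIS virtual directory (NAntContrib task) -->
    <mkiisdir dirpath="C:\inetpub\wwwroot\TestApp" vdirname="TestApp" />
    <!-- mail the new address; the CI server's publishers can also do this -->
    <mail from="build@example.com" tolist="dev@example.com"
          mailhost="smtp.example.com"
          subject="Test site deployed"
          message="New build deployed to http://testserver/TestApp" />
  </target>
</project>
```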
I suggest taking some time to read this excellent article series: Automation for the people: Deployment-automation patterns
Our CruiseControl.NET build server does exactly this as part of its NAnt build-script process...
Once the code is retrieved from source control, it's all built/compiled in turn. Web projects are then handled slightly differently to normal .dlls, as they are deployed to a particular folder (either on the current machine or otherwise) from which IIS (also set up by the script) serves the pages.
Admittedly, we're using Virtual Directories instead of creating and disposing of new website instances on the server, as otherwise we'd have to manage the port numbers for each website.
NAnt is capable of doing all of this IIS work, as well as all of the email work too - I'd certainly recommend looking at this avenue of enquiry to solve your problem. Plus, you also get the continuous integration aspect as a side-benefit in your case!