Unit testing an SOA WCF system: finding it difficult to get decent coverage

We are currently replacing a 20-year-old C-based system with a modern SOA WCF system built on .NET 3.5. Our industry requires rigorous testing, including good automated unit test coverage. We are, however, having trouble unit testing our SOA system to anywhere near the extent that the C-based system was unit tested.
The single biggest problem is that most of the methods in the system are actually dependent on calling into code across service boundaries. For example, we are heavily data driven, but we don't access the database directly within our system: we call into a WCF Data Access Service.
Running any unit tests in Visual Studio is almost impossible, as doing almost anything results in cross-service calls of some kind. If it's not data access, it's one of the other services. I reckon we can get about 5% coverage.
I see a lot of people struggle with testing SOA, so I take it this is not unique to us. The thing is, QA is going to question why we are not unit testing more of the system.
To be honest, I see VSTS unit testing as more of a regression-testing tool than a validation (fit-for-use) tool. What options are there for unit testing SOA? Is it realistic, in people's experience, to achieve good coverage? Is there a way to mock a Data Access Service (or any service: note we don't use WCF proxies), or do we have to explain to QA that unit testing ability has gone backwards over the last 20 years...
Any suggestions are welcome; I guess this is a general opinion question.

I'd have to say that unit-testing an SOA is a lot like unit-testing anything else. If anything it should be easier because it forces you to isolate dependencies.
Unit tests shouldn't generally cross service boundaries. Really, the idea of a service is that it is completely independent and opaque. You test the service directly - not over the WCF channel, but by unit-testing the actual classes that compose the service. Once you've tested the service itself (for which you should be able to get near 100% coverage), you don't need to involve it in client-side tests.
For client-side unit tests, you mock the service. WCF actually makes this very easy for you because every WCF client implements an interface; if you normally use the FooService or FooServiceSoapClient in your code, change it to use the corresponding IFooService or FooServiceSoap that the proxy class implements. Then you can write your own MockFooService that implements the same interface and use that in your client tests.
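As a minimal sketch of that idea (IFooService here stands in for the generated contract; GetFoo, MockFooService, and FooConsumer are illustrative names, not anything WCF generates for you):

```csharp
// Illustrative contract standing in for the generated IFooService /
// FooServiceSoap interface that the WCF proxy class implements.
public interface IFooService
{
    string GetFoo(int id);
}

// Hand-rolled mock used only in client-side unit tests; it returns a
// canned response instead of crossing the service boundary.
public class MockFooService : IFooService
{
    public string GetFoo(int id)
    {
        return "foo-" + id;
    }
}

// Client code depends on the interface, so a test can hand it the mock
// where production code hands it the real proxy.
public class FooConsumer
{
    private readonly IFooService _service;

    public FooConsumer(IFooService service)
    {
        _service = service;
    }

    public string Describe(int id)
    {
        return "Got: " + _service.GetFoo(id);
    }
}
```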
Part of what sometimes trips people up here is the idea that the service can do wildly different things depending on the specific message. However, a client test that involves the service interface should generally only be testing one or two specific messages per test, so it's easy to mock the exact request/response you need for a given client test using a tool like Rhino Mocks.
Duplex gets a little tricky but keep in mind that duplex services are supposed to be based around interfaces as well; even the MSDN example introduces an ICalculatorDuplexCallback. Callbacks will be interfaces, so just like you can mock the service methods on the client side, you can mock the client callbacks on the service side. Just have mock/fake callbacks that you use for the service unit tests.
Mark Seemann has a pretty good blog post all about Unit Testing Duplex WCF Clients, with example code and all. You should give that a read; I think it will help you out here.
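Service-side, the same trick works for the callbacks. A sketch (the contract below only mimics the MSDN sample's ICalculatorDuplexCallback; the member shown is illustrative):

```csharp
// Illustrative duplex callback contract, in the style of the MSDN sample.
public interface ICalculatorDuplexCallback
{
    void Result(double result);
}

// Fake callback that records what the service pushed to it, so a
// service-side unit test can assert on it without any WCF plumbing.
public class FakeCalculatorCallback : ICalculatorDuplexCallback
{
    public double? LastResult { get; private set; }

    public void Result(double result)
    {
        LastResult = result;
    }
}
```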

It sounds like the testing you are doing now is integration or system testing, where the test methods call external sources. To truly unit test a service, you will need to abstract the external calls out so you can mock (for example, with Moq) or stub the calls to that service.
For example, here is code that is tightly coupled to the Data Access Service:
public class SomeSOA
{
    public bool DoSomeDataAccess()
    {
        // Call the real data service.
        DataService dataService = new DataService();
        int getANumber = dataService.GetANumber();
        return getANumber == 2;
    }
}
Here it is slightly refactored to reduce coupling to the DataService:
public class SomeSOA
{
    private readonly IDataService _dataService;

    public SomeSOA(IDataService dataService)
    {
        _dataService = dataService;
    }

    public SomeSOA() : this(new DataServiceWrapper()) { }

    public bool DoSomeDataAccess()
    {
        int getANumber = _dataService.GetANumber();
        return getANumber == 2;
    }
}

public class DataServiceWrapper : IDataService
{
    public int GetANumber()
    {
        // Call the real data service here.
        throw new NotImplementedException();
    }
}
Now with the re-factored code you can modify your tests to use a stub that can return expected results without calling the real DataService.
[TestMethod]
public void GetANumber_WithAValidReturnedNumber_ReturnsTrue()
{
    IDataService dataService = new DataServiceFake();
    SomeSOA someSoa = new SomeSOA(dataService);
    Assert.IsTrue(someSoa.DoSomeDataAccess());
}
public class DataServiceFake : IDataService
{
    public int GetANumber()
    {
        // Return a fake result instead of calling the real data service.
        return 2;
    }
}
Granted, all of this is just pseudocode, but decoupling your service from the real implementation of the DataAccessService will allow you to unit test your code without relying on a real DataAccessService for the tests to perform correctly.
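For comparison, the hand-written DataServiceFake above could be replaced with Moq (a sketch; assumes the Moq package and the IDataService interface from the refactored code):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestMethod]
public void DoSomeDataAccess_WhenServiceReturnsTwo_ReturnsTrue()
{
    // Arrange: a Moq mock stands in for the real Data Access Service.
    var dataService = new Mock<IDataService>();
    dataService.Setup(ds => ds.GetANumber()).Returns(2);

    var someSoa = new SomeSOA(dataService.Object);

    // Act / Assert: no cross-service call is ever made.
    Assert.IsTrue(someSoa.DoSomeDataAccess());
}
```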
Hope this helps and makes sense!

Here's my suggestion: step away from the "Unit Testing" paradigm, and work on a suite of integration tests.
I assume that your system has a front end which calls the services.
Write a test suite that actually connects to running services (in your test environment obviously) and makes the same sequence of calls as your front end does.
Your test environment would have the latest services running against an empty test database. Just like with unit tests, each test would make the calls that populate the test data with just what it needs, invoke the functionality, verify that the visible information now matches what it should, then clear the database again.
You may have to create one additional service that supports the integration tests by clearing the database on request. (Obviously, you wouldn't actually deploy that one...)
While they are not "Unit tests", you would be getting full coverage.
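One such test might look like this (a sketch only: IOrderService, ITestControlService, and the operations are illustrative stand-ins for your own contracts, and the proxies would be created in test setup):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class OrderIntegrationTests
{
    // Proxies to the services running in the test environment; created
    // in test setup (omitted). All names here are illustrative.
    private IOrderService orderService;
    private ITestControlService testControl;

    [TestMethod]
    public void PlacingAnOrder_MakesItVisibleToTheFrontEnd()
    {
        testControl.ResetDatabase();   // start from an empty test database

        // Make the same calls the front end would make.
        orderService.PlaceOrder(1, "ABC");

        // Verify the visible information matches what it should be.
        var orders = orderService.GetOrders(1);
        Assert.AreEqual(1, orders.Count);

        testControl.ResetDatabase();   // leave the environment clean
    }
}
```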


Unit testing, IoC, DI and mocking objects within a WCF service

I have a WCF service (EXTWCF) that consumes another WCF service (INTWCF). EXTWCF exposes 5 operations, will be hosted on an external app server (in a DMZ), and implements message and transport level security.
INTWCF will be hosted on an internal app server, does not implement any security, hosts two individual services with approx 30 operations - a number of which are called by the operations on EXTWCF (along with various other domain level applications), depending on various parameters passed in (EXTWCF contains simple logic to determine which operations on INTWCF should be called).
EXTWCF consumes INTWCF using IoC and DI.
Using TDD, I would like to write my initial unit tests for the operations exposed on EXTWCF. I would therefore like to mock INTWCF using Moq. I would have thought that I should mock INTWCF and inject it in the unit testing project, but I've read (in quite a few places) that IoC and DI should not be used during unit testing because of the additional testing dependencies they introduce.
Am I being fed incorrect information, or is there another way to approach this problem? Is mocking appropriate for this situation? Seeing as though my unit tests will be accessing the operations on EXTWCF, they will not know about INTWCF. This seems to me to be a perfect case for DI?
I'm using Ninject for IoC and DI; if DI is the answer, does Ninject provide a bootstrapper or plugins for unit testing? I'm not familiar with any, and I don't see anything on their web page.
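To make the approach concrete, one way it could look (a sketch: IIntService, ExtService, and the operation names are illustrative stand-ins for the INTWCF contract and the EXTWCF implementation; note that no Ninject kernel or running host is involved in the test):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

[TestMethod]
public void ExtOperation_DelegatesToTheInternalService()
{
    // INTWCF is represented by its service contract interface.
    var intService = new Mock<IIntService>();
    intService.Setup(s => s.DoInternalWork("x")).Returns("done");

    // Construct the EXTWCF implementation class directly and inject the
    // mock through the constructor - the container isn't needed here.
    var extService = new ExtService(intService.Object);

    Assert.AreEqual("done", extService.DoExternalWork("x"));
    intService.Verify(s => s.DoInternalWork("x"), Times.Once());
}
```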

How do you inject wcf client dependencies in a ViewModel and keep it testable?

TL;DR:
What is a good and testable way to implement the dependency between the ViewModels and the WCF services in a MVVM client?
Please read the rest of the question for more details about the problems I encountered while trying to do this:
I am working on a Silverlight client that connects to a WCF service, and I want to write unit tests for the client.
So I'm looking for a good solution for using the WCF clients in my ViewModels and testing that interaction. I have found two solutions so far:
Solution 1: This is actually how I have implemented it until now:
public class ViewModelExample
{
    public ViewModelExample(IServiceClient client)
    {
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}

// This is what the interface looks like:
public interface IServiceClient
{
    event EventHandler<AsyncCompletedEventArgs> DoWorkCompleted;
    void DoWorkAsync();
}

// I was able to put the interface on the generated clients because
// they are partial classes, like this:
public partial class GeneratedServiceClient : IServiceClient
{
}
The good part: it's relatively easy to mock.
The bad part: my service client lives as long as my ViewModel, and when I have concurrent requests I don't know which answer belongs to which request.
Solution 2: Inspired by this answer
WCF Service Client Lifetime.
public class ViewModelExample
{
    public ViewModelExample(IServiceFactory factory)
    {
        var client = factory.CreateClient();
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}
The good part: each request is on a different client, so no more problems with matching requests with answers.
The bad part: it's more difficult to test. I would have to write mocks for both the factory and the WCF client every time. This is not something I would like to do, since I already have 200 tests... :(
So my question is, how do you guys do it? How do your ViewModels talk to the wcf services, where do you inject the dependency, and how do you test that interaction?
I feel that I'm missing something..
Try having a Func<IServiceClient> injected into your VM instead of a client instance; you'll have a 'language-level factory' injected instead of building a class for this. In the factory method you can instantiate your client however you want (each access could create a new instance, for example).
The downside is that you'll still have to touch your tests for the most part, but I assume it will be less work:
public ViewModelExample(Func<IServiceClient> factoryMethod)
{
    var client = factoryMethod();
    client.DoWorkCompleted += ...
    client.DoWorkAsync();
}
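A test against that constructor can then pass a lambda returning a hand-rolled fake (a sketch; FakeServiceClient and its DoWorkAsyncWasCalled flag are illustrative, while IServiceClient is the interface from the question):

```csharp
using System;
using System.ComponentModel;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class FakeServiceClient : IServiceClient
{
    public bool DoWorkAsyncWasCalled { get; private set; }

    public event EventHandler<AsyncCompletedEventArgs> DoWorkCompleted;

    public void DoWorkAsync()
    {
        DoWorkAsyncWasCalled = true;
    }
}

[TestMethod]
public void Constructor_StartsTheAsyncWork()
{
    var fake = new FakeServiceClient();

    // The 'factory' is just a lambda handing back the fake.
    var vm = new ViewModelExample(() => fake);

    Assert.IsTrue(fake.DoWorkAsyncWasCalled);
}
```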
The WCF service should have its own tests that confirm its functionality.
You should then be mocking this WCF service and writing unit tests within your consumers.
Unfortunately, it's a pain and something we all have to do. Be pragmatic and get it done, it will save you getting bitten in the future.
Are you using an IoC container by any chance? If you were, this problem would be totally mitigated by the container (you'd simply register the IService dependency to be created brand new upon each request).
If that's not the case, then
I would have to write mocks for both the factory and the wcf client every time
is how you deal with this kind of problem. The cost is relatively small, probably 2-3 extra lines of code per test (all you have to do is set up the factory mock to return the service mock, which you need to set up either way).
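Those extra lines per test amount to something like this (a sketch using Moq, with IServiceFactory and IServiceClient as in the question):

```csharp
using Moq;

// The factory mock simply hands back the service mock,
// which you need to set up either way.
var client = new Mock<IServiceClient>();
var factory = new Mock<IServiceFactory>();
factory.Setup(f => f.CreateClient()).Returns(client.Object);

var vm = new ViewModelExample(factory.Object);
```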

Using MEF in Service layer (WCF)

So far I have found that MEF goes well with the presentation layer, with the following benefits:
a. DI (Dependency Injection)
b. Third party extensibility (Note that all parties involved should use MEF or need wrappers)
c. Auto discovery of Parts (Extensions)
d. MEF allows tagging extensions with additional metadata which facilitates rich querying and filtering
e. Can be used to resolve Versioning issues together with “DLR and c# dynamic references” or “type embedding”
Please correct me if I'm wrong.
I'm researching whether to use MEF in the service layer with WCF. Please share your experience using the two together and how MEF is helping you.
Thanks,
Nils
Update
Here is the result of my research so far. Thanks to Matthew for helping with it.
MEF for the core services - the cost of the changes does not justify the benefits. Also, this is a big decision that may affect the service layer in a good or bad way, so it needs a lot of study. MEF v2 (waiting for a stable version) might be better in this case, but I'm a little worried about using MEF v1 here.
MEF for the functions the service performs - MEF might add value, but it's very specific to the service's function. We need to go deep into the service's requirements to make that decision.
Study is an ongoing process, so please keep sharing your thoughts and experience.
I think any situation that would benefit from separation of concerns would benefit from IoC. The problem you face here is how you require MEF to be used within your service. Would it be for the core service itself, or some function the service performs?
As an example, if you want to inject services into your WCF services, you could use something similar to the MEF for WCF example on CodePlex. I haven't looked too much into it, but essentially it wraps the service location via an IInstanceProvider, allowing you to customise how your service type is created. Not sure if it supports constructor injection (which would be my preference) though...?
If the WCF service component isn't where you want to use MEF, you can still take advantage of MEF for creating subsets of components used by the service. Recently for the company I work for, we've been rebuilding our Quotation process, and I've built a flexible workflow calculation model, whereby the workflow units are MEF composed parts which can be plugged in where needed. The important part here would be managing how your CompositionContainer is used in relation to the lifetime of your WCF service (e.g. Singleton behaviour, etc.). This is quite important if you decide to create a new container each time (container creation is quite cheap, whereas catalog creation can be expensive).
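The catalog/container split mentioned above can be sketched like this (standard MEF types from System.ComponentModel.Composition; the directory path is illustrative):

```csharp
using System.ComponentModel.Composition.Hosting;

public static class CompositionRoot
{
    // Catalog creation is the expensive part, so build it once and share it.
    private static readonly AggregateCatalog Catalog =
        new AggregateCatalog(new DirectoryCatalog("."));

    // Container creation is cheap; creating one per WCF call keeps
    // per-call state isolated while reusing the shared catalog.
    public static CompositionContainer CreateContainer()
    {
        return new CompositionContainer(Catalog);
    }
}
```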
Hope that helps.
I'm working on a solution where the MEF parts that I want to use across WCF calls are stored in a singleton at the application level. This is all hosted in IIS. The services are decorated to be compatible with asp.net.
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
In Global.asax, I import the parts.
[ImportMany(typeof(IOption))]
public IEnumerable<IOption> AvailableOptions{ get; set; }
After initializing the catalog and container, I copy the imported objects to my singleton class.
container.ComposeParts(this);

foreach (var option in AvailableOptions)
    OptionRegistry.AddOption(option);
EDIT:
My registry class:
public static class OptionRegistry
{
    private static List<IOption> _availableOptions = new List<IOption>();

    public static void AddOption(IOption option)
    {
        if (!_availableOptions.Contains(option))
            _availableOptions.Add(option);
    }

    public static List<IOption> GetOptions()
    {
        return _availableOptions;
    }
}
This works, but I want to make it thread-safe, so I'll post that version once it's done.
Thread-safe Registry:
public sealed class OptionRegistry
{
    static readonly OptionRegistry _instance = new OptionRegistry();

    private readonly List<IOptionDescription> _availableOptions;

    public static OptionRegistry Instance
    {
        get { return _instance; }
    }

    private OptionRegistry()
    {
        _availableOptions = new List<IOptionDescription>();
    }

    public void AddOption(IOptionDescription option)
    {
        lock (_availableOptions)
        {
            if (!_availableOptions.Contains(option))
                _availableOptions.Add(option);
        }
    }

    public List<IOptionDescription> GetOptions()
    {
        // Return a snapshot under the same lock so readers never see the
        // list while another thread is mutating it.
        lock (_availableOptions)
        {
            return new List<IOptionDescription>(_availableOptions);
        }
    }
}
A little while ago I was wondering how I could create a WCF web service that would get all of its dependencies wired up by MEF, without me needing to write a single line of that wire-up code inside my service class.
I also wanted it to be completely configuration based, so I could just take my generic solution to the next project without having to make code changes.
Another requirement I had was that I should be able to unit test the service and mock out its different dependencies in an easy way.
I came up with a solution that I've blogged about here: Unit Testing, WCF and MEF
Hopefully it will help people trying to do the same thing.

Need some advice for a web service API?

My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we maintain the "single entry point" into the system by offering an object that contains commands and having one web service take care of everything. So, the web service would have one method in it, let's call it MakeRequest, and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit the base commands. What is passed back from the service is not clear right now, but it could be that "message object" with an object attached to it representing the data. I don't know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanative and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}

Inter-service Communication Architecture Using WCF, Dependency Injection, and Unit Testing

I'm new to WCF and, in large part, also to distributed programming. I am working on a project that requires 4 discrete services.
I am trying to correctly define the responsibilities for each component of each service. Suppose I have service B which needs to communicate with service A. For each service, I have defined the service implementation class, the service host, and a proxy class.
In order to unit test, I am using dependency injection - since service B needs to communicate with service A, I have passed an instance of A's proxy class as a constructor argument to service B.
When I am unit testing service B, I must have A's service host up and running.
Is this the wrong way of going about dependency injection? If so, why, and how do you recommend I do it?
Is there a better way of going about dependency injection?
Should I have to run the service host to get the right results in the unit test?
Consider using:

- ChannelFactory instead of generated clients:

ChannelFactory<IHello> clientFactory = new ChannelFactory<IHello>("targetConfiguration");
IHello client = clientFactory.CreateChannel();
string result = client.SayHello();

- Interface types wherever possible
- One of the mock object frameworks (Moq, for example) to inject interface implementations when writing your tests.
Regarding your third question, the answer is "no" if your aim is testing particular small units (the whole point of unit testing :). But it's always better to also write some integration tests to make sure you don't have any serialization or hosting problems.
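For the unit-testing half of that advice, a hand-rolled fake of the channel interface is enough (a sketch; IHello is the contract from the snippet above, while FakeHello, ServiceB, and Greet are illustrative):

```csharp
// The same contract the ChannelFactory uses in production.
public interface IHello
{
    string SayHello();
}

// Fake used in unit tests, so no service host needs to be running.
public class FakeHello : IHello
{
    public string SayHello()
    {
        return "Hello from the fake";
    }
}

// Service B takes its dependency on service A as an interface, so a
// test injects FakeHello where production code injects the channel.
public class ServiceB
{
    private readonly IHello _serviceA;

    public ServiceB(IHello serviceA)
    {
        _serviceA = serviceA;
    }

    public string Greet()
    {
        return _serviceA.SayHello() + "!";
    }
}
```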