So far I have found that MEF works well in the presentation layer, with the following benefits:
a. DI (Dependency Injection)
b. Third-party extensibility (note that all parties involved must either use MEF or need wrappers)
c. Auto-discovery of parts (extensions)
d. MEF allows tagging extensions with additional metadata, which facilitates rich querying and filtering (see the sketch after this list)
e. Can be used to resolve versioning issues together with “DLR and C# dynamic references” or “type embedding”
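As a quick sketch of point (d), here are hypothetical IPlugin parts tagged with export metadata that consumers can filter without instantiating non-matching parts (the interface and metadata key are illustrative, not from any real project):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.Linq;

public interface IPlugin { void Run(); }

[Export(typeof(IPlugin))]
[ExportMetadata("Version", "2.0")]
public class ReportPlugin : IPlugin
{
    public void Run() { }
}

public class PluginHost
{
    // Lazy<T, TMetadata> exposes export metadata without creating the part.
    [ImportMany]
    public IEnumerable<Lazy<IPlugin, IDictionary<string, object>>> Plugins { get; set; }

    public IEnumerable<IPlugin> GetV2Plugins()
    {
        // Only matching parts are ever instantiated (via .Value).
        return Plugins
            .Where(p => p.Metadata.ContainsKey("Version") && Equals(p.Metadata["Version"], "2.0"))
            .Select(p => p.Value);
    }
}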
Please correct me if I'm wrong.
I'm researching whether to use MEF in the service layer with WCF. Please share your experience using the two together and how MEF has helped you.
Thanks,
Nils
Update
Here are the results of my research so far. Thanks to Matthew for helping with it.
MEF for the core services - the cost of the changes does not justify the benefits. This is also a big decision that may affect the service layer for better or worse, so it needs a lot of study. MEF V2 (waiting for a stable version) might be a better fit here, but I'm a little worried about using MEF V1 for this.
MEF for the functions a service performs - MEF might add value, but it's very specific to the service function. We need to go deep into the service's requirements to make that decision.
The study is an ongoing process, so please keep sharing your thoughts and experience.
I think any situation that would benefit from separation of concerns would benefit from IoC. The problem you face here is how you want MEF to be used within your service: would it be for the core service itself, or for some function the service performs?
As an example, if you want to inject services into your WCF services, you could use something similar to the MEF for WCF example on CodePlex. I haven't looked too much into it, but essentially it wraps service location via an IInstanceProvider, allowing you to customise how your service type is created. I'm not sure whether it supports constructor injection (which would be my preference), though.
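For reference, here is a minimal sketch of what such an IInstanceProvider hook might look like. This is my guess at the shape of the CodePlex sample, not its actual code, and the class name is made up; note it does property injection via ComposeParts, which fits the uncertainty about constructor injection:

using System;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class MefInstanceProvider : IInstanceProvider
{
    private readonly CompositionContainer _container;
    private readonly Type _serviceType;

    public MefInstanceProvider(CompositionContainer container, Type serviceType)
    {
        _container = container;
        _serviceType = serviceType;
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        // Create the service and satisfy its [Import] members from the container.
        // Note: this is property injection, not constructor injection.
        var instance = Activator.CreateInstance(_serviceType);
        _container.ComposeParts(instance);
        return instance;
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // A fuller implementation might release non-shared MEF parts here.
    }
}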
If the WCF service component isn't where you want to use MEF, you can still take advantage of MEF for creating subsets of components used by the service. Recently, for the company I work for, we've been rebuilding our quotation process, and I've built a flexible workflow calculation model whereby the workflow units are MEF-composed parts that can be plugged in where needed. The important part here is managing how your CompositionContainer is used in relation to the lifetime of your WCF service (e.g. singleton behaviour, etc.). This is quite important if you decide to create a new container each time (container creation is quite cheap, whereas catalog creation can be expensive).
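To illustrate that last point, a sketch of the pattern (IWorkflowUnit and the "Extensions" path are hypothetical): build the expensive catalog once, then spin up a cheap container per call:

// Built once (expensive): scans the extensions folder for parts.
private static readonly DirectoryCatalog SharedCatalog = new DirectoryCatalog("Extensions");

public void RunWorkflow()
{
    // Built per call (cheap): composition over the shared catalog.
    using (var container = new CompositionContainer(SharedCatalog))
    {
        foreach (var unit in container.GetExportedValues<IWorkflowUnit>())
        {
            unit.Execute();
        }
    }
}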
Hope that helps.
I'm working on a solution where the MEF parts that I want to use across WCF calls are stored in a singleton at the application level. This is all hosted in IIS. The services are decorated to be compatible with ASP.NET.
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
In Global.asax, I import the parts.
[ImportMany(typeof(IOption))]
public IEnumerable<IOption> AvailableOptions { get; set; }
After initializing the catalog and container, I copy the imported objects to my singleton class.
container.ComposeParts(this);
foreach (var option in AvailableOptions)
    OptionRegistry.AddOption(option);
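For completeness, the catalog and container initialization alluded to above might look something like this (a sketch; scanning the web app's bin folder is an assumption):

// In Application_Start: scan the app's bin folder for exported parts.
var catalog = new AggregateCatalog(new DirectoryCatalog(HttpRuntime.BinDirectory));
var container = new CompositionContainer(catalog);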
EDIT:
My registry class:
public static class OptionRegistry
{
    private static List<IOption> _availableOptions = new List<IOption>();

    public static void AddOption(IOption option)
    {
        if (!_availableOptions.Contains(option))
            _availableOptions.Add(option);
    }

    public static List<IOption> GetOptions()
    {
        return _availableOptions;
    }
}
This works, but I want to make it thread-safe, so I'll post that version once it's done.
Thread-safe Registry:
public sealed class OptionRegistry
{
    private List<IOptionDescription> _availableOptions;

    static readonly OptionRegistry _instance = new OptionRegistry();

    public static OptionRegistry Instance
    {
        get { return _instance; }
    }

    private OptionRegistry()
    {
        _availableOptions = new List<IOptionDescription>();
    }

    public void AddOption(IOptionDescription option)
    {
        lock (_availableOptions)
        {
            if (!_availableOptions.Contains(option))
                _availableOptions.Add(option);
        }
    }

    public List<IOptionDescription> GetOptions()
    {
        // Return a copy under the lock so callers never see the list mid-mutation.
        lock (_availableOptions)
        {
            return new List<IOptionDescription>(_availableOptions);
        }
    }
}
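Note that since the registry is now an instance-based singleton rather than a static class, the Global.asax snippet above becomes OptionRegistry.Instance.AddOption(option).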
A little while ago I was wondering how I could create a WCF web service that would get all of its dependencies wired up by MEF, without my having to write a single line of that wire-up code inside my service class.
I also wanted it to be completely configuration-based, so I could take my generic solution to the next project without making code changes.
Another requirement I had was that I should be able to unit-test the service and mock out its different dependencies in an easy way.
I came up with a solution that I've blogged about here: Unit Testing, WCF and MEF.
Hopefully it will help people trying to do the same thing.
Related
When creating a .NET application where the controller calls a service and the service calls a DAO for the database work, and I'm using Entity Framework Core 6.0 for the database services, do I also add the service-layer objects in ConfigureServices, or just the data layer, and pass them in to constructors?
I'm not sure this is exactly a preference; rather, I'm worried about multithreading specifically.
The scheme is: API -> Controller -> Service -> DAO -> Database then back for the result.
Sample code:
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<CRM_MSCRMContext>();
    services.AddScoped<HomeService>();
}
See above: I've added both the service that calls the context and the context itself.
I'm injecting the DAO using the constructor:
Sample code:
public CRMDAO(CRM_MSCRMContext crmContext)
{
    _crmContext = crmContext;
}
And I could inject the service into the controller if I add it as scoped. Otherwise, I think I'm just instantiating it with new in the constructor anyway:
public HomeController(HomeService homeService)
{
    _homeService = homeService;
}
Or I could use a constructor and forget injection here:
public HomeController()
{
    _homeService = new HomeService();
}
Why would one be better than the other from a multithreading or database-connection standpoint? I understand that this is going to be scoped per request, so maybe it's no different from using the constructor to new up the service object on every request from the controller anyway?
Thank you,
Dan Chase
OK, so after reading documentation and experimenting, I found it seems to be all or nothing. I had to add everything in ConfigureServices - all DAOs and all services, as well as all contexts. Otherwise I kept getting "Unable to resolve while attempting to activate" errors. I also had to move everything to an interface, or it didn't seem to work at all.
If anyone has any tips, let me know, but I figured it was better to answer my own question than to delete it, because someone might be able to fill in more info.
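To make that concrete, here is a sketch of the "register everything behind interfaces" setup described above (the interface names are illustrative, not from my actual project):

public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<CRM_MSCRMContext>();

    // Every layer is registered against an interface, or resolution fails.
    services.AddScoped<ICrmDao, CRMDAO>();
    services.AddScoped<IHomeService, HomeService>();
}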
I want to use AutoMapper 9.0 in a WCF project containing several services that will be hosted in IIS. I've only found one other related SO question, but it's dealing with a 10-year-old version of AutoMapper and is not asking the same question. Its answer is similar to the top hits on Google, which suggest using a ServiceBehavior, but that doesn't seem applicable when I want multiple services to use the same mapper.
In a web project, you might create a static MapperConfiguration in the Global.asax when the application starts, but WCF doesn't have a Global.asax. It looks like there are a few options for executing initialization code in WCF:
Include an AppInitialize() method in the App_Code folder. This is dynamically compiled at runtime, and people have complained that it can have missing-reference issues in IIS, so I'm not confident AutoMapper or its dependencies will be found once deployed to IIS.
Create a custom ServiceHost (see the sketch after this list). This seems like it would execute once when the application starts, but it also looks like it ignores the web.config configuration, which I don't want.
Use the Configure method per service. This has the same drawback as #2, and I also become concerned about thread safety (as with the ServiceBehavior approach), since two services could try to initialize the MapperConfiguration at once.
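For reference on option 2, a custom host is usually plugged in through a ServiceHostFactory referenced from the .svc file. A minimal sketch (the class name is made up; whether web.config settings are preserved depends on what the factory does beyond the base call):

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

public class MapperHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
    {
        // One-time setup could run here, but it must be guarded:
        // IIS may activate several services concurrently.
        return base.CreateServiceHost(serviceType, baseAddresses);
    }
}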
I considered just creating a class with a static property that would create a static MapperConfiguration or IMapper instance if one didn't already exist, but as in #3, I'm worried this may not be thread-safe. Maybe if I did something like this?
public static class MapperConfig
{
    private static IMapper _modelMapper;
    private static readonly object _mapperLocker = new object();

    public static IMapper ModelMapper
    {
        get
        {
            lock (_mapperLocker)
            {
                if (_modelMapper == null)
                {
                    var config = new MapperConfiguration(cfg => cfg.AddProfile(new MappingProfile1()));
                    _modelMapper = config.CreateMapper();
                }
            }
            return _modelMapper;
        }
    }
}
Here, two services may call ModelMapper simultaneously. Another downside is that the first request to any service will have to wait for the mappings to compile, but I'm not sure I can get away from that. I definitely don't want it compiling the mappings per call, and I'd prefer not to have to do it per service either. Can you advise on the thread safety of MapperConfiguration and the best way to use it in IIS-hosted WCF?
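I also wondered whether wrapping the mapper in Lazy<IMapper> would sidestep the locking question, since Lazy<T>'s default mode (ExecutionAndPublication) guarantees the factory runs exactly once. A sketch, using the same profile as above:

public static class MapperConfig
{
    // Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication, so the
    // factory runs once even if two services hit the property simultaneously,
    // and subsequent reads are lock-free.
    private static readonly Lazy<IMapper> _modelMapper = new Lazy<IMapper>(() =>
        new MapperConfiguration(cfg => cfg.AddProfile(new MappingProfile1())).CreateMapper());

    public static IMapper ModelMapper
    {
        get { return _modelMapper.Value; }
    }
}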
I used to code only in database environments. Recent changes in the corporation made me start developing in whole new worlds.
This new project is something like SQL - C# - PHP.
The class I've been working on in VS2008 is almost done. It calls all the SQL stored procedures I need, and the info is there in the DataReaders.
When it came to reading that info from PHP so I could populate the website, I found out it wasn't that easy. I was advised to try several options; the one that suits the project best is to create a web service and then consume it (please be patient; as I just stated, I'm new to most web-related programming).
So I'm trying WCF/REST, which I'll then consume from PHP, but I haven't got there yet.
I've read and watched several tutorials on WCF and it seems to be smooth, but all I've read is:
. Create an interface with its OperationContracts.
. Create a service with DataMembers etc. and define the methods listed in the interface.
OK, but what I'd like to do is not specify any methods there, since all I want is to call C# methods I've already written.
Should I do that in the service or in the interface? And first of all, is this the right way to approach it?
You would want to write service methods that implement an operation contract interface. The service methods can call the C# code that you've already written.
For example, here is a simple service interface:
[ServiceContract]
public interface IYourService
{
    [OperationContract]
    int GetCountOfTransactions(string filter);
}
And then you would implement this interface in your service class:
public class YourService : IYourService
{
    public int GetCountOfTransactions(string filter)
    {
        // Call your existing code
        YourClass yourClass = new YourClass();
        return yourClass.GetCountOfTransactions(filter);
    }
}
There are plenty of examples out there for setting this up as a REST service, but I think you're on the right track.
The trickiest part is usually setting up the binding configuration to make sure all of your consuming client applications can connect.
Hopefully this helps.
TL;DR:
What is a good and testable way to implement the dependency between the ViewModels and the WCF services in an MVVM client?
Please read the rest of the question for more details about the problems I encountered while trying to do this:
I am working on a Silverlight client that connects to a WCF service, and I want to write unit tests for the client.
So I'm looking for a good way to use the WCF clients in my ViewModels and test that interaction. I have found two solutions so far:
Solution 1: This is how I have actually implemented it so far:
public class ViewModelExample
{
    public ViewModelExample(IServiceClient client)
    {
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}

// This is what the interface looks like:
public interface IServiceClient
{
    event EventHandler<AsyncCompletedEventArgs> DoWorkCompleted;
    void DoWorkAsync();
}

// I was able to put the interface on the generated clients, because they are
// partial classes, like this:
public partial class GeneratedServiceClient : IServiceClient
{
}
The good part: it's relatively easy to mock.
The bad part: my service client lives as long as my ViewModel, and when I have concurrent requests I don't know which answer belongs to which request.
Solution 2: Inspired by this answer: WCF Service Client Lifetime.
public class ViewModelExample
{
    public ViewModelExample(IServiceFactory factory)
    {
        var client = factory.CreateClient();
        client.DoWorkCompleted += ...
        client.DoWorkAsync();
    }
}
The good part: each request is on a different client, so no more problems with matching requests with answers.
The bad part: it's more difficult to test. I would have to write mocks for both the factory and the WCF client every time. This is not something I'd like to do, since I already have 200 tests... :(
So my question is: how do you guys do it? How do your ViewModels talk to the WCF services, where do you inject the dependency, and how do you test that interaction?
I feel that I'm missing something...
Try having a Func<IServiceClient> injected into your VM instead of a client instance; you'll have a 'language-level factory' injected instead of building a class for this. In the factory method you can instantiate your client however you want (each call could create a new instance, for example).
The downside is that you'll still have to touch your tests for the most part, but I assume it will be less work:
public ViewModelExample(Func<IServiceClient> factoryMethod)
{
    var client = factoryMethod();
    client.DoWorkCompleted += ...
    client.DoWorkAsync();
}
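On the test side, the mock setup stays small with this approach; a sketch, assuming Moq (the lambda itself plays the factory):

var clientMock = new Mock<IServiceClient>();
var vm = new ViewModelExample(() => clientMock.Object);

// Fire the completion event to simulate the service responding.
clientMock.Raise(c => c.DoWorkCompleted += null,
    new AsyncCompletedEventArgs(null, false, null));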
The WCF service should have its own tests that confirm its functionality.
You should then be mocking this WCF service and writing unit tests within your consumers.
Unfortunately, it's a pain and something we all have to do. Be pragmatic and get it done; it will save you from getting bitten in the future.
Are you using an IoC container, by any chance? If you were, this problem would be totally mitigated by the container (you'd simply register the IService dependency to be created brand new upon each request).
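For example, with a container (Autofac syntax here, as an assumption of what you might use), a fresh client per resolution is a one-liner:

var builder = new ContainerBuilder();
builder.RegisterType<GeneratedServiceClient>()
       .As<IServiceClient>()
       .InstancePerDependency(); // a brand-new client every time one is resolved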
If that's not the case, then
I would have to write mocks for both the factory and the wcf client every time
is how you deal with this kind of "problem". The cost is relatively small, probably 2-3 extra lines of code per test (all you have to do is set up the factory mock to return the service mock, which you need either way).
My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and that every system can use strings.
My thought regarding the web services was to model some objects based on our data that can be passed into and out of the web services, so they are easily consumed by the client. Initially, I was passing the message object mentioned above, with the array of strings in it. I found that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this; that is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we maintain the “single entry point” into the system by offering an object that contains commands, and have one web service take care of everything. So the web service would have one method in it (let's call it MakeRequest), and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that contains some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit the base commands. What is passed back from the service is not clear right now, but it could be that “message object” with an object attached to it representing the data. I don't know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any methods common to all services, for example GetById, GetByName, GetAll, Save, and so on. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface, it would also contain the “base” methods. We would have several methods in a service that return the type of object expected, based on the service being called. We could house everything in one service, but I would still like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to pass. When I connect to a User service and call the method GetById(int id), I expect to get back a User object.
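To sketch what I mean (the names are illustrative; WCF supports inheriting one service contract interface from another, so the shared operations can live on a base contract):

[ServiceContract]
public interface IUserServiceBase
{
    [OperationContract]
    User GetById(int id);

    [OperationContract]
    List<User> GetAll();

    [OperationContract]
    void Save(User user);
}

// Contract inheritance: IUserService exposes the base operations plus its own.
[ServiceContract]
public interface IUserService : IUserServiceBase
{
    [OperationContract]
    User GetUserByUsernameAndPassword(string username, string password);
}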
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers in their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think the specific methods are much easier to work with, and they are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}