I need to hook into the WCF operation process to execute some code right before and right after each operation.
Some context:
I already have a custom ServiceHost, ServiceHostFactory, and service behavior
all my services are based on a common base class
I've been snooping around and I think using an IParameterInspector would be the best choice, but I'm not entirely sure, given that the code I need to execute has nothing to do with parameters...
Any clues?
IParameterInspector is not a bad choice.
Do you need to know which operation/session/endpoint is happening, or are you just installing the same logic for all operations? Do you need to modify the Message object? (These considerations may change your choice of extensibility point.)
Do you need to modify thread-local storage? If so, prefer ICallContextInitializer.
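For the parameter-agnostic before/after case, the inspector can simply ignore the inputs. Here is a minimal sketch; it declares a local stand-in interface mirroring the shape of WCF's System.ServiceModel.Dispatcher.IParameterInspector so the example is self-contained, but in a real service you would implement the framework's interface instead:

```csharp
using System;
using System.Diagnostics;

// Local stand-in mirroring WCF's IParameterInspector; in a real service,
// implement System.ServiceModel.Dispatcher.IParameterInspector instead.
public interface IParameterInspector
{
    object BeforeCall(string operationName, object[] inputs);
    void AfterCall(string operationName, object[] outputs,
                   object returnValue, object correlationState);
}

// Ignores the parameters entirely and just runs code around each operation;
// whatever BeforeCall returns is handed back as correlationState in AfterCall.
public class TimingInspector : IParameterInspector
{
    public object BeforeCall(string operationName, object[] inputs)
    {
        return Stopwatch.StartNew();
    }

    public void AfterCall(string operationName, object[] outputs,
                          object returnValue, object correlationState)
    {
        var watch = (Stopwatch)correlationState;
        watch.Stop();
        Console.WriteLine($"{operationName} took {watch.ElapsedMilliseconds} ms");
    }
}
```

Since you already have a custom service behavior, you would typically attach the inspector there, adding it to each dispatch operation's ParameterInspectors collection in ApplyDispatchBehavior.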
I have been investing time in .NET Core and its Dependency Injection model using the ServiceCollection type.
Is there any way to protect the integrity of the service implementations that have already been added to the collection?
I would like to know that the original implementation of a service I have added hasn't been modified or replaced at some point during runtime.
In the case of security: if I were an attacker who knew what I was doing and had a remote code execution vulnerability, I could replace a key service implementation and use it as a hidden form of persistence.
In the case of foolproofing: on a large project, I would hate to have to go debugging why something went wrong, only to find out a developer had replaced the implementation of a service that was in widespread use.
Any suggestions? Perhaps there is some protection that prevents this, or it's just not a concern?
You could implement IServiceCollection manually: define a class backed by a List<ServiceDescriptor> that implements all the methods declared in the interface.
Then modify the methods that can change the registered items, adding notifications (or guards) so you can detect whether a registration has been altered.
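As a sketch of that idea, here is a hypothetical guarded registration list. It uses a simplified stand-in for ServiceDescriptor to stay self-contained (the real type lives in Microsoft.Extensions.DependencyInjection), and it can be frozen after startup so any later mutation throws instead of silently replacing an implementation:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Simplified stand-in for the real ServiceDescriptor type.
public sealed record ServiceDescriptor(Type ServiceType, Type ImplementationType);

// A registration list that can be frozen once startup is complete;
// any later Add attempt throws instead of silently swapping an implementation.
public class GuardedServiceCollection : IEnumerable<ServiceDescriptor>
{
    private readonly List<ServiceDescriptor> _items = new();
    private bool _frozen;

    public void Add(ServiceDescriptor descriptor)
    {
        if (_frozen)
            throw new InvalidOperationException("Service registrations are frozen.");
        _items.Add(descriptor);
    }

    public void Freeze() => _frozen = true;

    public IEnumerator<ServiceDescriptor> GetEnumerator() => _items.GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

The same guard-on-mutation shape extends naturally to Remove, Insert, and indexer writes if you implement the full interface.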
We are using Ninject in work as part of a modification to our legacy system. In some parts of the application we have opted to use a static Service Locator that wraps around Ninject. This is really only a static adapter.
When I request IEnumerable<IFoo> via our Service Locator, it simply requests the same via Ninject's GetAll method. What I wanted to know is: since I haven't actually enumerated the list, will all services remain inactive?
The reason I am asking is that we are using Ninject to replace an old controller locator in a WinForms app. Some of these controllers are hairy, so I don't want them activating until I have filtered down to the one I want. We are doing this by applying a Where clause to the collection on the service locator and then using FirstOrDefault to pick the correct one.
My understanding is that activation will happen on enumeration (in our case at FirstOrDefault). Is this correct?
You are correct that GetAll doesn't actually do anything until you enumerate it in some manner. When you ask for an IEnumerable, each item pulled is brought to life, even if it's about to be filtered out by a Where (the only way that could be avoided is if IQueryable were involved, which it isn't here).
Each item that is activated will be deactivated in line with the normal scoping rules.
The best way to avoid this is to have a .When... binding or another condition dictate the filtering.
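To illustrate the deferral, here is a self-contained sketch with hypothetical names; it simulates GetAll with an iterator rather than using Ninject itself. Where plus FirstOrDefault activates items one at a time and stops at the first match, but everything *before* the match is still activated on the way through the filter:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simulates Ninject's GetAll: an iterator that "activates" each controller
// only as it is pulled from the sequence.
public static class LocatorDemo
{
    public static List<string> ActivationLog { get; } = new();

    public static IEnumerable<string> GetAllControllers()
    {
        foreach (var name in new[] { "Heavy", "Target", "NeverReached" })
        {
            ActivationLog.Add(name);   // activation happens here, per item pulled
            yield return name;
        }
    }
}

// Usage:
//   var picked = LocatorDemo.GetAllControllers()
//       .Where(n => n == "Target")
//       .FirstOrDefault();
// "Heavy" and "Target" are activated; "NeverReached" never is.
```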
DO NOT READ PAST THIS POINT - BAD ADVICE FOLLOWS.
A mad hack is to request an IEnumerable<Lazy<T>> (which requires Ninject.Extensions.Factory). Good related article.
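A sketch of why the Lazy<T> shape helps, using plain System.Lazy rather than Ninject itself: enumerating past a Lazy item creates only the cheap wrapper, and the expensive construction happens only when you touch .Value on the one you picked.

```csharp
using System;
using System.Linq;

int constructed = 0;

// Each element is a cheap Lazy wrapper; the counter only moves when a
// wrapper's factory actually runs.
var controllers = new[] { "A", "B", "C" }
    .Select(name => new Lazy<string>(() => { constructed++; return name; }));

// Enumerating to pick a wrapper constructs nothing...
var match = controllers.First();
Console.WriteLine(constructed); // still 0

// ...only touching .Value constructs exactly one item.
var value = match.Value;
Console.WriteLine(constructed); // now 1
```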
As the subject line describes, I am in the process of exposing a C# library as a WCF service. Eventually we want to expose all the functionality, but at present the scope is limited to a subset of the library API. One of the goals of this exercise is also to make sure that the WCF service uses a request/response message exchange pattern, so the interface/API will change, as the existing library does not use this pattern.
I have started off by implementing the Service Contracts and the Request/Response objects, but when it comes to designing the DataContracts, I am not sure which way to go.
I am split between going back and annotating the existing library classes with DataContract/DataMember attributes vs. defining new classes that act as surrogates for the existing ones.
Does anyone have any experience with a similar task, or any recommendations on which way works best? I would like to point out that our team owns the existing library, so we do have the source code for it. Any pointers or best practices would be helpful.
My recommendation is to use the Adapter pattern, which in this case basically means create brand new DataContracts and ServiceContracts. This will allow everything to vary independently, and will allow you to optimize the WCF stuff for WCF and the API stuff for the API (if that makes sense). The last thing you want is to go down the modification route and find that something just won't map right once you are almost done.
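A minimal sketch of the adapter shape, with hypothetical types: the library class stays untouched, the DataContract is brand new, and a small adapter at the service boundary maps between them.

```csharp
using System;
using System.Runtime.Serialization;

// Hypothetical existing library class; stays exactly as it is.
public class LibraryCustomer
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
    public DateTime CreatedUtc { get; set; }
}

// Brand-new WCF-facing contract, free to vary independently of the library.
[DataContract]
public class CustomerDto
{
    [DataMember] public Guid Id { get; set; }
    [DataMember] public string Name { get; set; }
    // CreatedUtc is deliberately not exposed over the wire.
}

// The adapter lives at the service boundary; only the service
// contract implementation ever sees both sides.
public static class CustomerAdapter
{
    public static CustomerDto ToDto(LibraryCustomer c) =>
        new CustomerDto { Id = c.Id, Name = c.FullName };
}
```

Because the mapping is explicit, renaming or reshaping either side later is a local change to the adapter rather than a breaking change to your wire format or your library.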
Starting with .NET 3.5 SP1, you no longer need to decorate objects that you want to expose with [DataContract]/[DataMember] attributes; all public properties will be automatically exposed. That being said, I personally prefer to use dedicated DTO objects that I expose and decorate with those attributes. I then use AutoMapper to map between the actual domain models and the objects I want to expose.
If you are going to continue to use the existing library but want to have control over what you expose as the web service API, I would recommend defining new classes as wrapper(s) around the library.
What I mean to say is: don't "convert" the existing library, even if you think you're not going to continue to use it in other contexts. If it has been tested and proven, take advantage of that fact and wrap around it.
Given that I have a fully dynamic object model (that is, I have no concrete classes defined anywhere in code), I still want to be able to create WCF DataContracts for these objects so I can use them in operations. How can I achieve this?
My concrete class "Entity" implements ICustomTypeDescriptor, which is used to present the various properties to the outside world, but my experimentation with WCF suggests that WCF does not care about ICustomTypeDescriptor. Is this correct, or have I missed something?
Is this possible? Surely the only way to create a DataContract isn't to have a concrete, hardcoded class?
You may be able to use an untyped service and message contract, IIRC: http://geekswithblogs.net/claeyskurt/archive/2008/09/24/125430.aspx
You might try System.Reflection.Emit.
It's quite tricky, but essentially you just build a custom run-time type decorated with data contract attributes. It gets tricky when creating encapsulated properties with PropertyChanged notifications, but in your service layer you can get away with auto properties, which are a lot easier.
This dated but still very relevant link should get you going in the right direction: http://drdobbs.com/184416570
Things evolve :-) Thanks to the excellent blog series by Alex D James, it's very easy to implement this.
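As a rough sketch of the Reflection.Emit approach (the type and property names here are illustrative; the field-plus-accessor IL is the standard TypeBuilder recipe for an auto property):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;
using System.Runtime.Serialization;

// Builds a runtime type decorated with [DataContract] that has one
// [DataMember] string property named "Name".
public static class ContractTypeBuilder
{
    public static Type Build()
    {
        var asm = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("DynamicContracts"), AssemblyBuilderAccess.Run);
        var module = asm.DefineDynamicModule("Main");
        var tb = module.DefineType("DynamicEntity", TypeAttributes.Public);

        // [DataContract] on the type.
        tb.SetCustomAttribute(new CustomAttributeBuilder(
            typeof(DataContractAttribute).GetConstructor(Type.EmptyTypes),
            Array.Empty<object>()));

        // Backing field plus trivial get/set = an "auto property".
        var field = tb.DefineField("_name", typeof(string), FieldAttributes.Private);
        var prop = tb.DefineProperty("Name", PropertyAttributes.None, typeof(string), null);

        var getter = tb.DefineMethod("get_Name",
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            typeof(string), Type.EmptyTypes);
        var il = getter.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, field);
        il.Emit(OpCodes.Ret);

        var setter = tb.DefineMethod("set_Name",
            MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig,
            null, new[] { typeof(string) });
        il = setter.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldarg_1);
        il.Emit(OpCodes.Stfld, field);
        il.Emit(OpCodes.Ret);

        prop.SetGetMethod(getter);
        prop.SetSetMethod(setter);

        // [DataMember] on the property.
        prop.SetCustomAttribute(new CustomAttributeBuilder(
            typeof(DataMemberAttribute).GetConstructor(Type.EmptyTypes),
            Array.Empty<object>()));

        return tb.CreateType();
    }
}
```

The resulting Type can then be fed to DataContractSerializer or used via reflection, since no compile-time class ever exists.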
I'm looking to push my domain model into a WCF Service API and wanted to get some thoughts on lazy loading techniques with this type of setup.
Any suggestions when taking this approach?
When I implemented this technique and stepped through my app, just before the server returned my list it hit the getter of each property that was supposed to be lazy loaded, thus eager loading. Could you explain this issue or suggest a resolution?
Edit: It appears you can use the [XmlIgnore] attribute so the property doesn't get touched during serialization... still reading up on this, though.
Don't do lazy loading over a service interface. Define explicit DTOs and consume those as your data contracts in WCF.
You can use NHibernate (or other ORMs) to properly fetch the objects you need to construct the DTOs.
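A sketch of the DTO shape, with hypothetical types (a Func stands in for the ORM's lazy proxy): all fetch decisions happen once, on the server, while mapping to the DTO, so nothing lazy ever crosses the serialization boundary.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical domain entity; LineLoader stands in for an ORM lazy proxy.
public class Order
{
    public int Id { get; set; }
    public Func<List<string>> LineLoader { get; set; }
}

// Explicit DTO: everything the client needs, nothing loaded on demand.
public class OrderDto
{
    public int Id { get; set; }
    public List<string> Lines { get; set; }
}

public static class OrderMapper
{
    // The fetch happens here, once, on the server, before serialization.
    public static OrderDto ToDto(Order order) =>
        new OrderDto { Id = order.Id, Lines = order.LineLoader() };
}
```

With NHibernate you would typically achieve the same thing with an explicit eager fetch in the query that feeds the mapper, rather than a Func.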
As with any remoting architecture, you'll want to avoid loading a full object graph "down the wire" in an uncontrolled way (unless you have a trivially small number of objects).
The Wikipedia article has the standard techniques pretty much summarised (and in C#, too!). I've used both ghosts and value holders and they work pretty well.
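For reference, a value holder can be as small as this (a sketch; a ghost works similarly, except the object loads the rest of *itself* on first property access):

```csharp
using System;

// Minimal value holder: defers the expensive load until first access,
// then caches the result for subsequent reads.
public class ValueHolder<T>
{
    private readonly Func<T> _loader;
    private bool _loaded;
    private T _value;

    public ValueHolder(Func<T> loader) => _loader = loader;

    public T Value
    {
        get
        {
            if (!_loaded)
            {
                _value = _loader();
                _loaded = true;
            }
            return _value;
        }
    }
}
```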
To implement this kind of technique, make sure that you separate concerns strictly. On the server, your service contract implementation classes should be the only bits of the code that work with data contracts. On the client, the service access layer should be the only code that works with the proxies.
Layering like this lets you adjust the way that the service is implemented relatively independently of the UI layers calling the service and the business tier that's being called. It also gives you half a chance of unit testing!
You could try to use something REST-based (e.g., ADO.NET Data Services) and wrap it transparently into your client code.