WCF Service Client Lifetime - wcf

I have a WPF application that uses WCF services to make calls to the server.
I use this property in my code to access the service:
private static IProjectWcfService ProjectService
{
    get
    {
        _projectServiceFactory = new ProjectWcfServiceFactory();
        return _projectServiceFactory.Create();
    }
}
The Create method on the factory looks like this:
public IProjectWcfService Create()
{
    _serviceClient = new ProjectWcfServiceClient();

    // TODO: Need some way of saving username and password
    _serviceClient.ClientCredentials.UserName.UserName = "MyUsername";
    _serviceClient.ClientCredentials.UserName.Password = "MyPassword";

    return _serviceClient;
}
To access the service methods, I use something like the following:
ProjectService.Save(dto);
Is this a good approach for what I am trying to do? I am getting an error that I can't track down, which I think may be related to having too many service client connections open (is this possible?). Notice that I never close the service client or reuse it.
What would be the best practice for managing WCF service clients when calling from a WPF application?
Thanks in advance...

You're on the right track, I'd say ;-)
Basically, creating the WCF client proxy is a two-step process:
create the channel factory
from the channel factory, create the actual channel
Step #1 is quite "expensive" in terms of time and effort needed - so it's definitely a good idea to do that once and then cache the instance of ProjectWcfServiceFactory somewhere in your code.
Step #2 is actually pretty lightweight, and since a channel between a client and a service can fall into a "faulted state" when an exception happens on the server (and then needs to be re-created from scratch), caching the actual channel per se is less desirable.
So the commonly accepted best practice would be:
create the ChannelFactory<T> (in your case: ProjectWcfServiceFactory) once and cache it for as long as possible; do that heavy lifting only once
create the actual Channel (here: IProjectWcfService) as needed, before every call. That way, you don't have to worry about checking its state and recreating it as needed
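A minimal sketch of that pattern, assuming the ProjectWcfServiceFactory wraps a ChannelFactory<IProjectWcfService> internally (the endpoint name and credential values below are placeholders):

using System.ServiceModel;

public class ProjectWcfServiceFactory
{
    // Step #1: the expensive part - create and cache the ChannelFactory only once
    private static readonly ChannelFactory<IProjectWcfService> _channelFactory = CreateFactory();

    private static ChannelFactory<IProjectWcfService> CreateFactory()
    {
        // "ProjectWcfServiceEndpoint" stands in for the endpoint name in your config
        var factory = new ChannelFactory<IProjectWcfService>("ProjectWcfServiceEndpoint");

        // credentials have to be set before the factory is opened,
        // i.e. before the first CreateChannel call
        factory.Credentials.UserName.UserName = "MyUsername";
        factory.Credentials.UserName.Password = "MyPassword";

        return factory;
    }

    // Step #2: the cheap part - create a fresh channel for every call
    public IProjectWcfService Create()
    {
        return _channelFactory.CreateChannel();
    }
}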
UPDATE: "what about closing the channel?" asks Burt ;-) Good point!!
The accepted best practice for this is to wrap your service call in a try...catch...finally block. The tricky part is that disposing of the channel can go wrong, too, and throw an exception of its own - that's why wrapping it in a using(...) block isn't sufficient.
So basically you have:
IProjectWcfService client = ChannelFactory.CreateChannel();

try
{
    client.MakeYourCall();
}
catch (CommunicationException ce)
{
    // do any exception handling of your own
}
finally
{
    ICommunicationObject comObj = (ICommunicationObject)client;

    if (comObj.State == CommunicationState.Faulted)
    {
        comObj.Abort();
    }
    else
    {
        comObj.Close();
    }
}
And of course, you could definitely nicely wrap this into a method or an extension method or something in order not to have to type this out every time you make a service call.
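For instance, a small helper along these lines (just a sketch; it assumes you keep the cached ChannelFactory<IProjectWcfService> from above in a field named _channelFactory):

using System;
using System.ServiceModel;

public static class ServiceProxyHelper
{
    // wraps the create / call / close-or-abort dance from the snippet above
    public static void Use(ChannelFactory<IProjectWcfService> factory,
                           Action<IProjectWcfService> serviceCall)
    {
        IProjectWcfService client = factory.CreateChannel();
        try
        {
            serviceCall(client);
        }
        finally
        {
            var comObj = (ICommunicationObject)client;

            if (comObj.State == CommunicationState.Faulted)
            {
                comObj.Abort();
            }
            else
            {
                comObj.Close();
            }
        }
    }
}

// usage:
// ServiceProxyHelper.Use(_channelFactory, service => service.Save(dto));

In a more defensive version you would also catch CommunicationException and TimeoutException around the Close call and fall back to Abort, since closing itself can throw, as mentioned above.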
UPDATE:
The book I always recommend to get up and running in WCF quickly is Learning WCF by Michele Leroux Bustamante. She covers all the necessary topics, and in a very understandable and approachable way. This will teach you everything - basics, intermediate topics, security, transaction control and so forth - that you need to know to write high quality, useful WCF services.
More advanced topics and a more in-depth look at WCF are covered by Programming WCF Services by Juval Lowy. He really dives into all the technical details and presents "the bible" of WCF programming.

Related

Using TAP progress reporting in a WCF service

I (new to WCF) am writing a WCF service that acquires and analyzes an X-ray spectrum - i.e. it is a long-running process, sometimes taking several minutes. Naturally, this calls for asynchronous calls, so I am using wsDualHttpBinding and defining the following in my ServiceContract:
[ServiceContract(Namespace = "--removed--",
                 SessionMode = SessionMode.Required,
                 CallbackContract = typeof(IAnalysisSubscriber))]
public interface IAnalysisController
{
    // Simplified - removed other declarations for clarity
    [OperationContract]
    Task<Measurement> StartMeasurement(MeasurementRequest request);
}
And the (simplified) implementation has
public async Task<Measurement> StartMeasurement(MeasurementRequest request)
{
    m_meas = m_config.GetMeasurement(request);
    Spectrum sp = await m_mca.Acquire(m_meas.AcquisitionTime, null);
    UpdateSpectrum(m_meas, sp);
    return m_meas;
}

private void McaProgress(Spectrum sp)
{
    m_client.ReportProgress(sp);
}
Where m_client is the callback object obtained from m_client = OperationContext.Current.GetCallbackChannel<IAnalysisSubscriber>(); in the "Connect" method, which is called when the WCF client first connects. This works as long as I don't use progress reporting, but as soon as I add progress reporting by changing the "null" in the m_mca.Acquire() call to "new Progress<Spectrum>(McaProgress)", the client generates an error on the first progress report: "The server did not provide a meaningful reply; this might be caused by a contract mismatch..."
I understand the client is probably awaiting a particular reply of a Task rather than having a callback made into it, but how do I implement this type of progress reporting with WCF? I would like the client to be able to see the live spectrum as it is generated and get an estimate of the time remaining to complete the spectrum acquisition. Any help or pointers to where someone has implemented this type of progress reporting with WCF is much appreciated (I've been searching but find mostly references to EAP or APM and WCF and not much related to TAP).
Thanks, Dave
Progress<T> wasn't really meant for use in WCF. It was designed for UI apps, and may behave oddly depending on your host (e.g., ASP.NET vs self-hosted).
I would recommend writing a simple IProgress<T> implementation that just calls IAnalysisSubscriber.ReportProgress directly. Also make sure that IAnalysisSubscriber.ReportProgress is marked with [OperationContract(IsOneWay = true)].
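For example, a minimal sketch of such an implementation (the callback contract shape is assumed; the names IAnalysisSubscriber, ReportProgress and Spectrum come from the question, the rest is illustrative):

using System;
using System.ServiceModel;

// Assumed shape of the callback contract - note the one-way operation
public interface IAnalysisSubscriber
{
    [OperationContract(IsOneWay = true)]
    void ReportProgress(Spectrum sp);
}

// A plain IProgress<T> implementation that forwards each report straight to the
// WCF callback channel, instead of posting to a captured SynchronizationContext
// the way Progress<T> does.
public class CallbackProgress : IProgress<Spectrum>
{
    private readonly IAnalysisSubscriber _client;

    public CallbackProgress(IAnalysisSubscriber client)
    {
        _client = client;
    }

    public void Report(Spectrum value)
    {
        _client.ReportProgress(value);
    }
}

You would then pass new CallbackProgress(m_client) instead of null to m_mca.Acquire(...).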

ASP.NET, WCF and per-operation static variables - how to use them safely?

I have a WCF service and I have the following (simplified) class:
public class PerOperationSingleton : IDisposable
{
    private static bool _hasInstance = false;

    public PerOperationSingleton()
    {
        if (_hasInstance)
            throw new InvalidOperationException("Cannot have multiple instances during a single WCF operation");

        _hasInstance = true;
    }

    public void Dispose()
    {
        _hasInstance = false;
    }
}
I guess it's a pretty self-explanatory piece of code. I don't need a singleton for the entire WCF service, but only during a single operation call. If one instance of PerOperationSingleton is disposed, it should be safe to create a new instance during the same WCF operation.
The problem is that I don't know how to make the _hasInstance variable to be effective only for one WCF operation. I know about [ThreadStatic], but I've heard that ASP.NET and WCF do not guarantee that an operation will be executed on a single thread - it might be transferred to another thread.
I definitely don't want my _hasInstance = true to be left behind on a thread-pool thread and get incorrectly detected when some other operation picks up that thread from the pool.
If WCF operation moves to another thread, I would like the _hasInstance variable to keep the "true" value if it was set.
And I don't want to change some global settings for my WCF service to avoid affecting the performance or get into some problems which will be hard to debug and solve later (I don't feel proficient enough in advanced ASP.NET and WCF topics).
I cannot store _hasInstance in session either because my client requested to disable .NET sessions for various reasons.
I would like the class PerOperationSingleton actually to be environment agnostic. It shouldn't really know anything about WCF or ASP.NET.
How do I make _hasInstance variable static during entire call of my WCF operation and don't affect other WCF operations?
I would consider using OperationContext to make your data "static" during the operation call.
Here is a similar discussion: Where to store data for current WCF call? Is ThreadStatic safe?
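For instance, a minimal sketch using an IExtension<OperationContext> to carry the flag (this assumes the code runs inside a WCF operation, i.e. OperationContext.Current is not null; the class and property names are made up):

using System.ServiceModel;

// Per-operation state stored on the OperationContext itself, so it follows the
// operation even if WCF moves the work to another thread.
public class PerOperationState : IExtension<OperationContext>
{
    public bool HasInstance { get; set; }

    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }

    public static PerOperationState Current
    {
        get
        {
            var context = OperationContext.Current;
            var state = context.Extensions.Find<PerOperationState>();

            if (state == null)
            {
                state = new PerOperationState();
                context.Extensions.Add(state);
            }

            return state;
        }
    }
}

The PerOperationSingleton constructor and Dispose method could then read and write PerOperationState.Current.HasInstance instead of the static field.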

WCF service access for redundancy (failover)

I'm looking for the correct way to have redundancy for a WCF service. I think I'm trying to solve an "infrastructure" issue in code. I'm not up to speed on load balancers, but it seems like there should be something like routers for exactly this.
// Cycle through the list of services.
foreach (var uri in InfrastructureInformation.ServiceUris)
{
    try
    {
        using (var client = WcfClientFactory.Create<ServiceClient>(uri))
        {
            // Do stuff here.
        }
    }
    catch
    {
        // TODO: Do not catch everything here. We have to find a better way of doing this.
        // Try the next URI.
    }
}
It seems like there should be a way to have one URI that I could hit, and some "balancer" would hand the request off to an available service. If one service goes down for maintenance, the balancer would simply stop sending requests to that service.
Now, I know about WCF routing, and I thought that was the answer: just put up a WCF router and have it hand out the connections. But what happens if the router itself falls over? Doesn't that give you a single point of failure? I'm looking for something more industrial.
WCF in .NET 4.0 has a routing capability that can be used in a failover scenario like you describe. This WCF sample shows how the built-in RoutingService class can be used for this purpose.
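As a rough illustration (this is a sketch, not the linked sample itself; the addresses, binding and names are placeholders), hosting the built-in RoutingService with a primary endpoint and a backup might look something like this:

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Routing;

public static class FailoverRouterHost
{
    public static ServiceHost StartRouter()
    {
        // the router contract used for request/reply destinations
        ContractDescription contract = ContractDescription.GetContract(typeof(IRequestReplyRouter));

        // primary and backup destinations - if the first one fails, the router tries the next
        var primary = new ServiceEndpoint(contract, new BasicHttpBinding(),
            new EndpointAddress("http://server1/ProjectService"));   // placeholder
        var backup = new ServiceEndpoint(contract, new BasicHttpBinding(),
            new EndpointAddress("http://server2/ProjectService"));   // placeholder

        var config = new RoutingConfiguration();
        config.FilterTable.Add(new MatchAllMessageFilter(),
            new List<ServiceEndpoint> { primary, backup });

        // the single URI that clients actually talk to
        var host = new ServiceHost(typeof(RoutingService),
            new Uri("http://router/ProjectService"));                // placeholder
        host.AddServiceEndpoint(typeof(IRequestReplyRouter), new BasicHttpBinding(), string.Empty);
        host.Description.Behaviors.Add(new RoutingBehavior(config));
        host.Open();

        return host;
    }
}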
You could have a look at Microsoft Network Load Balancing (aka NLB). Microsoft also mentions this in the context of WCF; there is an article on this here.

Better Practice for Error handling from WCF

I have a class library that communicates with my WCF service. The class library can then be used in any of my applications. I am curious as to what would be the best practice for handling the errors. I have thought of two scenarios but wanted to get some feedback from the community. The idea is to make sure it's appropriate not only for .NET solutions, but also for any other language that might not use the DLL but instead calls the service directly via a SOAP-style call.
Option #1
Create a result object which will be returned to the calling API, such as:
[DataContract]
public abstract class BaseResponse
{
    [DataMember]
    public bool IsSuccess { get; set; }

    [DataMember]
    public string ErrorMsg { get; set; }
}

[DataContract]
public class GetProductResponse : BaseResponse
{
    [DataMember]
    public Product p { get; set; }
}
Option #2: Throw a SOAP fault and allow the end user to handle it however they choose. I could handle it in my API - however, a direct call to the service would require that end user to code against the fault and handle it correctly.
Typically what I end up doing is having a business layer that will throw application specific exceptions. In the event that I want to expose this as a web service, I'll put a very thin layer on top of that that exposes those business services as WCF services. This layer will do nothing more than pass calls down to the business layer and return results as DataContract or MessageContract objects. In this very thin WCF layer, I'll catch exceptions from the business layer and map them to SOAP faults. This allows any .Net application to consume the business layer directly and catch exceptions as well as .Net or non-.Net applications to consume the web service and catch SOAP faults.
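For instance, a minimal sketch of that thin mapping layer (the business layer, its exception type and the fault contract below are made up for illustration):

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical fault detail exposed to clients
[DataContract]
public class ProductServiceFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    [FaultContract(typeof(ProductServiceFault))]
    Product GetProductById(int id);
}

// Very thin WCF layer on top of a (hypothetical) business layer
public class ProductService : IProductService
{
    private readonly ProductBusinessLayer _business = new ProductBusinessLayer(); // hypothetical

    public Product GetProductById(int id)
    {
        try
        {
            // nothing more than passing the call down to the business layer
            return _business.GetProductById(id);
        }
        catch (ProductNotFoundException ex) // hypothetical application-specific exception
        {
            // map the business exception to a SOAP fault
            throw new FaultException<ProductServiceFault>(
                new ProductServiceFault { Message = ex.Message },
                new FaultReason(ex.Message));
        }
    }
}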
I usually use Option 2 (SOAP faults, WCF FaultContracts) when I am doing an internal service, where I know the client is also WCF and I can make sure FaultExceptions are handled correctly.
When I am making an external/customer-facing service, I usually use Option 1, splitting my message into a "header" and a "body" and having the header contain an error message. I find this easier for other people to understand when explaining how to use the web service, and easier for non-WCF consumers to implement.
Both ways are fine really, as any decent SOAP tool for whatever language should handle SOAP faults, but you never know...
If you are building a RESTful web service, you could use the HTTP status code. And regardless of the service flavour, the error handlers in WCF make the code substantially more readable, since they allow a single error-handling definition instead of a try/catch in every method.
There is a simple example here http://bit.ly/sCybDO
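As a rough illustration of that idea, a WCF error handler is a class implementing IErrorHandler that you attach to the service (typically via a custom IServiceBehavior or a configuration element, not shown here):

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// One central place that turns unhandled exceptions into SOAP faults,
// instead of a try/catch block in every operation.
public class GlobalErrorHandler : IErrorHandler
{
    public bool HandleError(Exception error)
    {
        // log the exception here; returning true means it has been handled
        return true;
    }

    public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
    {
        // send the client a generic fault rather than the raw exception details
        var faultException = new FaultException("An internal error occurred.");
        MessageFault messageFault = faultException.CreateMessageFault();
        fault = Message.CreateMessage(version, messageFault, faultException.Action);
    }
}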
I would use option #2 every time. Faults are a standardised part of the SOAP specification, so any compliant framework should handle them appropriately. If a client is using a framework that doesn't have built-in handling, then they will have to write custom code to do it, but that is the case with option #1 anyway, since they will have to understand your custom error semantics.
Unless there is a really good reason not to, I would always use the standardised approach - it gives you the best chance of interoperability.

Need some advice for a web service API?

My company has a product that I feel can benefit from a web service API. We are using MSMQ to route messages back and forth through the backend system. Currently we are building an ASP.NET application that communicates with a web service (WCF) that, in turn, talks to MSMQ for us. Later on down the road, we may have other client applications (not necessarily written in .NET). The message going into MSMQ is an object that has a property made up of an array of strings. There is also a property that contains the command (a string) that will be routed through the system. Personally, I am not a huge fan of this, but I was told it is for scalability and every system can use strings.
My thought, regarding the web services was to model some objects based on our data that can be passed into and out of the web services so they are easily consumed by the client. Initially, I was passing the message object, mentioned above, with the array of strings in it. I was finding that I was creating objects on the client to represent that data, making the client responsible for creating those objects. I feel the web service layer should really be handling this. That is how I have always worked with services. I did this so it was easier for me to move data around the client.
It was recommended to our group that we should maintain the "single entry point" into the system by offering an object that contains commands and having one web service take care of everything. So, the web service would have one method in it - let's call it MakeRequest - and it would return an object (either serialized XML or JSON). The suggestion was to have a base object that may contain some sort of list of commands that other objects can inherit from. Any other object may have its own command structure, but still inherit the base commands. What is passed back from the service is not clear right now, but it could be that "message object" with an object attached to it representing the data. I don't know.
My recommendation was to model our objects after our actual data and create services for the types of data we are working with. We would create a base service interface that would house any common methods used for all services. So for example, GetById, GetByName, GetAll, Save, etc. Anything specific to a given service would be implemented for that specific implementation. So a User service may have a method GetUserByUsernameAndPassword, but since it implements the base interface it would also contain the “base” methods. We would have several methods in a service that would return the type of object expected, based on the service being called. We could house everything in one service, but I still would like to get something back that is more usable. I feel this approach leaves the client out of making decisions about what commands to be passed. When I connect to a User service and call the method GetById(int id) I would expect to get back a User object.
I had the luxury of working with MS when I started developing WCF services. So, I have a good foundation and understanding of the technology, but I am not the one designing it this time.
So, I am not opposed to the “single entry point” idea, but any thoughts about why either approach is more scalable than the other would be appreciated. I have never worked with such a systematic approach to a service layer before. Maybe I need to get over that?
I think there are merits to both approaches.
Typically, if you are writing an API that is going to be consumed by a completely separate group of developers (perhaps in another company), then you want the API to be as self-explanatory and discoverable as possible. Having specific web service methods that return specific objects is much easier to work with from the consumer's perspective.
However, many companies use web services as one of many layers to their applications. In this case, it may reduce maintenance to have a generic API. I've seen some clever mechanisms that require no changes whatsoever to the service in order to add another column to a table that is returned from the database.
My personal preference is for the specific API. I think that the specific methods are much easier to work with - and are largely self-documenting. The specific operation needs to be executed at some point, so why not expose it for what it is? You'd get laughed at if you wrote:
public void MyApiMethod(string operationToPerform, params object[] args)
{
    switch (operationToPerform)
    {
        case "InsertCustomer":
            InsertCustomer(args);
            break;
        case "UpdateCustomer":
            UpdateCustomer(args);
            break;
        ...
        case "Juggle5BallsAtOnce":
            Juggle5BallsAtOnce(args);
            break;
    }
}
So why do that with a Web Service? It'd be much better to have:
public void InsertCustomer(Customer customer)
{
    ...
}

public void UpdateCustomer(Customer customer)
{
    ...
}

...

public void Juggle5BallsAtOnce(bool useApplesAndEatThemConcurrently)
{
    ...
}