I have a WCF logging service that operates over MSMQ. Items are logged to a SQL Server 2005 database. Everything functions correctly when used outside a TransactionScope. When used inside a TransactionScope instance, the call always causes the transaction to be aborted. Message = "The transaction has aborted".
What do I need to do to get this call to work inside a transaction? Is it even possible? I've read that for a client transaction to flow across a service boundary the binding must support transaction flow, which immediately limits the bindings to NetNamedPipeBinding, NetTcpBinding, WSHttpBinding, WSDualHttpBinding and WSFederationHttpBinding.
I'm not intimately knowledgeable about MSMQ, but there's a really good blog post series by Tom Hollander on MSMQ, IIS and WCF: Getting them to play nicely - in part 3, which is the link provided, Tom talks about getting transactional.
MSMQ can be transactional - or not. And in WCF, you can decorate both the service contract as well as individual operation contracts (methods) with transaction-related attributes, such as whether to allow, disallow, or require a transaction context.
As far as I understand your setup, you don't want the MSMQ part to be transactional - but you should be able to use it even if an ambient transaction is present. In this case, you need to add [TransactionFlow(TransactionFlowOption.Allowed)] to your operation contract, like this:
[ServiceContract]
public interface ILoggingService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void LogData(......);
}
That should do it!
Marc
Sorry for the needless question...
I have solved my problem. I needed to place
[TransactionFlow(TransactionFlowOption.Allowed)]
on the operation in the service contract and then
[OperationBehavior(TransactionScopeRequired=true)]
on the implementation of the contract (the service itself).
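Putting both attributes together, the relevant pieces look roughly like this (a minimal sketch - ILoggingService and LogData are the names from above, and the string parameter is just a placeholder for the real signature):

using System.ServiceModel;

[ServiceContract]
public interface ILoggingService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void LogData(string message);   // placeholder parameter - use the real signature here
}

public class LoggingService : ILoggingService
{
    // Enlist in the transaction that flowed in from the client's TransactionScope.
    [OperationBehavior(TransactionScopeRequired = true)]
    public void LogData(string message)
    {
        // ... write the log entry to the SQL Server 2005 database here ...
    }
}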
Works a treat.
Related
So I have been tasked with setting up MSMQ so that if our mail server goes down (which it seems to do often), the messages just end up in the queue and will be delivered when it comes back up. With that said, I have to say I don't know much about this except what I have learned in the past 24 hours. However, I believe I know enough to take the right approach, but I wanted to ask someone in the community because there is some confusion amongst my colleagues given some existing setup in our WCF application.
Currently we have some services that use MSMQ as the protocol for the endpoint. The endpoint looks like this:
<endpoint address="net.msmq://localhost/private/Publisher"
behaviorConfiguration="BatchBehaviour"
binding="netMsmqBinding"
bindingConfiguration="MSMQNoSecurity"
contract="HumanArc.Compass.Shared.Publisher.Interfaces.Service.IPublisherSubscriber"
name="PublishSubscriber"/>
This of course lets the client make a service call, and if for some reason the service isn't up, it ensures that the call will be processed when the service comes back up. What I don't think it will do is handle the case where you have something like the following in your service method:
try
{
    smtp.Send(mail);
    return true;
}
catch (System.Net.Mail.SmtpFailedRecipientException ex)
{
    throw new Exception("User Credentials for sending the Email are Invalid", ex);
}
catch (System.Net.Mail.SmtpException smtpEx)
{
    throw new Exception(string.Format("Application encountered a problem send a mail message to {0} ", smtpHostName), smtpEx);
}
WCF isn't going to retry and send the message again somehow, am I correct about this assumption?
What I think we should have is something that looks like the following in place of the call to smtp.Send() above (from http://www.bowu.org/it/microsoft/net/email-asp-net-mvc-msmq-2.html):
string queuePath = @".\private$\WebsiteEmails";
MessageQueue msgQ;
//if this queue doesn't exist we will create it
if(!MessageQueue.Exists(queuePath))
MessageQueue.Create(queuePath);
msgQ = new MessageQueue(queuePath);
msgQ.Formatter = new BinaryMessageFormatter();
msgQ.Send(msg);
Then somewhere in the startup of the service (I am not sure where yet) we set up an event handler that will actually call Send() on the SmtpClient object. Something like this:
msgQ.ReceiveCompleted += new ReceiveCompletedEventHandler(msgQ_ReceiveCompleted);
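If we went that route, I imagine the handler itself would look something like the sketch below (EmailMessage is an assumed [Serializable] DTO, since BinaryMessageFormatter can only round-trip serializable types and System.Net.Mail.MailMessage is not one; note also that ReceiveCompleted only fires after an initial BeginReceive() call):

using System;
using System.Messaging;

class EmailQueueListener
{
    // Hypothetical serializable DTO describing the mail to send.
    [Serializable]
    public class EmailMessage
    {
        public string To, Subject, Body;
    }

    void msgQ_ReceiveCompleted(object sender, ReceiveCompletedEventArgs e)
    {
        MessageQueue queue = (MessageQueue)sender;
        Message message = queue.EndReceive(e.AsyncResult);
        EmailMessage email = (EmailMessage)message.Body;

        // ... build a System.Net.Mail.MailMessage from the DTO and call smtp.Send(...) here ...

        queue.BeginReceive(); // keep listening for the next message
    }
}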
So to sum it all up, my first question is: which way is better? Create a service that uses net.msmq as the protocol, or just change the email method to put messages in the queue and set up a handler for it? The next question: if my assumption about changing the method that calls SmtpClient.Send() is correct, then where in the program should I wire up ReceiveCompleted? Our WCF service is hosted in a Windows service, meaning there is actually a call to ServiceBase.Run(servicesToRun). Is there a place I could wire it up there? My experience with WCF is with much simpler IIS-hosted services, so I am not 100% sure.
Thanks - I realize this is a long question, but I have been trying to research it and there is a lot of information; I can't seem to find a clear explanation of the benefits of doing things one way vs. another.
Your approach of using MSMQ to address availability in a downstream dependency (in this case your SMTP server) is valid. However, there are a couple of things you should understand about MSMQ first.
If you create a queue in MSMQ then by default it is non-transactional. In this mode the queue will not provide the kind of guaranteed-delivery semantics you require. So create your queues as transactional.
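For example, the transactional flag is the second argument to MessageQueue.Create (the queue path here is just an example):

using System.Messaging;

string queuePath = @".\private$\WebsiteEmails";   // example path
if (!MessageQueue.Exists(queuePath))
{
    // The second argument creates the queue as transactional.
    MessageQueue.Create(queuePath, true);
}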
Then you can tell WCF that your service operation will enlist in the transaction when it receives a message for processing. You do this by defining a behavior on your service operation implementation:
[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void SendEmail(Something mail)
{
    ....
    smtp.Send(mail);
}
TransactionScopeRequired tells WCF that the service operation should enlist in the same transaction used to transmit the message from sender to receiver. TransactionAutoComplete states that the service method should commit the transaction once the operation has successfully completed. So in answer to your query above, a failure in the service operation will cause the transaction to roll back.
What happens at this point depends on your service bindings configuration.
<netMsmqBinding>
  <binding name="netMsmqBinding_IMyServiceInterface"
           exactlyOnce="true"
           maxRetryCycles="3"
           retryCycleDelay="00:01:00"
           receiveErrorHandling="Move"> <!-- this defines behavior after failure -->
    ...
  </binding>
</netMsmqBinding>
When, for whatever reason, the transaction is not committed (for example, an unhandled exception occurs), WCF will roll the message back onto the queue and retry processing once per minute up to 3 times (defined by maxRetryCycles and retryCycleDelay).
If the message still fails processing after this time then the receiveErrorHandling attribute tells WCF what to do next (The above binding specifies that the message be moved to the system poison message queue).
Note: exactlyOnce tells WCF that we require transactions, that each message will be delivered exactly once and in the order they were sent.
So your original approach is in fact correct and you just need to configure your service correctly to implement the behavior you want.
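One more side note: the service contract behind a netMsmqBinding endpoint has to use one-way operations. A rough sketch (IEmailService is an assumed name; Something is the parameter type from your snippet):

using System.ServiceModel;

[ServiceContract]
public interface IEmailService
{
    // Queued (netMsmqBinding) endpoints only support one-way operations,
    // so nothing can be returned to the caller.
    [OperationContract(IsOneWay = true)]
    void SendEmail(Something mail);
}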
Presently I have one class which monitors serial ports for incoming data, processes the data and raises events through delegates/events based on the received data. This is a stand-alone application. Now I have to convert it to a service so that the serial port monitor class starts as a service when Windows starts, and a client application subscribes to the events from either a remote PC or from the local machine. I have seen many articles on using WCF for this kind of application. But WCF is message based and it will create a service object when a client request arrives. My requirement is that the service should be started automatically and the client application should be able to subscribe to the events of the service class instance which was already created during startup. How can I achieve this using WCF?
The default behavior in WCF is to create a new instance of your service class to handle each incoming request, but you can override this by decorating your class with:
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
To get good performance with a singleton, though, there are a few things you'll need to consider:
Since you'll likely need to do some configuration of your singleton instance, you'll probably want to use the ServiceHost constructor that takes a singleton instance as an argument (for an example, see Figure 8, "Initializing and Hosting a Singleton", in this article, and the sketch after this list).
Threading: The default threading model (ConcurrencyMode) only allows a single thread to have access to your Singleton instance at a time. You may need to look at using ConcurrencyMode = ConcurrencyMode.Multiple to get good performance (which means you'll need to handle threading-related issues yourself).
Make sure the methods in your callback contract are marked as [OperationContract(IsOneWay = true)] so that publishing events back to the subscribers doesn't cause your service instance to block until the event handler completes. (Using WCF in this way is covered in detail in this article by Juval Lowy.)
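Putting those three points together, a minimal hosting sketch might look like this (SerialPortMonitorService, ISerialPortMonitor, the callback contract and the net.tcp address are all assumed names for illustration, not anything from your code):

using System;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(ISerialPortMonitorCallback))]
public interface ISerialPortMonitor
{
    [OperationContract]
    void Subscribe();
}

public interface ISerialPortMonitorCallback
{
    // One-way, so raising events doesn't block the service on slow subscribers.
    [OperationContract(IsOneWay = true)]
    void DataReceived(string data);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class SerialPortMonitorService : ISerialPortMonitor
{
    public void Subscribe()
    {
        // Store OperationContext.Current.GetCallbackChannel<ISerialPortMonitorCallback>()
        // so the already-running monitor can push serial-port events to this client.
    }
}

class Program
{
    static void Main()
    {
        // Create and configure the singleton yourself (e.g. in the Windows service's
        // OnStart), then hand it to this ServiceHost constructor overload.
        var singleton = new SerialPortMonitorService();
        using (var host = new ServiceHost(singleton, new Uri("net.tcp://localhost:8000/monitor")))
        {
            host.AddServiceEndpoint(typeof(ISerialPortMonitor), new NetTcpBinding(), "");
            host.Open();
            Console.WriteLine("Listening... press Enter to stop.");
            Console.ReadLine();
        }
    }
}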
I imagine the following WCF service usage: (of a cash acceptor)
Service Consumer 1:
    cashAcceptorService.BeginTransaction();
    cashAcceptorService.AcceptMoney();
    cashAcceptorService.EndTransaction();

Service Consumer 2 (while Consumer 1's transaction is in progress):
    cashAcceptorService.StopDevice();
    //this should throw an exception: device is locked / used in a transaction
Service Consumer 1 and 2 use the same WCF single instance. I wonder if this functionality is already implemented. Do WCF transactions offer this?
How do you see this done?
If the following is true:
The service is interacting with a transactional object (e.g. the database)
The service has transaction flow enabled
Then WCF does indeed offer this.
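On the service side, "transaction flow enabled" comes down to attributes like these, plus transactionFlow="true" on the binding (a sketch only - ICashAcceptorService and AcceptMoney are names assumed from your pseudo-code):

using System.ServiceModel;

[ServiceContract]
public interface ICashAcceptorService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]   // or Mandatory
    void AcceptMoney();
}

public class CashAcceptorService : ICashAcceptorService
{
    // Enlist in the transaction flowed from the client and commit automatically
    // when the operation completes successfully.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void AcceptMoney()
    {
        // ... talk to the transactional resource (e.g. the database) here ...
    }
}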
The client can then use the TransactionScope class. Any transactions initiated on the client will flow through to the service automatically.
using (TransactionScope transactionScope = new TransactionScope())
{
    // Do stuff with the service here
    cashAcceptorService.AcceptMoney();
    // ...
    transactionScope.Complete();
}
Handling transactions in WCF tends to be an entire chapter of a book, but this should be enough information to get you on the right track.
It is always better to understand the concept of distributed transactions. I recommend reading this article: http://www.codeproject.com/Articles/35087/Truly-Understanding-NET-Transactions-and-WCF-Imple
I have a WCF per-call service which provides data for clients and at the same time is integrated with NServiceBus.
All stateful objects are stored in a UnityContainer which is integrated into a custom service host.
NServiceBus is configured in the service host and uses the same container as the service instances.
Every client has its own instance context (described by Juval Lowy in his book in the chapter about durable services).
If I need to send a request over the bus I just use some kind of dispatcher and wait for the response using Thread.Sleep(). Since the services are per-call this is OK, as far as I know.
But I am confused a bit about messages from the bus that the service must handle and provide to clients. For some data, like stock quotes, I just update some kind of stateful object and then, when clients invoke GetQuotesData(), provide data from this object.
But there are numerous service messages, like "new quote added" and so on.
At this moment I have an idea to implement something like a "postman daemon" =)) and store this type of message in the instance context. Then the client will invoke GetMail(), receive those messages and parse them. The problem is that NServiceBus messages are interface based and I can't pass them over WCF, so I need to convert them to types derived from some abstract class.
I don't know what the best way to handle this situation is.
Have you considered a "pure" NServiceBus solution for communicating back to clients? NServiceBus already has that "Postman daemon" capability. NServiceBus messages don't have to be interfaces - you can use regular classes as well.
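For example, a plain class message and its handler might look roughly like this (QuoteAdded and its properties are invented names, and the exact handler signature depends on your NServiceBus version):

using NServiceBus;

// A regular class (not an interface) used as a message.
public class QuoteAdded : IMessage
{
    public string Symbol { get; set; }
    public decimal Price { get; set; }
}

// Handler on the subscribing side.
public class QuoteAddedHandler : IHandleMessages<QuoteAdded>
{
    public void Handle(QuoteAdded message)
    {
        // ... update client-facing state / raise a local event here ...
    }
}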
Hope that helps.
I have found myself responsible for carrying on the development of a system which I did not originally design, and I can't ask the original designers why certain design decisions were taken, as they are no longer here. I am a junior developer on design issues, so I didn't really know what to ask when I started on the project, which was my first SOA/WCF project.
The system has 7 WCF services, which will grow to 9, each self-hosted in a separate console app/Windows service. All of them are single instance and single threaded. All services have the same OperationContract: they expose a Register() and a Send() method. When client services want to connect to another service, they first call Register(); if that succeeds, they do all the rest of their communication with Send(). We have a DataContract that has an enum MessageType and a Content property which can contain other DataContract "payloads". What the service does with the message is determined by the MessageType enum... everything comes through the Send() method and then gets routed to a switch statement... I suspect this is unusual.
Register() and Send() are actually OneWay and async... ALL results from services are returned to client services by a WCF CallbackContract. I believe the reason for using CallbackContracts is to facilitate the publish-subscribe model we are using. The problem is that not all of our communication fits publish-subscribe, and using CallbackContracts means we have to include source details in returned result messages so clients can work out what the returned results were originally for... again, clients have switch statements to work out what to do with messages arriving from services, based on the MessageType (and other embedded details).
In terms of topology: the services form "nodes" in a graph. Each service has a hardcoded list of other services it must connect to when it starts, and won't allow client services to Register with it until it has made all of the connections it needs. As an example, we have a LoggingService and a DataAccessService. The DataAccessService is a client of the LoggingService, so the DataAccessService will attempt to Register with the LoggingService when it starts. Until it can successfully Register, the DataAccessService will not allow any clients to Register with it. The result is that when the system is fired up as a whole, the services start up in a cascading manner. I don't see this as an issue, but is it unusual?
To make matters more complex, one of the system's requirements is that services or "nodes" do not need to be directly registered with one another in order to send messages to one another, but can communicate via indirect links. For example, say we have 3 services A, B and C connected in a chain; A can send a message to C via B... using 2 hops.
I was actually tasked with this and wrote the routing system; it was fun, but the lead left before I could ask why it was really needed. As far as I can see, there is no reason why services cannot just connect directly to the other services they need. What's more, I had to write a reliability system on top of everything, as the requirement was to have reliable messaging across nodes in the system, whereas with simple point-to-point links WCF reliability does the job.
Prior to this project I had only worked on WinForms desktop apps for 3 years, so I didn't know any better. My suspicion is that things are overcomplicated with this project. I guess to summarise, my questions are:
1) Is this idea of a graph topology with messages hopping over indirect links unusual? Why not just connect services directly to the services that they need to access (which in reality is what we do anyway... I don't think we have any messages hopping)?
2) Is exposing just 2 methods in the OperationContract and using a MessageType enum to determine what the message is for/what to do with it unusual? Shouldn't a WCF service expose lots of methods with specific purposes instead, with the client choosing which methods it wants to call?
3) Is doing all communication back to a client via CallbackContracts unusual? Surely sync or async request-response is simpler.
4) Is the idea of a service not allowing client services to connect to it (Register) until it has connected to all of the services to which it is a client a sound design? I think this is the only design aspect I agree with; I mean, the DataAccessService should not accept clients until it has a connection with the logging service.
I have so many WCF questions, more will come in later threads. Thanks in advance.
Well, the whole thing seems a bit odd, agreed.
All of them are single instance and single threaded.
That's definitely going to come back and cause massive performance headaches - guaranteed. I don't understand why anyone would want to write a singleton WCF service to begin with (except for a few edge cases, where it does make sense), and if you do have a singleton WCF service, to get any decent performance, it must be multi-threaded (which is tricky programming, and is why I almost always advise against it).
All services have the same OperationContract: they expose a Register() and Send() method.
That's rather odd, too. So anyone calling will first call .Register(), and then call .Send() with different parameters several times?? Funny design, really... The SOA assumption is that you design your services to be the model of a set of functionality you want to expose to the outside world, e.g. your CustomerService might have methods like GetCustomerByID, GetAllCustomersByCountry, and so on - depending on what you need.
Having just a single Send() method with parameters which define what is being done seems a bit... unusual and not very intuitive/clear.
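For contrast, a purpose-specific contract along those lines might look like this (a sketch only - the names are illustrative):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    // Each operation says exactly what it does - no MessageType switch needed.
    [OperationContract]
    Customer GetCustomerByID(int customerId);

    [OperationContract]
    List<Customer> GetAllCustomersByCountry(string countryCode);
}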
Is this idea of a graph topology with messages hopping over indirect links unusual?
Not necessarily. It can make sense to expose just a single interface to the outside world, and then use some internal backend services to do the actual work. .NET 4 will actually introduce a RoutingService in WCF which makes these kind of scenarios easier. I don't think this is a big no-no.
Is doing all communication back to a client via CallbackContracts unusual?
Yes, unusual, fragile, messy - if you can ever do without it - go for it. If you have mostly simple calls, like GetCustomerByID - make those a standard Request/Response call - the client requests something (by supplying a Customer ID) and gets back a Customer object as a return value. Much much simpler!
If you do have long-running service calls, that might take minutes or more to complete - then you might consider One-Way calls which just deposit a request into a queue, and that request gets handled later on. Typically, here, you can either deposit the answer into a response queue which the client then checks, or you can have two additional service methods which give you the status of a request (is it done yet?) and a second method to retrieve the result(s) of that request.
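A rough sketch of that second variant, with one method to submit the request and two to poll for status and results (all names here are invented):

using System.ServiceModel;

public enum RequestStatus { Pending, InProgress, Completed, Failed }

[ServiceContract]
public interface ILongRunningService
{
    // Drops the request into a queue and returns immediately.
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(string requestId, string payload);

    // The client polls these until the work is done.
    [OperationContract]
    RequestStatus GetRequestStatus(string requestId);

    [OperationContract]
    string GetRequestResult(string requestId);
}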
Hope that helps to get you started !
All services have the same OperationContract: they expose a Register() and Send() method.
Your design seems unusual in some parts, especially exposing only two operations. I haven't worked with WCF; we use Java. But based on my understanding, the whole purpose of web services is to expose operations that your partners can utilise.
Having only two operations looks like an odd design to me. You generally expose your API using WSDL. In this case the WSDL would add nothing of value to the partners, unless you have a lot of documentation. Generally the operation name should be self-explanatory. Right now your system cannot be used by partners without internal knowledge.
Is doing all communication back to a client via CallbackContracts unusual. Surely sync or asyc request-response is simpler.
I agree with you. Async should only be used for long-running processes. Async adds the overhead of correlation.