In what scenarios is a reliable session recommended? - WCF

In a few words, if I am not wrong, a session is used when I want to ensure that messages are sent in order, and to be able to use sessions a reliable connection is needed.
But my doubt is: what kind of applications need that? In my case it is a simple application in which a client requests data from a database through a service; the service gets the data from the database and sends the results back to the client. The client can also request to add, modify or delete data in the database. In this case, do I need a reliable connection and sessions or not?
Thanks.

A session presumes that you want to retain some data for a period of time. That period, as far as the session is concerned, follows the client's lifecycle: when the client opens the proxy, both the service instance and the session are created; when the client closes the proxy, the service instance and the session are torn down. There is an exception when closing the proxy does not take effect right away, which happens when you invoke a one-way operation: the service keeps working for as long as the operation runs, despite the fact that it has already been told to get rid of the instance.
Based on the information provided, I assume the best choice would be PerCall. You do not store any data between calls, and every single call can be treated independently. Additionally, set ConcurrencyMode to Multiple so that calls can be dispatched concurrently.
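To illustrate (a minimal sketch; the contract and type names are invented, not from the question):

using System.ServiceModel;

public class Customer { }   // placeholder data type

[ServiceContract]
public interface ICustomerData
{
    [OperationContract]
    Customer GetCustomer(int id);

    [OperationContract]
    void UpdateCustomer(Customer customer);
}

// A new service instance per call, calls dispatched concurrently: no state is
// kept between requests, so neither a session nor a reliable session is needed.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class CustomerDataService : ICustomerData
{
    public Customer GetCustomer(int id)
    {
        // read from the database and return the result
        return new Customer();
    }

    public void UpdateCustomer(Customer customer)
    {
        // add/modify/delete in the database
    }
}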
Personally, I find sessions useful with MSMQ, whenever I want a specific number of messages to be wrapped into a single queue message. If an error occurs, regardless of which message caused it, the whole queue message is rolled back.

Related

WCF service writes log only if client receives results

I'm working on a WCF service to help our new code interoperate with a legacy system. The process goes like this:
Client calls the service with a request for the legacy system.
Service writes the request into a database.
Legacy system services request from the DB in its own time and writes results back into the DB (updating a status flag to say results are ready).
Client retrieves results by calling a second service method, which polls the DB until the ready flag is set.
Just before returning the results, the service updates the status flag to client has results, so that the related DB rows can be deleted.
My concern is the race condition at the last step. I can see this happening:
Service updates status to client has results.
Client times out after waiting for the service to poll the DB.
Service tries to return results. Hilarity ensues.
One way to solve this would be to have three service calls instead of two: the second call retrieves results, and the last one is an explicit acknowledgement by the client that it has them. I'd like to know whether there is a way which doesn't impose this extra "protocol" burden on the client though.
I've looked briefly into using transactions in WCF, and it sounds like they might be able to do what I need. The client (optionally) starts a transaction, flows it to the service, which uses it if it's there, and commits it when done. This seems as if it implicitly does the "third call".
Does this idea have any merit? Any disadvantages that you can see? Are there any other avenues I could explore?
Using transaction flow is possible, but flowing a transaction in a polling scenario (in each poll call) is terrible architecture. What you generally need is to flow the transaction only for the real read operation, where the service modifies the record and returns data back to the client. The client will commit the transaction, and that will commit the changes performed by the service.
Using transactional processing places some additional requirements on your service and clients.
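As a rough sketch of the read operation with transaction flow (contract and type names are invented; the binding would also need transaction flow enabled, e.g. wsHttpBinding or netTcpBinding with transactionFlow="true"):

using System.ServiceModel;

public class LegacyResult { }   // placeholder for the rows read back

[ServiceContract]
public interface IResultService
{
    // Only the real read operation lets the client's transaction flow in.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    LegacyResult ReadResults(string requestId);
}

public class ResultService : IResultService
{
    // The service enlists in the flowed transaction: marking the rows as
    // "client has results" only commits when the client commits.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public LegacyResult ReadResults(string requestId)
    {
        // update the status flag and read the rows inside the ambient transaction
        return new LegacyResult();
    }
}

On the client side, the call would sit inside a TransactionScope that is completed only once the results have been safely received.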
Another approach can be transactional MSMQ:
Client calls the service with a request for the legacy system = client sends a message to the service's queue
Service writes the request into a database = service processes the message from its queue
Legacy system services request from the DB in its own time and writes results back into the DB (updating a status flag to say results are ready).
Service polls the database and places messages into the correct client queues. Placing the message and modifying the database records run in a single transaction.
Client processes incoming message
A transactional queue allows transactional reading (the message is removed from the queue only if the transaction is committed) and transactional writing (the message is added to the queue only if the transaction is committed). That lets you delete the records before the client reads the message, because the message remains in the queue until the client successfully reads it (or until it times out, and even then it can be passed to an error queue).
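If you go the WCF route for the queues, the binding setup might look roughly like this (a sketch; the values are illustrative):

using System;
using System.ServiceModel;

static class QueueBindings
{
    // A durable, exactly-once MSMQ binding: a message leaves the queue only when
    // the receiving transaction commits, and reappears (or ends up in a dead-letter
    // queue) if the transaction rolls back or the message times out.
    public static NetMsmqBinding CreateTransactionalBinding()
    {
        return new NetMsmqBinding(NetMsmqSecurityMode.None)
        {
            Durable = true,
            ExactlyOnce = true,
            TimeToLive = TimeSpan.FromMinutes(30)   // illustrative value
        };
    }
}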
In both cases you should think about clients who will consume the service. Transaction flowing can be interoperable but not every web service stack supports it. MSMQ is not interoperable.
Why not reduce the likelihood of the client timing out by doing this instead:
Client calls service with a request for the legacy system.
Service writes the request into a database.
Legacy system services request from the DB in its own time and writes results back into the DB (updating a status flag to say results are ready).
Client calls a service to find out whether the results are ready. NB. no polling: just returns with an immediate yes or no.
If the results are NOT ready, client waits a bit and then goes back to step 4.
If the results ARE ready, call the service to retrieve the results. The service can update the status to "Client has results" at that point.
By doing this, the client won't be waiting a prolonged period for the service call in step 4 to return, and the chances of a timeout should be minimal.
However, you're never going to be 100% certain that the client has received the results unless the client makes a final service call to say so. (What if, for example, the client dies after making the very last request?)
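For illustration, the flow above might map to a contract shaped roughly like this (all names are invented):

using System.ServiceModel;

public class LegacyRequest { }   // placeholder types
public class LegacyResult { }

[ServiceContract]
public interface ILegacyGateway
{
    [OperationContract]
    string SubmitRequest(LegacyRequest request);   // writes to the DB, returns a request id

    [OperationContract]
    bool AreResultsReady(string requestId);        // immediate yes/no, no server-side polling

    [OperationContract]
    LegacyResult GetResults(string requestId);     // returns rows and marks "client has results"
}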

WCF Server Push connectivity test. Ping()?

Using techniques as hinted at in:
http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.callbackcontract.aspx
I am implementing a ServerPush setup for my API to get realtime notifications from a server of events (no polling). Basically, the Server has a RegisterMe() and UnregisterMe() method and the client has a callback method called Announcement(string message) that, through the CallbackContract mechanisms in WCF, the server can call. This seems to work well.
Unfortunately, in this setup, if the Server were to crash or is otherwise unavailable, the Client won't know since it is only listening for messages. Silence on the line could mean no Announcements or it could mean that the server is not available.
Since my goal is to reduce polling rather than to gain immediacy, I don't mind adding a void Ping() method on the server alongside RegisterMe() and UnregisterMe() that merely exists to test connectivity to the server. Periodically calling this method would, I believe, ensure that we're still connected (and also that no Announcements have been dropped by the transport, since this is TCP).
But is the Ping() method necessary or is this connectivity test otherwise available as part of WCF by default - like serverProxy.IsStillConnected() or something. As I understand it, the channel's State would only return Faulted or Closed AFTER a failed Ping(), but not instead of it.
2) From a broader perspective, is this callback approach solid? This is not for http or ajax - the number of connected clients will be few (tens of clients, max). Are there serious problems with this approach? As this seems to be a mild risk, how can I limit a slow/malicious client from blocking the server by not processing its callback queue fast enough? Is there a kind of timeout specific to the callback that I can set without affecting other operations?
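For reference, the contract shape being described might look something like this (a sketch; RegisterMe, UnregisterMe and Announcement come from the question, everything else is illustrative):

using System.ServiceModel;

public interface IAnnouncementCallback
{
    [OperationContract(IsOneWay = true)]
    void Announcement(string message);   // pushed by the server
}

[ServiceContract(SessionMode = SessionMode.Required,
                 CallbackContract = typeof(IAnnouncementCallback))]
public interface IServerPushService
{
    [OperationContract]
    void RegisterMe();

    [OperationContract]
    void UnregisterMe();

    // Cheap round trip purely for health checking: if the channel is gone,
    // this call throws/faults, which is the signal the client is after.
    [OperationContract]
    void Ping();
}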
Your approach sounds reasonable; here are some links that may or may not help (they are not exactly related):
Detecting Client Death in WCF Duplex Contracts
http://tomasz.janczuk.org/2009/08/performance-of-http-polling-duplex.html
Having some health check built into your application protocol makes sense.
If you are worried about malicious clients, then add authorization.
The second link I shared above has a sample pub/sub server, you might be able to use this code. A couple things to watch out for -- consider pushing notifications via async calls or on a separate thread. And set the sendTimeout on the tcp binding.
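A sketch of that last point, assuming a NetTcpBinding since the question mentions TCP (the value is illustrative):

using System;
using System.ServiceModel;

static class PushBindings
{
    // Bound how long a push to a slow (or malicious) client can hold up the server.
    public static NetTcpBinding CreateCallbackBinding()
    {
        return new NetTcpBinding(SecurityMode.Transport)
        {
            SendTimeout = TimeSpan.FromSeconds(5)   // illustrative value
        };
    }
}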
HTH
I wrote a WCF application and encountered a similar problem. My server checked that clients had not 'pulled the plug' by periodically sending them a ping. The actual send method (asynchronous, since this was the server) had a timeout of 30 seconds. The client simply checked that it received the data every 30 seconds, while the server would catch an exception if the timeout was reached.
Authorisation was required to connect to the server (using the built-in feature of WCF that forces the connecting party to call a particular method first), so from a malicious-client perspective you could easily add code to check for and ban an account that does something suspicious, while disconnecting users who do not authenticate.
As the server I wrote was asynchronous, there wasn't really any way to block it. I guess that addresses your last point: the asynchronous send method fires off the ping (and any other outgoing data) and returns immediately. In the SendEnd method it would catch the timeout exception (sometimes several for the same client) and disconnect that client, without any blocking or freezing of the server.
Hope that helps.
You could use a publisher / subscriber service similar to the one suggested by Juval:
http://msdn.microsoft.com/en-us/magazine/cc163537.aspx
This would allow you to persist the subscribers if losing the server is a typical scenario. The publish method in this example also calls each subscriber on a separate thread, so a few dead subscribers will not block the others...
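The per-subscriber dispatch idea looks roughly like this (a sketch of the idea only, not the code from the linked article):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class Publisher
{
    // Notify each subscriber on its own worker so one dead or slow subscriber
    // cannot hold up the others.
    public static void Publish(IEnumerable<Action<string>> subscribers, string message)
    {
        foreach (var subscriber in subscribers)
        {
            var callback = subscriber;   // capture the loop variable
            Task.Run(() =>
            {
                try { callback(message); }
                catch { /* drop or unregister the faulted subscriber here */ }
            });
        }
    }
}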

WCF - AsyncPattern=true or IsOneWay=true

A few methods in my WCF service take quite a long time - generating reports and sending e-mails.
According to the current requirement, the client application should just submit the request and not wait for the whole process to complete. This will allow the user to continue doing other operations in the client application instead of waiting for the whole process to finish.
I am in a doubt over which way to go:
AsyncPattern = true OR
IsOneWay=true
Please guide.
It can be both.
Generally I see no reason for a WCF operation not to be asynchronous, other than the developer being lazy.
You should not compare them, because they are not comparable.
In short, AsyncPattern=True performs asynchronous invocation, regardless of whether you're returning a value or not.
OneWay works only with void methods; the call does not wait for the operation to execute, but it can still block the calling thread until the transport has accepted the message.
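To make the distinction concrete, here is a sketch of both shapes on one contract (the names are invented for illustration):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // One-way: must return void; the client does not wait for the work to finish.
    [OperationContract(IsOneWay = true)]
    void QueueReport(string reportId);

    // AsyncPattern: the service implements Begin/End to free server threads while
    // waiting on I/O; the wire contract (and whether the client blocks) is unchanged.
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGenerateReport(string reportId, AsyncCallback callback, object state);
    string EndGenerateReport(IAsyncResult result);
}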
I know this is an old post, but IMO in your scenario you should be using IsOneWay, on the basis that you don't care about the server's result. Depending on whether you eventually need to notify the client (e.g. of completion or failure of the server-side job), you might also need to look at changing the interface to use SessionMode=Required and then using a duplex binding.
Even if you did want to use asynchronous two-way communication because your client DID care about the result, there are two different concepts:
AsyncPattern=true on the server - you would do this to free up server resources, e.g. if the underlying resource (SSRS for reporting, a mail API, etc.) supports asynchronous operations. But this benefits the server, not the client.
On the client, you can always generate your service reference proxy with "Generate Asynchronous Operations" ticked - in which case your client won't block and the callback will be invoked when the operation completes.
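And if the client does eventually need to hear about completion or failure, the duplex shape suggested above might look roughly like this (names are invented):

using System.ServiceModel;

public interface IJobCallback
{
    [OperationContract(IsOneWay = true)]
    void JobCompleted(string jobId, bool succeeded);   // pushed back to the client later
}

// One-way submit plus a session-bound callback for eventual completion/failure.
[ServiceContract(SessionMode = SessionMode.Required,
                 CallbackContract = typeof(IJobCallback))]
public interface IReportJobService
{
    [OperationContract(IsOneWay = true)]
    void SubmitReportJob(string jobId);
}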

REST, WCF and Queues

I created a RESTful service using WCF which calculates some value and then returns a response to the client.
I am expecting a lot of traffic, so I am not sure whether I need to implement queues manually or whether that is unnecessary for processing all the client requests.
Actually, I am receiving measurements from clients which have to be stored in the database - each client sends a measurement every 200 ms, so if there are multiple clients there could be a lot of requests.
And there are other operations performed on the received data. For example, a client could send the instruction "give me the average of the last 200 measurements", so it could take some time to calculate this value, and in the meantime the same request could come from another client.
I would be very thankful if anyone could give any advice on how to create a reliable service using WCF.
Thanks!
You could use the MsmqBinding and utilize the method implemented by eedsi9n. However, from what I gather from this post, you're looking for something along the lines of a pub/sub type of architecture.
This can be implemented with the WSDualHttpBinding, which allows subscribers to subscribe to events. The publisher will then notify the subscriber when the action is completed.
Therefore you could have MSMQ running behind the scenes. The client subscribes to certain events, then publishes a message that needs to be processed. The client sits there and does its own work (because it's all async), and when the publisher is done working on the message it publishes an event (the event your client subscribed to) letting the client know that it's done. That way you don't have to implement a polling strategy.
There are pre-canned solutions for this as well, such as NServiceBus, MassTransit, and Rhino Service Bus.
If you are using a web service, the Transmission Control Protocol (TCP/IP) will act as the queue to a certain degree.
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
This guarantees that if the client sends packets A, B, then C, the server will receive them in that order: A, B, then C. If you must reply back to the client in the same order as the requests, then you might need a queue.
By default the maximum number of ASP.NET worker threads is set to 12 per CPU core, so on a dual-core machine you can run 24 requests at a time. Depending on how long the calculation takes and what you mean by "a lot of traffic", you could try different strategies.
The simplest one is to use service timeouts and serviceThrottling: handle only what you can handle, and reject the requests you can't.
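Throttling can be set in configuration or in code; a sketch in code (the numbers are purely illustrative):

using System.ServiceModel;
using System.ServiceModel.Description;

static class ThrottleSetup
{
    // Cap concurrent work so the service sheds load instead of falling over.
    public static void ApplyThrottle(ServiceHost host)
    {
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 64;       // purely illustrative numbers
        throttle.MaxConcurrentInstances = 64;
        throttle.MaxConcurrentSessions = 100;
    }
}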
If that's not an option, increase hardware. That's the second option.
Finally, you could make the service completely asynchronous. Implement two methods,
string PostCalc(...) and double GetCalc(string id). PostCalc accepts the parameters, puts them into a queue (or a database) and returns a GUID immediately (I like using string instead of Guid). The client can then use the returned GUID as a claim ticket and call GetCalc(string id) every few seconds; if the calculation has not finished yet, you can return a 404 for REST. The calculation must now be done by a separate process that monitors the queue.
This third option is the most complicated, but its outcome is similar to the first option of putting a cap on incoming requests.
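As a WCF REST sketch of the PostCalc/GetCalc idea (the attributes and types are illustrative; the "not ready" case would set the 404 through WebOperationContext in the implementation):

using System.ServiceModel;
using System.ServiceModel.Web;

public class CalcRequest { }   // placeholder for the submitted measurements/parameters

[ServiceContract]
public interface ICalcService
{
    // Accept the work, put it on the queue, and return a claim ticket immediately.
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "calc")]
    string PostCalc(CalcRequest request);

    // Return the value if ready; if not, the implementation sets a 404 via
    // WebOperationContext.Current.OutgoingResponse so the client retries later.
    [OperationContract]
    [WebGet(UriTemplate = "calc/{id}")]
    double GetCalc(string id);
}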
It will depend on what you mean by "calculates some value" and "a lot of traffic". You could do some load testing and see how the #requests/second evolves with the traffic.
There's nothing WCF-specific here if you are RESTful.
The GET for an average could return a URI where the answer will be waiting once the server finishes calculating (if it is indeed a long operation).
Regarding getting the measurements: you didn't specify the freshness needed (i.e. when you get a request for an average, how fresh do the results need to be). Also, you did not specify the relative frequency of queries vs. new measurements.
In any event you can (and IMHO should) put a queue behind the endpoint (assuming measuring your performance proves you need it). If you change the WCF binding you might still be RESTful, but you will not benefit from the standards-based approach of REST over HTTP.

Best approach for WCF client

I have a client application that uses a WCF service to insert some data into a backend database. The client application is going to call the service on a per-event basis (which could be every hour or every second).
I'm wondering what's the best way of calling that service.
Should I create communication channel and keep it open all the time, or should I close channel after each call and create it again?
The first question is whether your server needs to maintain any state about the client directly (i.e. are you doing session-like transactions?). If you are, you will need to manage how the server holds that information between communications.
My initial feeling from your question is that if there is no need to leave a connection open, then close it each time and recreate a new connection on demand. This avoids issues where a connection can be placed into a faulted state between calls. The overhead of creating and destroying connections is minimal, and it will (probably) save you a lot of debugging time when something goes wrong.
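If you do go with close-and-recreate, a common pattern is to keep the ChannelFactory around (building it is the expensive part) and create a channel per call, closing it on success and aborting it on failure. A sketch:

using System;
using System.ServiceModel;

static class ServiceCaller
{
    // Create a channel per call and make sure it is cleaned up properly:
    // Close() on success, Abort() if the channel has faulted.
    public static void Call<TChannel>(ChannelFactory<TChannel> factory,
                                      Action<TChannel> action)
    {
        TChannel proxy = factory.CreateChannel();
        var channel = (IClientChannel)(object)proxy;
        try
        {
            action(proxy);
            channel.Close();
        }
        catch
        {
            channel.Abort();
            throw;
        }
    }
}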
I would think you probably want to implement a keep-alive pattern, with a configurable duration that tells your underlying mechanism to close the connection once it has gone past the keep-alive duration with zero communication activity.