WCF service parallel processing

I have a system that requests data from a WCF web service.
It works like this:
WCF1 calls WCF2, WCF2 calls WCF3, and WCF3 does its job and returns the response.
My problem is that some of the operations take a long time (about 2 minutes) to process.
So if WCF1 sends a request that takes a long time, and then another request from WCF1 that should take a second, the second request waits until the first one finishes.
I read about the problem, and some users said to use
<ServiceBehavior(ConcurrencyMode:=ConcurrencyMode.Multiple, InstanceContextMode:=InstanceContextMode.PerCall)> _
but this doesn't solve the problem 100%.
Can you please advise?

I do not think those settings will solve your problem.
From the source:
In PerCall instancing, concurrency is not relevant, because each message is processed by a new InstanceContext and, therefore, never more than one thread is active in the InstanceContext.
Bottom line: because you are using PerCall activation, there is no point in adding ConcurrencyMode.Multiple, as each incoming request gets its own instance of the service class to handle it.
You should try to narrow down what is really causing the performance problem. Look at the underlying code that your WCF services are calling.
EDIT
From another SO post.
In terms of behavior: ConcurrencyMode does not matter for a PerCall service instance.
In terms of performance: a PerCall service that is ConcurrencyMode.Multiple should be slightly faster because it's not creating and acquiring the (unneeded) thread lock that ConcurrencyMode.Single uses.
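To make the point concrete, here is a minimal C# sketch (the contract and names are hypothetical): with PerCall, every request gets its own object, so instance state is never shared between calls and the ConcurrencyMode setting has no practical effect.

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork(string input);
}

// Each incoming call gets a fresh MyService instance, so the field below is
// never shared between two concurrent requests and no locking is needed.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)] // harmless but redundant for PerCall
public class MyService : IMyService
{
    private int callCount; // always starts at 0 for every call

    public string DoWork(string input)
    {
        callCount++;               // safe without locks on a per-call instance
        return input + " (done)";  // placeholder work
    }
}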

Change timeout for each WCF method or call

I have a quite large, "old" WCF service with many different methods.
Most of these methods are "normal", so they should answer in less than 10 seconds, but there are several (8 or 9) that are long processes, so they can take a long time to return a response.
The receiveTimeout and sendTimeout were set to 00:40:00 to make sure those processes had enough time to complete.
The problem is that sometimes we have connection issues and the "normal" methods take a really long time to fail...
They are all in the same service because they use a really big model, and we wanted to reuse the model from the service in every call (not having a PersonsService.User and a RobotsService.User... because they are the same class in different services).
The first solution I can think of is to move those long processes into a different service and set a short timeout on the normal service... but that would require a lot of changes because of how the model is used...
Is there any way to set a different timeout for each call, or per service method? Or should I split up the service anyway?
Thanks in advance!!
First of all, the timeout to configure in your case is OperationTimeout, which controls how long the client waits for the service to reply before timing out. You can modify the operation timeout before making a call from the client side.
To set the OperationTimeout on the channel, cast your proxy/channel instance to IContextChannel (or IClientChannel, which derives from it) and set OperationTimeout.
For example:
// Cast the channel and raise the timeout before making the long-running call;
// IClientChannel derives from IContextChannel, which exposes OperationTimeout.
IClientChannel contextChannel = channel as IClientChannel;
contextChannel.OperationTimeout = TimeSpan.FromMinutes(10);
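A slightly fuller sketch, assuming a generated proxy (MyServiceClient, RunLongProcess and GetUser are hypothetical names): raise the timeout only for the long-running call and restore a short one afterwards.

// MyServiceClient is a hypothetical svcutil-generated proxy (ClientBase<T>);
// its InnerChannel implements IClientChannel, which derives from IContextChannel.
var client = new MyServiceClient();
IContextChannel contextChannel = client.InnerChannel;

contextChannel.OperationTimeout = TimeSpan.FromMinutes(40); // long-running method
client.RunLongProcess();

contextChannel.OperationTimeout = TimeSpan.FromSeconds(10); // back to a short timeout
client.GetUser(42);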
HTH,
Amit

WCF Multiple requests from same client

I am building a WCF service and I need clients to be able to obtain multiple results at the same time.
For example, 5 calls to void UploadPhoto(byte[] photo)
and 1 call to string GetInfo().
If I understand it correctly, whenever I make a request to the service, I need to get the response to the first request before the second is processed. Is that correct?
Thanks
You can make multiple concurrent calls if you increase System.Net.ServicePointManager.DefaultConnectionLimit; the default is 2.
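For example (the value 10 is just an illustration), set it once at client start-up, before any channels are opened:

// Raise the per-host limit on outgoing HTTP connections (the default is 2).
System.Net.ServicePointManager.DefaultConnectionLimit = 10;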
You also need to set the WCF service up as a per-call service to process concurrent requests.
That is not quite correct.
If you call a WCF (or other web) service synchronously, then you have to wait for the response before doing anything else.
However, you can call a WCF service asynchronously, in which case you do not have to wait for the result. You create a handler that handles the result when it comes back, while the main program continues.
Have a look at Ladislav's answer to this question: Difference between WCF sync and async call?
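As a rough sketch inside an async method (PhotoServiceClient is a hypothetical task-based proxy generated with "Generate task-based operations", and photos is assumed to be a collection of byte[] image data), the client can start several calls and keep working:

// Start the uploads without waiting for each one to finish.
var client = new PhotoServiceClient();
var uploads = photos.Select(p => client.UploadPhotoAsync(p)).ToArray();

string info = await client.GetInfoAsync(); // not blocked by the pending uploads

await Task.WhenAll(uploads);               // join the uploads when their results are needed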

How does WCF instancing work

I am trying to understand how instancing works in WCF. I have a WCF service with InstanceContextMode set to PerCall (so for each call from every client a new instance is created) and ConcurrencyMode set to Single (so the service instance executes exactly one or zero operation calls at a time).
So with this I understand that when a client connects, a new instance is created. But what happens when the client leaves the service? Does the instance die? The reason I ask is that I need to implement a ConcurrentQueue in the service. A client will connect to the service, put loads of data into it to be processed, and then leave. The workers will work off the queue. After the work is finished I need the instance to be destroyed.
Basically, learning from the "WCF Master Class" taught by Juval Lowy, per-call activation is the preferred choice for services that need to scale, i.e. that need to handle lots of concurrent requests.
Why?
With the per-call, each incoming request (up to a configurable limit) gets its own, fresh, isolated instance of the service class to handle the request. Instantiating a service class (a plain old .NET class) is not a big overhead - and the WCF runtime can easily manage 10, 20, 50 concurrently running service instances (if your server hardware can handle it). Since each request gets its own service instance, that instance just handles one thread at a time - and it's totally easy to program and maintain, no fussy locks and stuff needed to make it thread-safe.
Using a singleton service (InstanceContextMode=Single) is either a terrible bottleneck (if you have ConcurrencyMode=Single - then each request is serialized, handled one after another), or if you want decent performance, you need ConcurrencyMode=Multiple, but that means you have one instance of your service class handling multiple concurrent threads - and in that case, you as a programmer of that service class must make 100% sure that all your code, all your access to variables etc. is 100% thread-safe - and that's quite a task indeed! Only very few programmers really master this black art.
In my opinion, the overhead of creating service class instances in the per-call scenario is nothing compared to the requirements of creating a fully thread-safe implementation for a multi-threaded singleton WCF service class.
So in your concrete example with a central queue, I would:
create a simple per-call WCF service that gets called by your clients and only puts the message into the queue (in an appropriate fashion, e.g. possibly transforming the incoming data). This is a quick task, no heavy processing of any kind - so your service class will be very easy, very straightforward, and there is no big overhead in creating those class instances at all
create a worker service (a Windows NT service or something similar) that then reads the queue and does the processing - this is basically totally independent of any WCF code - it just dequeues and processes
So what I'm saying is: try to separate the service call (which delivers the data) from building up a queue and doing large, processing-intensive computation - split up the responsibilities: the WCF service should only receive the data, put it into a queue or database, and then be done with it - and a second, separate process should do the processing/heavy lifting. That keeps your WCF service lean 'n' mean.
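A minimal sketch of that split (all names are hypothetical; in practice the shared store would usually be MSMQ or a database table so that a separate worker process can drain it):

using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract]
public interface IIngestService
{
    [OperationContract]
    void Submit(string payload);          // keep the call cheap: just hand the data off
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class IngestService : IIngestService
{
    // Hypothetical shared store; replace with MSMQ or a database table so a
    // separate worker process (e.g. a Windows service) can dequeue and process.
    public static readonly ConcurrentQueue<string> Pending = new ConcurrentQueue<string>();

    public void Submit(string payload)
    {
        Pending.Enqueue(payload);         // quick enqueue, then return immediately
    }
}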
Yes, per-call means you will have a new instance of the service for each call. Once you set InstanceContextMode to PerCall and ConcurrencyMode to Single, it will be single-threaded per call. When the client is done with the job and leaves, the instance will be disposed. In this case, you want to be careful not to create your ConcurrentQueue multiple times; as far as I can imagine, you will need a single ConcurrentQueue - is that correct?
I would recommend you use InstanceContextMode=Single and ConcurrencyMode=Multiple; this scales better. If you use this scheme, you will have a single ConcurrentQueue and you can store all your items in that queue.
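A minimal sketch of that singleton scheme (the contract and names are hypothetical):

using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceContract]
public interface IQueueService
{
    [OperationContract]
    void Enqueue(string item);
}

// Singleton, multi-threaded: one instance serves all clients, so the queue is
// created exactly once and shared. ConcurrentQueue itself is thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class QueueService : IQueueService
{
    private readonly ConcurrentQueue<string> items = new ConcurrentQueue<string>();

    public void Enqueue(string item)
    {
        items.Enqueue(item);   // safe to call from many threads at once
    }
}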
One small note: ConcurrentQueue has a bug you should be aware of; check the bug database.

WCF call order in single concurrency mode

Assume a WCF service with ServiceBehavior.ConcurrencyMode = Single.
When exactly does the service start blocking for concurrent calls?
For example, say we have two clients: Slow and Fast.
At time 0 Slow starts a slow service call that includes a huge chunk of data.
At time 1 Fast makes a fast service call.
At time 2 the slow data finally arrives and the service code is executed on the server.
Assuming the buffers configured in WCF are larger than the huge chunk, which call gets executed first?
In other words, does blocking start when all call data has been received at the server side or when the client initiates the call?
Is the service blocked during the data transfer or only during code execution?
Unless you configure InstanceContextMode to Single as well, both calls will be executed concurrently. So suppose that you have InstanceContextMode set to Single.
I didn't test it, but I would expect the following behavior. Concurrency mode is a service behavior, so it takes effect once the service instance / instance context is resolved. In buffered mode that happens after the whole message has been received; in streaming mode it should happen after the message headers have been received. So with a buffered transport I would expect the fast client to be processed first, and with a streamed transport it depends on whether the message headers from the slow client were already received.
But as I wrote before, this is only my expectation.
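For reference, the transfer mode discussed above is chosen on the binding; a minimal sketch in code (the size limit is just an illustration):

// Buffered (the default): the whole message is read before the call is dispatched.
// Streamed: per the expectation above, dispatch may start once the headers arrive.
var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = 64L * 1024 * 1024  // allow large payloads when streaming
};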

WCF Architecture question - Fast response and Queuing

I have a simple WCF need - basically clients running in isolation and a server, so really client/server initially.
WCF helps us decouple the service layer and practise a SOA approach for scale.
All we are doing on the server (per-call/multiple concurrency) is writing to a db and then performing some IO for another system which will have immediate use for it - but this might change as (unknown) requirements build.
Speed: We need the service to be as quick as possible: 1 second is OK, 2 is slow, and some errors need to be sent back immediately.
I was considering using server async patterns, queues (MSMQ), Azure, etc. to allow the service method to queue the work and return quickly. NB: however, some processing might be 'online' in the WCF service (db write) with an immediate return of the response/error, while other work could be offline (IO). Disadvantage: this requires a means to call back the client if there is a show-stopper error, and design and development effort scales accordingly.
i) Although WCF allows for the service, I see the technology as providing an interprocess comm channel, and perhaps the actual service operations should run in Windows services. E.g. WCF writes to a db which a long-running service polls and picks up. As the system gets bigger, some operations may be genuinely fire-and-forget and long-running - completing, or being needed, hours later. We can take these out of the immediate loop. This is true decoupling, even if it slows us down. A WCF method can't hand work off to a service unless it is calling another WCF service, and it can't call a Windows service!
From an architectural viewpoint, is it OK to have some operations complete and return, and others pass to a true bus or service (by some mechanism)? Am I over-engineering this?
ii) As all the db and IO operations will take, say, 1 or 2 seconds max, I feel I might just call the service async from the client, wait for it to return, and then marshal back to the client UI. This is also simple. It might prove to be the wrong decision in the long run, but having said that, all service-layer operations would be in a separate dll so they could be called by another service later for scale. A method call could be marked as immediate or queued for processing, say.
Thoughts?
From an architectural viewpoint, is it OK to have some operations complete and return, and others pass to a true bus or service (by some mechanism)? Am I over-engineering this?
It is OK; it all depends on the detailed requirements.
Thought #1
Have operations return a unique request ID, and provide one operation that reports status by request ID.
Thought #2
Have operations return the result if they finish within X seconds, and otherwise return a request ID.
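A rough contract sketch of those two thoughts (all names are hypothetical):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IJobService
{
    // Thought #1: submitting returns a ticket immediately; status is polled separately.
    [OperationContract]
    Guid Submit(string payload);

    [OperationContract]
    string GetStatus(Guid requestId);   // e.g. "Queued", "Running", "Done", "Error: ..."

    // Thought #2 would add a variant that returns the result inline when the work
    // completes within X seconds, and otherwise falls back to returning the ID.
}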