Calling multiple services in parallel - wcf

I need to call two WCF services at the same time. Both services return a similar output (Y/N). If either service returns Y, I will not wait for the other. But if the first service returns N, then I will wait for the second response.

Launch two different threads; when one of them finishes with a Y, kill the other.
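For example, with task-based async proxies this could look roughly like the sketch below. This is only an illustration; callA and callB stand in for your two generated async proxy calls.

// Sketch only: run both calls in parallel and return as soon as either answers "Y".
// using System;
// using System.Threading.Tasks;
async Task<string> FirstYesOrBoth(Func<Task<string>> callA, Func<Task<string>> callB)
{
    Task<string> a = callA();
    Task<string> b = callB();

    Task<string> first = await Task.WhenAny(a, b);
    string firstResult = await first;
    if (firstResult == "Y")
        return "Y";                        // got a Y: no need to wait for the slower service

    Task<string> other = (first == a) ? b : a;
    return await other;                    // first answer was "N": wait for the second one
}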

Related

WCF Multiple requests from same client

I am building a WCF service and I need clients to be able to obtain multiple results at the same time.
For example, 5 calls to void UploadPhoto(byte[] photo);
and 1 call to string GetInfo().
If I understand it correctly, then whenever I make a request to the service, I have to get the response to the first request before the second one is processed. Is that correct?
Thanks
You can make multiple calls if you increase System.Net.ServicePointManager.DefaultConnectionLimit; the default is 2.
You need to configure the WCF service as a per-call service to process concurrent requests.
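For example (a sketch only; PhotoService and IPhotoService are placeholder names):

// Client side: raise the HTTP connection limit (default is 2) before creating proxies.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;

// Service side: per-call instancing so each request gets its own service instance.
// using System.ServiceModel;
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class PhotoService : IPhotoService
{
    public void UploadPhoto(byte[] photo) { /* store the photo */ }
    public string GetInfo() { return "info"; }
}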
That is not quite correct.
If you call a WCF (or other web) service synchronously, then you have to wait for the response before doing anything else.
However, you can call a WCF service asynchronously, in which case you do not have to wait for the result. You create a handler that processes the result when it comes back, while the main program continues.
Have a look at Ladislav's answer to this question: Difference between WCF sync and async call?
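For example, with a proxy generated with asynchronous operations enabled (all names here are illustrative, not from the question):

// using System;
var proxy = new MyServiceClient();
proxy.GetInfoCompleted += (sender, e) =>
{
    Console.WriteLine(e.Result);   // runs when the response arrives
};
proxy.GetInfoAsync();              // returns immediately; the main program keeps going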

Prioritize real time msgs over batch msgs using Queues/MDBs

In my application a specific service has a fixed capacity (e.g. 100 transactions at a time). Requests to the service arrive in real time as well as from batch jobs (queues). The real-time requests do not have a uniform distribution. I need a way to make sure that real-time jobs are processed before the batch jobs, while also making sure that at no time I exceed the threshold of the service.
Please evaluate the following approach.
Have two queues: A for real-time messages and B for batch jobs. Have a thread pool of size 100 (the service threshold) and let the
thread pool first try to pick messages from A, if there are any, else pick from B.
My application runs on WebLogic. I want to use MDBs instead of the thread pool, but there is no way to make an MDB listen to multiple queues.
Within JMS you can set a message priority which should be respected if possible. This may be something simple to try.
Another option could be to set a JMS property on the message with the client and use a message selector on the MDB. You could set MY_MESSAGE_TYPE=batch/rt and then have multiple MDBs deployed that are listening to the same queue but assigned to different work managers. Keep in mind that a Work Manager != a Thread Pool. You can also set a Request Class to ensure that if the batch pool is in use, the RT pool will not be starved for threads/CPU.
With this design I believe that if you have two MDBs, one with a message selector, messages that meet the selector criteria should be delivered to the MDB with that selector (RT) before the MDB with no selector (BATCH). This would be a fairly simple POC to do: set up a client that sends messages to the queue, some of which have the JMS property set to RT and others that do not have it set.
WebLogic 10.0 reference (which is still applicable): http://docs.oracle.com/cd/E11035_01/wls100/config_wls/self_tuned.html

Making a thread-unsafe DLL call in BizTalk Orchestration (or only running one Orchestration at a time)

I have an issue with a 3rd party DLL, which is NOT thread-safe, but which I need to call within an orchestration.
I'm making the DLL call within an Expression shape. The same DLL is called in a number of different orchestrations.
The problem I have is that for a series of incoming messages, BizTalk will run multiple orchestrations (or multiple instances of an orchestration) in parallel - which leads to exceptions within the DLL.
Is there any way around this, given that refactoring the DLL isn't an option? Or is there a way to throttle BizTalk to run only one orchestration at any one time? (I've seen some hacks restricting the worker thread pool to the number of processors, but this doesn't seem to help. We can't downgrade to a single-core machine!)
I would rather find a way of keeping the DLL happy (though I can't think how) than throttle BizTalk - but if there is a way to throttle that would be an acceptable short-term solution whilst we discuss with the 3rd party. (who are a large organisation and really should know better!)
Even on a single core machine, BizTalk will run concurrent orchestrations.
You could throttle the orchestration by implementing the singleton pattern in the orchestration.
You do this by creating a loop in the orchestration and having two receive shapes, one before the start of the loop and one inside the loop.
Both these receive are bound to the same inbound logical port.
You create a correlation set which specifies something like BTS.MessageType and set the first receive shape to initiate the correlation and the second receive to follow the correlation.
As long as the loop does not end you can guarantee that any message of a certain type will always be processed by the same instance of the orchestration.
However, using singletons is a design decision which comes with drawbacks. For example, throughput suffers, and you have to ensure that your singleton cannot suspend, else it will create a block for all subsequent messages.
Hope this helps.

Service instances in WCF

I'm using perfmon to examine my service's behaviour. I launch 6 instances of the client application on separate machines and send requests to the server from 120 threads (20 threads per client application).
I have examined the counters, and the maximum number of instances (I use the PerSession model and set the instance limit to 100) is 12, which I find strange, as response times from the service are around 120 seconds... I thought that allowing more instances would cause WCF to create more of them and, as a result, that response times would be quicker.
Any idea why WCF doesn't create even more instances of service?
Thanks Pawel
WCF services are throttled by default - it's a service behavior, which you can tweak easily.
See the MSDN docs on ServiceThrottling.
Here are the defaults:
<serviceThrottling
maxConcurrentCalls="16"
maxConcurrentInstances="Int.MaxValue"
maxConcurrentSessions="10" />
With these settings, you can easily control how many sessions or concurrent calls can be handled, and you can make sure your server isn't overwhelmed by (fraudulent) requests and brought to its knees.
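If you prefer code over config, the same throttling behavior can be set on the service host. This is only a sketch; MyService and the values below are placeholders:

// using System.ServiceModel;
// using System.ServiceModel.Description;
var host = new ServiceHost(typeof(MyService));
var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
if (throttle == null)
{
    throttle = new ServiceThrottlingBehavior();
    host.Description.Behaviors.Add(throttle);
}
throttle.MaxConcurrentCalls = 100;
throttle.MaxConcurrentSessions = 100;
throttle.MaxConcurrentInstances = 100;
host.Open();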
Ufff, last attempt to understand that silly WCF.
What I did now is:
create a client that starts 20 threads, where every thread sends requests to the service in a loop. The performance counter on the server claims that only 2 instances of the service object exist at any time. Average request time is about 40 seconds (I start measuring before the proxy call and finish after the call returns).
modify that client to start 5 threads and launch 4 instances of that client (to simulate the 20-thread behaviour of the previous example). Performance Monitor shows that 8 instances of the service object exist at any time. Average request time is 20 seconds.
Could somebody tell me what is going on? I thought there was a problem with the server not wanting to handle more concurrent requests, but apparently it is the client that causes the trouble and doesn't want to send more requests concurrently... Maybe there is some kind of configuration option that limits the client to no more than two requests at one time (buffer, throttling, etc.)...
The channel factory is created in every thread.
You might want to refer to this article and make adjustments to your WCF configuration (specifically maxConnections) to get the number of connections you want.
Consider using something like http://www.codeplex.com/WCFLoadTest to hit the service.
Also, perfmon will only get you so far. If you want to debug WCF service you should look at the SvcTraceViewer and SvcConfigEditor in the Windows SDK.
On your service binding, what have you set maxConnections to? Attempts to connect will block once the limit is reached.
Default is 10 I think.
http://msdn.microsoft.com/en-us/library/ms731379.aspx
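If you are on net.tcp, a client-side sketch would look like this (the contract name and address are placeholders):

// using System.ServiceModel;
var binding = new NetTcpBinding();
binding.MaxConnections = 50;   // default is 10; caps pooled connections per endpoint
var factory = new ChannelFactory<IMyService>(binding,
    new EndpointAddress("net.tcp://myserver:8000/MyService"));
IMyService channel = factory.CreateChannel();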

REST, WCF and Queues

I created a RESTful service using WCF which calculates some value and then returns a response to the client.
I am expecting a lot of traffic, so I am not sure whether I need to implement queuing manually or whether it is unnecessary in order to process all client requests.
Actually I am receiving measurements from clients which have to be stored in the database; each client sends a measurement every 200 ms, so if there are multiple clients there could be a lot of requests.
There are also other operations performed on the received data. For example, a client could send the instruction "give me the average of the last 200 measurements", so it could take some time to calculate this value, and in the meantime the same request could come from another client.
I would be very thankful if anyone could give any advice on how to create a reliable service using WCF.
Thanks!
You could use the MsmqBinding and utilize the method implemented by eedsi9n. However, from what I'm gathering from this post, you're looking for something along the lines of a pub/sub type of architecture.
This can be implemented with the WSDualHttpBinding, which allows subscribers to subscribe to events. The publisher will then notify the subscriber when the action is completed.
Therefore you could have MSMQ running behind the scenes. The client subscribes to certain events, then perhaps publishes a message that needs to be processed. The client sits there and does its own work (because it's all async), and when the publisher is done working on the message it can publish an event (the event your client subscribed to) letting you know that it's done. That way you don't have to implement a polling strategy.
There are pre-canned solutions for this as well, such as NServiceBus, MassTransit, and Rhino Service Bus.
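For illustration, a minimal sketch of such a duplex (callback) contract usable with WSDualHttpBinding; all names here are made up:

// using System.ServiceModel;
[ServiceContract(CallbackContract = typeof(IMeasurementCallback))]
public interface IMeasurementService
{
    [OperationContract(IsOneWay = true)]
    void RequestAverage(int lastN);        // fire-and-forget request from the client
}

public interface IMeasurementCallback
{
    [OperationContract(IsOneWay = true)]
    void AverageReady(double average);     // pushed to the client when the work is done
}

// Inside the service implementation, once the calculation finishes:
// OperationContext.Current.GetCallbackChannel<IMeasurementCallback>().AverageReady(result);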
If you are using a web service, Transmission Control Protocol (TCP/IP) will act as the queue to a certain degree.
TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer to another program on another computer.
This guarantees that if the client sends packets A, B, then C, the server will receive them in that order: A, B, then C. If you must reply back to the client in the same order as the requests, then you might need a queue.
By default, the maximum number of ASP.NET worker threads is set to 12 per CPU core, so on a dual-core machine you can run 24 connections at a time. Depending on how long the calculation takes and what you mean by "a lot of traffic", you could try different strategies.
The simplest one is to use serviceTimeouts and serviceThrottling and only handle what you can handle, and reject the ones you can't.
If that's not an option, increase hardware. That's the second option.
Finally, you could make the service completely asynchronous. Implement two methods,
string PostCalc(...) and double GetCalc(string id). PostCalc accepts the parameters, stuffs them into a queue (or a database) and returns a GUID immediately (I like using string instead of Guid). The client can use the returned GUID as a claim ticket and call GetCalc(string id) every few seconds; if the calculation has not finished yet, you can return 404 for REST. The calculation must now be done by a separate process that monitors the queue.
This third option is the most complicated, but the outcome is similar to that of the first option of putting a cap on incoming requests.
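For illustration, a sketch of that claim-ticket contract using the WCF web programming model (WebGet/WebInvoke). CalcRequest, the interface name and the URI templates are placeholders:

// using System.ServiceModel;
// using System.ServiceModel.Web;
[ServiceContract]
public interface ICalcService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "calc")]
    string PostCalc(CalcRequest request);  // enqueue the work, return a GUID string as the claim ticket

    [OperationContract]
    [WebGet(UriTemplate = "calc/{id}")]
    double GetCalc(string id);             // return the result; while it is still pending, set
                                           // WebOperationContext.Current.OutgoingResponse.StatusCode
                                           //     = System.Net.HttpStatusCode.NotFound;
}

A separate worker process (or background thread) would drain the queue and write results where GetCalc can find them.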
It will depend on what you mean by "calculates some value" and "a lot of traffic". You could do some load testing and see how the #requests/second evolves with the traffic.
There's nothing WCF-specific here if you are RESTful.
The GET for an average would return a URI where the answer will be waiting once the server finishes calculating (if it is indeed a long operation).
Regarding getting the measurements: you didn't specify the freshness needed (i.e. when you get a request for an average, how fresh do the results need to be?). You also did not specify the relative frequency of queries vs. new measurements.
In any event you can (and IMHO should) put a queue behind the endpoint (assuming measuring your performance proves you need it). If you change the WCF binding you might still be RESTful, but you will not benefit from the standards-based approach of REST over HTTP.