In order to optimize some server-side database calls, I decided to use System.Threading.Tasks.Task to parallelize several database calls, then use Task.WaitAll() to gather all the results, package them up, and send them to the client via WCF. This seems to work fine when testing on the dev web server in Visual Studio (Cassini) but does not work when deployed to IIS. Profiling the client calls (with Firebug) shows that the calls reach IIS, but no corresponding calls are submitted to SQL Server.
Anyone experienced this? Is there a limitation in using Tasks within IIS?
There is no direct limitation. However, when you use a Task, it is scheduled on the ThreadPool, and IIS, by default, shares a single thread pool for the entire IIS process, which can (especially on a busy server) cause thread-pool starvation. This means that the same guidance about using the ThreadPool applies when working with tasks. See this post for details.
To see whether this is the problem, you could, at least as a test, create all of your Task instances with the TaskCreationOptions.LongRunning hint. This causes the default TaskScheduler to run each task on its own dedicated (new) thread instead of a ThreadPool thread. While I don't think this is a good long-term solution, it would let you verify that thread-pool starvation is causing your problem. If it is, you could then look at other options, such as a custom TaskScheduler to manage the threads/tasks for this operation.
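To make the test concrete, here's a minimal sketch of that diagnostic, assuming two parallel queries (LoadCustomers, LoadOrders, and Package are hypothetical stand-ins for your actual data-access and packaging code):

    // Each LongRunning task gets its own dedicated thread, bypassing the
    // ThreadPool entirely, so pool starvation can no longer block it.
    var customersTask = Task.Factory.StartNew(() => LoadCustomers(), TaskCreationOptions.LongRunning);
    var ordersTask = Task.Factory.StartNew(() => LoadOrders(), TaskCreationOptions.LongRunning);

    // Block until both database calls complete.
    Task.WaitAll(customersTask, ordersTask);

    // Combine both results for the WCF response.
    var payload = Package(customersTask.Result, ordersTask.Result);

If the SQL calls start showing up once LongRunning is applied, thread-pool starvation is confirmed, and a custom TaskScheduler (or raising the pool minimums) becomes the longer-term fix.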
I'm migrating a service-based integration platform from .NET Framework to .NET Core. The original versions of the integration platform have proven very successful, and compared to replacing it with an 'off-the-shelf' integration solution, it has a far better ROI.
So after redeveloping the code, all tests have been going very well, and I have achieved higher levels of performance with a single IIS server than I could with two IIS servers running the original versions.
Except... if I go over ~3 messages/sec with multiple clients, I start seeing duplicate GUID key errors when trying to save instrumentation data to my DB. All these errors are generated by the on-ramp service. The on-ramp places the message on a queue; the messages are then consumed by an off-ramp service and sent to the destination (for this load test the destination is a file folder).
Even though the off-ramp is also running on the same server as the on-ramp, we do not see any duplication errors generated by the off-ramp. I suspect this is because the queue creates a linear process, so only one instance of the off-ramp is running at any time, versus the on-ramp, which has up to four clients firing concurrent messages at its API.
Initially I thought the issue was caused by a static global variable class I had implemented being shared across process boundaries. But I would expect the issue to show up on the off-ramp as well, since the service architecture of both is virtually identical.
Summary of my thoughts on the issue:
If it were a pure coding issue, the errors would also happen at low messaging rates.
The error would also be seen on the off-ramp if the GUID duplication were down to chance.
The on- and off-ramps are both running on the same server, but duplication is only seen on the on-ramp, i.e. the on-ramp is not impacting the off-ramp and vice versa.
The duplication therefore has to be due to memory shared between concurrently running on-ramp instances, triggered by the multiple-client scenario.
To try to resolve the issue I removed the static global variable class, but I'm still seeing the duplication errors.
This issue was never observed in the original IIS implementation (after millions of messages processed). I suspect the issue is with process isolation in the IIS-hosted Kestrel .NET Core service host. From what I have read, there is good isolation between different apps (based on IIS path) but not within the same app, so basically within the same IIS app pool. This could explain why .NET Core does not support multiple apps running in the same IIS app pool.
If anyone has a good idea how I can achieve process isolation between instances of the same app running in the same IIS app pool, I would appreciate your thoughts/suggestions.
After running more tests I was able to resolve the issue. The problem was the scope of the instrumentation variable: at low rates there was never a problem, but at high throughput the same instrumentation object was being accessed by a second instance of the process.
The issue was difficult to track down due to the short-lived nature of the integration services.
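For anyone hitting the same thing, here is a minimal sketch of the shape of the bug and the fix, with hypothetical names (InstrumentationContext and IncomingMessage stand in for the actual classes):

    // BUGGY shape: one instrumentation object shared by every request
    // in the process, so concurrent requests reuse the same GUID key.
    public static class Instrumentation
    {
        public static readonly InstrumentationContext Current = new InstrumentationContext();
    }

    // FIXED shape: each message gets its own instrumentation object, so
    // concurrent on-ramp requests can no longer trample each other's keys.
    public void HandleMessage(IncomingMessage message)
    {
        var instrumentation = new InstrumentationContext
        {
            TrackingId = Guid.NewGuid() // unique per request, never shared
        };
        // ... save instrumentation data keyed by TrackingId ...
    }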
Thanks to anyone who reviewed the question.
Martin
I'm writing an application which will use the Azure Service Bus. For local development I'm using Windows Server Service Bus to provide the same services (the code to use either is identical).
I want to write the application to be tolerant of transient errors when sending or receiving messages. To that end, I want to be able to test that the fault-handling code can deal with the local Service Bus instance suddenly becoming unavailable during execution of various operations.
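For context, the fault-handling under test looks roughly like this (a simplified sketch; SendWithRetry and createMessage are illustrative names, and the backoff is deliberately crude):

    // using Microsoft.ServiceBus.Messaging; using System.Threading;
    // MessagingException.IsTransient is the client library's own hint
    // that the failure may clear up if the operation is retried.
    void SendWithRetry(QueueClient client, Func<BrokeredMessage> createMessage)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                client.Send(createMessage()); // fresh message per attempt
                return;
            }
            catch (MessagingException ex)
            {
                if (!ex.IsTransient || attempt >= 3) throw;
                Thread.Sleep(TimeSpan.FromSeconds(attempt)); // crude backoff
            }
        }
    }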
Ideally, I'd like to write some automated integration tests around these scenarios, but I appreciate that may not be practically achievable.
What can I do to simulate transient errors on my local Service Bus?
One easy thing would be to call the Stop-SBService (affects one node) or Stop-SBFarm (affects the entire farm) cmdlets. This would let you simulate a Service Bus outage locally. You can then call Start-SBService or Start-SBFarm to bring the service back and validate that your code recovers properly. This approach also has the added benefit that you control when the service returns (compared with just crashing the process). This page has information on the available cmdlets.
If that's not enough, another approach I've used in the past is to shut down the network interface or, if the server is on another machine, put up a firewall on the ports used to communicate with Service Bus.
I have a WCF web service that has no concurrency configuration in the web.config, so I believe it is running with the defaults, which give per-session instancing. The service uses a COBOL Virtual Machine to execute code that pulls data from COBOL Vision files. Per the developer of the COBOL VM, it is a singleton.
When more than one person accesses the service at a time, I get periodic crashes of the web service. What I believe is happening is that while one request is executing, another, separate request comes in at about the same time. The first request finishes and closes the VM down through the normal closing procedure. The second request is still executing and attempting to read/write data, but because the VM was shut down, it crashes. In the constructor for the web service, an instance of the VM is created, and when a series of methods completes, the service is cleaned up and the VM is closed out.
I have been reading up on singleton concurrency in WCF web services and am thinking I might need to switch to that instead. That way I can open the COBOL VM once, keep it alive forever, and eliminate the code that shuts the VM down in my methods. The only data I need to share between requests is the status of the COBOL VM.
The alternative I'm considering is creating a separate server process that manages opening the VM and keeping it alive, with the web service making its read/write requests through that process instead.
Does this sound like the right path? I'm basically looking for a way to keep the virtual machine alive in a WCF web service and just keep executing code against it. The COBOL VM system sends me back locking information on the reads/writes, which I can use to handle retries or waits.
Thanks,
Martin
The web service is now marked as:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
From what I understand, this only allows a single thread to run through the web service at a time; other requests are queued until the first completes. This was a quick fix that works in my situation because my web service doesn't require high concurrency. There are never more than a handful of requests coming in at a time.
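One caveat for anyone copying this: ConcurrencyMode.Single on its own only serializes calls within a single service instance, and with per-session instancing two sessions can still execute in parallel. If the goal is one global gate in front of the VM, the instancing mode is normally pinned as well. A sketch (CobolService and CobolVm are placeholder names):

    // using System.ServiceModel;
    // One service instance for all clients, one call at a time: the COBOL VM
    // can be opened once and reused safely for the lifetime of the service.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class CobolService : ICobolService
    {
        private readonly CobolVm vm = new CobolVm(); // long-lived, never shut down per request

        // ... read/write operations execute against the long-lived VM ...
    }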
I am a newbie programming WCF in .NET. Recently, I worked on a WCF project that responds with the bytes of an image file to the client. Everything worked fine except for the performance. Although the service is built with a concurrency mode that allows parallel calls, it puts all the requests in a queue. Thus, if 5 requests are queued, the last request has to wait 5x as long (15 sec instead of 3 sec). This MSDN forum thread: http://social.msdn.microsoft.com/Forums/en/wcf/thread/861ea6f7-6c4e-4c3f-abde-ae60228244ea describes a similar problem, but the solution there was not helpful to me. I would like to thank you all in advance for any suggestion/help.
Firstly, I recommend using IIS7 on Server 2008+ if at all possible; its capabilities far exceed IIS6's. If you're unable to use IIS7...
Be sure you've configured the website hosting your WCF services as a web garden. This allows multiple worker processes to handle incoming requests, overcoming situations where the ASP.NET thread pool saturates or blocks and requests queue up while a single worker process churns through them in sequence.
Secondly, as the article you point to states, be sure to boost the number of concurrent threads ASP.NET is configured to handle.
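That tuning is normally done via machine.config's processModel element, but the same floor can also be raised in code if that's easier to deploy (a sketch; the numbers are illustrative, not a recommendation):

    // Hypothetical Application_Start in Global.asax.cs. Raising the thread
    // pool's minimum means bursts of requests don't stall waiting for the
    // pool's slow thread-injection rate to catch up.
    protected void Application_Start(object sender, EventArgs e)
    {
        System.Threading.ThreadPool.SetMinThreads(workerThreads: 100, completionPortThreads: 100);
    }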
Note: if your code calls into code that serializes work onto a single blocking thread (e.g. COM objects written in VB6 that perform ANY string manipulation), then it doesn't matter how many worker threads you configure; they'll all be serialized down to one thread (since VB6's string routines are single-threaded)! This is why the web-garden, multiple-worker-process configuration is so important.
HTH.
Is it possible to use NServiceBus to publish and consume messages in the same application, specifically a web application?
In the future we will almost certainly need to maintain a separate long-running service to process messages generated by this application, and this is why we are hoping to use NServiceBus from the start, but right now it would be nice to just start up the consumer and the publisher when the web application starts. This would make testing and deployment far easier for us.
I presume I will need to reference NServiceBus.Host.exe and start the process up in Global.asax, but I need help on exactly what I should call to do this.
This is not a deployment mode that is supported out of the box. While you could make it work by manually creating an additional AppDomain for the second NServiceBus endpoint, you'd likely also need to give it a custom configuration source, and of course its own queue.
All in all, I'd recommend keeping it as a separate process, even if it runs on the same box. That said, if you don't want to manage Windows services in addition to web apps, you can create a second web app to host it rather than using the generic host.
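If you do take the web-hosted route, the self-hosting pattern from this era looks roughly like the following in Global.asax. Treat this as a sketch: the fluent configuration API changes between NServiceBus versions, so the exact method names may differ in yours.

    public class Global : System.Web.HttpApplication
    {
        public static IBus Bus { get; private set; }

        protected void Application_Start(object sender, EventArgs e)
        {
            Bus = NServiceBus.Configure.WithWeb() // web-app-aware assembly scanning
                .DefaultBuilder()
                .XmlSerializer()
                .MsmqTransport()
                .UnicastBus()
                .LoadMessageHandlers() // lets this app consume as well as publish
                .CreateBus()
                .Start();
        }
    }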
Hope that helps.