I'm writing an application which will use the Azure Service Bus. For local development I'm using Windows Server Service Bus to provide the same services (the code to use either is identical).
I want to write the application to be tolerant of transient errors when sending or receiving messages. To that end, I want to be able to test that the fault-handling code can deal with the local Service Bus instance suddenly becoming unavailable during execution of various operations.
Ideally, I'd want to write some automated integration tests around these scenarios, but I appreciate that may not be practically achievable.
What can I do to simulate transient errors on my local Service Bus?
One easy thing would be to call the stop-sbservice (affects one node) or stop-sbfarm (affects the entire farm) cmdlets. This would let you simulate a Service Bus outage locally. You can then call start-sbservice or start-sbfarm to bring the service back and validate that your code recovers properly. This approach also has the added benefit that you control when the service returns (compared to just crashing the process). This page has information on the available cmdlets.
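If you go after the automated integration tests mentioned in the question, one way to drive this is to shell out to those cmdlets from the test itself. A rough sketch in C# - the SendWithRetryAsync helper is a hypothetical stand-in for your own retry-aware send path, and the test-framework attributes are omitted:

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    public class ServiceBusOutageTest
    {
        // Shell out to PowerShell to run a Service Bus cmdlet.
        static void RunSbCmdlet(string cmdlet)
        {
            var psi = new ProcessStartInfo("powershell.exe", "-Command " + cmdlet)
            {
                UseShellExecute = false,
                CreateNoWindow = true
            };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
        }

        // Stand-in for the application's retry-aware send path.
        static Task SendWithRetryAsync(string queue, string body)
        {
            throw new NotImplementedException("wire this up to your messaging code");
        }

        public async Task SendRecoversFromOutage()
        {
            RunSbCmdlet("Stop-SBService");    // take the local node down

            // The code under test should keep retrying rather than fail fast.
            Task send = SendWithRetryAsync("test-queue", "ping");

            RunSbCmdlet("Start-SBService");   // bring the node back

            await send;                       // should now complete successfully
        }
    }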
If that's not enough, another approach that I've used in the past is to shut down the network interface or, if the server is on another machine, put up a firewall on the ports used to communicate with Service Bus.
Is it possible to run multiple instances of the same XPC service using the XPC APIs found in Foundation.framework (NSXPCConnection, etc.)? The docs don't provide much insight on this matter.
EDIT: Did a quick test, and it seems like only one instance of the service is running even though I created two XPC connections. Is there any way to have it run another instance?
A bit late, but the definitive answer to this question is provided in the xpcservice.plist manpage:
ServiceType (default: Application)
The type of the XPC Service specifies how the service is instantiated.
The values are:
• Application: Each application will have a unique instance of this service.
• User: There is one instance of the service process created for each user.
• System: There is one instance of the service process for the whole system. System XPC Services are restricted to reside in system frameworks and must be owned by root.
Bottom line: in most cases there is a single instance of an XPC service, and only when different applications can connect to the same service (not even possible when the service is bundled with an app) will there be multiple instances (one instance per app).
I believe XPC services are designed to serve multiple connections from one instance. It is probably more convenient to manage the underlying named pipes with a single running executable, so it is most likely impossible to create multiple instances simultaneously.
Since XPC services should have no state, it should not matter whether one or more instances are running:
XPC services are managed by launchd, which launches them on demand, restarts them if they crash, and terminates them (by sending SIGKILL) when they are idle. This is transparent to the application using the service, except for the case of a service that crashes while processing a message that requires a response. In that case, the application can see that its XPC connection has become invalid until the service is restarted by launchd. Because an XPC service can be terminated suddenly at any time, it must be designed to hold on to minimal state—ideally, your service should be completely stateless, although this is not always possible.
–– Creating XPC Services
Put all necessary state information into the XPC call and deliver it back to the client if it has to persist.
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/xpcservice.plist.5.html
ServiceType key in XPCService dictionary: Application or User or System
But this ‘ServiceType’ is irrelevant if the service is embedded in an application bundle: it will then only be visible to the containing application and will be, by definition, an Application-type service. A subsequent connection request from the application to the service will result in a new connection to the existing service instance.
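For reference, the key sits in the XPCService dictionary of the service bundle's Info.plist; per the manpage quoted above, a minimal fragment would look like:

    <key>XPCService</key>
    <dict>
        <key>ServiceType</key>
        <string>Application</string>
    </dict>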
I know I'm late to the party, but while you can't do this with plain XPC, there's a library (a component of OpenEmu) that should be able to do what you're asking: OpenEmuXPCCommunicator
I have a WCF web service that has no concurrency configuration in the web.config, so I believe it is running with the default instancing, which is per-session. The service uses a COBOL Virtual Machine to execute code that pulls data from COBOL Vision files. Per the developer of the COBOL VM, it is a singleton.
When more than one person accesses the service at a time, I get periodic crashes of the web service. What I believe is happening is that while one request is executing, another separate request comes in at about the same time. The first request ends and closes the VM down through the normal closing procedure. The second request is still executing and attempting to read/write data, but the VM has been shut down, so it crashes. In the constructor for the web service, an instance of the VM is created, and when a series of methods completes, the service is cleaned up and the VM closed out.
I have been reading up on singleton concurrency in WCF web services and am thinking I might need to switch to that instead. That way I can open the COBOL VM, keep it alive forever, and eliminate the code in my methods that shuts down the VM. The only data I need to share between requests is the status of the COBOL VM.
The alternative I'm thinking of is creating a server process that manages opening the VM and keeping it alive, with the web service making read/write requests through that process instead.
Does this sound like the right path? I'm basically looking for a way to keep the Virtual Machine alive in a WCF web service situation and just keep executing code against it. The COBOL VM system sends me back locking information on the reads/writes, which I can use to handle retries or waits.
Thanks,
Martin
The web service is now marked as:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
From what I understand, this only allows a single thread to run through the web service at a time. Other requests are queued until the first completes. This was a quick fix that works in my situation because my web service doesn't require high concurrency. There are never more than a handful of requests coming in at a time.
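One caveat worth noting: ConcurrencyMode is enforced per service instance, so if you later take the singleton route from the question (keeping one VM alive forever), you would pair it with InstanceContextMode.Single so that all clients share the one instance and calls are still serialized. A sketch of that shape, where CobolVm and the contract are made-up stand-ins:

    using System.ServiceModel;

    // Hypothetical wrapper around the COBOL Virtual Machine.
    public class CobolVm
    {
        public string Read(string file, string key)
        {
            // Call into the real VM here.
            return null;
        }
    }

    [ServiceContract]
    public interface IVisionData
    {
        [OperationContract]
        string ReadRecord(string file, string key);
    }

    // One instance for all clients, one call at a time: the VM is
    // opened once and never shut down between requests.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                     ConcurrencyMode = ConcurrencyMode.Single)]
    public class VisionDataService : IVisionData
    {
        private readonly CobolVm vm = new CobolVm();

        public string ReadRecord(string file, string key)
        {
            return vm.Read(file, key);   // serialized by ConcurrencyMode.Single
        }
    }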
I have a back end system that drops events to my system. It is critical that these events don't get lost (I work for a health care company and lost info can impact a patient's care).
I would like to make this system drop its data into NServiceBus so that it can be published to subscribers that need it. However, the server that is dropping these messages is an AIX machine, so it can't run .NET code.
This system can send the messages via many standard protocols and communication types (TCP, WSDL-based services, calling a database sproc, etc.).
One option I have considered is to setup a WCF service that the AIX mainframe will call. I can then have my WCF service make the call to NServiceBus.
But the number of events sent per minute by this back end service can at times be fairly high (about 500 messages per minute). I am worried that WCF is not up to this, while NServiceBus says it can handle 1000 messages per second. I am also worried about data loss in the event of downtime; NServiceBus claims it is not going to lose any data.
Am I wrong? Is WCF going to be just fine? Or am I making a weak link in the chain?
Is there a way I can use an established protocol to add items directly to an NServiceBus Queue?
Or should I just write my own .NET app that will allow NServiceBus to use a TCP connection?
Note: Because these messages are critical, the message must be acknowledged or the server will keep sending it.
I would take a look at the WCF integration that comes right out of the box. The WCF service is contained within the same host as NSB. The integration does nothing more than just push the message onto the queue, so I don't think you'll have a throughput issue. Seeing that this is critical data, I would suggest clustering the service. The other option would be to install 2 or more instances of the service on different machines and load balance the HTTP calls across both. In essence you would have 1 logical Publisher with 2 physical components doing the publishing.
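In outline, the bridging service is tiny. A sketch, assuming an NServiceBus 2.x/3.x-era IBus and made-up contract and message types:

    using System.ServiceModel;
    using NServiceBus;

    // Hypothetical durable message published to subscribers.
    public class PatientEventMessage : IMessage
    {
        public string Payload { get; set; }
    }

    [ServiceContract]
    public interface IEventIntake
    {
        // The AIX side resends until this call returns, which serves
        // as the acknowledgement the question requires.
        [OperationContract]
        void Submit(string payload);
    }

    public class EventIntakeService : IEventIntake
    {
        private readonly IBus bus;

        public EventIntakeService(IBus bus)
        {
            this.bus = bus;
        }

        public void Submit(string payload)
        {
            // Do nothing but push onto the durable queue; NServiceBus
            // takes care of persistence and delivery to subscribers.
            bus.Publish(new PatientEventMessage { Payload = payload });
        }
    }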
First of all I will describe the current state:
The server consists of several WCF services, hosted in one or more Windows services on different machines.
A service responsible for receiving data from different devices. Communication with the devices is implemented using sockets. Service instance mode: singleton.
A data broker service, responsible for persisting data and sharing it on request. Instance mode: singleton.
A configuration service, responsible for changing the configuration database and working with the administration console (a WPF app, like SSMS). It handles connections from the console, subscriber management, etc. Instance mode: singleton.
A client access service, much the same as the above but for managing clients; it also notifies clients of new data and acts as a facade to the service bus. A singleton again.
An identity management service, which checks permissions and returns the result. Singleton.
All of those services are connected with NServiceBus, and I really like how it works at the moment.
But:
There are too many singletons, mainly because, as far as I know, to use the service bus I must have a single instance of it. Maybe I could use NServiceBus in session mode, but I don't know how to handle the issue that all of those services would use one queue.
And what if I have 300+ clients? A singleton can become unresponsive.
I wanted to ask for some critique of all of this, and maybe someone could suggest something.
Thanks in advance.
Alexey
Alexey,
While you should only have one instance of the bus per process, you can put that instance in a globally accessible place (as shown in the AsyncPages sample), and use that from non-singleton objects like web pages and WCF services.
Also, it is probably not appropriate to have all your services using one queue. Without a better understanding of your situation, I'd give the default recommendation of one queue for each of the services you identified.
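Concretely, that globally accessible place can be a static holder initialized once at host startup. A sketch using the NServiceBus 2.x-era fluent configuration (adjust the chain to your version):

    using NServiceBus;

    public static class BusHolder
    {
        public static IBus Bus { get; private set; }

        // Call once when the host process starts.
        public static void Init()
        {
            Bus = Configure.With()
                           .DefaultBuilder()
                           .XmlSerializer()
                           .MsmqTransport()
                           .UnicastBus()
                           .CreateBus()
                           .Start();
        }
    }

Per-call WCF services and other non-singleton objects can then use BusHolder.Bus to send and publish, so none of them needs to be a singleton itself.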
Hope that helps.
We're planning a system running on Windows/.Net 3.5 that has a number of "services" that need to run in the background. Some will be active all of the time, but some will only be called occasionally and can be stood up on demand.
As far as I can see, my options are:
Windows Services - always running(?)
IIS hosted something - called on demand
COM+/ .Net Enterprise Services - most complex option, but most powerful?
Distributed transactions are not a requirement; these are mainly computation engines, rather than transaction processors.
Does anyone have any experience of working with all of these and what further pros & cons can be claimed for each technology?
EDIT
I suppose there are multiple ways of hosting code in IIS: web services, WCF (as pointed out below), any others? Relative pros/cons?
WCF feels like the right way to go. There are still many choices to make. WCF provides a number of communication mechanisms and hosting environments:
WCF combines the following technologies under one set of APIs:
ASMX;
WSE;
Remoting;
COM+;
MSMQ.
So, for instance, you can use persistent messages over MSMQ for occasionally connected clients, or standard XML-encoded SOAP messages over an HTTP transport layer. You can also use new features in 3.5 like binary encoding of XML or JSON encoding over HTTP.
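In code, each of those options is just a different binding class; all three of these ship with .NET 3.5 (the variable names are arbitrary):

    using System.ServiceModel;

    class BindingChoices
    {
        static void Main()
        {
            // Durable, queued messages over MSMQ for occasionally
            // connected clients.
            var queued = new NetMsmqBinding();

            // Standard XML-encoded SOAP over HTTP.
            var soap = new BasicHttpBinding();

            // The 3.5 web programming model: JSON or plain XML over HTTP
            // (this one lives in the System.ServiceModel.Web assembly).
            var restful = new WebHttpBinding();
        }
    }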
Hosting environments include:
Console applications
Windows services
WCF services inside IIS 7.0
and on Windows Vista or Windows Server 2008 you can use WAS (Windows Activation Services) to host WCF services.
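For the console and Windows service cases, self-hosting takes only a few lines with ServiceHost. A sketch - the contract, service type, and address are made up:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalc
    {
        [OperationContract]
        int Add(int a, int b);
    }

    public class CalcService : ICalc
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(CalcService),
                new Uri("http://localhost:8000/calc"));
            // In .NET 3.5 an endpoint must be added explicitly
            // (here in code; normally it comes from app.config).
            host.AddServiceEndpoint(typeof(ICalc), new BasicHttpBinding(), "");
            host.Open();

            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }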
Different hosting environments have pros and cons. I suggest you look at MSDN for more details (e.g. http://msdn.microsoft.com/en-us/library/bb332338.aspx).
Because WCF encompasses a lot of functionality, it is more difficult to learn than any one of the technologies it replaces. I still think it pays for itself in the long run.
It depends on what the software will do, and how (and if) users or systems need to interact with it. Depending on those things, there may be one more, often overlooked, option: set it up as a scheduled task. This is often a very good alternative to a Windows service if the software is of the kind that acts on certain time intervals (checking for a change in a database, acting on the changed data, and sending it somewhere, for instance).
If you will have other systems talking directly to your software, I would imagine that a WCF application hosted in IIS would be a rather straightforward way. We use both of those approaches in my current assignment: WCF services for looking up and storing data, and scheduled tasks for data calculations that run on a regular basis.
The scheduled task has one upside compared to the others: it uses system resources only when running.
You mentioned starting up a process "on demand". WAS - Windows Activation Service, sometimes called Windows Process Activation Service, though it is never abbreviated "WPAS" - is the thing inside Windows that provides on-demand process activation. The way it works: when a message arrives, WAS can start a worker process to handle the message. Prior to IIS7, WAS was fairly tightly integrated into IIS. It was used primarily to activate processes that did web work - like an ASP.NET worker process. With IIS7, WAS is generalized so that it can activate worker processes based on non-HTTP as well as HTTP messages. If you write your app to receive messages through WCF, you can get activation essentially "for free". That applies whether it is HTTP, TCP, or MSMQ; SOAP or otherwise.
The key thing with this on-demand startup though, is that it is tied to the communication. In fact the process lifecycle model for WAS is tied to communication as well. By default if there are no incoming messages after a while, the process will be shut down by WAS. That may or may not be what you want.
As for process hosting - COM+ offers a hosting environment but it is primarily intended for use as a host for processes that communicate. This may not be the perfect fit for you.
If you have compute engines, you may just want to run a Windows Service. A service like that can be started and stopped either administratively or programmatically. In the latter case, you could imagine a WAS-activated worker process programmatically starting a windows service.
You could also imagine writing a simple Windows Service that watches a location (filesystem, message queue, etc) for a message, and when that file or message arrives, the Windows Service starts up a compute engine process, which itself is NOT a Windows Service, but is just a process.
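A sketch of that watcher pattern, using an MSMQ queue as the watched location (the queue path and engine path are made up):

    using System.Diagnostics;
    using System.Messaging;
    using System.ServiceProcess;

    // A Windows Service that waits for a message and then launches the
    // compute engine as an ordinary process (not a service itself).
    public class EngineLauncherService : ServiceBase
    {
        private MessageQueue queue;

        protected override void OnStart(string[] args)
        {
            queue = new MessageQueue(@".\private$\compute-requests");
            queue.ReceiveCompleted += OnMessage;
            queue.BeginReceive();
        }

        private void OnMessage(object sender, ReceiveCompletedEventArgs e)
        {
            queue.EndReceive(e.AsyncResult);
            Process.Start(@"C:\engines\ComputeEngine.exe"); // hypothetical path
            queue.BeginReceive();   // go back to waiting
        }

        protected override void OnStop()
        {
            if (queue != null) queue.Dispose();
        }
    }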
Speaking of MSMQ: that is basically the same model as MSMQ triggers. You can configure MSMQ to start a process when a message arrives on a particular queue.
There are lots of options.