My management is evaluating the on-premises Windows Service Bus (Azure is out of consideration for security reasons). It will be used to set up a topic/subscription model with a number of WCF services using netMessagingBinding that we are building, so I just have a few basic questions about that.
Are there any specific hardware requirements, like a dedicated server or dedicated database, for WSB to run in a production environment?
It's easy to configure a WCF service to listen on a specific topic subscription. Is there any way for a WCF service to listen to multiple subscriptions?
Appreciate the answers.
You can install the service components and the databases all on one server (that is the default). However, for a number of reasons, we installed the services on a dedicated app server and created the Service Bus databases on an existing database server. The install package allows you to specify a different database server. Check this article for the minimum server requirements.
Yes, you can have one WCF service process listen to multiple subscriptions. You would need to create two (or more) System.ServiceModel.ServiceHost instances and run them inside one process. For example, we had one Windows service running two ServiceHosts; each host listened on a different queue and therefore implemented a different contract. This meant that where queues were logically grouped we didn't need a new Windows service per queue. You could do the same with subscriptions; a rough sketch is below.
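To make that concrete, here is a minimal sketch (not from the original setup): one Windows service hosting two ServiceHost instances, each pointed at a different topic subscription via NetMessagingBinding. The contract, listener classes and sb:// addresses are invented placeholders, and the credential setup that Service Bus normally requires (a TransportClientEndpointBehavior) is omitted.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceProcess;
using Microsoft.ServiceBus.Messaging;   // Service Bus client assembly

[ServiceContract]
public interface ISubscriptionListener
{
    // Action = "*" lets this single operation receive any message from the subscription.
    [OperationContract(IsOneWay = true, Action = "*")]
    void OnMessage(Message message);
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class OrderListener : ISubscriptionListener
{
    public void OnMessage(Message message) { /* handle order events */ }
}

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class InvoiceListener : ISubscriptionListener
{
    public void OnMessage(Message message) { /* handle invoice events */ }
}

public class ListenerWindowsService : ServiceBase
{
    private ServiceHost _ordersHost;
    private ServiceHost _invoicesHost;

    protected override void OnStart(string[] args)
    {
        var binding = new NetMessagingBinding();

        // Host 1: "orders" topic, "audit" subscription (placeholder addresses).
        _ordersHost = new ServiceHost(typeof(OrderListener));
        _ordersHost.AddServiceEndpoint(typeof(ISubscriptionListener), binding,
            new Uri("sb://sbserver/MyNamespace/orders"),                       // topic address
            new Uri("sb://sbserver/MyNamespace/orders/Subscriptions/audit"));  // listen on the subscription
        _ordersHost.Open();

        // Host 2: "invoices" topic, "audit" subscription.
        _invoicesHost = new ServiceHost(typeof(InvoiceListener));
        _invoicesHost.AddServiceEndpoint(typeof(ISubscriptionListener), binding,
            new Uri("sb://sbserver/MyNamespace/invoices"),
            new Uri("sb://sbserver/MyNamespace/invoices/Subscriptions/audit"));
        _invoicesHost.Open();
    }

    protected override void OnStop()
    {
        if (_ordersHost != null) _ordersHost.Close();
        if (_invoicesHost != null) _invoicesHost.Close();
    }
}
```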
For question one, you will have to go through the exercise of hardware sizing. The good news is that WCF services scale out horizontally, so you can add servers if there are issues handling the client load.
To do hardware sizing you will have to estimate the expected load and then do performance/scalability testing to figure out the load-bearing capacity of your service bus/services.
You can find a lot of resources for load testing, like this one: http://seroter.wordpress.com/2011/10/27/testing-out-the-new-appfabric-service-bus-relay-load-balancing/
Once you have done the load testing and come up with the numbers, you can then do the sizing using references like this one: http://msdn.microsoft.com/en-us/library/bb310550.aspx
I am in a situation where I can use Service Fabric (locally) but cannot leverage Azure Service Bus (or anything "cloud"). What would be the equivalent for queuing/pub-sub? Service Fabric is allowed since it is able to run in a local container and is "free". Other 3rd-party messaging infrastructure, like RabbitMQ, is also off the table (at the moment).
I've built systems using a home-grown bus built on MSMQ and WCF, but I don't see how to accomplish the same thing in SF. I suspect I can have SF services use a custom ICommunicationListener that exposes MSMQ, but that would only be available inside the cluster (the way I understand it). I could build an HTTP bridge (in SF) in front of those to make them available outside the cluster, but then I'd lose the lifetime decoupling (a client being able to call a service, via queues, even if that service isn't online at the time), since the bridge itself wouldn't benefit from any aspects of queuing.
I have a few possibilities, but all suffer from some malady that only exists because of SF running locally. Also, the same code needs to deploy easily to full Azure SF (where I can use ASB and this issue disappears), so I don't want to build two separate systems just because of where I am hosting it in some instances.
Thanks for any tips.
You can build this yourself, for example with a BrokerService that distributes message data to subscribed services and actors.
You can also run a containerized queuing platform like RabbitMQ with volumes.
By running the queue system inside the cluster you won't introduce an external dependency.
The problem is not SF. The main issue with your design is that you are coupling architectural requirements to implementations. SF runs on top of virtual machines; in the end, the only difference is that SF puts the services on those machines, whereas with another solution you would have an agent deploying the services there, or you would deploy them manually. The challenges are the same.
It is clear from the description that the requirement in your design is a message queue. The concept of a queue is the same whether it is Service Bus, RabbitMQ or MSMQ; each of them has the basic foundations of queuing plus the specifics of its implementation: some add transactions, some implement multiple patterns, and so on.
If you design against a specific implementation, you will couple your solution to that implementation, make it hard to maintain, and face challenges like the ones you described.
Solutions like NServiceBus and MassTransit remove a lot of this coupling from your code, and if you think they are not enough, you can create your own abstraction and then use configuration to tie your business logic to an implementation (a bare-bones sketch of such an abstraction follows).
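As a purely illustrative sketch of that abstraction (all type names here, such as IMessagePublisher and BusFactory, are hypothetical and not from any particular library): the business code depends only on the interface, and configuration picks the concrete transport per environment.

```csharp
public interface IMessagePublisher
{
    void Publish<T>(T message) where T : class;
}

public class MsmqPublisher : IMessagePublisher
{
    public void Publish<T>(T message) where T : class
    {
        // Wrap System.Messaging (or a library) here for local/on-prem clusters.
    }
}

public class ServiceBusPublisher : IMessagePublisher
{
    public void Publish<T>(T message) where T : class
    {
        // Wrap the Azure Service Bus client here for cloud deployments.
    }
}

public static class BusFactory
{
    // Configuration (app settings, environment variable, DI registration)
    // decides which transport is used; the business logic never changes.
    public static IMessagePublisher Create(string transport)
    {
        return transport == "servicebus"
            ? (IMessagePublisher)new ServiceBusPublisher()
            : new MsmqPublisher();
    }
}
```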
Despite the above advice, I would not recommend using different solutions per environment, because, as said previously, each solution has its own implementation details and they do not map exactly onto one another. For example, you might face issues in production because you developed against MSMQ in the DEV and TEST environments and then deployed to production using Service Bus; they have different limitations, such as message size, retention period, and so on.
If you are willing to use MSMQ, you can add MSMQ to the VMs running your cluster and connect to it from your services without any issue. Take a look at this SO question first: How can I use MSMQ in Azure Service Fabric. A small System.Messaging sketch is below.
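For reference, a small System.Messaging sketch, assuming MSMQ is installed on the cluster nodes; the queue path, label and payload type are placeholders.

```csharp
using System;
using System.Messaging;

public static class MsmqExample
{
    private const string QueuePath = @".\private$\orders";

    public static void Send(string payload)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(payload, "order-created");   // body + label
        }
    }

    public static string ReceiveOne()
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            // The formatter must match what was used to serialize the body.
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message message = queue.Receive(TimeSpan.FromSeconds(5));
            return (string)message.Body;
        }
    }
}
```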
I am looking for suggestions for hosting my WCF enterprise application.
The application needs to run on the server without stopping. It also uses TCP to get the best performance in an intranet environment.
I am thinking of hosting it in a Windows service, because IIS recycles the worker process and has idle timeouts.
However, I found this on MSDN (http://msdn.microsoft.com/en-us/library/ff649818.aspx):
Windows service... Lack of enterprise features. Windows services do not have the security, manageability, scalability, and administrative features that are included in IIS.
Does that mean a Windows service is not suitable for an enterprise application? But what about MS SQL Server, Oracle, MySQL, etc.? They are all hosted as Windows services, right?
Regards
Bryan
A Windows service is suitable for an enterprise application! The quoted text actually means that IIS has a lot of built-in management features which are not available with custom hosting (like a Windows service) unless you implement them on your own.
One such feature is the recycling you want to avoid, which helps the application keep resource consumption low (keeps the server in a healthy state). Another such feature is IIS monitoring the worker process: if the worker process looks stuck (it doesn't process requests for any reason), IIS automatically starts another process and routes new requests to it.
IIS + WAS + AppFabric can provide a very big feature set, but they are not good for every scenario. If you have a service which requires continuous background, scheduled, or multi-threaded processing, it is probably better to move to a self-hosted scenario, for example along the lines of the sketch below.
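A minimal self-hosting sketch along those lines (the contract, service and address are placeholders): the Windows service owns the ServiceHost lifetime, so there is no recycling or idle shutdown.

```csharp
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface ICalculator
{
    [OperationContract]
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

public class CalculatorWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        // The host stays open for as long as the Windows service runs;
        // there is no IIS-style recycling or idle timeout.
        _host = new ServiceHost(typeof(Calculator),
            new Uri("net.tcp://localhost:9000/calculator"));
        _host.AddServiceEndpoint(typeof(ICalculator), new NetTcpBinding(), string.Empty);
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null) _host.Close();
    }

    public static void Main()
    {
        ServiceBase.Run(new CalculatorWindowsService());
    }
}
```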
In my application I need to push notifications of real-time events from the server to clients. The amount of data to pass is very small, mostly an Id. The number of clients listening simultaneously can be around 100, and I may have to publish one notification every 2-3 seconds. Both the server and the clients are built using .NET and WCF.
Given these requirements I have built a set of WCF services which will run on a load-balanced server cluster. The instance context mode is PerCall and there is no need for sessions, etc.
I am currently using BasicHttpBinding. Would a TCP binding be better? Does it run on IIS 5 or 6? If not, why not?
What configuration for serialization can work best?
What are the things I need to do to make sure I get maximum performance?
Edit - Adding more information based on some of the responses -
I host a small WCF service in the client process using manual hosting. The server just calls this service on each client to push the data to all of them.
Firstly have you considered using messaging for what you are trying to achieve?
In answer to whether a TCP binding will work better than BasicHttpBinding: almost certainly yes. If you want to use TCP, you can't use IIS; look into WAS with Windows Server 2008. If you're stuck with Windows Server 2003, then you'll have to host in a Windows service instead.
You've made a good choice by choosing per call; this is the preferred instance management mode for creating scalable WCF services.
Edit:
Now that you've updated your question, I recommend you take a look at IDesign's Pub/Sub framework if you want to stick with WCF. I'd also look at pub/sub with MSMQ in WCF, and at "vanilla" products such as TIBCO RV.
If you need to push data from the service to clients, you need sessions and a duplex binding: NetTcpBinding or WSDualHttpBinding. It will not work with BasicHttpBinding, because that only allows pulling data (the client polls the service for changes); pushing data means that the service sends data to clients when needed.
NetTcpBinding always creates a session. It can't be hosted in IIS 6 or older; NetTcpBinding is only supported in the Windows Activation Service (WAS), which is an extension of IIS 7.x. For older systems you need self-hosting, i.e. a Windows service. A minimal duplex sketch follows.
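A minimal duplex sketch, assuming NetTcpBinding or WSDualHttpBinding on the endpoint and using placeholder contract names: the client registers a callback channel, and the service later pushes each Id through it.

```csharp
using System.Collections.Generic;
using System.ServiceModel;

public interface INotificationCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(int id);    // the small "Id" payload pushed to each client
}

[ServiceContract(CallbackContract = typeof(INotificationCallback))]
public interface INotificationService
{
    [OperationContract]
    void Subscribe();       // client calls this once to register its callback channel
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class NotificationService : INotificationService
{
    // Not production code: a real implementation would lock this list and
    // remove faulted/closed callback channels.
    private static readonly List<INotificationCallback> Clients =
        new List<INotificationCallback>();

    public void Subscribe()
    {
        Clients.Add(OperationContext.Current.GetCallbackChannel<INotificationCallback>());
    }

    // Called by server-side code whenever a real-time event occurs.
    public static void Push(int id)
    {
        foreach (var client in Clients)
            client.Notify(id);
    }
}
```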
Edit:
Based on your description, you need the Publish-Subscribe message exchange pattern.
We're planning a system running on Windows/.NET 3.5 that has a number of "services" that need to run in the background. Some will be active all of the time, but some will only be called occasionally and can be stood up on demand.
As far as I can see, my options are:
Windows Services - always running(?)
IIS hosted something - called on demand
COM+ / .NET Enterprise Services - most complex option, but most powerful?
Distributed transactions are not a requirement; these are mainly computation engines rather than transaction processors.
Does anyone have any experience of working with all of these and what further pros & cons can be claimed for each technology?
EDIT
I suppose there are multiple ways of hosting code in IIS: web services, WCF (as pointed out below), any others? Relative pros/cons?
WCF feels like the right way to go. There are still many choices to make. WCF provides a number of communication mechanisms and hosting environments:
WCF combines the following technologies under one set of APIs:
ASMX
WSE
Remoting
COM+
MSMQ
So, for instance, you can use persistent messages from MSMQ for occasionally connected clients, or standard XML-encoded SOAP messages over an HTTP transport. You can also use new features in 3.5 like binary encoding of XML or JSON encoding over HTTP; a binary-over-HTTP binding is sketched below.
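For example, the binary-encoding-over-HTTP option can be expressed as a CustomBinding in code (a sketch only; the service it would be attached to is not shown):

```csharp
using System.ServiceModel.Channels;

public static class Bindings
{
    // .NET binary XML encoding instead of text SOAP, still over plain HTTP.
    public static CustomBinding BinaryOverHttp()
    {
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}
```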
Hosting environments include:
Console applications
Windows services
WCF services inside IIS 7.0
and on Windows Vista or Windows Server 2008 you can use WAS (Windows Activation Services) to host WCF services.
Different hosting environments have pros and cons. I suggest you look at MSDN for more details (e.g. http://msdn.microsoft.com/en-us/library/bb332338.aspx).
Because WCF encompasses a lot of functionality it is more difficult to learn than any one of the technologies it replaces. I still think it pays for itself in the long run.
It depends on what the software will do, and how (and if) users or systems need to interact with it. Depending on those things, there may be one more, often overlooked, option: set it up as a scheduled task. This is often a very good alternative to a Windows service if the software is the kind that acts on certain time intervals (checks for a change in a database, acts on the changed data and sends it somewhere, for instance).
If you will have other systems talking directly to your software, I would imagine that a WCF application hosted in IIS would be a rather straightforward way to go. We use both of those approaches in my current assignment: WCF services for looking up and storing data, and scheduled tasks for data calculations that run on a regular basis.
The scheduled task has one upside compared to the others: it uses system resources only while it is running.
You mentioned starting up a process "on demand". WAS - the Windows Activation Service, sometimes called the Windows Process Activation Service, though it is never abbreviated "WPAS" - is the part of Windows that provides on-demand process activation. The way it works: when a message arrives, WAS can start a worker process to handle it. Prior to IIS 7, WAS was fairly tightly integrated into IIS and was used primarily to activate processes that did web work, like an ASP.NET worker process. With IIS 7, WAS was generalized so that it can activate worker processes based on non-HTTP as well as HTTP messages. If you write your app to receive messages through WCF, you can get activation essentially "for free", whether the transport is HTTP, TCP or MSMQ, SOAP or otherwise.
The key thing with this on-demand startup, though, is that it is tied to communication. In fact, the process lifecycle model for WAS is tied to communication as well: by default, if there are no incoming messages after a while, the process will be shut down by WAS. That may or may not be what you want.
As for process hosting: COM+ offers a hosting environment, but it is primarily intended as a host for processes that communicate. This may not be the perfect fit for you.
If you have compute engines, you may just want to run a Windows service. A service like that can be started and stopped either administratively or programmatically. In the latter case, you could imagine a WAS-activated worker process programmatically starting a Windows service, along the lines of the snippet below.
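For illustration, a tiny sketch using System.ServiceProcess.ServiceController; the service name "ComputeEngine" is a placeholder.

```csharp
using System;
using System.ServiceProcess;

public static class ComputeEngineController
{
    public static void EnsureRunning()
    {
        using (var sc = new ServiceController("ComputeEngine"))
        {
            if (sc.Status != ServiceControllerStatus.Running)
            {
                sc.Start();
                sc.WaitForStatus(ServiceControllerStatus.Running,
                                 TimeSpan.FromSeconds(30));
            }
        }
    }
}
```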
You could also write a simple Windows service that watches a location (filesystem, message queue, etc.) for a message, and when that file or message arrives, the Windows service starts up a compute engine process, which itself is NOT a Windows service but just an ordinary process. A rough example follows.
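A rough sketch of that watcher idea; the folder, file filter and engine executable path are placeholders.

```csharp
using System.Diagnostics;
using System.IO;
using System.ServiceProcess;

public class JobWatcherService : ServiceBase
{
    private FileSystemWatcher _watcher;

    protected override void OnStart(string[] args)
    {
        _watcher = new FileSystemWatcher(@"C:\jobs", "*.job");
        // Launch a plain compute-engine process (not a Windows service) per job file.
        _watcher.Created += (sender, e) =>
            Process.Start(@"C:\engines\ComputeEngine.exe", "\"" + e.FullPath + "\"");
        _watcher.EnableRaisingEvents = true;
    }

    protected override void OnStop()
    {
        if (_watcher != null) _watcher.Dispose();
    }
}
```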
Speaking of MSMQ: that is basically the same model as MSMQ triggers. You can configure MSMQ to start a process when a message arrives on a particular queue.
There are lots of options.
I am looking at using MSMQ as a solution for asynchronous execution in my upcoming project. I want to know the differences between using WCF and frameworks like MassTransit, or even a hand-written MSMQ client, to place/read tasks on/off MSMQ.
Basically the application will be several websites (internal through the LAN or external through the Internet) reading/writing data through a service layer (be it WCF or a normal web service). This service layer will do one or more of the following: 1. write data to the database; 2. trigger the background process by placing a message in the queue; 3. obviously it can also retrieve data from the database. A little agent (a Windows service) on the other side of the queue will monitor the queue and execute based on the task command.
This architecture will be quite easy to scale (add more queues and agents) and easy to implement compared to RPC, distributed execution, or the like. The agent processing doesn't need to be real time, and the agent and the service layer are separate applications, except that they share the common domain objects, repositories, etc.
What do you think? Architecture suggestions for the above requirements are welcomed. Thank you!
WCF adds an abstraction over MSMQ. In fact, once you define compatible contracts (operations must be one-way), you can switch MSMQ out in the config, transparently. (For instance, you could switch to normal HTTP web services or a NetTcp binding.) A sketch of such a one-way contract over the MSMQ binding is below.
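A sketch of what that looks like, with placeholder contract and queue names; note the IsOneWay requirement on the operation, which is what keeps the contract compatible with the queued transport.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ITaskQueue
{
    // Queued (MSMQ) endpoints require one-way operations.
    [OperationContract(IsOneWay = true)]
    void Enqueue(string taskCommand);
}

public class TaskQueueService : ITaskQueue
{
    public void Enqueue(string taskCommand)
    {
        // The agent/Windows service executes or dispatches the task here.
    }
}

public static class TaskQueueHost
{
    public static ServiceHost Open()
    {
        var host = new ServiceHost(typeof(TaskQueueService));

        // Swapping this binding (in code or config) is the only change needed
        // to move off MSMQ; the contract and service stay the same.
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        host.AddServiceEndpoint(typeof(ITaskQueue), binding,
            "net.msmq://localhost/private/tasks");
        host.Open();
        return host;
    }
}
```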
You should evaluate the other WCF benefits, like security and so on, to see how those fit in with your needs. Again, they should be reasonably transparent to the fact that you're using MSMQ underneath. For instance, adding SOAP security and so on should "just work", independently of using MSMQ.
(Although, IIRC, you still need to log in to the desktop on each machine that uses MSMQ, with the service account that will use MSMQ, to generate the certificate in the machine's local profile. And then it doesn't work very well from IIS 6, since user profiles aren't loaded. A real pain in general, but nothing to do with WCF specifically.)
Apart from that:
Have you looked at SQL Server Service Broker? After using MSMQ + WCF and SSSB, I think that SSSB is vastly easier to configure and manage. SSSB works with T-SQL commands over any SQL client (I use it from Mono, on Linux, with transactions). It'll also give you transactional send/receive, even remotely (I think MSMQ 4 now allows this). It really takes a lot of the pain away from message queuing, and if you're using SQL Server already...
SSSB is often overlooked since SQL Server Management Studio doesn't have GUI designers for all of it, but it isn't hard to use and is a great option. The one downside is that if you want local send capability (i.e., queueing messages while the network is down), you'll need to run a local SQL Express instance.
Your architecture seems sound and reasonable. However, you should consider using the WCF netMsmqBinding transport over hand-coded MSMQ classes. WCF wraps this common functionality in a nice programming model. Also, I believe there are some improvements in the protocol used by WCF compared to basic System.Messaging.
Have a look at the value-add over plain MSMQ:
http://readthedocs.org/docs/masstransit/en/latest/overview/valueadd.html
In summary, with MassTransit you get a lot of messaging concepts clearly presented in the API, to an extent you wouldn't get if you hand-coded it or used WCF.