Why would you host a WCF service in a Windows service?

What are the reasons you would want to host a WCF service in a Windows service rather than in IIS?

One reason is that IIS6 only supports bindings based on HTTP. If you want to use TCP, MSMQ, etc., then you need to host in a separate program.

When hosting in IIS you can only bind to a single port per base address in each web site (meaning you can't specify two bindings with different ports, or endpoints that use different ports, since only one port is available).
You can only use a single base address in IIS; the only way around this is deploying multiple copies of the same project in different web sites (yuck).
The IIS worker process must recycle eventually, and when it does it dumps everything and restarts. That is a good thing a lot of the time, since memory is freed and trapped resources are released, but when using singletons it can have an undesired effect depending on your code.
[edit] : more points
In a standard (32-bit) setup your worker process always has 2 GB of virtual memory available (no matter whether the machine has 1, 2, or 4 GB of physical memory).

Freedom: you as the developer don't need someone else to administer the box.
Sometimes IIS6 is really just overkill.
You are using it as an interprocess communication conduit.
You wish to declare all of the bindings in code, as sketched below. This is far less confusing and more powerful than the XML config files that seem to be all the rage. I can't envision many scenarios where I would want a non-programmer messing with bindings. The XML approach is fine for prototyping and for systems that need to be highly dynamic, but overall I don't think it's a good idea.
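For example, here is a minimal self-hosting sketch with every endpoint and binding declared in code rather than in a config file; the contract, addresses, and ports are made up for illustration:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping();
}

public class OrderService : IOrderService
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        // Every endpoint and binding is declared here; there is no <system.serviceModel> config section at all.
        var host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(),
            "net.tcp://localhost:9000/orders");
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(),
            "http://localhost:9001/orders");   // HTTP endpoints may need a URL reservation or admin rights
        host.Open();

        Console.WriteLine("Host running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```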

When would I use Named Pipes and when would I use gRPC for IPC/RPC?

Both Pipes and ASP.NET Core gRPC support local and remote IPC/RPC (with some platform limitations for gRPC)
When would I use one technology (Pipes) or the other (gRPC)?
Observations, thoughts and considerations I'm keeping in mind:
gRPC seems to be geared towards replacing WCF in some future iteration.
Local deployments with machine restrictions (running as non-admin/user, machine firewalls, different platforms/OS).
Network traversal, and compatibility when moving from same-machine to multi-machine deployments (frontend/backend arrays) for load and expansion.
Spanning secure zones (where a proxy is used, or where TLS cipher/order/registry settings differ) affects whether HTTP/2 works.
Pipes (named pipes?) have a different surface area and port (do they also use port 135, or NetBIOS over TCP (not sure of the name))... how are they scanned and secured?
"Memory mapped files" seem to be a challenge to get working; however, ASP.NET Core with gRPC does seem to work in the UDS (Unix domain socket) configuration. Is this a correct inference?
Right now my scenario is two console apps communicating with each other, on the same machine or remotely. Adding an ASP.NET Core web front end is an optional alternative for my scenario.
Simple IPC
It depends on how much communication is going to happen. If your communication is limited to simple collaborative signal passing or sharing some data between two processes, you can safely use NamedPipeClientStream and NamedPipeServerStream on the local system or local network. If you plan to do the same across different systems, then I would suggest using TcpClient and TcpListener.
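As a rough illustration, a minimal NamedPipeServerStream/NamedPipeClientStream round trip might look like this; the pipe name and message are placeholders, and the two ends would normally live in separate processes (a thread stands in for the second process here):

```csharp
using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;

class PipeDemo
{
    static void Main()
    {
        // In a real deployment the server and client would be two separate processes.
        var serverThread = new Thread(RunServer);
        serverThread.Start();

        using (var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.Out))
        {
            client.Connect();   // blocks until the server instance is available
            using (var writer = new StreamWriter(client) { AutoFlush = true })
            {
                writer.WriteLine("hello from the client process");
            }
        }

        serverThread.Join();
    }

    static void RunServer()
    {
        using (var server = new NamedPipeServerStream("demo-pipe", PipeDirection.In))
        {
            server.WaitForConnection();
            using (var reader = new StreamReader(server))
            {
                Console.WriteLine("server received: " + reader.ReadLine());
            }
        }
    }
}
```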
Comprehensive IPC
WCF, or now its replacement gRPC, is for scenarios where a complete API/framework needs to be executed remotely. For example, I have an entire library of classes which I need to call from a different process (which mostly runs on a different system); in that case a gRPC-style solution makes more sense.
Only you can decide.
This is a design decision that is highly specific to your application, your future plans, and your system environment; any third person can only give you clues, but ultimately you are the only one who can make the right decision.

Hosting a data service in IIS or a local Windows service

Normally I host my WCF services in IIS, but I've been told by a colleague that services run better (performance-wise) when hosted as local Windows services.
Is this true? What are the pros and cons for each?
Without knowing how your services are designed, how much CPU/memory they consume, how many concurrent clients they support, how they are accessed, etc., it's hard to make a general statement about which method is better/faster. So I'll share my previous experience.
Initially we hosted our WCF services in local Windows services, after doing some rudimentary performance testing of the two WCF deployment methods. Hosting them in Windows services was slightly (but not noticeably) faster. The setup: .NET 4.0 / REST / wsHttpBinding / 3,000 total concurrent calls / load-balanced server farm / memory intensive / default WCF settings (we initially didn't tweak them).
Then we noticed the memory on our WCF servers was maxing out several days after starting the service, and when that happened our services sporadically threw strange exceptions. We turned on perf counters on our servers. That's when we learned that profiling WCF services hosted in a Windows service didn't give us a whole lot of insight, because many perf counters simply didn't return any info at all, which was confirmed by the Microsoft tech support team. We also used ANTS to look for memory leaks but didn't find any major issue in our code. We then started tweaking WCF settings (e.g. maxBufferPoolSize; see the sketch below) with the help of Microsoft consultants. Ultimately we came to the conclusion that GC wasn't happening frequently enough to free up allocated memory. We even tried switching from workstation-mode GC to server-mode GC, which actually ended up worsening the problem.
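For context, this is the kind of binding tuning meant here; the values are purely illustrative, not the numbers we actually shipped:

```csharp
using System;
using System.ServiceModel;

static class BindingTuning
{
    // Illustrative values only; the right numbers depend on message sizes and concurrency.
    public static WSHttpBinding CreateTunedBinding()
    {
        return new WSHttpBinding
        {
            MaxBufferPoolSize = 2 * 1024 * 1024,        // memory kept pooled between requests
            MaxReceivedMessageSize = 4 * 1024 * 1024,   // reject messages larger than 4 MB
            ReceiveTimeout = TimeSpan.FromMinutes(2)
        };
    }
}
```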
As a last resort, we switched to IIS. The performance of the service didn't get any better, which was fully expected. However, some of the IIS-specific perf counters confirmed our suspicion that GC wasn't happening frequently enough. We then found the setting in IIS that let us specify when and how often to recycle an app pool. Yes, we could have developed a simple custom solution to restart our WCF services, but why reinvent the wheel, we thought. Additionally, when you recycle an app pool, IIS doesn't kill it abruptly. Instead, it creates a new worker process to handle subsequent requests while the old one stays alive for a configurable amount of time to finish processing all outstanding requests. That built-in capability allowed us to maintain our uptime SLA.
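If you prefer to script that recycling setting rather than click through IIS Manager, something along these lines should work with the Microsoft.Web.Administration API (the pool name is a placeholder; appcmd and the IIS UI expose the same setting):

```csharp
using System;
using Microsoft.Web.Administration;   // ships with IIS 7+ (also available on NuGet); run elevated

class ConfigureRecycling
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            // "WcfServicesPool" is a placeholder for whatever app pool hosts the WCF services.
            ApplicationPool pool = manager.ApplicationPools["WcfServicesPool"];

            // Recycle every 12 hours instead of the default 1740 minutes (29 hours).
            pool.Recycling.PeriodicRestart.Time = TimeSpan.FromHours(12);

            manager.CommitChanges();
        }
    }
}
```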
Based on my experience, I would suggest you keep them in IIS unless you really, really, really need to squeeze that last bit of juice out of your servers.

Under what circumstances does it make sense to run a WCF client and server on the same machine?

In Learning WCF, by Michele Bustamante, there is a section that describes a binding called the NetNamedPipe binding. The book says that this binding can only be used for WCF services that will be called exclusively from the same machine.
Under what circumstances would it make sense to use this? Ordinarily, I would write asynchronous code without using WCF... Why would Microsoft provide something for WCF that can only run on the same machine?
Look at it from the other direction. Once the service is built, you can run it in a variety of binding configurations. If it were on a remote machine, you could use the HTTP or TCP bindings. If the service happens to be running on the same box, you have those options plus the named pipes option. Named pipes is just another option, provided for the case where you are running locally, and you should be able to switch to a different binding if you are running remotely.
You could start with everything on the same box because you have less traffic, and use named pipes because it is the shortest path to the service. Then, if load demands it, you can move the service to another box and change it to use TCP or HTTP instead.
You probably won't have a service that exposes only a NetNamedPipe endpoint - that doesn't make a lot of sense. But if you run your WCF service on a server, exposing service endpoints out to the world using the usual bindings, and you need e.g. a management or admin console or something like that, running on that same machine, it can make sense to use the NetNamedPipe binding since it's the fastest around.
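A rough sketch of that layout (contract names and addresses invented for illustration): the public contract goes over TCP, while the admin contract is reachable only via named pipes on the same box.

```csharp
using System;
using System.ServiceModel;

[ServiceContract] public interface IOrderService { [OperationContract] string PlaceOrder(string sku); }
[ServiceContract] public interface IAdminService { [OperationContract] int GetPendingOrderCount(); }

// One implementation class can serve both the public contract and the local admin contract.
public class OrderService : IOrderService, IAdminService
{
    public string PlaceOrder(string sku) { return "accepted: " + sku; }
    public int GetPendingOrderCount() { return 0; }
}

class Host
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));

        // Reachable from other machines:
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(),
            "net.tcp://localhost:9000/orders");

        // Reachable only from the same machine, e.g. by an admin/management console:
        host.AddServiceEndpoint(typeof(IAdminService), new NetNamedPipeBinding(),
            "net.pipe://localhost/orders/admin");

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```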
Another possible scenario that I learned about is having an error collection service - any error or exception that happens is sent to a service to be logged. Again: that service would probably expose several types of endpoints, but if you have other services running on the same server, using NetNamedPipe binding to connect these two services makes a lot of sense.
I don't think you'll use the NetNamedPipe binding an awful lot in your WCF days - but it can definitely make sense in some cases and be quite useful in such specialized scenarios.

Best host on Windows for UI-less processes

We're planning a system running on Windows/.NET 3.5 that has a number of "services" that need to run in the background. Some will be active all of the time, but some will only be called occasionally and can be stood up on demand.
As far as I can see, my options are:
Windows Services - always running(?)
IIS hosted something - called on demand
COM+ / .NET Enterprise Services - most complex option, but most powerful?
Distributed transactions are not a requirement; these are mainly computation engines rather than transaction processors.
Does anyone have any experience of working with all of these and what further pros & cons can be claimed for each technology?
EDIT
I suppose there are multiple ways of hosting code in IIS: web services, WCF (as pointed out below), any others? Relative pros/cons?
WCF feels like the right way to go. There are still many choices to make. WCF provides a number of communication mechanisms and hosting environments:
WCF combines the following technologies under one set of APIs:
ASMX
WSE
Remoting
COM+
MSMQ
So, for instance, you can use persistent MSMQ messages for occasionally connected clients, or standard XML-encoded SOAP messages over an HTTP transport layer. You can also use new features in 3.5 like binary encoding of XML, or JSON encoding over HTTP.
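For instance, a small .NET 3.5 sketch (service name, route, and port are made up) returning JSON over plain HTTP with WebServiceHost:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;   // System.ServiceModel.Web.dll, new in .NET 3.5

[ServiceContract]
public interface IStatusService
{
    // Plain HTTP GET returning JSON instead of a SOAP envelope.
    [OperationContract]
    [WebGet(UriTemplate = "status", ResponseFormat = WebMessageFormat.Json)]
    string GetStatus();
}

public class StatusService : IStatusService
{
    public string GetStatus() { return "ok"; }
}

class Program
{
    static void Main()
    {
        // WebServiceHost adds a WebHttpBinding endpoint (with the web behavior) automatically.
        var host = new WebServiceHost(typeof(StatusService), new Uri("http://localhost:9002/"));
        host.Open();
        Console.WriteLine("Try GET http://localhost:9002/status - press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```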
Hosting environments include:
Console applications
Windows services
WCF services inside IIS 7.0
and, on Windows Vista or Windows Server 2008, you can use WAS (Windows Process Activation Service) to host WCF services.
Different hosting environments have pros and cons. I suggest you look at MSDN for more details (e.g. http://msdn.microsoft.com/en-us/library/bb332338.aspx).
Because WCF encompasses a lot of functionality it is more difficult to learn than any one of the technologies it replaces. I still think it pays for itself in the long run.
It depends on what the software will do, and how (and if) users or systems need to interact with it. Depending on those things, there may be one more, often overlooked, option: set it up as a scheduled task. This is often a very good alternative to a Windows service, if the software is of the kind that will act on certain time intervals (check for a change in a database, act on the changed data and send it somewhere, for instance).
If you will have other systems talking directly to your software, I would imagine that a WCF application hosted in IIS would be a rather straightforward way to go. We use both of those approaches in my current assignment: WCF services for looking up and storing data, and scheduled tasks for data calculations that run on a regular basis.
The scheduled task has one upside compared to the others: it uses system resources only while it is running.
You mentioned starting up a process "on demand". WAS - Windows Activation Service, or sometimes called Windows Process Activation Service, though it is never abbreviated "WPAS" - is the thing inside Windows that provides on-demand process activation. The way it works: when a message arrives, WAS can start a worker process to handle the message. WAS was, prior to IIS7, fairly tightly integrated into IIS. It was used primarily to activate processes that did web work - like an ASP.NET worker process. With IIS7, WAS is generalized so that it can activate worker processes based on non-HTTP as well as HTTP messages. If you write your app to receive messages through WCF, you can get activation essentially "for free". That applies whether it is HTTP, TCP, or MSMQ; SOAP or otherwise.
The key thing with this on-demand startup though, is that it is tied to the communication. In fact the process lifecycle model for WAS is tied to communication as well. By default if there are no incoming messages after a while, the process will be shut down by WAS. That may or may not be what you want.
As for process hosting - COM+ offers a hosting environment but it is primarily intended for use as a host for processes that communicate. This may not be the perfect fit for you.
If you have compute engines, you may just want to run a Windows service. A service like that can be started and stopped either administratively or programmatically. In the latter case, you could imagine a WAS-activated worker process programmatically starting a Windows service.
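A minimal sketch of such a Windows-service host follows; the contract, implementation, and address are placeholders, and a real project also needs a service installer:

```csharp
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface IComputeService
{
    [OperationContract]
    double Square(double value);
}

public class ComputeService : IComputeService
{
    public double Square(double value) { return value * value; }
}

// The Windows service owns the ServiceHost lifetime: opened on start, closed on stop.
public class ComputeWindowsService : ServiceBase
{
    private ServiceHost _host;

    protected override void OnStart(string[] args)
    {
        _host = new ServiceHost(typeof(ComputeService));
        _host.AddServiceEndpoint(typeof(IComputeService), new NetTcpBinding(),
            "net.tcp://localhost:9100/compute");
        _host.Open();
    }

    protected override void OnStop()
    {
        if (_host != null)
        {
            _host.Close();
            _host = null;
        }
    }

    public static void Main()
    {
        ServiceBase.Run(new ComputeWindowsService());
    }
}
```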
You could also imagine writing a simple Windows Service that watches a location (filesystem, message queue, etc) for a message, and when that file or message arrives, the Windows Service starts up a compute engine process, which itself is NOT a Windows Service, but is just a process.
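The watcher half of that idea, stripped of the Windows-service plumbing shown above, could look something like this; the folder, file pattern, and engine path are placeholders:

```csharp
using System.Diagnostics;
using System.IO;

class DropFolderWatcher
{
    private readonly FileSystemWatcher _watcher;

    public DropFolderWatcher(string dropFolder, string computeEnginePath)
    {
        // Paths are placeholders; in practice they would come from configuration.
        _watcher = new FileSystemWatcher(dropFolder, "*.job");
        _watcher.Created += (sender, e) =>
        {
            // Launch the compute engine as an ordinary process (not a Windows service),
            // passing the newly arrived file to it.
            Process.Start(computeEnginePath, "\"" + e.FullPath + "\"");
        };
        _watcher.EnableRaisingEvents = true;
    }
}
```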
Speaking of MSMQ - That is basically the same model as MSMQ triggers. You can configure MSMQ to start a process when a message arrives on a particular queue.
There are lots of options.

What are the advantages of using WCF over frameworks like MassTransit or hand written MSMQ client?

I am looking at using MSMQ as a solution for asynchronous execution in my upcoming project. I want to know the differences between using WCF and frameworks like MassTransit (or even a hand-written MSMQ client) to place tasks on and read them off MSMQ.
Basically, the application will be several websites (internal through the LAN or external through the Internet) reading/writing data through a service layer (be it WCF or a normal web service). This service layer will then do one or more of the following: 1. write data to the database; 2. trigger a background process by placing a message in the queue; 3. obviously, it can also retrieve data from the database. A small agent (a Windows service) on the other side of the queue will monitor the queue and execute tasks based on the task command.
This architecture will be quite easy to scale (add more queues and agents) and easy to implement compared to RPC or distributed execution or whatever. Also, the agent processing doesn't need to be real-time, and the agent and service layer are separate applications, except that they share common domain objects, repositories, etc.
What do you think? Architecture suggestions for the above requirements are welcome. Thank you!
WCF adds an abstraction over MSMQ. In fact, once you define compatible contracts (operations must be OneWay), you can switch out MSMQ in the config, transparently. (For instance, you could switch to normal HttpWS or a NetTcp binding.)
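A sketch of what that looks like (contract, queue name, and addresses are made up): the contract stays the same whichever binding you plug in, as long as the operations are one-way for the MSMQ case.

```csharp
using System;
using System.ServiceModel;

// One-way operations are required for queued (MSMQ) bindings.
[ServiceContract]
public interface ITaskService
{
    [OperationContract(IsOneWay = true)]
    void Execute(string taskCommand);
}

public class TaskService : ITaskService
{
    public void Execute(string taskCommand)
    {
        Console.WriteLine("executing: " + taskCommand);
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(TaskService));

        // Queued transport; the private queue "taskqueue" is a placeholder and must
        // already exist (transactional, since NetMsmqBinding defaults to exactly-once).
        host.AddServiceEndpoint(typeof(ITaskService), new NetMsmqBinding(),
            "net.msmq://localhost/private/taskqueue");

        // Swapping NetMsmqBinding for, say, NetTcpBinding (and a matching address)
        // requires no change to the contract or the service implementation.
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```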
You should evaluate the other WCF benefits, like security and so on, to see how those fit in with your needs. Again, they should be reasonably transparent of the fact you're using MSMQ underneath. For instance, adding SOAP security and so on should "just work", independent of using MSMQ.
(Although, IIRC, you still need to log in to the desktop on each machine that uses MSMQ, with the service account that will use MSMQ, to generate the certificate in the machine's local profile. And then it doesn't work very well from IIS6, since user profiles aren't loaded. A real pain in general, but nothing to do with WCF specifically.)
Apart from that:
Have you looked at SQL Server Service Broker? After using MSMQ + WCF and SSSB, I think that SSSB is vastly easier to configure and manage. SSSB works with T-SQL commands over any SQL client (I use it from Mono, on Linux, with transactions). It'll also give you transactional send/receive, even remotely (I think MSMQ 4 now allows this). It really takes a lot of the pain away from message queuing, and if you're using SQL Server already...
SSSB is often overlooked since the SQL Management Studio doesn't have GUI designers for it all, but it isn't hard and is a great option. The one downside is that if you want local send capability (i.e., queue message when network is down), you'll need to run a local SQL Express instance.
Your architecture seems sound and reasonable. However, you should consider using the WCF MSMQ transport (netMsmqBinding) over hand-coded MSMQ classes. WCF wraps this common functionality in a nice programming model. Also, I believe there are some improvements in the protocol used by WCF compared to basic System.Messaging.
Have a look at the value-add over plain MSMQ:
http://readthedocs.org/docs/masstransit/en/latest/overview/valueadd.html
In summary, you get a lot of messaging concepts clearly presented in the API with MassTransit; to an extent you wouldn't have if you hand-coded it or used WCF.