How to cache in a multithreaded WCF service

So, in my WCF service, I will be caching some data so future calls made into the service can obtain that data.
What is the best way to cache data in WCF, and how does one go about doing this?
If it helps, the WCF service is multithreaded (concurrency mode is Multiple) and ReleaseServiceInstanceOnTransactionComplete is set to false.
On the first call the data may not exist yet, so the service will go and fetch it from some source (could be a DB, could be a file, could be wherever), but thereafter it should be cached and made available (ideally with an expiry system for the object).
Thoughts?

Some of the most common solutions for a WCF service seem to be:
Windows AppFabric
Memcached
NCache
Try reading Caching Solutions

An SOA application can't scale effectively when the data it uses is kept in storage that is not scalable for frequent transactions. This is where distributed caching really helps. Coming back to your question and its answer by ErnieL, here is a brief comparison of these solutions.
As far as Memcached is concerned: if your application needs to run on a cluster of machines, then it is very likely that you will benefit from a distributed cache. However, if your application only needs to run on a single machine, you won't gain any benefit from a distributed cache and will probably be better off using the built-in .NET cache.
Accessing a Memcached cache requires interprocess/network communication, which carries a small performance penalty compared with the in-process .NET caches. Memcached runs as an external process/service, which means you need to install and run that service in your production environment. Again, the .NET caches don't need this step, as they are hosted in-process.
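To make the "built-in .NET cache" option concrete, here is a minimal sketch (the class name, key, and expiry are hypothetical) using System.Runtime.Caching.MemoryCache, which is thread-safe and therefore safe to share across concurrent calls when ConcurrencyMode is Multiple:

    using System;
    using System.Runtime.Caching;

    public static class ServiceCache
    {
        // MemoryCache.Default is a process-wide, thread-safe cache.
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static T GetOrAdd<T>(string key, Func<T> loadFromSource, TimeSpan expiry)
        {
            object cached = Cache.Get(key);
            if (cached != null)
                return (T)cached;          // cache hit

            // Cache miss: fetch from the underlying source (DB, file, ...).
            // Two concurrent callers may both load on a cold cache; that is usually acceptable.
            T value = loadFromSource();
            Cache.Set(key, value, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.Add(expiry)
            });
            return value;
        }
    }

A service operation would then call something like ServiceCache.GetOrAdd("customers", LoadCustomersFromDb, TimeSpan.FromMinutes(10)), where LoadCustomersFromDb is whatever fetches the data on a miss.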
If we compare the features of NCache and AppFabric, the NCache folks are very confident about the range of features they offer compared to AppFabric. You can find plenty of material on the comparison of these two products, like this one:
http://distributedcaching.blog.com/2011/05/26/ncache-features-that-app-fabric-does-not-have/

vb.net - passing a parameter to an application which is already running

Both Pipes and ASP.NET Core gRPC support local and remote IPC/RPC (with some platform limitations for gRPC)
When would I use one technology (Pipes) or the other (gRPC)?
Observations, thoughts and considerations I'm keeping in mind:
gRPC seems to be geared towards replacing WCF in some future iteration.
local deployments and with machine restrictions (running as non-admin/user, machine firewalls, different platforms/OS)
network traversal, and compatibility with same-machine -> multi-machine (frontend/backend arrays) for load and expansion
Spanning secure zones (where a Proxy is used, or other TLS cipher/order/registry setting) affects the ability for HTTP/2 to work
Pipes (named pipes?) have a different surface area and port (do they also use port 135, or NetBIOS over TCP (not sure of the name))... how are they scanned and secured?
"Memory mapped files" seem to be a challenge to get working; however, it seems to work in ASP.NET Core with gRPC in the UDS configuration. Is this a correct inference?
Right now my scenario is to have two console apps communicate with each other, same machine or remote. Adding Asp.NET Core Web is an optional front end alternative for my scenario.
Simple IPC
It depends on how much communication is going to happen. If your communication is limited to simple collaborative signal passing or sharing some data between two processes, you can safely use NamedPipeClientStream and NamedPipeServerStream on the local system or a local network. If you plan to do the same across different systems, then I would suggest using TcpClient and TcpListener.
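For illustration, a minimal sketch (the pipe name and message handling are hypothetical) of that kind of simple signal passing with NamedPipeServerStream and NamedPipeClientStream:

    using System;
    using System.IO;
    using System.IO.Pipes;

    static class PipeDemo
    {
        // Run in the receiving process: waits for one client and reads a single line.
        public static void RunServer()
        {
            using (var server = new NamedPipeServerStream("demo-pipe", PipeDirection.In))
            {
                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                {
                    Console.WriteLine("Received: " + reader.ReadLine());
                }
            }
        }

        // Run in the sending process: connects to the pipe and writes one line.
        public static void RunClient(string message)
        {
            using (var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.Out))
            {
                client.Connect(5000);   // wait up to 5 seconds for the server
                using (var writer = new StreamWriter(client) { AutoFlush = true })
                {
                    writer.WriteLine(message);
                }
            }
        }
    }

Swapping this for TcpListener/TcpClient keeps the same shape but lets the two processes sit on different machines.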
Comprehensive IPC
WCF, or now its replacement gRPC, is for scenarios where a complete API/framework needs to be executed remotely. For example, I have an entire library of classes which I need to call from a different process (which mostly runs on a different system); in that case, gRPC-style solutions make more sense.
Only you can decide.
This is a design decision that is highly specific to your application, your future plans, and your system environment; any third person can only give you clues, but ultimately you are the only one who can make the right decision.

Hosting dataservice in IIS or Local Services

Normally I host my WCF services in IIS, but I've been told by a colleague that services run better (performance-wise) when hosted in local Windows services.
Is this true? What are the pros and cons for each?
Without knowing how your services are designed, how much CPU/memory they consume, how many concurrent clients they support, how they are accessed, etc., it's hard to make a general statement about which method is better/faster. So I'll share my previous experience.
Initially we hosted our WCF services in local Windows services, after doing some rudimentary performance testing of the two WCF deployment methods. Hosting them in Windows services was slightly faster (not noticeably). Our setup: .NET 4.0, REST, wsHttpBinding, roughly 3,000 total concurrent calls, a load-balanced server farm, memory-intensive work, default WCF settings (we initially didn't tweak them).
Then we noticed the memory on our WCF servers was maxing out several days after starting the service, and when that happened our services sporadically generated strange exceptions. We turned on perf counters on our servers. That's when we learned that profiling WCF services hosted in a Windows service didn't give us a whole lot of insight, because many perf counters simply didn't return any info at all, which was confirmed by the Microsoft Tech Support team. We also used ANTS to look for memory leaks but didn't find any major issue in our code. We then started tweaking WCF settings (e.g. maxBufferPoolSize) carefully with the help of Microsoft consultants. Ultimately we came to the conclusion that GC wasn't happening frequently enough to free up allocated memory. We even tried switching from workstation-mode GC to server-mode GC, which actually ended up worsening the problem.
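For readers wondering what that kind of tweaking looks like, here is a hedged sketch (the values are placeholders, not the settings from this deployment) of setting binding quotas such as maxBufferPoolSize programmatically on a wsHttpBinding; the same knobs can be set in configuration instead:

    using System.ServiceModel;

    static class BindingTweaks
    {
        public static WSHttpBinding CreateTunedBinding()
        {
            var binding = new WSHttpBinding();
            binding.MaxBufferPoolSize = 2L * 1024 * 1024;          // memory kept in the buffer pool
            binding.MaxReceivedMessageSize = 4L * 1024 * 1024;     // largest inbound message allowed
            binding.ReaderQuotas.MaxStringContentLength = 1024 * 1024;
            return binding;
        }
    }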
As a last resort, we switched to IIS. The performance of the service didn't get any better, which was fully expected. However, some of the IIS-specific perf counters confirmed our suspicion about GC not happening frequently enough. We then found a wonderful setting in IIS that allowed us to specify when and how often to recycle an app pool. Yes, we could have developed a simple custom solution to restart our WCF services, but why reinvent the wheel, we thought. Additionally, when you recycle an app pool in IIS, IIS doesn't kill it abruptly. Instead, it creates a new one to handle subsequent requests while the old one stays alive for a configurable amount of time to finish processing all outstanding requests. That built-in capability allowed us to maintain our uptime SLA.
Based on my experience, I would suggest you keep them in IIS unless you really, really need to squeeze that last bit of juice out of your servers.

Long communication time of WCF Web Services within the same server

Even though this question is a year old, I am still searching for a good answer to it. I appreciate any information that will lead me to fully understand this issue regarding the low performance of communicating web services hosted on the same machine.
I am currently developing a system with several WCF Web Services that communicate intensively.
They are running under IIS7, on the same machine, each service being in a different Application Pool, with multiple workers in the Web Garden.
During the individual evaluation of each Web Service, I can serve 10,000-20,000 requests per minute, quickly and without any issues with resource consumption (processor and memory).
When I test the whole system or just a subsystem formed by two Web Services I can't serve more than 2000 requests/minute.
I also observed that communication time between Web Services is a big issue (sometimes more than 10 seconds). But when testing with only 1,000 requests per minute everything goes smoothly (connection time of no more than 60 ms).
I have tested the system both with SOAPUI and JMETER, but the times were computed based on system logs, not from the testing tools.
Memory and network aren't an issue (they are used very little).
Later on, I tested the performance of two communicating WCF web services, hosted on two servers and on the same server. Again it seems that there is a bottleneck when the services are on the same machine, lowering the number of connections from tens of thousands to thousands; again, no memory or processor limits were being hit.
As a note, I am working with quite big data in some cases and some of the operations needed are long ones.
I used Performance Monitor to see what's going on, for memory, processes, web services, ASP.NET, etc., but I didn't see anything that could indicate what is going wrong.
I also tried all the performance settings and tuning options I could find on the Internet.
Does someone know what could be wrong? Why could the communication between Web Services take so long? Why can the Web Service which serves as the entry point into the system accept 10,000 requests/minute when tested alone, but barely 2,000 when communicating with another Web Service?
Is it an IIS7 problem? Could my system perform better if each Web Service were deployed on a different server?
I want to understand better how things function internally (IIS and WCF services) to improve performance for current and future systems.
You could try to collect data from the WCF performance counters: concurrent calls, instances, duration, and so on. In addition, WCF throttling provides some properties that you can use to limit how many instances or sessions are created at the application level. Performance of the WCF service can also be improved by choosing the proper instancing mode.
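As a rough illustration of those throttling properties (the limits shown are made up; in an IIS-hosted service the same values usually go in the serviceThrottling element of web.config), a self-hosting sketch:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    static class ThrottledHost
    {
        public static ServiceHost Open(Type serviceType)
        {
            var host = new ServiceHost(serviceType);

            var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
            if (throttle == null)
            {
                throttle = new ServiceThrottlingBehavior();
                host.Description.Behaviors.Add(throttle);
            }

            // Hypothetical limits: cap concurrent work instead of letting it grow unbounded.
            throttle.MaxConcurrentCalls = 64;
            throttle.MaxConcurrentInstances = 64;
            throttle.MaxConcurrentSessions = 200;

            host.Open();
            return host;
        }
    }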
Finally, in load testing there are many configurations to apply to different components: max concurrent HTTP connections, IIS limits, using multiple load clients, and so on. Your load test may be invalidated because of this.

Limit WCF service resources usage

I'm developing a web application that needs to perform a task that consumes a lot of CPU and memory, and that may also last several minutes. In order to give a better user experience, I also developed a Windows service that hosts a WCF service which performs this "high cost" task and communicates with the web app using MSMQ (message queues).
This worked great until I tried to run a load test... The Windows service starts consuming a lot of resources, putting the CPU to work at 100% and using more than 1 GB of memory. I've looked for optimizations and I've done a lot of tweaks to the code, and I think it is very efficient, but the task just requires a lot of resources.
The problem is that while the WCF service is working, the CPU gets used at 100% and the web app turns INCREDIBLY SLOW! I don't mind if the task that the WCF service does takes a couple of minutes more, but I want the web app to perform well for users.
So I'm wondering if there is a way to limit the resources that the WCF service can consume, giving priority to the web app.
Thanks in advance.
Juan
The easy solution would be to place the WCF service on a different machine.
The fact that the service is using a lot of CPU is probably not related to your using WCF.
There are some ways that you may be able to improve the performance of your web app:
Process only one message at a time.
Break the jobs into smaller parts.
Set the priority of the Windows service to Below Normal in Task Manager (or programmatically; see the sketch after this list)
Install more RAM on the server
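As a hedged sketch of the "below normal priority" suggestion above, the Windows service can also lower its own priority when it starts (the helper name and its placement, e.g. in OnStart, are assumptions):

    using System.Diagnostics;

    static class ServicePriority
    {
        // Call this when the service starts so the CPU-heavy work yields to the web app.
        public static void LowerOwnPriority()
        {
            Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
        }
    }

The effect is the same as changing the priority in Task Manager, but it survives service restarts.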
I guess this is a problem with your Windows service design. When you decide to host WCF in a Windows service, you have to control resource utilization yourself, which means you have to control throttling. You have to build configurable control over internal service processing so that you can adjust the load based on available resources. If you host WCF in IIS, it already provides such control at the AppPool level.
There are some freeware tools which allow limiting CPU usage for given process but that is not something I would recommend for production usage.
Best regards, Ladislav

What are the advantages of using WCF over frameworks like MassTransit or hand written MSMQ client?

I am looking at using MSMQ as a solution for asynchronous execution in my upcoming project. I want to know the differences between using WCF and frameworks like MassTransit, or even a hand-written MSMQ client, to place tasks on and read tasks off MSMQ.
Basically the application will be several websites (internal through the LAN or external through the Internet) reading/writing data through a service layer (be it WCF or a normal web service). This service layer will then do one or more of the following: 1. write data to the database, 2. trigger the background process by placing a message in the queue, and 3. obviously, it can also retrieve data from the database. The little agent (a Windows service) on the other side of the queue will monitor the queue and execute work based on the task command.
This architecture will be quite easy to scale (add more queues and agents) and easy to implement compared to RPC or distributed execution or whatever. And the agent processing doesn’t need to be real time. And the agent and service layer are separate applications except they share the common domain objects and Repositories etc.
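For concreteness, the hand-written version of that agent (the queue path, message type, and dispatch step are all assumptions) might look roughly like this with System.Messaging:

    using System.Messaging;

    class QueueAgent
    {
        public void Run()
        {
            using (var queue = new MessageQueue(@".\private$\tasks"))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                while (true)
                {
                    Message message = queue.Receive();        // blocks until a message arrives
                    var taskCommand = (string)message.Body;
                    // ... execute the background work for taskCommand ...
                }
            }
        }
    }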
What do you think? Architecture suggestions for the above requirements are welcomed. Thank you!
WCF adds an abstraction over MSMQ. In fact, once you define compatible contracts (operations must be OneWay), you can switch out MSMQ in the config, transparently. (For instance, you could switch to normal HttpWS or a NetTcp binding.)
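A minimal sketch of such a contract (the names are hypothetical): every operation is one-way, so the same contract can be bound to netMsmqBinding or, by changing only the config, to a non-queued binding:

    using System.ServiceModel;

    [ServiceContract]
    public interface ITaskQueue
    {
        // One-way is required for queued (MSMQ) endpoints: the caller gets no reply.
        [OperationContract(IsOneWay = true)]
        void EnqueueTask(string taskCommand);
    }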
You should evaluate the other WCF benefits, like security and so on, to see how those fit in with your needs. Again, they should be reasonably transparent of the fact you're using MSMQ underneath. For instance, adding SOAP security and so on should "just work", independent of using MSMQ.
(Although, IIRC, you still need to log in to the desktop on each machine that uses MSMQ, with the service account that will use MSMQ, to generate the certificate in the machine's local profile. And then it doesn't work very well from IIS6, since user profiles aren't loaded. A real pain in general, but nothing to do with WCF specifically.)
Apart from that:
Have you looked at SQL Server Service Broker? After using MSMQ + WCF and SSSB, I think that SSSB is vastly easier to configure and manage. SSSB works with T-SQL commands over any SQL client (I use it from Mono, on Linux, with transactions). It'll also give you transactional send/receive, even remotely (I think MSMQ 4 now allows this). It really takes a lot of the pain away from message queuing, and if you're using SQL Server already...
SSSB is often overlooked since SQL Management Studio doesn't have GUI designers for all of it, but it isn't hard to use and is a great option. The one downside is that if you want local send capability (i.e., queuing messages when the network is down), you'll need to run a local SQL Express instance.
Your architecture seems sound and reasonable. However, you should consider using the WCF netMsmqBinding transport over hand-coded MSMQ classes. WCF wraps this common functionality in a nice programming model. Also, I believe there are some improvements in the protocol used by WCF compared to basic System.Messaging.
Have a look at the value-add over plain MSMQ:
http://readthedocs.org/docs/masstransit/en/latest/overview/valueadd.html
In summary, with MassTransit you get a lot of messaging concepts clearly presented in the API, to an extent you wouldn't get if you hand-coded it or used WCF.