AppFabric cache listener - wcf

I have several WCF services on a web farm. They capture every request object sent to them, cache it, and write the batch to the DB once a specified number of requests is reached. This is done to minimize calls to the DB. I am not using AppFabric caching for this; I am using the in-memory cache, which means each node has its own separate cache. It all works fine.
I want to install AppFabric on the server and write the requests to that cache instead. My question is: can I write some sort of code (a DLL perhaps) that runs on AppFabric itself, periodically reads from this cache, writes to the DB, and flushes it? That way, all my services have to do is put the requests in the cache, which would let them perform better.
Is this even possible?

Yes, AppFabric already offers a write-behind mechanism: you create an assembly that implements the DataCacheStoreProvider abstract base class and register it with AppFabric. Your services then simply write items to the cache, and on a configured interval the items that were added or updated are written to the backend database according to your provider implementation.
For more details about creating the provider, check the following link.
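To illustrate how the work splits, here is a minimal sketch of the service side only, assuming a named cache called "requests" and an illustrative RequestDto type (both are my assumptions, not from the question). The write-behind provider itself would live in a separate assembly deriving from DataCacheStoreProvider and be registered on the cache cluster, so it is not shown here.

    using System;
    using Microsoft.ApplicationServer.Caching;   // AppFabric client assemblies

    [Serializable]   // AppFabric needs cached objects to be serializable
    public class RequestDto
    {
        public Guid Id { get; set; }
        public string Payload { get; set; }
    }

    public static class RequestCache
    {
        // DataCacheFactory is expensive to create, so keep one instance per process.
        private static readonly DataCacheFactory Factory = new DataCacheFactory();

        // "requests" is an assumed cache name configured on the AppFabric cluster.
        private static readonly DataCache Cache = Factory.GetCache("requests");

        public static void Enqueue(RequestDto request)
        {
            // The service only puts the request; the registered write-behind
            // provider is what persists cached items to the database on its interval.
            Cache.Put(request.Id.ToString(), request);
        }
    }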

Related

Ignite Client connection and Client Cache

I would like to know answers for below questions:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server after a server restart? I know that when the server restarts it gets a different ID, so the existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports this feature? I currently use version 1.7.
2) Can I have a client cache like the one Ehcache provides? I don't want the client cache to be a front-end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache name properties like a cache configuration, and it acts only as a front-end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I cache a large object, I find that serialization and deserialization take a long time in Ignite and retrieval from the distributed cache is slow. Is there any way to speed up retrieval of large objects from the Ignite data grid?
This topic is discussed on Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html

How to cache in WCF multithreaded

So, in my WCF service, I will be caching some data so that future calls into the service can use that data.
What is the best way to cache data in WCF? How does one go about doing this?
If it helps, the WCF service is multithreaded (concurrency mode is Multiple) and ReleaseServiceInstanceOnTransactionComplete is set to false.
On the first call the data may not exist in the cache yet, so the service will fetch it from some source (could be a DB, a file, or anywhere else); thereafter it should be cached and made available (ideally with an expiry mechanism for the object).
Thoughts?
Some of the most common solutions for a WCF service seem to be:
Windows AppFabric
Memcached
NCache
Try reading Caching Solutions
An SOA application can't scale effectively when the data it uses is kept in storage that is not scalable for frequent transactions. This is where distributed caching really helps. Coming back to your question and its answer by ErnieL, here is a brief comparison of these solutions.
As far as Memcached is concerned: if your application needs to run on a cluster of machines, then it is very likely that you will benefit from a distributed cache; however, if your application only needs to run on a single machine, you won't gain any benefit from a distributed cache and will probably be better off using the built-in .NET cache.
Accessing a Memcached cache requires interprocess/network communication, which carries a small performance penalty compared to the .NET caches, which are in-process. Memcached runs as an external process/service, which means you need to install and run that service in your production environment. Again, the .NET caches don't need this step, as they are hosted in-process.
If we compare the features of NCache and AppFabric, the NCache folks are very confident about the range of features they offer compared to AppFabric. You can find plenty of material on the comparison of these two products, like this one:
http://distributedcaching.blog.com/2011/05/26/ncache-features-that-app-fabric-does-not-have/
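If the built-in .NET cache is enough (single machine, or per-node caching is acceptable), a minimal sketch could look like the following, using System.Runtime.Caching.MemoryCache with a sliding expiration. MemoryCache is thread-safe, so it works under ConcurrencyMode.Multiple. The key, the timeout and the loadFromSource delegate are illustrative, not from the question.

    using System;
    using System.Runtime.Caching;

    public static class ReferenceDataCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;
        private static readonly object SyncRoot = new object();

        public static T GetOrAdd<T>(string key, Func<T> loadFromSource, TimeSpan slidingExpiration)
        {
            var cached = Cache.Get(key);
            if (cached != null)
                return (T)cached;

            // Double-checked lock so only one thread hits the DB/file on a cache miss.
            lock (SyncRoot)
            {
                cached = Cache.Get(key);
                if (cached != null)
                    return (T)cached;

                T value = loadFromSource();
                Cache.Set(key, value, new CacheItemPolicy { SlidingExpiration = slidingExpiration });
                return value;
            }
        }
    }

A service operation would then call something like ReferenceDataCache.GetOrAdd("countries", LoadCountriesFromDb, TimeSpan.FromMinutes(20)), where LoadCountriesFromDb is whatever fetches the data from the DB or file on a miss.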

Custom request queueing in WCF environment

I have the following problem:
I need to synchronize access to cache objects that are used by WCF service operations. Each calling client provides a parameter that determines which cache object(s) the operation will use.
In short, requests should be queued by that parameter and then executed serially.
What would be the best way to implement this? I don't want to expose the queue to the client, and I want a solution that scales easily if multiple service instances are running on different machines with a shared cache.
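One way to sketch the single-instance part of this is a semaphore per parameter value, so requests for the same cache object run one at a time while requests for different parameters stay parallel. Scaling this across machines with a shared cache would additionally need a distributed lock (for example in the cache layer itself), which is not shown here. PerKeyQueue and the parameter/operation names are illustrative.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    public class PerKeyQueue
    {
        // One gate per parameter value; fine as long as the set of values is bounded,
        // since the semaphores are never removed in this sketch.
        private readonly ConcurrentDictionary<string, SemaphoreSlim> _locks =
            new ConcurrentDictionary<string, SemaphoreSlim>();

        public async Task<T> RunSerializedAsync<T>(string parameter, Func<Task<T>> operation)
        {
            var gate = _locks.GetOrAdd(parameter, _ => new SemaphoreSlim(1, 1));
            await gate.WaitAsync();
            try
            {
                // Only one operation per parameter value executes at a time.
                return await operation();
            }
            finally
            {
                gate.Release();
            }
        }
    }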

Common cache for wcf services launched in different servicehosts

I'm hosting my WCF application in IIS. To serve some methods I need a reference to a helper object that is heavy to initialize. A good scenario would be to initialize it once and put it in a cache, so that all requests just use the object from the cache. How can I do this caching?
The easiest way would be a static field in my web method, but IIS creates several ServiceHosts to serve requests, and in every ServiceHost the static fields are different.
I also tried using System.Web.HttpRuntime.Cache. Again, I end up with several independent caches.
To clarify, I need to cache not the result of the request, but some intermediate data needed to process requests.
So what could be a solution?
Running different services in separate AppDomains gives you crash protection and some other, security-related benefits. If you are sure you need shared statics, consider using self-hosted services.
I can think of only one way to achieve this with IIS: implement a ServiceHostFactory that returns a custom ServiceHost which starts and stops multiple ServiceHosts under the hood. But that is way too hacky to be production code.
Update: I stumbled upon this today, and this answer looks like a total mess. Different ServiceHosts do share one AppDomain if they reside inside the same IIS site, so static fields should be the same for all services.
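Given that update, a minimal sketch of the "initialize once per AppDomain" approach is a static Lazy<T> holder: every ServiceHost in the same IIS site sees the same fully-initialized helper. HeavyHelper and HelperCache are illustrative names, not from the question.

    using System;
    using System.Threading;

    public sealed class HeavyHelper
    {
        public HeavyHelper()
        {
            // ... expensive initialization happens here exactly once per AppDomain ...
        }
    }

    public static class HelperCache
    {
        // ExecutionAndPublication guarantees the constructor runs only once,
        // even when many requests hit the service concurrently.
        private static readonly Lazy<HeavyHelper> Instance =
            new Lazy<HeavyHelper>(() => new HeavyHelper(),
                                  LazyThreadSafetyMode.ExecutionAndPublication);

        // All service operations, in any ServiceHost of the site, read the same instance.
        public static HeavyHelper Helper
        {
            get { return Instance.Value; }
        }
    }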

Apache HTTP Web Server Requests

When an HTTP request is processed by the Apache web server, it typically forks a new process unless one is using something like FastCGI.
My question relates to "simultaneous requests" when using FastCGI.
If I'm using FastCGI and I have a tree-like data structure in main memory, do I need to worry about concurrent read/write access to the tree?
Or can I just rely on the fact that requests are processed in the order they arrive?
What if one request tries to access the disk and blocks? Are the other requests processed, or do they wait in a queue?
If I'm not using FastCGI, things seem clearer, since I have to reload the tree data structure from a database to manipulate it and then write it back to the database - no concurrency required.
Essentially, do I need to worry about multiple readers/writers to my main-memory data structures with Apache?
Thanks in advance.
When an HTTP request is processed by the Apache web server it typically forks a new process
No, usually one of the pre-forked processes accepts the connection and handles it. There is no fork per request.
If your FastCGI application is single-threaded, you should not worry about concurrency, even if you run under the prefork MPM. But if you manage your data structures in shared memory, you should worry about concurrency.