Using Redis as cache storage for multiple applications on the same server - redis

I want to use Redis as a cache storage for multiple applications on the same physical machine.
I know at least two ways of doing it:
by running several Redis instances on different ports;
by using different Redis databases for different applications.
But I don't know which one is better for me.
What are the advantages and disadvantages of these methods?
Is there any better way of doing it?

Generally, you should prefer the 1st approach, i.e. dedicated Redis servers. Shared databases are managed by the same Redis process and can therefore block each other. Additionally, shared databases share the same configuration (although in your case this may not be an issue since all databases are intended for caching). Lastly, shared databases are not supported by Redis Cluster.
For more information refer to this blog post: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances
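For illustration, a minimal sketch of the dedicated-instance approach as seen from a .NET client using StackExchange.Redis (the ports, keys and values here are placeholders, not part of the original answer):

using System;
using StackExchange.Redis;

// Application A talks to its own dedicated Redis process on port 6379.
var appAConnection = ConnectionMultiplexer.Connect("localhost:6379");
var appACache = appAConnection.GetDatabase();
appACache.StringSet("user:42", "cached-value-for-app-a", TimeSpan.FromMinutes(5));

// Application B uses a separate Redis process on port 6380, so its keys,
// memory limit and eviction policy are fully isolated from application A.
var appBConnection = ConnectionMultiplexer.Connect("localhost:6380");
var appBCache = appBConnection.GetDatabase();
appBCache.StringSet("user:42", "cached-value-for-app-b", TimeSpan.FromMinutes(5));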

We solved this problem by namespacing the keys. Initially we tried using databases, where each database ID would be used for a specific application. However, that idea was not scalable, since there is a limited number of databases; plus, in Premium offerings (like Azure Cache for Redis Premium instances with sharding enabled), the concept of databases is not used.
The solution we used is attaching a unique prefix to all keys. Each application is assigned a unique moniker, which is prefixed in front of each of its keys.
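For example, a minimal sketch of that prefixing idea using plain StackExchange.Redis (independent of the URP framework mentioned below; the monikers and keys are made up):

using StackExchange.Redis;
using StackExchange.Redis.KeyspaceIsolation;

var connection = ConnectionMultiplexer.Connect("localhost:6379");

// Each application wraps the shared database with its own moniker as a prefix,
// so "orders:user:42" and "billing:user:42" can never collide.
IDatabase ordersCache = connection.GetDatabase().WithKeyPrefix("orders:");
IDatabase billingCache = connection.GetDatabase().WithKeyPrefix("billing:");

ordersCache.StringSet("user:42", "cached-orders");
billingCache.StringSet("user:42", "cached-invoices");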
To reduce churn, we have built a framework (URP). If you are using StackExchange.Redis then you will be able to use the URP SDK directly. If it helps, I have added some of the references.
Source Code and Documentation - https://github.com/microsoft/UnifiedRedisPlatform.Core/wiki/Management-Console
Blog Post (idea) - https://www.devcompost.com/post/__urp

Using a different cache manager for each application will also work; that is the approach I am using. For example:

@Bean(name = "myCacheManager")
public CacheManager cacheManager(RedisTemplate<String, Object> redisTemplate) {
    return new RedisCacheManager(redisTemplate);
}

@Bean(name = "customKeyGenerator")
public KeyGenerator keyGenerator() {
    return new KeyGenerator() {
        @Override
        public Object generate(Object o, Method method, Object... objects) {
            // This will generate a unique key from the class name, the method name,
            // and all method parameters appended together.
            StringBuilder sb = new StringBuilder();
            sb.append(o.getClass().getName());
            sb.append(method.getName());
            for (Object obj : objects) {
                sb.append(obj.toString());
            }
            return sb.toString();
        }
    };
}

Related

SpringData Gemfire inserting fake data on Dev env

I am developing an app using GemFire and it would be great to be able to provide some fake data while in the Dev environment.
So instead of doing it in the code like I do today, I was thinking about using Spring's application-context.xml to pre-load some dummy data in the region I am currently working on. Something close to what DBUnit does, but for the DEV rather than Test scope.
Later I could just switch envs on Spring and that data would not be loaded.
Is it possible to add data using SpringData Gemfire to a local data grid?
Thanks!
There is no direct support in Spring Data GemFire to load data into a GemFire cluster. However, there are several options afforded to a SDG/GemFire developer to load data.
The most common approach is to define a GemFire CacheLoader attached to the Region. However, this approach is "lazy" and only loads data from a (potentially) external data source on a cache miss. Of course, you could program the logic in the CacheLoader to "prefetch" a number of entries in a somewhat "predictive" manner based on data access patterns. See GemFire's User Guide for more details.
Still, we can do better than this since it is more likely that you want to "preload" a particular data set for development purposes.
Another, more effective, technique is to use a Spring BeanPostProcessor registered in your Spring ApplicationContext that post-processes your "Region" bean after initialization. For instance, you can declare the RegionPutAllBeanPostProcessor as a bean in your Spring XML configuration (see the profile-based snippet further below).
The RegionPutAllBeanPostProcessor is implemented as...
package example;

import java.util.Collections;
import java.util.Map;

import com.gemstone.gemfire.cache.Region;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;

public class RegionPutAllBeanPostProcessor implements BeanPostProcessor {

    private Map regionData;

    private String targetRegionBeanName;

    protected Map getRegionData() {
        return (regionData != null ? regionData : Collections.emptyMap());
    }

    public void setRegionData(final Map regionData) {
        this.regionData = regionData;
    }

    protected String getTargetRegionBeanName() {
        Assert.state(StringUtils.hasText(targetRegionBeanName), "The target Region bean name was not properly specified!");
        return targetRegionBeanName;
    }

    public void setTargetRegionBeanName(final String targetRegionBeanName) {
        Assert.hasText(targetRegionBeanName, "The target Region bean name must be specified!");
        this.targetRegionBeanName = targetRegionBeanName;
    }

    @Override
    public Object postProcessBeforeInitialization(final Object bean, final String beanName) throws BeansException {
        return bean;
    }

    @Override
    @SuppressWarnings("unchecked")
    public Object postProcessAfterInitialization(final Object bean, final String beanName) throws BeansException {
        if (beanName.equals(getTargetRegionBeanName()) && bean instanceof Region) {
            ((Region) bean).putAll(getRegionData());
        }
        return bean;
    }
}
It is not too difficult to imagine that you could inject a DataSource of some type to pre-populate the Region. The RegionPutAllBeanPostProcessor was designed to accept a specific Region (based on the Region bean's ID) to populate, so you could define multiple instances, each taking a different Region and (perhaps) a different DataSource to populate the Region(s) of choice. This BeanPostProcessor just takes a Map as the data source, but, of course, it could be any Spring-managed bean.
Finally, it is a simple matter to ensure that this bean, or multiple instances of the RegionPutAllBeanPostProcessor, is only used in your DEV environment by taking advantage of Spring bean profiles...
<beans>
    ...
    <beans profile="DEV">
        <bean class="example.RegionPutAllBeanPostProcessor">
            ...
        </bean>
        ...
    </beans>
</beans>
Usually, loading pre-defined data sets is very application-specific in terms of the "source" of the pre-defined data. As my example illustrates, the source could be as simple as another Map. However, it could be a JDBC DataSource, or perhaps a Properties file, or, well, anything for that matter. It is usually up to the developer's preference.
Though, one thing that might be useful to add to Spring Data GemFire would be to load data from a GemFire Cache Region Snapshot. I.e. data that may have been dumped from a QA or UAT environment, or perhaps even scrubbed from PROD for testing purposes. See GemFire Snapshot Service for more details.
Also see the JIRA ticket (SGF-408) I just filed to add this support.
Hopefully this gives you enough information and/or ideas to get going. Later, I will add first-class support into SDG's XML namespace for preloading data sets.
Regards,
John

Making Backward-Compatible WCF Services

TLDR: How do I create WCF services that are backward compatible -- that is, when I deploy a new version of the service on the server-side, all the clients on the older versions can still use the service.
I'm creating a web service that will allow the client applications to fetch a listing of plugins. I will at least have one operation like FindPlugins(string nameOrDescription) which will, on the server, do a search and return a list of objects.
Unfortunately, I cannot guarantee that my clients will all be updated with each new release of my service; nay, I am sure that many of them will be trailing the latest version, and will have old versions -- how old, I cannot be sure, but I know they will be old :)
If I create a new service operation, change the schema, or make some sort of breaking operation on the server side, I'm done. I need to engineer backward compatibility at all times.
Here's one example. Say I return a list of Plugins, each which has a name and description, and I deploy v0.1 of my service. Then, I add a download link, and deploy that as v0.2 of my service.
Some options which I see are:
Force clients to update to the latest service (not feasible)
Break the service for old clients (not feasible)
Append a version number to each operation and only consume the version-specific operations (e.g. FindPluginsV1, FindPluginsV2) -- doesn't seem practical with multiple operations
Provide a new service with each new version -- doesn't seem practical
WCF is backwards-compatible by default.
The following MSDN link contains a list of all the possible changes of a WCF contract and describes their effect on old clients:
WCF Essentials: Versioning Strategies
Most importantly, the following operations will not cause old clients to break:
Service contracts (methods)
Adding method parameters: The default value will be used when called from old clients.
Removing method parameters: The values sent by old clients will be silently ignored.
Adding new methods: Obviously, old clients won't call them, since they don't know them.
Data contracts (custom classes for passing data)
Adding non-required properties.
Removing non-required properties.
Thus, unless you mark the new DownloadLink field as IsRequired (default is false), your change should be fine.
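For example, a hedged sketch of the Plugin data contract (the type and member names are assumptions based on the question, not the actual contract):

using System.Runtime.Serialization;

[DataContract]
public class Plugin
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public string Description { get; set; }

    // Added in v0.2. IsRequired defaults to false, so v0.1 clients that
    // neither send nor expect this member keep working unchanged.
    [DataMember(IsRequired = false)]
    public string DownloadLink { get; set; }
}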
If you look at this article http://blogs.msdn.com/b/craigmcmurtry/archive/2006/07/23/676104.aspx
The first example the guy gives will satisfy your requirements. It has the benefit that existing clients will not break, and you can add as many new service operations as you want this way.
[ServiceContract]
public interface IMyServiceContract
{
    [OperationContract(IsOneWay = true)]
    void MyMethod(MyDataContract input);
}

[ServiceContract]
public interface IMyAugmentedServiceContract : IMyServiceContract
{
    [OperationContract(IsOneWay = true)]
    void MyNewMethod(MyOtherDataContract input);
}
Then change your service implementation:
public class MyOriginalServiceType : IMyAugmentedServiceContract { }

RavenDb session management for WCF and integration testing

Put simply, I have a WCF service that manages apples. Apart from other functionality, it has two methods to add and remove apples from storage. I am writing an integration test to check whether someone is taking advantage of the job and nicking apples. RavenDB in my WCF service has an audit role; it just records actions and apples. In the methods of the WCF service there is some other processing: cleaning, validation, packaging, etc.
My audit integration test can be expressed as:
Empty storage (RavenDB in-memory mode)
Bob comes and puts 10 apples (open session, add, dispose session)
Jake comes and takes 4 apples (open session, remove, dispose session)
Check that 6 apples left
As these are two different people (two WCF calls), it makes sense to use different instances of the session. However, with RavenDB I get an exception:
Apple is not associated with the session, cannot delete unknown entity instance
If I now run a similar integration test where two different people just add apples to the storage, the total storage content corresponds to the truth. This is the confusing bit: adding works across sessions, removing doesn't. In this post Ayende says session micro-managing is not the way to go, but it seems natural to me to use different sessions in my integration testing. Hope the analogy with apples doesn't put you off.
Question: How do I use sessions in integration testing with RavenDB?
Sample code (from notepad)
public void Remove(Apple apple)
{
    using (var session = Store.OpenSession())
    {
        session.Delete(apple);
        session.SaveChanges();
    }
}

public void Add(Apple apple)
{
    using (var session = Store.OpenSession())
    {
        session.Store(apple);
        session.SaveChanges();
    }
}

...
var apples = new Apple[10];
// init
MyRavenDB.Add(apples);
MyRavenDB.Remove(apples.Take(4)); // throws here
// verify
In RavenDB, "The session manages change tracking for all of the entities that it has either loaded or stored".
I suspect the Apple reference you are passing to the Remove() method did not originate from the RavenDB document store, hence the error.
Try this:
public void Remove(Apple apple)
{
    using (var session = Store.OpenSession())
    {
        var entity = session.Load<Apple>(apple.Id);
        session.Delete(entity);
        session.SaveChanges();
    }
}
You are passing entities over the wire, and that is generally a big no-no.
Do it like this:
public void Remove(string appleId)
That would give you much better semantics.
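A minimal sketch of what that signature could look like, reusing the Store from the sample above (just an illustration, not the only way to do it):

public void Remove(string appleId)
{
    using (var session = Store.OpenSession())
    {
        // Load the document by its id so the session tracks it, then delete it.
        var apple = session.Load<Apple>(appleId);
        session.Delete(apple);
        session.SaveChanges();
    }
}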

Using SharpArch NHibernate with different types of SessionStorage

I have a server application with 3 scenarios in which I seem to need different kinds of NHibernate sessions:
Calls to the repository directly from the server itself (while bootstrapping)
Calls to the repository coming from a RIA Service (default ASP.NET Membership Service)
Calls to the repository coming from a WCF Service
Currently I have set up my NHibernate config with SharpArch like this:
/// <summary>
/// Due to issues on IIS7, the NHibernate initialization cannot reside in Init() but
/// must only be called once. Consequently, we invoke a thread-safe singleton class to
/// ensure it's only initialized once.
/// </summary>
protected void Application_BeginRequest(object sender, EventArgs e)
{
    NHibernateInitializer.Instance().InitializeNHibernateOnce(
        () => InitializeNHibernateSession());
    BootStrapOnce();
}

private void InitializeNHibernateSession()
{
    NHibernateSession.Init(
        wcfSessionStorage,
        new string[] { Server.MapPath("~/bin/bla.Interfaces.dll") },
        Server.MapPath("~/Web.config"));
}
This works for the third scenario, but not for the first two.
It seems to need some wcf-session-specific context.
The SharpArch Init method seems to have protection against re-initializing it with another type of session storage.
What is the best way to create a different session for three different kinds of contexts?
This post seems related to mine and has helped me look in the right direction, but I have not found a solution so far.
I'm not sure you are going to be able to do what you want with S#. The reason is that you really want to have 3 separate NHibernate sessions, each with its own storage mechanism. The current implementation only allows for one storage mechanism, regardless of the number of sessions.
I can easily get you #1 and #3, but not #2, since I've never used RIA Services. In the case of 1 and 3, you would need to take the WCF service out of the site and host it in its own site. There is no way of really getting around that, as their session lifecycles are different.
Your other option would be to come up with your own Session Management for NHibernate and not use the default S# one. You could look at the code for the S# version and create your own based on that.

Best way to share data between .NET application instances?

I have created a WCF service (hosted in a Windows service) on a load-balanced server. Each service instance maintains a list of current users. E.g. instance A has users A001, A002, A005, instance B has users A003, A004, A008 and so on.
Each service has an interface used to get the user list, and I expect this method to return all users across all service instances. E.g. getting the user list from instance A or instance B will return A001, A002, A003, A004, A005 and A008.
Currently I am thinking of storing the list of current users in a database, but this list seems to update very often.
I want to know: is there another way to share data between WCF service instances that suits my situation?
Personally, the database option sounds like overkill to me just based on the notion of storing current users. If you are actually storing more than that, then using a database may make sense. But assuming you simply want a list of current users from both instances of your WCF service, I would use an in-memory solution, something like a static generic dictionary. As long as the services can be uniquely identified, I'd use the unique service ID as the key into the dictionary and just pair each key with a generic list of user names (or some appropriate user data structure) for that service. Something like:
private static Dictionary<Guid, List<string>> _currentUsers;
Since this dictionary would be shared between two WCF services, you'll need to synchronize access to it. Here's an example.
using System;
using System.Collections;
using System.Collections.Generic;

public class MyWCFService : IMyWCFService
{
    private static Dictionary<Guid, List<string>> _currentUsers =
        new Dictionary<Guid, List<string>>();

    private void AddUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (!_currentUsers.ContainsKey(serviceID))
            {
                _currentUsers[serviceID] = new List<string>();
            }

            // Make sure to only store the user name once for each service.
            if (!_currentUsers[serviceID].Contains(userName))
            {
                _currentUsers[serviceID].Add(userName);
            }
        }
    }

    private void RemoveUser(Guid serviceID, string userName)
    {
        // Synchronize access to the collection via the SyncRoot property.
        lock (((ICollection)_currentUsers).SyncRoot)
        {
            // Check if the service's ID has already been added.
            if (_currentUsers.ContainsKey(serviceID))
            {
                // See if the user name exists.
                if (_currentUsers[serviceID].Contains(userName))
                {
                    _currentUsers[serviceID].Remove(userName);
                }
            }
        }
    }
}
Given that you don't want users listed twice for a specific service, it would probably make sense to replace the List<string> with HashSet<string>.
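A hedged sketch of that change, limited to the declaration:

private static Dictionary<Guid, HashSet<string>> _currentUsers =
    new Dictionary<Guid, HashSet<string>>();

With that, the Contains checks before Add become optional, since HashSet<string>.Add simply returns false for an item that is already present.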
A database would seem to offer a persistent store which may be useful or important for your application. In addition it supports transactions etc which may be useful to you. Lots of updates could be a performance problem, but it depends on the exact numbers, what the query patterns are, database engine used, locality etc.
An alternative to this option might be some sort of in-memory caching server like memcached. Whilst this can be shared and accessed in a similar (sort of) way to a database server, there are some caveats. Firstly, these platforms are generally not backed by some sort of permanent storage. What happens when the memcached server dies? Second, they may not be ACID-compliant enough for your use. What happens under load in terms of additions and updates?
I like the in-memory way. Actually, I am designing the same mechanism for one of the projects I'm working on now. This is good for scenarios where you don't have the opportunity to access a database, or where people are really reluctant to create a table to store simple info like a list of users against a machine name.
The only update I'd make is that a node would return only the list of its own users to its peer, and the peer would combine that with its existing list, then return its existing list to the peer who called. That's how all the peers would stay in sync with the same list. A rough sketch of that exchange is shown below.
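Building on the static dictionary from the earlier answer (the method name and shape here are hypothetical, and it assumes the same usings plus System.Linq):

// Hypothetical peer-sync operation: the caller sends its own user list,
// this node records it, and gets back the combined list this node knows about.
public List<string> SyncUsers(Guid peerServiceID, List<string> peerUsers)
{
    lock (((ICollection)_currentUsers).SyncRoot)
    {
        if (!_currentUsers.ContainsKey(peerServiceID))
        {
            _currentUsers[peerServiceID] = new List<string>();
        }

        foreach (var user in peerUsers)
        {
            if (!_currentUsers[peerServiceID].Contains(user))
            {
                _currentUsers[peerServiceID].Add(user);
            }
        }

        // The union of every instance's users is what each peer hands back.
        return _currentUsers.Values.SelectMany(users => users).Distinct().ToList();
    }
}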
The DB option sounds good. If there are no performance issues, it is a simple design that should work. If you can afford to be semi-realtime and non-persistent, one way would be to maintain the list in memory in each service and have each service update the others when a new user joins. This can be done as some kind of broadcast via a centralised service or using MSMQ etc.
If you reconsider and host using IIS, you will find that with a single line in a config file you can make the ASP Global, Application and Session objects available. This trick is also very handy because it means you can share session state between an ASP application and a WCF service.
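If the "single line" being referred to is ASP.NET compatibility mode (an assumption on my part), the idea looks roughly like this (service and key names are made up):

using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[ServiceContract]
public interface ICurrentUsersService
{
    [OperationContract]
    string[] GetCurrentUsers();
}

// Requires <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
// under <system.serviceModel> in web.config (the "single line" mentioned above).
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class CurrentUsersService : ICurrentUsersService
{
    public string[] GetCurrentUsers()
    {
        // With compatibility mode enabled, the ASP.NET Application (and Session,
        // if enabled) objects are reachable from inside WCF operations.
        var users = HttpContext.Current.Application["CurrentUsers"] as string[];
        return users ?? new string[0];
    }
}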