Fast Multitenant Caching - Local Caching in Addition to Distributed Caching. Or are they the same thing? - redis

I am working on converting an existing single tenant application to a multitenant one. Distributed caching is new to me. We have an existing primitive local cache system using the .NET cache that generates cloned objects from existing cached ones. I have been looking at utilizing Redis.
Can Redis cache and invalidate locally, in addition to over the wire, thus replacing all the benefit of the primitive local cache? Or would a tiered approach be ideal, falling back to the Redis distributed cache when the local one doesn't have the objects we need? I believe the latter would require expiration notifications to be sent to the local caches when data is updated; otherwise servers may end up with out-of-date, inconsistent data.
It seems like a set of local caches with expiration notifications would also qualify as a distributed cache, so I am a bit confused about how Redis might be configured, and whether it would be distributed across the servers serving the requests or live in its own cluster.
When I say local, I mean not having to go over the wire for data.
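The tiered approach described above usually looks like: check an in-process cache first, fall back to the distributed cache, and drop local entries when an invalidation message arrives (for example via Redis pub/sub, or Redis 6's client-side caching/tracking feature). A minimal sketch, where `backend` stands in for any Redis client and all names are illustrative, not a specific library API:

```python
# Two-tier cache sketch: in-process dict in front of a distributed backend.
class TieredCache:
    def __init__(self, backend):
        self.backend = backend      # distributed tier (e.g. a Redis client)
        self.local = {}             # in-process tier, no network hop

    def get(self, key):
        if key in self.local:       # fast path: no wire round trip
            return self.local[key]
        value = self.backend.get(key)   # fall back to the distributed tier
        if value is not None:
            self.local[key] = value     # populate the local tier
        return value

    def set(self, key, value):
        self.backend.set(key, value)
        self.local[key] = value
        # In a real deployment, publish an invalidation message here
        # (e.g. Redis pub/sub) so other nodes drop their stale copies.

    def on_invalidate(self, key):
        # Called when an invalidation message arrives from the bus.
        self.local.pop(key, None)
```

The key point is that every node's `on_invalidate` must be wired to the notification channel; otherwise the local tiers drift out of sync, which is exactly the inconsistency the question worries about.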

Related

Roll back Gcloud Redis upgrade

I'd like to upgrade the Redis Memorystore instance in our gcloud project because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices mentioned in the public documentation:
We recommend exporting your instance data before running a version upgrade operation.
Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for a Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
As mentioned, the documentation also recommends that you enable RDB snapshots:
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
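Assuming the instance name, bucket, and region below are placeholders for your own values, the export-then-upgrade flow described above looks roughly like this with the gcloud CLI (check the current gcloud reference for your installed version):

```shell
# Back up the instance data to a Cloud Storage bucket first,
# since the version upgrade cannot be rolled back.
gcloud redis instances export gs://my-bucket/backup.rdb my-instance \
    --region=us-central1

# Then run the (irreversible) upgrade to Redis 6.x.
gcloud redis instances upgrade my-instance \
    --redis-version=redis_6_x \
    --region=us-central1
```

The export gives you a recovery path of last resort: if the upgraded instance misbehaves, you can create a new instance on the old version and import the RDB file, rather than rolling back in place.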

Topology for using Ignite in embedded, full replication mode, with native persistence (Kubernetes)

One of my backend services uses Ignite. The backend service itself is stateless, meaning the service doesn't have any internal or shared state and can scale up and down as needed. The backend service is deployed in Kubernetes.
I am currently using Ignite in embedded mode, with the cache mode set to REPLICATED and native persistence enabled. I have also enabled baselineAutoAdjustEnabled. The reason for using replicated mode is to have the data available to all the backend instances locally and consistently.
The application seems to work correctly:
as and when data is modified, it gets replicated across all instances of the backend
data replication works even if a new instance joins the topology long after the initial baseline is set and activated.
The question now is: is this the right approach to running Ignite embedded (server mode), with full replication and native persistence?
In general your design looks fine. But be careful with baselineAutoAdjustEnabled: this feature is only safe when topology changes cannot lead to lost partitions. In your case, since you use only replicated caches, any node stopping should not lead to lost partitions.
Embedded mode does not matter in this case.
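For reference, a minimal sketch of the kind of configuration under discussion, as Spring XML (illustrative, not a complete drop-in file; the cache name is made up, and baseline auto-adjust is typically enabled at runtime via `ignite.cluster().baselineAutoAdjustEnabled(true)`):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Enable native persistence on the default data region. -->
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
    <!-- A fully replicated cache: every server node holds a full copy. -->
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="myReplicatedCache"/>
            <property name="cacheMode" value="REPLICATED"/>
        </bean>
    </property>
</bean>
```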

How to better utilize local cache with load balancing strategies?

I have an Authentication service where I need to cache some user information for better performance. I chose to use a local cache because the Authentication service will probably be called on each request, so I want it to be super fast. Compared to remote cache options, a local cache is a lot faster (local cache access is below 1 ms while remote cache access is around 25 ms).
The problem is I cannot cache as much information as a distributed cache without running out of memory (we're talking about millions of users). I can either leave it as it is, so that when the local cache reaches the memory limit it evicts some other data, but that would be a poor use of the cache. Or I can use some kind of load-balancing strategy where users are redirected to the same Authentication service instance based on their IP address or other criteria, so cache hits will be a lot higher.
It kind of defeats the purpose of having stateless services; however, I think I can slightly compromise on this principle at the network layer if I want both consistency and availability. And for Authentication, both are crucial for full security (user info always has to be up to date and available).
What kind of load balancing techniques out there for solving this kind of problem? Can there be other solutions?
Note: Even though this question is specific to Authentication, I think many other services that are frequently accessed and require speed can benefit a lot from using local caches.
So - to answer the question here - load balancers can handle this with their hashing algorithms.
I'm using Azure a lot so I'm giving Azure Load Balancer as an example:
Configuring the distribution mode
Load balancing algorithm
From the docs:
Hash-based distribution mode
The default distribution mode for Azure Load Balancer is a five-tuple hash. The tuple is composed of the:
Source IP
Source port
Destination IP
Destination port
Protocol type
The hash is used to map traffic to the available servers. The algorithm provides stickiness only within a transport session. Packets that are in the same session are directed to the same datacenter IP behind the load-balanced endpoint. When the client starts a new session from the same source IP, the source port changes and causes the traffic to go to a different datacenter endpoint.

Ignite Client connection and Client Cache

I would like to know answers for below questions:
1) If the Ignite server is restarted, I need to restart the client (web applications). Is there any way the client can reconnect to the server on server restart? I know that when the server restarts it allocates a different ID, and because of this the existing connection becomes stale. Is there a way to overcome this problem, and if so, which version of Ignite supports this feature? I currently use version 1.7.
2) Can I have a client cache like the one Ehcache provides? I don't want a client cache as a front-end to a distributed cache. When I looked at the Near Cache API, it doesn't have cache name properties like a cache configuration, and it acts only as a front-end to a distributed cache. Is it possible to create a client-only cache in Ignite?
3) If I have a large object to cache, I find serialization and deserialization take a long time in Ignite, and retrieving it from the distributed cache is slow. Is there any way to speed up the retrieval of large objects from the Ignite data grid?
This topic is discussed on Apache Ignite users mailing list: http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Client-Reconnect-and-Client-Cache-td10018.html

How to cache in WCF multithreaded

So, in my WCF service, I will be caching some data so future calls made into the service can obtain that data.
What is the best way to cache data in WCF, and how does one go about doing this?
If it helps, the WCF service is multithreaded (concurrency mode is Multiple) and ReleaseServiceInstanceOnTransactionComplete is set to false.
On the first call the data may not exist yet, in which case the service will go and fetch it from some source (could be a DB, could be a file, could be wherever), but thereafter it should be cached and made available (ideally with an expiry system for the object).
Thoughts?
Some of the most common solutions for a WCF service seem to be:
Windows AppFabric
Memcached
NCache
Try reading Caching Solutions
An SOA application can't scale effectively when the data it uses is kept in storage that is not scalable for frequent transactions. This is where distributed caching really helps. Coming back to your question and its answer by ErnieL, here is a brief comparison of these solutions.
As far as Memcached is concerned: if your application needs to run on a cluster of machines, then it is very likely that you will benefit from a distributed cache; however, if your application only needs to run on a single machine, then you won't gain any benefit from a distributed cache and will probably be better off using the built-in .NET cache.
Accessing a Memcached cache requires interprocess/network communication, which carries a small performance penalty over the .NET caches, which are in-process. Memcached runs as an external process/service, which means that you need to install and run that service in your production environment. Again, the .NET caches don't need this step, as they are hosted in-process.
If we compare the features of NCache and AppFabric, the NCache folks are very confident about the range of features they offer compared to AppFabric. You can find plenty of material on the comparison of these two products, like this one:
http://distributedcaching.blog.com/2011/05/26/ncache-features-that-app-fabric-does-not-have/
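Whichever backing store you pick, the get-or-fetch-with-expiry pattern from the question is the same. A language-agnostic sketch in Python (in .NET you would typically reach for MemoryCache rather than hand-rolling this; the locking is what makes it safe under ConcurrencyMode.Multiple):

```python
import threading
import time

# Thread-safe get-or-fetch cache with per-entry expiry.
class ExpiringCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._lock = threading.Lock()
        self._store = {}                   # key -> (value, expires_at)

    def get_or_fetch(self, key, fetch):
        now = time.monotonic()
        with self._lock:
            entry = self._store.get(key)
            if entry is not None and entry[1] > now:
                return entry[0]            # cache hit, not yet expired
        value = fetch(key)                 # fetch outside the lock (DB, file, ...)
        with self._lock:
            self._store[key] = (value, now + self.ttl)
        return value
```

Fetching outside the lock keeps slow I/O from blocking every other caller; the trade-off is that two threads missing at the same moment may both fetch, which is usually acceptable for a cache.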