ElastiCache with Redis - very slow performance - redis

We have implemented caching with AWS ElastiCache for Redis, with both encryption in transit and encryption at rest enabled, using Spring Data Redis and Lettuce with SSL.
Spring 4.3.12.RELEASE
Spring-data-redis 1.8.8.RELEASE
aws-java-sdk 1.11.228
Lettuce (Redis java Client) 4.4.2.Final
Code for the implementation is provided here. We are caching data retrieved from SQL queries.
The application runs very slowly with the above implementation compared to when caching is not implemented.
I'd appreciate any help to improve performance.
Thanks, Raj

There could be several reasons for the slowness:
Network performance. If the objects are large, the network performance of both the cache instances and the client instances matters.
A large number of fields in the objects. Spring Data stores each field separately in Redis and then assembles the object on retrieval.
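For example, one way to reduce the per-entry cost is to store each cached object as a single serialized blob, so a read is one value fetch rather than a field-by-field reassembly. Below is a minimal sketch for spring-data-redis 1.8.x; the bean setup and the choice of GenericJackson2JsonRedisSerializer are illustrative assumptions, not the asker's actual configuration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class CacheConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Keys as plain strings, values as one JSON document per cache entry.
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }

    @Bean
    public RedisCacheManager cacheManager(RedisTemplate<String, Object> redisTemplate) {
        // In spring-data-redis 1.8.x, RedisCacheManager wraps a RedisTemplate.
        return new RedisCacheManager(redisTemplate);
    }
}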

Related

Synchronize multiple instances of Spring Cache with a Redis lock

I'm building a Spring Boot application that uses Spring Cache with a Redis backing store and needs to synchronize the updates made to the cache.
The caching is not done on the fly, but by a scheduled process that updates the cache periodically.
The algorithm I came up with is:
periodically the instances will check if the Redis cache is older than some predetermined time
if that's the case, the instance will try to acquire a lock on some Redis key
if the instance successfully locks the key, it will then proceed with the update
if some other instance already locked the key, move on
all instances can still read the cache
Everything is more or less already built, all I need is to implement the locking/releasing mechanism.
Spring Cache is using Lettuce to interact with Redis; what is the best way to get a connection to Redis and manage the locking mechanism?
As you may already be aware, Spring's Cache Abstraction provides simple coordination among multiple threads in a single Spring [Boot] application process using the sync attribute on the @Cacheable annotation (see the reference documentation).
NOTE: Despite the comment in the documentation ("... use the sync attribute to instruct the underlying cache provider to lock the cache entry while the value is being computed. As a result, only one thread is busy computing the value, while the others are blocked until the entry is updated in the cache."), the locking mechanics are handled by the core framework itself and, in most cases, not by the provider. Anyway...
However, this "coordination" is only per-process and will not work across multiple Spring [Boot] application instances, or (OS) JVM processes. In that case, you need some form of distributed locking across your multiple Spring [Boot] application instances to coordinate access to the shared cache entries stored in the single Redis server (cluster) shared by those instances.
I am no Redis expert (I am still learning), but I am familiar with similar NoSQL stores (Apache Geode/VMware GemFire, Hazelcast, etc) and distributed locking mechanisms. I see that distributed locking is possible to achieve with Redis as well. In a quick search, I found "Distributed Locking" in Redis, and specifically, "Building a lock in Redis". This is probably the best way to go.
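For illustration, here is a minimal sketch of that locking pattern (SET with NX and an expiry, released via compare-and-delete) using Lettuce directly. It assumes Lettuce 5+; the key name, token scheme, and timeout are all made-up values:

import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.SetArgs;
import io.lettuce.core.api.sync.RedisCommands;

import java.util.UUID;

public class RedisLockSketch {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        String lockKey = "cache-refresh-lock";       // hypothetical lock key
        String token = UUID.randomUUID().toString(); // identifies this lock holder

        // SET key token NX PX 30000: acquire the lock only if the key does not
        // exist, with an expiry so a crashed instance cannot hold it forever.
        String result = redis.set(lockKey, token, SetArgs.Builder.nx().px(30_000));

        if ("OK".equals(result)) {
            try {
                // ... refresh the cache here ...
            } finally {
                // Release only if we still own the lock (compare-and-delete),
                // so we never delete a lock another instance has re-acquired.
                String script =
                    "if redis.call('get', KEYS[1]) == ARGV[1] then " +
                    "  return redis.call('del', KEYS[1]) " +
                    "else return 0 end";
                redis.eval(script, ScriptOutputType.INTEGER,
                        new String[] { lockKey }, token);
            }
        } // else: another instance holds the lock; skip this refresh cycle.

        client.shutdown();
    }
}

The expiry bounds how long a dead instance can block everyone else, and the random token plus the Lua compare-and-delete keeps one instance from releasing a lock that has since passed to another.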
In addition, if you want to make this distributed locking automatically/transparently available through Spring's Cache Abstraction, then, as one idea, you could create a custom AOP Aspect and weave it together with the framework-provided Caching Aspect (Interceptor), being conscious of ordering.
Alternatively, you could implement wrapper implementations for the Spring Cache and CacheManager SPI interfaces that implement distributed locking on top of the core Redis Cache and CacheManager provider implementations provided by Spring Boot/Spring Data Redis.
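As a rough sketch of that wrapper idea, you could decorate the provider's Cache so that value computation happens under a distributed lock. The DistributedLock abstraction below is hypothetical, not a Spring interface:

import java.util.concurrent.Callable;

import org.springframework.cache.Cache;

public class LockingCache implements Cache {

    private final Cache delegate;       // e.g. the RedisCache created by the provider
    private final DistributedLock lock; // hypothetical distributed lock abstraction

    public LockingCache(Cache delegate, DistributedLock lock) {
        this.delegate = delegate;
        this.lock = lock;
    }

    @Override
    public <T> T get(Object key, Callable<T> valueLoader) {
        // Compute-if-absent under the distributed lock so only one application
        // instance loads the value for a given key.
        String lockName = getName() + ":" + key;
        lock.acquire(lockName);
        try {
            return delegate.get(key, valueLoader);
        } finally {
            lock.release(lockName);
        }
    }

    // The remaining Cache methods simply delegate.
    @Override public String getName() { return delegate.getName(); }
    @Override public Object getNativeCache() { return delegate.getNativeCache(); }
    @Override public ValueWrapper get(Object key) { return delegate.get(key); }
    @Override public <T> T get(Object key, Class<T> type) { return delegate.get(key, type); }
    @Override public void put(Object key, Object value) { delegate.put(key, value); }
    @Override public ValueWrapper putIfAbsent(Object key, Object value) { return delegate.putIfAbsent(key, value); }
    @Override public void evict(Object key) { delegate.evict(key); }
    @Override public void clear() { delegate.clear(); }

    // Hypothetical lock abstraction; it could be backed by the SET NX pattern above.
    public interface DistributedLock {
        void acquire(String name);
        void release(String name);
    }
}

A matching CacheManager wrapper would then hand out LockingCache instances instead of the raw provider caches.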
Of course, there are multiple ways to go about this. Just tossing out more ideas, but have a look at the distributed locking information in the book.

Can I replace Redis cache with Cosmos DB?

Can I use Azure Cosmos DB instead of Redis Cache for server-side caching? I feel that Cosmos DB also provides key-value storage, has geo-replication and read/write access, and offers lower latency than Redis Cache.
If you're still reading this two years later, note the following: the answer is yes, but the real story is that they work better together. Azure Cache for Redis now has an Enterprise tier available through the same Marketplace tile. This gives you the ability to deploy Redis in an active-active model across multiple regions, where all instances are readable and writeable, with conflict resolution built into the different data types that Redis supports. Couple that with higher performance through the Redis Enterprise proxy and up to five nines of availability, and you have additional options to choose from. Azure Cache for Redis Enterprise (ACRE) in front of Cosmos is a real option, as ACRE has sub-millisecond latency capabilities. Note that I work for Redis Labs, and I have seen this work and deployed it myself.
Redis is an in-memory datastore, hence its primary use case is in-memory caching. Since it is a key-value store, it generally has limited query ability, only allowing queries by primary key.
Cosmos DB, on the other hand, is a globally distributed, horizontally scalable, multi-model database service. It comes in handy in scenarios where you need the ability to query over heterogeneous data.
The two serve entirely different purposes; Microsoft even offers Redis Cache as a service apart from Cosmos DB precisely to serve this caching purpose.
Cosmos is probably going to be more expensive, from a cost perspective, than using Redis, depending on your throughput.
The one big benefit you can achieve with Cosmos is multi-region reads, so your availability could increase, and the latency for your users could drop if they're reading from a Cosmos region closer to them.

Since Redis is single-threaded, concurrent requests become serialized when accessing Redis. What is the significance of using Redis?

We usually use Redis for caching in Spring projects. My problem is that since Redis is single-threaded, our concurrent requests become serialized requests when accessing Redis. So what is the significance of using Redis?
Is it only because of "It's not very frequent that CPU becomes your bottleneck with Redis, as usually Redis is either memory or network bound. ... using pipelining Redis running on an average Linux system can deliver even 1 million requests per second ..."?
I am learning Redis; the quote is from the Redis documentation FAQ.
You've basically asked two questions in one question:
What is the significance of using Redis?
Well, Redis is known to be fast because it keeps the data in memory. If you're asking whether being single-threaded is very restrictive: well, it's a product that works like this by design; maybe it could be even more performant if it were multithreaded, but that depends on the actual implementation under the hood.
In any case, it offers much more than just a "get data in memory":
- Many primitives to work with
- Configurable persistence
- Replication of data
- And much more
If the question is whether an in-process cache will be faster (you've mentioned the Spring framework, so you're in Java land), then yes.
In fact, Spring Cache supports Guava Cache (Spring 5/Spring Boot 2 use Caffeine for the same purpose instead), and yes, it will be faster in a head-to-head comparison with Redis. But what if you have a distributed application with many instances, and one instance computes something and puts it in its cache: how does another instance get the same information without the information being distributed between the instances? There are tools for that, like Hazelcast, but they're out of scope for this question; the point is that once the application grows beyond basic, tasks like cache synchronization and keeping the cache up to date become much less obvious.
Whether Redis can really deliver 1 million operations per second.
Now this question is too vague to answer precisely:
What is the hardware that runs Redis?
What are the network configurations? (after all, Redis calls are made over the network)
How often do you persist to disk? (Redis has configuration options for that)
Do you use replication and split the load between many Redis servers, reaching much higher overall throughput?
What commands exactly are being run under the hood?
In any case, when it comes to benchmarking, you can set up your system in the optimal way and use the tool offered by Redis itself:
Redis Benchmarking Chapter in Redis tutorial
The tool is called redis-benchmark; you can run it with various parameters and see how fast Redis really is.
Here is an example (I encourage you to read the full article in the link):
$ redis-benchmark -t set,lpush -n 100000 -q
SET: 74239.05 requests per second
LPUSH: 79239.30 requests per second
This says: connect to the Redis server available on localhost, run (-n) 100000 requests in quiet mode (the -q parameter), and run only the tests for two specific commands: SET and LPUSH.
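Since the one-million-requests-per-second figure quoted in the question relies on pipelining, here is a hedged sketch of what client-side pipelining looks like from Java using Lettuce's manual command flushing (assuming Lettuce 5+; the host, key names, and batch size are arbitrary):

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

import java.util.ArrayList;
import java.util.List;

public class PipeliningSketch {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // Disable auto-flush so commands are buffered instead of being written
        // to the socket one at a time.
        connection.setAutoFlushCommands(false);

        List<RedisFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            futures.add(async.set("key:" + i, "value:" + i));
        }

        // Write all buffered commands in one network burst, then wait for the replies.
        connection.flushCommands();
        futures.forEach(f -> f.toCompletableFuture().join());

        connection.setAutoFlushCommands(true);
        connection.close();
        client.shutdown();
    }
}

The point is that many commands share one network round trip, which is why throughput numbers with pipelining dwarf the one-command-per-round-trip case.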

Evcache vs redis

I have read that Netflix uses EVCache, which is a wrapper over Memcached, and that EVCache proves better than Memcached.
In general it is said that Redis serves as a better cache than Memcached, so I was trying to find comparisons of Redis and EVCache.
Does Redis scale as well as EVCache or Memcached? I am assuming that EVCache scaling is tried and tested (hence it works well for Netflix).
EVCache is a functionality-adding wrapper over Memcached. It is an application that Netflix devs wrote to add the functionality they need in their cache layer while using Memcached as the underlying data store. You could write your own EVCache-style layer that uses Redis as the data store.
Comparing Redis to EVCache is not the correct comparison, as they operate at two different layers.
Does redis scale as well as evcache or memcache?
Redis can scale to many hundreds of thousands of requests per second.
In general, Redis is preferred over Memcached because of its many built-in data structures.
Redis is single-threaded, so once CPU usage hits 80+% it is better to run another instance instead of giving it a bigger server.

Redis: Efficient cluster of servers for large key set

I have a very large set of keys (200M keys with small values, under 100 bytes each) to store, and I'm trying to use Redis. The problem is that I have 10 Redis DBs to split the keys over, but currently I'm on a single server with those 10 Redis DBs. By a Redis DB I mean using SELECT. From my calculations it looks like I'm going to blow out memory; I think I'll need over 4TB of memory for this case! My calculation is based on 10000 keys with 100-byte values taking 220MB of RAM (this is from a table I found). So simply put, (2*10^8 / 10^4) * 220MB = 4.4TB.
If my calculation looks correct, what are my options? I've read in various posts that Redis VM is no longer an option. Can I use a Redis cluster? That still appears to require too many servers to be practical. I understand I could switch to another DB, but I'd like that to be a last-resort option.
Firstly, using shared databases (i.e. the SELECT command) isn't a recommended practice, since all of these databases are essentially managed by the same Redis process. It is preferable to have 10 separate Redis processes (even on the same server) in order to avoid contention (more info here).
Next, there are ways to reduce the memory footprint of your database. You could, for example, perform client-side compression (see here) or consider other optimizations such as using Hashes to keep multiple values (as described here).
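As a rough illustration of the hash optimization mentioned above: instead of 200M top-level keys, you can group entries into hashes of, say, 1000 fields each, so Redis can use its compact ziplist encoding. The bucket size and key scheme below are assumptions, and hash-max-ziplist-entries must be raised above the bucket size for the compact encoding to apply:

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.sync.RedisCommands;

public class HashBucketSketch {

    // Must stay below hash-max-ziplist-entries (assumed configuration).
    private static final int BUCKET_SIZE = 1000;

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();

        long id = 123_456_789L;
        String value = "small value under 100 bytes";

        // HSET bucket:{id / BUCKET_SIZE} {id % BUCKET_SIZE} value
        redis.hset(bucketKey(id), fieldKey(id), value);

        // HGET reads it back with the same derived key and field.
        System.out.println(redis.hget(bucketKey(id), fieldKey(id)));

        client.shutdown();
    }

    private static String bucketKey(long id) {
        return "bucket:" + (id / BUCKET_SIZE);
    }

    private static String fieldKey(long id) {
        return String.valueOf(id % BUCKET_SIZE);
    }
}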
That said, a Redis server is ultimately bound by the amount of RAM that the host provides. Once you've reached that limit, you'll need to shard your database and use a Redis cluster. Since you're already using multiple databases, this shouldn't pose a big challenge, as your code should already be compatible with that to a degree. Sharding can be done in one of three ways: in the client, via a proxy, or with Redis Cluster. Client-side sharding can be implemented in your code or by the Redis client that you're using (if the client library supports it). Redis Cluster (v3) is expected to be released in the very near future and already has a stable release candidate. As for proxy-based sharding, there are several open-source solutions out there, including Twitter's twemproxy, Netflix's dynomite, and codis. Additional information about sharding and partitioning can be found here.
Disclaimer: I work at Redis Labs. Lastly, AFAIK there's only one Redis-as-a-Service provider that already provides built-in support for clustering Redis. Redis Labs' Redis Cloud is a fully-managed service that can scale seamlessly to any required capacity. Our clusters support both the '{}' hashtag standard as well as sharding by RegEx - more about this can be found here.
You can use LMDB with Dynomite to store data beyond your memory capacity. LMDB uses both disk and memory to store data, and Dynomite makes LMDB distributed.
We have done a POC with this combination, and the two work nicely together.
For more information, please check out our open issue here:
https://github.com/Netflix/dynomite/issues/254