Can I replace Redis cache with Cosmos DB?

Can I use Azure Cosmos DB instead of Redis cache for server-side caching? I feel that Cosmos DB also provides key-value storage, has geo-replication, read/write access, and lower latency than Redis cache.

If you're still reading this 2 years later, note the following. The answer is yes, but the real story is that they work better together. Azure Cache for Redis now has an Enterprise Tier, available through the same Marketplace tile. This gives you the ability to deploy Redis in an Active-Active model across multiple regions, where all instances are readable and writeable, with conflict resolution built into the different datatypes that Redis supports. Couple that with higher performance through the Redis Enterprise proxy and up to five nines of availability, and you have additional options to choose from. Azure Cache for Redis Enterprise (ACRE) in front of Cosmos DB is a real option, as ACRE has sub-millisecond latency capabilities. Note: I work for Redis Labs and have seen this work and deployed it myself.
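For context, here is a minimal cache-aside sketch in Python with Redis sitting in front of Cosmos DB, assuming the redis-py and azure-cosmos packages; the host names, keys, database/container names, and TTL are placeholders, not values from the answer above.

```python
# Hypothetical cache-aside sketch: Azure Cache for Redis in front of Cosmos DB.
# Host names, keys, database/container names and the TTL are placeholders.
import json
import redis
from azure.cosmos import CosmosClient

# Azure Cache for Redis listens with TLS on port 6380.
cache = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                    password="<access-key>", ssl=True)

cosmos = CosmosClient("https://myaccount.documents.azure.com:443/",
                      credential="<cosmos-key>")
container = cosmos.get_database_client("appdb").get_container_client("profiles")

CACHE_TTL_SECONDS = 300  # tune to how much staleness a read can tolerate

def get_profile(profile_id: str) -> dict:
    """Return a profile, serving it from Redis when possible."""
    key = f"profile:{profile_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    # Cache miss: read from Cosmos DB (assumes id doubles as the partition key).
    item = container.read_item(item=profile_id, partition_key=profile_id)
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(item))
    return item
```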

Redis is an in-memory datastore, hence its primary use case is in-memory caching. Since it is a key-value store, it generally has limited query ability, allowing lookups by key only.
Cosmos DB, on the other hand, is a globally distributed, horizontally scalable, multi-model database service. It comes in handy in scenarios where you need the ability to query over heterogeneous data.
The two serve entirely different purposes; Microsoft offers Redis cache as a service alongside Cosmos DB precisely because of this.
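To make that difference concrete, here is a small sketch, again assuming redis-py and azure-cosmos; the connection details and the query are made up. Redis answers lookups by key, while Cosmos DB can run ad-hoc queries over heterogeneous documents.

```python
# Illustrative only: connection details and the query are made up.
import redis
from azure.cosmos import CosmosClient

r = redis.Redis(host="localhost", port=6379)
container = (CosmosClient("https://myaccount.documents.azure.com:443/",
                          credential="<cosmos-key>")
             .get_database_client("appdb")
             .get_container_client("orders"))

# Redis: lookups are by key only.
order = r.get("order:1001")

# Cosmos DB: ad-hoc queries over heterogeneous documents.
shipped = list(container.query_items(
    query="SELECT * FROM c WHERE c.status = 'shipped' AND c.total > 100",
    enable_cross_partition_query=True))
```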

Cosmos is probably going to be more expensive than Redis, depending on your throughput.
The one big benefit you can get with Cosmos is multiple read regions, so your availability could increase, and latency could drop for users who read from a Cosmos region closer to them.

Related

Cons of using MemoryCache as a temporary copy of DB table

I have a site where you can list your car for sale. There is a list and a map with filtering on car types and other car specifications. My idea was to cache the cars table and use that to filter on when a user is searching for a car on the website. Currently, especially when zooming in/out on the map, each time the user does that an HTTP request is made that queries the database, and that can be slow and heavy on the server.
As an experiment with 1,000 items, I have cached the map data (trimmed down to only basic info) and it's working fine. I was thinking of keeping basically a copy of the cars table, with all needed joins added, in MemoryCache and using that instead of querying the DB on every request for both the list and the map. I would have a cron job every 5 minutes (as data can change, but it doesn't have to be immediate) to update MemoryCache with the latest cars data from the DB.
What would be the cons of this approach in the long term and when storing, for example, 100,000 records? Besides the server needing more RAM, would there be any concerns about the scalability or usability of this approach? Would it be better to use Redis instead?
I do have a "search as you type" service in place now, but I don't really need that functionality as the filtering is pretty exact; I added it more as a caching server, but I think I would be better off just using MemoryCache until a real need for that kind of service arises.
Thank you
Since memory isn’t infinite, we need to limit the number of items stored in the In-Memory cache.
MemoryCache VS Redis
MemoryCache
MemoryCache is embedded in the process, hence it can only be used as a plain key-value store from that process.
Redis
Redis is a remote data structure server. It is certainly slower than just storing the data in local memory.
My conclusion is that MemoryCache runs inside the web server process of the current application, so it is limited by that server's resources. Under the same configuration it will be very fast, but the disadvantage is that the stored data cannot be shared with other applications.
If Redis is used, reads are not as fast as MemoryCache's in-process reads, but it offers high reliability and high scalability.
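As a rough sketch of what the asker's 5-minute refresh could look like with Redis instead of per-process MemoryCache (so all web servers share one copy), here is a hypothetical Python version; fetch_cars_from_db(), the key name, and the TTL are placeholders.

```python
# Hypothetical refresh job: fetch_cars_from_db(), the key name and the TTL
# are placeholders; the real query with its joins goes in the fetch function.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)
KEY = "cars:map-data"
REFRESH_SECONDS = 300  # matches the 5-minute cron interval

def fetch_cars_from_db() -> list[dict]:
    """Stand-in for the real DB query; returns trimmed rows for the map/list."""
    return [{"id": 1, "lat": 59.91, "lng": 10.75, "type": "sedan", "price": 9500}]

def refresh_cache() -> None:
    cars = fetch_cars_from_db()
    # TTL slightly longer than the refresh interval, so stale data disappears
    # on its own if the job ever stops running.
    cache.set(KEY, json.dumps(cars), ex=REFRESH_SECONDS * 2)

def read_cars() -> list[dict]:
    """What each web server calls instead of hitting the database."""
    raw = cache.get(KEY)
    return json.loads(raw) if raw else []

if __name__ == "__main__":
    refresh_cache()  # run this script from cron every 5 minutes
```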
Related Post:
1. How to update redis after updating database?
2. how to keep caching up to date
3. How can MySQL update data in real time in redis cache?

how to achieve multi tenancy in redis?

Since I am fairly new to Redis, I am trying to explore options and see how I can achieve multi-tenancy with Redis.
I read some documentation on the Redis Labs official page, and it looks like cluster mode supports multi-tenancy out of the box with Redis Enterprise.
I am wondering if such a solution for multi-tenancy is available in sentinel mode as well?
I may be completely confused about the multi-tenancy that Redis Enterprise provides. Maybe it works in sentinel mode too, but nothing seems very clear to me.
Can someone shed some light on multi-tenancy in Redis and which mode supports it?
If you are going to use redis-cluster, then only one DB is supported.
Redis Cluster does not support multiple databases like the stand alone version of Redis. There is just database 0 and the SELECT command is not allowed.
If you are not going to use cluster mode, then you may take a look at the message posted by the creator of Redis about multiple databases (years ago):
I understand how this can be useful, but unfortunately I consider
Redis multiple database errors my worst decision in Redis design at
all... without any kind of real gain, it makes the internals a lot
more complex. The reality is that databases don't scale well for a
number of reason, like active expire of keys and VM. If the DB
selection can be performed with a string I can see this feature being
used as a scalable O(1) dictionary layer, that instead it is not.
With DB numbers, with a default of a few DBs, we are communication
better what this feature is and how can be used I think. I hope that
at some point we can drop the multiple DBs support at all, but I think
it is probably too late as there is a number of people relying on this
feature for their work.
Salvatore's message
Redis cluster documentation
What I may suggest is prefixing, as in the sketch below. We use this method in a SaaS application, and all the different data types are prefixed with the related customer name. We handle some of the operations at the application layer.
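A minimal sketch of that prefixing approach, assuming redis-py; the tenant names and key layout are illustrative only:

```python
# Illustrative tenant prefixing with redis-py; tenant names and key layout
# are assumptions, not anything from the answer above.
import redis

r = redis.Redis(host="localhost", port=6379)

def tenant_key(tenant: str, key: str) -> str:
    return f"{tenant}:{key}"

def set_for_tenant(tenant: str, key: str, value: str) -> None:
    r.set(tenant_key(tenant, key), value)

def get_for_tenant(tenant: str, key: str):
    return r.get(tenant_key(tenant, key))

# The same logical key never collides across tenants.
set_for_tenant("acme", "session:42", "alice")
set_for_tenant("globex", "session:42", "bob")
```

The same scheme carries over to cluster mode, since prefixed keys hash to slots like any other key.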
If you want to go with a single instance and multiple databases, then you need to manage them in your codebase using the SELECT command. There may be some libraries to help with that. One critical thing is that:
All databases are still persisted in the same RDB / append-only file.

REDIS Server Configuration

I am working on a migration project from Oracle to Redis; my Oracle DB size is 1 TB. Can you please suggest a hardware configuration for Redis? I am planning to have a master with 2 slaves for the Redis server.
What is the best option for Redis to have high availability?
Is the master-slave architecture fine? If yes, can I have the master and all slaves on the same server? If so, what disadvantages would that bring?
Please suggest the best option for high availability for my Redis server.
Considering the data size, you can utilize Redis Cluster to store your data.
When designed properly, this is expected to provide high availability and partition your data among multiple masters in the cluster.
To identify its suitability, you need to perform some kind of benchmark with the real data and the real queries expected from your application.
You can use the redis-benchmark utility provided with Redis out of the box and simulate the expected data and calls to get a picture of what to expect.
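redis-benchmark is the right tool for this, but as an illustration of the kind of probe meant here, a rough round-trip timing sketch with redis-py (host, payload size, and iteration count are arbitrary assumptions):

```python
# Rough round-trip probe with redis-py; host, payload size and iteration count
# are arbitrary. Use redis-benchmark itself for serious numbers.
import time
import redis

r = redis.Redis(host="localhost", port=6379)
payload = b"x" * 100  # values of roughly the expected size
N = 10_000

start = time.perf_counter()
for i in range(N):
    r.set(f"bench:{i}", payload)
set_s = time.perf_counter() - start

start = time.perf_counter()
for i in range(N):
    r.get(f"bench:{i}")
get_s = time.perf_counter() - start

print(f"SET: {N / set_s:.0f} ops/s   GET: {N / get_s:.0f} ops/s")
```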

Is there a database based key eviction policy in redis when RAM is full

I am using 5 databases in my Redis server. I want to evict keys belonging to a particular DB using an LRU mechanism. Is it possible?
I read this: how-to-make-redis-choose-lru-eviction-policy-for-only-some-of-the-keys.
But all my databases set a time to live on their entries, so the volatile-lru policy can't single out one DB's keys.
I tried the volatile-ttl policy, but the other databases have shorter TTLs on their keys, so their keys get evicted first, which I don't want.
That's one of the effects of using numbered/shared databases - they all share the same configuration and resources. You should consider using separate Redis servers, one for each of your databases, to have better control over what gets evicted and when. Even more importantly, using dedicated instances allows you to better utilize the cores that your server has.
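A small sketch of that layout, assuming redis-py: two separate instances, each with its own memory limit and eviction policy, which a single server's numbered databases cannot give you. Ports and limits are made up.

```python
# Sketch of one instance per logical dataset, each with its own eviction
# policy; ports and memory limits here are made-up examples.
import redis

# Instance for cache-style data: evict any key, least-recently-used first.
cache_db = redis.Redis(host="localhost", port=6379)
cache_db.config_set("maxmemory", "2gb")
cache_db.config_set("maxmemory-policy", "allkeys-lru")

# Instance for TTL-driven data: evict only keys with a TTL, shortest TTL first.
ttl_db = redis.Redis(host="localhost", port=6380)
ttl_db.config_set("maxmemory", "1gb")
ttl_db.config_set("maxmemory-policy", "volatile-ttl")
```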

Redis: Efficient cluster of servers for large key set

I have a very large set of keys, 200M keys with small values (<100 bytes), to store, and I'm trying to use Redis. The problem is that I have 10 Redis DBs to split the keys over, but currently I'm on a single server with those 10 Redis DBs. By a Redis DB I mean using SELECT. From my calculations it looks like I'm going to blow out memory. I think I'll need over 4 TB of memory for this case! What are my options? First, my calculation is based on 10,000 keys with 100-byte values taking 220 MB of RAM (this is from a table I found). So simply put, (2*10^8 / 10^4) * 220 MB = 4.4 TB.
If my calculation looks correct, what are my options? I've read in different posts that Redis VM is no longer an option. Can I use a Redis cluster? This still appears to require too many servers to be practical. I understand I could switch to another DB, but I'd like that to be the last-resort option.
Firstly, using shared databases (i.e. the SELECT command) isn't a recommended practice, since all of these databases are essentially managed by the same Redis process. It is preferable to have 10 separate Redis processes (even on the same server) in order to avoid contention (more info here).
Next, there are ways to reduce the memory footprint of your database. You could, for example, perform client-side compression (see here) or consider other optimizations such as using Hashes to keep multiple values (as described here).
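As an illustration of the Hash-based optimization mentioned above, here is a hypothetical bucketing sketch in Python with redis-py; the bucket size and key scheme are assumptions:

```python
# Hypothetical bucketing sketch: group many small string keys into Hashes,
# which Redis stores compactly while they stay small. Bucket size and key
# scheme are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)
BUCKET_SIZE = 1000  # keep each hash small enough for the compact encoding

def bucket_for(key_id: int) -> str:
    return f"obj:{key_id // BUCKET_SIZE}"

def put(key_id: int, value: str) -> None:
    r.hset(bucket_for(key_id), str(key_id), value)

def get(key_id: int):
    return r.hget(bucket_for(key_id), str(key_id))

put(123_456_789, "small value under 100 bytes")
print(get(123_456_789))
```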
That said, a Redis server is ultimately bound by the amount of RAM that the host provides. Once you've reached that limit, you'll need to shard your database and use a Redis cluster. Since you're already using multiple databases, this shouldn't pose a big challenge, as your code should already be compatible with that to a degree. Sharding can be done in one of three ways: client, proxy, or Redis Cluster. Client-side sharding can be implemented in your code or by the Redis client library you're using (if it supports that). Redis Cluster (v3) is expected to be released in the very near future and already has a stable release candidate. As for proxy-based sharding, there are several open source solutions out there, including Twitter's twemproxy, Netflix's dynomite and codis. Additional information about sharding and partitioning can be found here.
Disclaimer: I work at Redis Labs. Lastly, AFAIK there's only one Redis-as-a-Service provider that already provides built-in support for clustering Redis. Redis Labs' Redis Cloud is a fully-managed service that can scale seamlessly to any required capacity. Our clusters support both the '{}' hashtag standard as well as sharding by RegEx - more about this can be found here.
You can use LMDB with Dynomite to store data beyond your memory capacity. LMDB uses both disk and memory to store data, and Dynomite makes LMDB distributed.
We have done a POC with this combo and they work nicely together.
For more information, please check out our open issue here:
https://github.com/Netflix/dynomite/issues/254