I actually want to understand in which scenarios we should use Redis for caching and in which Aerospike.
I searched a lot on Google but did not find a satisfying answer.
I've done some projects with Redis and MongoDB, but I'm not comfortable with them at all. I'm currently using MongoDB for storing player data and Redis for temporary and sorted data. I'd like to use Redis more in my projects.
My questions
Should I use Redis more for persistent data? I'd also like to ask about a specific case: if I build a feature that bans players from the game server, is Redis a good option for it?
What are the best use cases for Redis?
As I mentioned above, I use MongoDB for storing player data and an in-memory map for caching their information while they're online. From what I know, Redis is one of the best NoSQL databases for caching. Should I use Redis for caching player data?
If you have any other ideas on the topic, I'd like to hear them in detail.
Should I use Redis more for persistent data?
Redis is far more than a cache and acts as the main database in many enterprises; it also supports several persistence methods, such as RDB and AOF.
If I build a feature that bans players from the game server, is Redis a good option for it?
Redis supports a nice set of plugins (modules); one of them is RedisBloom, which is especially suited for quick membership filtering.
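For the ban-list case specifically, core Redis structures already go a long way; RedisBloom only becomes interesting when the set is huge and a small false-positive rate is acceptable. Below is a minimal sketch using Python and redis-py, assuming a local Redis server; the key names (banned:players, tempban:<id>) are made up for illustration:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Permanent bans: an exact membership check with a core Redis set.
    def ban_player(player_id: str) -> None:
        r.sadd("banned:players", player_id)

    def is_banned(player_id: str) -> bool:
        return bool(r.sismember("banned:players", player_id))

    # Temporary bans: one key per player with a TTL (here 24 hours),
    # so the ban expires on its own.
    def temp_ban(player_id: str, seconds: int = 86400) -> None:
        r.set(f"tempban:{player_id}", "1", ex=seconds)

    def is_temp_banned(player_id: str) -> bool:
        return r.exists(f"tempban:{player_id}") == 1

With RDB or AOF persistence enabled, a ban list like this survives restarts, so it can be treated as persistent data rather than a throwaway cache.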
I have a very large set of keys to store (200M keys with small values, under 100 bytes each) and I'm trying to use Redis. The problem is that I have 10 Redis DBs to split the keys over, but currently I'm on a single server hosting those 10 Redis DBs (by a Redis DB I mean one selected with SELECT). From my calculations it looks like I'm going to blow out memory: I think I'll need over 4TB of RAM for this case! What are my options? My calculation is based on 10,000 keys with 100-byte values taking 220MB of RAM (this is from a table I found), so simply put: (2*10^8 / 10^4) * 220MB = 4.4TB.
If my calculation looks correct, what are my options? I've read in various posts that Redis VM is no longer an option. Can I use a Redis cluster? That still appears to require too many servers to be practical. I understand I could switch to another DB, but I'd like that to be the last-resort option.
Firstly, using shared databases (i.e. the SELECT command) isn't a recommended practice, since all of these databases are essentially managed by the same Redis process. It is preferable to have 10 separate Redis processes (even on the same server) in order to avoid contention (more info here).
Next, there are ways to reduce the memory footprint of your database. You could, for example, perform client-side compression (see here) or consider other optimizations such as using Hashes to keep multiple values (as described here).
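As a rough sketch of the hash optimization (assuming a stable hash such as CRC32 so buckets stay consistent across processes; the bucket count and key names are illustrative only):

    import zlib
    import redis

    r = redis.Redis()

    BUCKETS = 10_000  # tune so each hash stays small enough for compact encoding

    def bucket_key(key: str) -> str:
        # Group many small keys into one hash; Redis stores small hashes
        # in a compact ziplist/listpack encoding, which saves memory.
        return f"bucket:{zlib.crc32(key.encode()) % BUCKETS}"

    def put(key: str, value: str) -> None:
        r.hset(bucket_key(key), key, value)

    def get(key: str):
        return r.hget(bucket_key(key), key)

The trade-off is that per-key TTLs and some commands no longer apply to individual entries, only to whole buckets.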
That said, a Redis server is ultimately bound by the amount of RAM that the host provides. Once you've reached that limit you'll need to shard your database and use a Redis cluster. Since you're already using multiple databases this shouldn't pose a big challenge as your code should already be compatible with that to a degree. Sharding can be done in one of three approaches: client, proxy or Redis Cluster. Client-side sharding can be implemented in your code or by the Redis client that you're using (if the client library that you're using supports that). Redis Cluster (v3) is expected to be released in the very near future and already has a stable release candidate. As for proxy-based sharding, there are several open source solutions out there, including Twitter's twemproxy, Netflix's dynomite and codis. Additional information about sharding and partitioning can be found here.
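Client-side sharding can be as simple as hashing each key to one of several independent Redis processes. A rough sketch, assuming ten standalone instances on consecutive ports (the ports and key names are placeholders):

    import zlib
    import redis

    # Ten standalone Redis processes instead of ten SELECT-ed databases.
    SHARDS = [redis.Redis(host="localhost", port=6379 + i) for i in range(10)]

    def shard_for(key: str) -> redis.Redis:
        # Deterministic hash so the same key always lands on the same shard.
        return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

    shard_for("user:42").set("user:42", "small value")
    print(shard_for("user:42").get("user:42"))

Note that adding or removing shards with plain modulo hashing remaps most keys; consistent hashing (as twemproxy does) or Redis Cluster avoids that.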
Disclaimer: I work at Redis Labs. Lastly, AFAIK there's only one Redis-as-a-Service provider that already provides built-in support for clustering Redis. Redis Labs' Redis Cloud is a fully-managed service that can scale seamlessly to any required capacity. Our clusters support both the '{}' hashtag standard as well as sharding by RegEx - more about this can be found here.
You can use LMDB with Dynomite to store data beyond your memory capacity. LMDB uses both disk and memory to store data, and Dynomite makes LMDB distributed.
We have done a POC with this combo and they work nicely together.
For more information, please check out our open issue here:
https://github.com/Netflix/dynomite/issues/254
I tried to find more info about this online, but I can't seem to find a fitting answer.
Our new application uses HA load balancers on top to distribute visitors to clustered AMQP and clustered MySQL, and everything works flawlessly.
Now we have decided that we need to store our sessions in Redis, and according to everyone out there, Redis seems to be a good choice.
But what I don't understand is this: since Redis doesn't support clustering in production yet, how do people achieve HA with Redis? It's all well and good to set up a master-slave Redis setup, but that means I can only write to the master. What happens if the master dies? And even with Redis Sentinel promoting slaves to master, the replication from master to slave can lag and reply with stale data. How do people prevent that?
To keep it short, I just don't "see" it. Please enlighten me! Thank you.
Have a look at Twemproxy. It was designed to partition data amongst multiple Redis masters, so there's no single point of failure; currently, it's the recommended approach to partitioning Redis based on this (scroll to the bottom).
Bonus alert: here's an interesting article on how to use Redis slaves and Sentinel with Twemproxy, so they all play nicely together.
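On the stale-read concern from the question: with Sentinel, clients ask the Sentinels for the current master and send writes (and any read that must be fresh) there, while replicas serve reads that can tolerate a little lag. A minimal sketch with redis-py, assuming a Sentinel on localhost:26379 monitoring a master group named mymaster (both are placeholders):

    from redis.sentinel import Sentinel

    sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

    # Always resolves to the current master, even after a failover.
    master = sentinel.master_for("mymaster", socket_timeout=0.5)
    # A replica connection, fine for reads that may be slightly stale.
    replica = sentinel.slave_for("mymaster", socket_timeout=0.5)

    master.set("session:abc123", "serialized session data")
    print(replica.get("session:abc123"))  # may briefly lag behind the master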
Try redis-mgr: https://github.com/idning/redis-mgr
It combines redis + twemproxy + sentinel and handles deployment, auto-failover, monitoring, migration, and rolling upgrades.
Redis 3.x has clustering functionality in the core.
http://redis.io
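Once the cluster is available, clients that speak the cluster protocol route each key to the right node by its hash slot. A minimal sketch with redis-py's cluster client (redis-py 4+), assuming a cluster node reachable on localhost:7000; the hash-tag key names are illustrative:

    from redis.cluster import RedisCluster

    rc = RedisCluster(host="localhost", port=7000, decode_responses=True)

    # Keys sharing the same {hash tag} land in the same slot, so they can
    # be used together in multi-key operations.
    rc.set("user:{42}:profile", "...")
    rc.set("user:{42}:settings", "...")
    print(rc.get("user:{42}:profile"))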
I'm interested in a SignalR + Redis solution for implementing a scalable server application. My concern is that Redis Cluster is not production-ready yet! So my question is:
Is Redis a bottleneck in SignalR + Redis when it comes to scaling out? If it is, is there any Linux-based solution that solves the problem?
On a single Redis server you can easily handle up to 10K concurrent clients using pub/sub. If you are still evaluating what to use, this should be more than you need at your current stage.
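For reference, the pub/sub part is just a channel that every server node subscribes to and any node publishes to. A minimal sketch with redis-py (the channel name is made up; SignalR's actual Redis backplane handles this internally):

    import redis

    r = redis.Redis(decode_responses=True)

    # Each server node keeps one subscription to the shared channel.
    pubsub = r.pubsub()
    pubsub.subscribe("backplane:messages")

    # Any node publishes; every subscribed node receives a copy to fan out
    # to its own connected clients.
    r.publish("backplane:messages", "hello from node A")

    for message in pubsub.listen():
        if message["type"] == "message":
            print(message["channel"], message["data"])
            break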
Redis Cluster is supposed to be production-ready by the end of the year or early 2014. You can actually download it and try it already. Lots of people are using it now and reporting the odd bug. The creator of Redis is focused on making the cluster work, and as of now it is very mature.
By using the proxy you could have up to 1000 nodes simultaneously, with over 10K clients on pub/sub each, so 10 million concurrent users. The theoretical limit of the cluster is 16384 nodes, but a maximum of 1000 is recommended right now.
Unless you are at Facebook scale, you can probably use Redis for your use case (and even if you are at Twitter scale, given that Twitter uses Redis intensively for storing all its timelines).
I've been asked in a comment to add some references, so here are the relevant links:
On the number of concurrent connections per Redis process: http://redis.io/topics/clients
On how Twitter uses Redis: http://highscalability.com/blog/2013/7/8/the-architecture-twitter-uses-to-deal-with-150m-active-users.html
On cluster size/specs: http://redis.io/topics/cluster-spec
Is Redis a bottleneck in SignalR + Redis when it comes to scaling out? If it is, is there any Linux-based solution that solves the problem?
I don't think so. Check the article below on how to scale out using Redis:
http://www.asp.net/signalr/overview/performance-and-scaling/scaleout-with-redis
I have a single page app (Rails + Backbone.js + Postgres on Heroku), and as some of my queries are starting to slow down for users with lots of data (there are multiple queries per object), I want to start caching the JSON I'm sending the client.
I'm already using Redis with Resque, so I'm not sure if I should be using the same redis instance for both Resque and general data caching. Is that a reason to go with Memcached?
I guess I'm looking for general input from those with experience with either so I can quickly decide on one of the two and start caching stuff (sorry if a clear-cut answer cannot be given).
Thanks for any help.
Both will cache strings just fine, although I think that using Redis for a simple cache is overkill. I'd go with Memcached.
Blog post from Salvatore on caching with Redis.
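Whichever store you pick, the caching pattern itself is the same cache-aside loop. A minimal sketch in Python against Redis (the TTL and key name are arbitrary; swap in a Memcached client if you go that route):

    import json
    import redis

    r = redis.Redis(decode_responses=True)
    CACHE_TTL = 300  # seconds; tune to how stale the JSON is allowed to be

    def cached_json(cache_key: str, build):
        # Cache-aside: return the cached JSON if present, otherwise build it,
        # store it with a TTL, and return it.
        cached = r.get(cache_key)
        if cached is not None:
            return json.loads(cached)
        payload = build()
        r.setex(cache_key, CACHE_TTL, json.dumps(payload))
        return payload

    # Hypothetical usage: cache the serialized objects sent to one user.
    data = cached_json("api:user:42:objects", lambda: {"objects": []})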