I'm currently writing keys from multiple servers to Redis, which Logstash is picking up.
However, keys that aren't picked up are lying around until I delete them manually.
This can happen due to misconfiguration or malicious executions. Because those keys are actively written to, a TTL doesn't work for me.
Does Redis support any way to "limit" which keys are allowed to be created, or is Lua scripting an efficient way to get all keys except some defined ones and delete them?
Working with Redis 3.2 (RHEL 7); we could potentially go up to v5.
Apparently this can be done with ACLs, by specifying which keys a user is allowed to write:
https://redis.io/topics/acl
AFAIK this is only available from Redis version 6, so I would have to build it myself for RHEL 7.
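In the meantime, on Redis 3.2 a client-side sweep can approximate the Lua idea without stalling the server (EVAL runs atomically, so a KEYS-based Lua script would block everything while it runs). Below is a minimal sketch using redis-py; the allowlist prefixes are purely hypothetical:

    import redis

    # Hypothetical allowlist: only keys with these prefixes may exist.
    ALLOWED_PREFIXES = ("logstash:", "metrics:")

    r = redis.Redis(host="localhost", port=6379)

    # SCAN (available since Redis 2.8) iterates incrementally and does not
    # block the server the way KEYS * or an EVAL'd script would.
    for key in r.scan_iter(count=1000):
        if not key.decode().startswith(ALLOWED_PREFIXES):
            r.delete(key)

On Redis 6+, an ACL along the lines of ACL SETUSER logwriter on >secret ~logstash:* +@write (user name, password and key pattern hypothetical) would stop the stray keys from being created in the first place.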
Related
I have a Redis cluster that is still active, and I am not sure whether it is still being used. I logged in, ran KEYS *, and found around 30k+ keys.
How can I identify whether it is still being used, and if it is not, what is the process to terminate it?
First of all, you shouldn't use the KEYS command in production; it can jam your Redis. Use SCAN instead.
If you mean you would like to know whether queries are still being made against Redis, you can use the MONITOR command. Be careful though: it reduces your throughput by about 50%, so only use it when necessary.
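If MONITOR's overhead is a concern, a lighter-weight alternative is to sample the server's command counter from INFO twice and see whether it moves. A sketch with redis-py; the 60-second window is arbitrary:

    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # total_commands_processed counts every command the server has handled
    before = r.info("stats")["total_commands_processed"]
    time.sleep(60)
    after = r.info("stats")["total_commands_processed"]

    # Our own two INFO calls inflate the count slightly.
    print("commands seen in the window:", after - before)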
I am using 5 databases in my Redis server. I want to evict keys belonging to a particular DB using the LRU mechanism. Is that possible?
I read this: how-to-make-redis-choose-lru-eviction-policy-for-only-some-of-the-keys.
But all my databases use time-to-live for their entries, so I can't use the volatile-lru policy.
I tried the volatile-ttl policy, but the other databases have shorter TTLs on their keys, so their keys would get evicted, which I don't want.
That's one of the effects of using numbered/shared databases: they all share the same configuration and resources. You should consider using separate Redis servers, one for each of your databases, to get better control over what gets evicted and when. Even more importantly, using dedicated instances allows you to better utilize the cores that your server has.
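For illustration, two stripped-down config files for dedicated instances; the ports, memory limits and policies below are hypothetical and would be tuned per database:

    # redis-6379.conf (replaces SELECT 0): evict any key by LRU
    port 6379
    maxmemory 2gb
    maxmemory-policy allkeys-lru

    # redis-6380.conf (replaces SELECT 1): rely on TTL expiry only, never evict
    port 6380
    maxmemory 4gb
    maxmemory-policy noeviction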
We're having a severe config/product bug in our installation. We've been experiencing concurrency-related errors that we've been blaming on Jedis usage, but it seems it might be a product/config issue.
This is a single Redis installation with over 4M keys. Whenever we run a long-running command from redis-cli, such as KEYS *, our client code (Jedis-based) starts to throw errors, such as trying to cast a string into a binary (the typical concurrency errors seen when a Jedis instance is shared). The worst scenario is that it sometimes seems to return the wrong keys. We were using a Jedis instance per actor instance, so that shouldn't be an issue, but we changed to JedisPool nevertheless. The problem remained (we are using Jedis 2.6.2).
But the main thing was when trying from different redis-cli sessions. We ran KEYS *, which stays running for a long time, and then a GET command, which returned. It was our understanding that KEYS * should block everyone, but the GET command kept working. This also happens with a SLEEP command.
Is this related to a config setting, is this something that shouldn't happen, or is the KEYS command not actually blocking, meaning my problem lies elsewhere?
The redis.io documentation for KEYS clearly states that KEYS is a debugging command and should not be used in production:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using SCAN or sets.
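In practice, redis-cli can drive the SCAN iteration for you; the pattern below is just an example:

    # count keys matching a pattern without blocking the way KEYS * would
    redis-cli --scan --pattern 'user:*' | wc -l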
I have a very large set of keys, 200M keys, with small values (<100 bytes) to store, and I'm trying to use Redis. I have 10 Redis DBs to split the keys over, but currently they all live on a single server. By a Redis DB I mean one selected with SELECT. From my calculations it looks like I'm going to blow out memory; I think I'll need over 4TB for this case! What are my options? My calculation is based on 10,000 keys with 100-byte values taking 220MB of RAM (this is from a table I found). So simply put: (2*10^8 / 10^4) * 220MB = 4.4TB.
If my calculation looks correct, what are my options? I've read in different posts that Redis VM is no longer an option. Can I use a Redis cluster? That still appears to require too many servers to be practical. I understand I could switch to another DB, but I'd like that to be the last resort.
Firstly, using shared databases (i.e. the SELECT command) isn't a recommended practice, since all of these databases are essentially managed by the same Redis process. It is preferable to have 10 separate Redis processes (even on the same server) in order to avoid contention (more info here).
Next, there are ways to reduce the memory footprint of your database. You could, for example, perform client-side compression (see here) or consider other optimizations such as using Hashes to keep multiple values (as described here).
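As a sketch of the Hash-based optimization (bucket naming and size are hypothetical; keeping each bucket below hash-max-ziplist-entries preserves Redis's compact hash encoding):

    import redis

    r = redis.Redis(host="localhost", port=6379)

    BUCKET_SIZE = 1000  # keep below hash-max-ziplist-entries

    def put(key_id: int, value: str) -> None:
        # Pack many small values into one Hash instead of one top-level key
        # each; Redis stores small hashes far more compactly.
        bucket, field = divmod(key_id, BUCKET_SIZE)
        r.hset("bucket:%d" % bucket, field, value)

    def get(key_id: int):
        bucket, field = divmod(key_id, BUCKET_SIZE)
        return r.hget("bucket:%d" % bucket, field)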
That said, a Redis server is ultimately bound by the amount of RAM that the host provides. Once you've reached that limit you'll need to shard your database and use a Redis cluster. Since you're already using multiple databases this shouldn't pose a big challenge as your code should already be compatible with that to a degree. Sharding can be done in one of three approaches: client, proxy or Redis Cluster. Client-side sharding can be implemented in your code or by the Redis client that you're using (if the client library that you're using supports that). Redis Cluster (v3) is expected to be released in the very near future and already has a stable release candidate. As for proxy-based sharding, there are several open source solutions out there, including Twitter's twemproxy, Netflix's dynomite and codis. Additional information about sharding and partitioning can be found here.
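For completeness, a minimal client-side sharding sketch with redis-py; the hosts are placeholders, and a plain CRC32 modulo is used instead of consistent hashing, so adding a shard would remap most keys:

    import zlib
    import redis

    # Hypothetical shard pool; production setups usually prefer consistent
    # hashing so that adding a shard does not remap most keys.
    SHARDS = [
        redis.Redis(host="10.0.0.1", port=6379),
        redis.Redis(host="10.0.0.2", port=6379),
    ]

    def shard_for(key: str) -> redis.Redis:
        return SHARDS[zlib.crc32(key.encode()) % len(SHARDS)]

    shard_for("user:42").set("user:42", "payload")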
Disclaimer: I work at Redis Labs. Lastly, AFAIK there's only one Redis-as-a-Service provider that already provides built-in support for clustering Redis. Redis Labs' Redis Cloud is a fully-managed service that can scale seamlessly to any required capacity. Our clusters support both the '{}' hashtag standard as well as sharding by RegEx - more about this can be found here.
You can use LMDB with Dynomite to store data beyond your memory capacity. LMDB uses both disk and memory to store data, and Dynomite makes LMDB distributed.
We have done a POC with this combo and they work nicely together.
For more information, please check out our open issue here:
https://github.com/Netflix/dynomite/issues/254
I am using Redis version 2.8.3. I want to build a Redis cluster, but in this cluster there should be multiple masters. This means I need multiple nodes that have write access, with their writes applied to all other nodes.
I could build a cluster with a master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried to write something into the DB via the master. It was replicated to all slaves, and I really liked that.
But when I tried to write via a slave, it told me that slaves have no right to write. After that, I set the slave's read-only status in redis.conf to false, and then I could write into the DB.
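For reference, the redis.conf directive in question (Redis 2.8-era naming) is:

    slave-read-only no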
But I realized that such a write is not replicated to my master, so it is not replicated to any of the other slaves either.
This means I could not build an active-active cluster.
I tried to find out whether Redis has active-active cluster capability, but I could not find an exact answer.
Is it possible to build an active-active cluster with Redis?
If so, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (to the other master) - i.e. grand total of 4 Redis instances (and twice the RAM).
The simple scenario is when each app updates only a mutually-exclusive subset of the database's keys. In that scenario this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict resolution/etc... logic into your apps to consolidate local master and slave differences (and that may be a bit of an overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
Also note that by colocating app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve: network load. Why exactly is that an issue? Are your apps so throughput-heavy, or is the network so thin, that you are willing to go to such lengths? Or maybe latency is the issue you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
Redis Enterprise has had this feature for quite a while, but if you are looking for an open source solution, KeyDB is a fork with active-active support (called Active Replica).
Setting it up is just a little more work than standard replication:
1. Both servers must have "active-replica yes" in their respective configuration files.
2. On server B, execute the command "replicaof [A address] [A port]".
3. Server B will drop its database and load server A's dataset.
4. On server A, execute the command "replicaof [B address] [B port]".
5. Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step).
Both servers will now propagate writes to each other. You can test this by writing to a key on Server A and ensuring it is visible on B and vice versa.
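As a command-level sketch of those steps, with placeholder addresses:

    # keydb.conf on BOTH servers
    active-replica yes

    # on server B (assuming A is at 10.0.0.1:6379)
    keydb-cli replicaof 10.0.0.1 6379

    # on server A (assuming B is at 10.0.0.2:6379)
    keydb-cli replicaof 10.0.0.2 6379

    # verify: a write on A should be readable on B, and vice versa
    keydb-cli -h 10.0.0.1 set canary 1
    keydb-cli -h 10.0.0.2 get canary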
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support