I want to generate a unique ID in my application. Is it OK to use Redis's auto-increment (INCR) for this purpose? Will the ID still be unique in a cluster?
Yes. As the INCR documentation states, it is an atomic operation and thus provides this guarantee.
Redis can generate IDs with the INCR command, but it may not be a good solution.
Redis does not guarantee ACID durability for updates (INCR), so it can lose consistency when it restarts or fails over. Both RDB and AOF persist data asynchronously by default, so recent writes can be lost; after a restart (or a failover in a Cluster/Sentinel setup), an ID generator may therefore hand out duplicate IDs.
If you don't care about this scenario, or manual recovery is acceptable, you can use Redis as a generator, since it is fast enough for most cases (compared with MySQL or other databases).
Alternatively, there are ID-generation algorithms such as Snowflake that guarantee globally incremental, duplicate-free IDs.
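For illustration, here is a minimal sketch of an INCR-based generator using redis-py against a local instance; the counter key name is made up for the example.
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

def next_id(counter_key='id:counter'):
    # INCR is atomic, so concurrent callers always receive distinct values.
    # Note the caveat above: restoring from a stale RDB/AOF snapshot can
    # rewind the counter and reintroduce duplicates.
    return client.incr(counter_key)

print(next_id())  # 1 on a fresh counter
print(next_id())  # 2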
I want to delete multiple Redis keys using a single DEL command on the Redis client.
Is there any limit on the number of keys that can be deleted?
I will be using del key1 key2 ....
There's no hard limit on the number of keys, but the query buffer limit does provide a bound. Connections are closed when the buffer hits 1 GB, so practically speaking this is somewhat difficult to hit.
Docs:
https://redis.io/topics/clients
However! You may want to take into consideration that Redis is single-threaded: a time-consuming command will block all other commands until completed. Depending on your use-case this may make a good case for "chunking" up your deletes into groups of, say, 1000 at a time, because it allows other commands to squeeze in between. (Whether or not this is tolerable is something you'll need to determine based on your specific scenario.)
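A sketch of that chunking idea using redis-py (the 1000-key chunk size mirrors the suggestion above):
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

def delete_in_chunks(keys, chunk_size=1000):
    # Issue one DEL per chunk, so other clients' commands can interleave
    # between the calls instead of waiting behind one huge DEL.
    deleted = 0
    for i in range(0, len(keys), chunk_size):
        deleted += client.delete(*keys[i:i + chunk_size])
    return deleted

On Redis 4.0 and later, UNLINK is also worth a look: it removes the keys from the keyspace immediately and reclaims the memory in a background thread.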
I have keys I want to keep in Redis indefinitely, provided I have enough memory. However, if Redis runs low on memory, I'd like it to remove the oldest keys first. I looked at the "eviction policy" options, and it appears Redis doesn't support this out of the box. https://support.redislabs.com/hc/en-us/articles/203290657-What-eviction-policies-do-you-support-
How could I implement this myself using commands available in the Redis client API?
Here's some pseudocode to give a flavor of what I need (a Python sketch follows the list):
1. Get the first N keys from a list sorted by key date asc.
2. Delete the oldest keys.
3. Repeat until memory is no longer constrained.
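A minimal sketch of that loop, assuming every key is also registered in a sorted set scored by creation time; the index name and the 1 GB budget are made up for the example.
import time
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

INDEX = 'keys:by-date'          # illustrative index of keys, scored by creation time
BUDGET = 1024 * 1024 * 1024     # illustrative 1 GB memory budget

def track(key, value):
    client.set(key, value)
    client.zadd(INDEX, {key: time.time()})

def evict_oldest(batch=100):
    # Steps 1-3 of the pseudocode: delete the oldest keys, in batches,
    # until memory is no longer constrained.
    while client.info('memory')['used_memory'] > BUDGET:
        oldest = client.zrange(INDEX, 0, batch - 1)
        if not oldest:
            break
        client.delete(*oldest)
        client.zrem(INDEX, *oldest)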
The eviction policy determines what happens when the database reaches its memory limit: to make room for new data, older data is evicted (removed) according to the selected policy. Note that eviction only kicks in once a maxmemory limit is configured and reached.
You can select a policy from the reference link below based on your requirements. The one used in the example below is "allkeys-lru", which evicts the least-recently-used keys across the whole keyspace.
reference link
Example -
127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK
Example in Python -
import redis

# Connect to a local instance and change the eviction policy at runtime.
client = redis.Redis(host='localhost', port=6379, db=0)
client.config_set('maxmemory-policy', 'allkeys-lru')
There are two systems sharing a Redis database: one system only reads from Redis, the other updates it.
The read system is so busy that Redis can't keep up. To reduce the number of requests to Redis, I found "mget", but I also found "multi".
I'm sure MGET will reduce the number of requests, but will MULTI do the same? I think MULTI forces the Redis server to keep some state about the transaction and to collect the commands of the transaction from the client one by one, so the total number of requests sent stays the same and only the results are returned together. Is that right?
So if I just read keyA, keyB, and keyC in "multi" while the other (write) system changes keyB's value, what will happen?
Short Answer: You should use MGET
MULTI is used for transactions, and it won't reduce the number of requests. Also, the MULTI command might be deprecated in the future, since there's a better choice: Lua scripting.
So if I just read keyA, keyB, and keyC in "multi" while the other (write) system changes keyB's value, what will happen?
Since MULTI (with EXEC) ensures a transaction, all three GET commands (read operations) execute atomically. If the update happens before the transaction executes, you'll get the new value; otherwise, you'll get the old value.
By the way, there's another option to reduce round-trip time (RTT): PIPELINE. However, in your case, MGET should be the best option.
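For illustration, a redis-py sketch of the three approaches; the key names are made up:
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# Option 1: MGET. One command, one round trip.
a, b, c = client.mget('keyA', 'keyB', 'keyC')

# Option 2: a non-transactional pipeline. Several commands, one round trip.
pipe = client.pipeline(transaction=False)
pipe.get('keyA')
pipe.get('keyB')
pipe.get('keyC')
a, b, c = pipe.execute()

# Option 3: MULTI/EXEC. redis-py happens to batch the queued commands into
# one round trip, but at the protocol level each command is still sent and
# answered individually, which is why MULTI by itself doesn't cut requests.
txn = client.pipeline(transaction=True)
txn.get('keyA')
txn.get('keyB')
txn.get('keyC')
a, b, c = txn.execute()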
I have a Redis Cluster consisting of multiple nodes. I want to update 3 different keys in a single atomic operation. My Lua script is like:
local u1 = redis.call('incrby', KEYS[1], ARGV[1])
local u2 = redis.call('incrby', KEYS[2], ARGV[1])
local u3 = redis.call('incrby', KEYS[3], ARGV[1])
And I fired it with:
EVAL script 3 key1 key2 key3 arg
But I got the error message:
WARN Resp(AppErr CROSSSLOT Keys in request don't hash to the same slot)
The above operations cannot be performed, and the updates fail. It seems I cannot modify keys on different nodes with a single Lua script. But according to the docs:
All Redis commands must be analyzed before execution to determine which keys the command will operate on. In order for this to be true for EVAL, keys must be passed explicitly. This is useful in many ways, but especially to make sure Redis Cluster can forward your request to the appropriate cluster node.
Note this rule is not enforced in order to provide the user with opportunities to abuse the Redis single instance configuration, at the cost of writing scripts not compatible with Redis Cluster.
So I think as long as I follow the key passing rule, the script should be compatible with Redis Cluster. I wonder what's the problem here and what should I do to update all keys in a single script.
I'm afraid you've misunderstood the documentation. (And I agree that it's not very clear.)
Redis operations, whether commands or Lua scripts, can only work when all the keys are on the same server. The purpose of the key passing rule is to allow Cluster servers to figure out where to send the script and to fail fast if all the keys don't come from the same server (which is what happened in your case).
So it's your responsibility to make sure that all the keys you want to operate on are located on the same server. The way to do that is to use hash tags to force keys to hash to the same slot. See the documentation for more details on that.
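For illustration, a sketch of the fix with redis-py: wrap the shared part of each key name in {...} so all three keys hash to the same slot. The {user:42} tag is made up, and a single-node client is used for brevity; a real cluster deployment would use a cluster-aware client such as redis.cluster.RedisCluster.
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

script = """
local u1 = redis.call('incrby', KEYS[1], ARGV[1])
local u2 = redis.call('incrby', KEYS[2], ARGV[1])
local u3 = redis.call('incrby', KEYS[3], ARGV[1])
return {u1, u2, u3}
"""

# Only the text inside {...} is hashed, so these three keys share a slot
# and the script can run on a single node.
keys = ['{user:42}:key1', '{user:42}:key2', '{user:42}:key3']
print(client.eval(script, 3, keys[0], keys[1], keys[2], 5))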
My problem is: I have a set of values, and each of them has to have its own expiry.
Code:
set a:11111:22222 someValue
expire a:11111:22222 604800 // 604800 seconds = one week
In a perfect world I would have put all those values in a hash and given each of them its appropriate expiry, but Redis does not allow expire on hash fields.
The problem is that I also have a process that needs to get all those keys about once an hour:
keys a:*
This command is really expensive and, according to the Redis documentation, can cause performance issues. I have about 25,000-30,000 keys at any given moment.
Does someone know how I can solve such a problem?
Thumbs up guaranteed (-;
Roy
Let me propose an alternative solution.
Rather than asking Redis to scan all the keys, why not perform a background dump, and parse the dump to extract the keys? This way, there is zero impact on the Redis instance itself.
Parsing the dump file is not as scary as it sounds, because you can use the excellent redis-rdb-tools package:
https://github.com/sripathikrishnan/redis-rdb-tools
You can either convert the dump file into a json file, and then parse the json file, or use the Python API to extract the keys by yourself.
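As a rough illustration of the Python-API route, here is a sketch based on the RdbParser/RdbCallback interface from the project's README; the dump path is made up, and callback signatures may differ slightly between package versions.
from rdbtools import RdbParser, RdbCallback

class KeyCollector(RdbCallback):
    # Collect every key matching a:* and ignore the values.
    def __init__(self):
        super(KeyCollector, self).__init__(string_escape=None)
        self.keys = []

    def set(self, key, value, expiry, info):
        # Plain string keys (as created by SET/SETEX) arrive here, as bytes.
        if key.startswith(b'a:'):
            self.keys.append(key)

collector = KeyCollector()
RdbParser(collector).parse('/var/redis/dump.rdb')  # illustrative dump path
print(len(collector.keys))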
As you've already mentioned, using KEYS is not a good way to fetch your keys:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using sets.
Source: Redis docs for KEYS
As the docs are suggesting, you should build your own indices!
A common way of building an index is to use a sorted set. You can read more about how it works in my question over here.
Building references to your a:* keys in a sorted set will also allow you to select only the required keys in relation to a date or any other integer score, in case you're filtering the results!
And yes: it would be awesome if hashes could expire. Sadly, it looks like it's not going to happen, but there are in fact creative alternatives for taking care of it yourself.
Why don't you use a sorted set?
Here is a sample data-creation sequence.
redis 127.0.0.1:6379> setex a:11111:22222 604800 someValue
OK
redis 127.0.0.1:6379> zadd user:index 1385112435 a:11111:22222 // 1384507635 + 604800
(integer) 1
redis 127.0.0.1:6379> setex a:11111:22223 604800 someValue2
OK
redis 127.0.0.1:6379> zadd user:index 1385113289 a:11111:22223 // 1384508489 + 604800
(integer) 1
redis 127.0.0.1:6379> zrangebyscore user:index 1385112435 1385113289
1) "a:11111:22222"
2) "a:11111:22223"
This avoids the KEYS performance issue on reads, but it costs more memory and extra work on each insert. Note also that expired keys leave stale members behind in the sorted set, so they should be pruned periodically (see the sketch below).
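A minimal pruning sketch with redis-py, reusing the user:index key from the sequence above; since the scores are expiry timestamps, any member with a score at or below "now" points at a key that has already expired.
import time
import redis

client = redis.Redis(host='localhost', port=6379, db=0)

# Drop index members whose expiry timestamp has already passed.
client.zremrangebyscore('user:index', '-inf', int(time.time()))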