Is it possible to get 10 random keys in Redis? RANDOMKEY returns only one random key; I need 10 random keys without running the RANDOMKEY command 10 times.
Any help would be very much appreciated :)
You can achieve that with a Lua script:
local res = {}
-- RANDOMKEY may return the same key more than once.
for i = 1, 10 do
    res[i] = redis.call("randomkey")
end
return res
If you want to ensure the returned keys are unique, you need to remove duplicate items in the script. I'll leave that as an exercise.
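For reference, a minimal redis-py sketch that runs the script and drops duplicates client-side (host, port, and the count argument are assumptions):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

LUA = """
local res = {}
for i = 1, tonumber(ARGV[1]) do
    res[i] = redis.call("randomkey")
end
return res
"""

keys = r.eval(LUA, 0, 10)                # 0 declared KEYS; ARGV[1] is the count
unique_keys = list(dict.fromkeys(keys))  # remove duplicates, keep order
print(unique_keys)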
I have a dozen Redis keys of type SET, say
PUBSUB_USER_SET-1-1668985588478915880,
PUBSUB_USER_SET-2-1668985588478915880,
PUBSUB_USER_SET-3-1668988644477632747,
...
PUBSUB_USER_SET-10-1668983464477632083
Each set contains user IDs, and the problem statement is to check whether a given user is present in any of these sets.
The solution I tried is to fetch all the keys, join them with a comma delimiter, and pass the result as an argument to a Lua script, which splits the keys back out with gmatch and runs SISMEMBER on each one until there is a hit.
local vals = KEYS[1]
-- Split the comma-joined key list and probe each set until a hit.
for match in (vals .. ","):gmatch("(.-),") do
    local exist = redis.call('sismember', match, KEYS[2])
    if exist == 1 then
        return 1
    end
end
return 0
Now, as the number of keys grows to PUBSUB_USER_SET-20 or PUBSUB_USER_SET-30, I see latency increase and throughput drop.
Is this a reasonable way to do it? Would it be better to batch the Lua script calls, passing 10 keys at a time instead of all 30 and returning as soon as the user is found, or is there a better way altogether?
I would propose a different solution: instead of placing users into sets arbitrarily, assign each user to exactly one set, and query only that set to check whether the user is there or not.
Let's say we have N sets numbered s-0, s-1, s-2, ..., s-19.
You put each user into one of these sets based on the hash of the user ID, which means you need to query only one set instead of checking all of them. You can use any hashing algorithm.
To make it further interesting you can try consistent hashing.
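A minimal sketch of that placement in Python with redis-py (the key-name scheme, hash function, and set count are all assumptions):

import redis, zlib

r = redis.Redis(host="localhost", port=6379)
NUM_SETS = 20

def set_for(user_id: str) -> str:
    # Any stable hash works; CRC32 is just an illustration.
    return f"PUBSUB_USER_SET-{zlib.crc32(user_id.encode()) % NUM_SETS}"

def add_user(user_id: str) -> None:
    r.sadd(set_for(user_id), user_id)

def user_exists(user_id: str) -> bool:
    # Exactly one set has to be checked per lookup.
    return bool(r.sismember(set_for(user_id), user_id))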
You can also use a Redis pipeline with batching (10 keys per iteration) to improve performance, as sketched below.
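A hedged sketch of that batching with redis-py (function and parameter names are assumptions):

import redis

r = redis.Redis(host="localhost", port=6379)

def user_in_any_set(user_id, keys, batch_size=10):
    # Issue SISMEMBER for a batch of sets in one round trip,
    # stopping as soon as any batch reports a hit.
    for i in range(0, len(keys), batch_size):
        pipe = r.pipeline(transaction=False)
        for key in keys[i:i + batch_size]:
            pipe.sismember(key, user_id)
        if any(pipe.execute()):
            return True
    return False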
I have key-value pairs like the following example:
KEY    VALUE
key1   1
key2   2
key3   3
...    ...
keyN   N
Each of my keys needs to map to a unique number, so I map my keys to auto-incremented numbers and insert them into Redis via Redis mass insertion, which works very well; I then use the GET command for internal processing of all the key-value mappings.
But I have more than 1 billion keys, so I was wondering: is there an even more efficient (mainly in terms of memory usage) way to use Redis for this scenario?
Thanks
You can pipeline commands into Redis to avoid the round-trip times like this:
{ for ((i=0;i<10000000;i++)) ; do printf "set key$i $i\r\n"; done ; sleep 1; } | nc localhost 6379
That takes 80 seconds to set 10,000,000 keys.
Or, if you want to avoid the overhead of that shell loop, generate the data in a single awk process:
{ awk 'BEGIN{for(i=0;i<10000000;i++){printf("set key%d %d\r\n",i,i)}}'; sleep 1; } | nc localhost 6379
That now takes 17 seconds to set 10,000,000 keys.
An auto-increment key allows a unique number to be generated whenever a new record is inserted into a table / into Redis.
Another option is to use UUIDs.
But I think auto-increment is far better, for reasons such as: a UUID needs about four times more space, ordering cannot be done based on the key, etc.
I'm doing exactly the same thing.
Here is a simple example.
If you have a better one, you're welcome to discuss it :)
1. Connect to Redis:
import redis
pool = redis.ConnectionPool(host=your_host, port=your_port)
r = redis.Redis(connection_pool=pool)
2. Define a function to do the increment, using the pipe:
def my_incr(pipe):
    # Runs in immediate mode (before MULTI): read the current hash size.
    next_value = pipe.hlen('myhash')
    # Commands after multi() are queued and applied atomically.
    pipe.multi()
    pipe.hsetnx(
        name='myhash',
        key=newkey,  # 'newkey' is the global set in step 3
        value=next_value
    )
3. Run the function as a transaction (redis-py WATCHes 'myhash', retries on conflict, and passes the pipeline into the function):
newkey = 'key1'
r.transaction(my_incr, 'myhash')
In order to be more memory efficient, you can use a HASH to store these key-value pairs. Redis has a special encoding for small HASHes that can save you lots of memory.
In your case, you can shard your keys into many small HASHes, each with fewer than hash-max-ziplist-entries entries. See the doc for details.
By the way, with the INCR command you can use Redis to create the auto-incremented numbers themselves.
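A hedged sketch of that sharding scheme in Python (the bucket count, key prefix, and hash function are assumptions):

import redis, zlib

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
NUM_BUCKETS = 2 ** 14   # sized so each hash stays under hash-max-ziplist-entries

def bucket(key: str) -> str:
    return f"kv:{zlib.crc32(key.encode()) % NUM_BUCKETS}"

def put(key: str, value: str) -> None:
    r.hset(bucket(key), key, value)

def get(key: str):
    return r.hget(bucket(key), key)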
I would like to answer my own question.
If you have sorted key values, the most efficient way to bulk insert and then read them is using a B-Tree based database.
For instance, with MapDB I am able to insert them very quickly, and they take up less memory.
I need to manage the data below using Redis.
Start_range|End_range|circle|operator|operator_id|circle_id
918005000000|918005099999|UP EAST|BSNL|4|22
919967200000|919967299999|MAHARASHTRA|AIRTEL|15|20
I have an API for operator detection. A mobile number is passed to the API; I check which of the configured series the number falls in and return the matching line.
For example:
Mobile number: 919967288367
This number matches the second series, so we return the output below.
919967200000|919967299999|MAHARASHTRA|AIRTEL|15|20
We need this match to be a direct lookup on the value; we cannot loop over the series, for performance reasons.
I have 10,000 series.
Any help is appreciated.
The simplest approach is to use a Sorted Set, where the members are your rows and their score is the range's start.
E.g.:
ZADD ops 918005000000 "918005099999|UP EAST|BSNL|4|22" 919967200000 "919967299999|MAHARASHTRA|AIRTEL|15|20"
To query a number:
ZREVRANGEBYSCORE ops 919967288367 -inf LIMIT 0 1
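A minimal redis-py sketch of the same lookup; it also verifies the end of the matched range, which the bare ZREVRANGEBYSCORE query does not (the key name and connection details are assumptions):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.zadd("ops", {
    "918005099999|UP EAST|BSNL|4|22": 918005000000,
    "919967299999|MAHARASHTRA|AIRTEL|15|20": 919967200000,
})

def lookup(number: int):
    rows = r.zrevrangebyscore("ops", number, "-inf", start=0, num=1)
    if rows and number <= int(rows[0].split("|")[0]):
        return rows[0]   # the row whose [start, end] contains the number
    return None

print(lookup(919967288367))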
The solution is simple! Store the series as keys instead of as values.
Details:
Store these series as Redis keys and values.
Key: 918005000000 Value:918005099999|UP EAST|BSNL|4|22
Key:919967200000 Value:919967299999|MAHARASHTRA|AIRTEL|15|20
Now, given the mobile number 919967288367, form a key pattern like 9199672* and get all the matching keys.
From this set of keys, select the smallest key and get its value from Redis.
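A hedged sketch of that approach with redis-py; note that pattern scans touch the whole keyspace, and the 7-digit prefix length is an assumption taken from the example:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.mset({
    "918005000000": "918005099999|UP EAST|BSNL|4|22",
    "919967200000": "919967299999|MAHARASHTRA|AIRTEL|15|20",
})

def lookup(number: str):
    # SCAN instead of KEYS so the server is not blocked.
    matches = list(r.scan_iter(match=number[:7] + "*"))
    if not matches:
        return None
    start = min(matches)   # smallest matching start-of-range key
    return r.get(start)

print(lookup("919967288367"))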
For example, I have an array/JSON with 100,000 entries cached with Redis / Predis. Is it possible to update or delete one or more entries, or do I have to regenerate the whole array/JSON of 100,000 entries? And how can I achieve that?
It depends on how you store them. If you are storing it as a string, then no:
set key value
get key -> will return you the value
Here the value is your whole JSON/array with 100,000 entries.
Instead, if you are storing it in a hash (http://redis.io/commands#hash):
hmset key member1 value1 member2 value2 ...
then you can update/delete member1 separately.
If you are using sets/lists, you can achieve the same with similar commands like LPUSH/LPOP, SREM, etc.
Do read the commands section to know more about redis data structures which will give you more flexibility in selecting your structure.
Hope this helps
If you are using a cache service, you have to:
get the data from the cache
update some entries
save the data back in the cache
You could use advanced Redis data structures like Hashes instead, but they are not supported by the Cache service; you would need to write your own functions. The three steps above are sketched below.
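A minimal read-modify-write sketch of those steps with a plain string value in Python (the key name and JSON shape are assumptions):

import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

entries = json.loads(r.get("big:list") or "[]")  # 1. get data from cache
entries.append({"id": 1, "v": "x"})              # 2. update some entries
r.set("big:list", json.dumps(entries))           # 3. save it back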
Thanks Karthikeyan Gopall, I made an example.
Here I changed the value of field1 and it works :)
$client = Redis::connection();
$client->hmset('my:hash', ['field1' => 'value1', 'field2' => 'value2']);
// Overwrite a single field without touching the rest of the hash.
$changevalue = Redis::hset('my:hash', 'field1', 'newvaluesssssssssss');
$values1 = Redis::hmget('my:hash', 'field1');
$values2 = Redis::hmget('my:hash', 'field2');
print_r($values1);
print_r($values2);
I am interested in creating several different Redis-based counters in my web application. A lot of this is basically for metrics etc., but that doesn't make a difference. My question is essentially the following: is it possible to avoid doing:
if $redis.get(key) != null
    // increment key
else
    // create key with a counter of 1
Ideally, something like this would be possible:
$redis.incr(key, 1) // increment key by 1, and if it does not exist, start it at the value 1
Am I overlooking something in the Redis documentation? Is there a way to do this currently?
There is an INCR command which, if the key does not exist, sets it to 0 before incrementing, so the first call returns 1:
$redis.incr(key)
should work.
see http://redis.io/commands/incr
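A quick redis-py sketch demonstrating that INCR creates the key on first use (the key name is an assumption):

import redis

r = redis.Redis(host="localhost", port=6379)

r.delete("page:views")
print(r.incr("page:views"))  # 1 -- key did not exist; starts from 0, then +1
print(r.incr("page:views"))  # 2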