For example, I have an array/JSON with 100,000 entries cached with Redis/Predis. Is it possible to update or delete one or more entries, or do I have to regenerate the whole array/JSON of 100,000 entries? And how can I achieve that?
It depends on how you store them. If you are storing it as a string, then no:
set key value
get key -> returns the whole value
Here value is your JSON/array with 100,000 entries; you can only read or write it as a whole.
Instead, if you are storing it in a hash (http://redis.io/commands#hash):
hmset key member1 value1 member2 value2 ...
then you can update/delete member1 separately.
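For illustration, here is a rough sketch of that with the Python redis client (the key and field names are made up for the example):

import json
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Store each entry as its own hash field instead of one big JSON string
r.hset('entries', mapping={
    '1': json.dumps({'name': 'foo'}),
    '2': json.dumps({'name': 'bar'}),
})

# Update a single entry without touching the others
r.hset('entries', '1', json.dumps({'name': 'foo-updated'}))

# Delete a single entry
r.hdel('entries', '2')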
If you are using sets/lists, you can achieve the same with similar commands such as lpush/lpop, srem, etc.
Do read the commands section to learn more about the Redis data structures, which will give you more flexibility in choosing your structure.
Hope this helps
If you are using cache service, you have to:
get data from cache
update some entries
save data back in cache
You could use advanced Redis data structures like Hashes, but that is not supported by the Cache service; you would need to write your own functions.
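The same read-modify-write cycle, sketched with the Python redis client just for illustration (the cache key name is an assumption):

import json
import redis

r = redis.Redis(decode_responses=True)

# 1. get data from cache
entries = json.loads(r.get('cached:entries') or '[]')

# 2. update some entries
if entries:
    entries[0]['name'] = 'updated'

# 3. save data back in cache
r.set('cached:entries', json.dumps(entries))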
Thanks Karthikeyan Gopall, I made an example:
Here I changed the value of field1 and it works :)
// store two fields in the hash
$client = Redis::connection();
$client->hmset('my:hash', ['field1' => 'value1', 'field2' => 'value2']);

// update only field1, leaving field2 untouched
$changevalue = Redis::hset('my:hash', 'field1', 'newvaluesssssssssss');

// read the fields back individually
$values1 = Redis::hmget('my:hash', 'field1');
$values2 = Redis::hmget('my:hash', 'field2');
print_r($values1);
print_r($values2);
I have a dozen Redis keys of type SET, say
PUBSUB_USER_SET-1-1668985588478915880,
PUBSUB_USER_SET-2-1668985588478915880,
PUBSUB_USER_SET-3-1668988644477632747,
...
PUBSUB_USER_SET-10-1668983464477632083
Each set contains userIds, and the problem statement is to check whether a given user is present in any of the sets.
The solution I tried is to get all the keys, join them with a delimiter (a comma), and pass the result as an argument to a Lua script, where I split the keys with gmatch and run SISMEMBER on each until there is a hit.
-- KEYS[1]: comma-joined list of set names, KEYS[2]: the userId to look for
local vals = KEYS[1]
for match in (vals..","):gmatch("(.-)"..",") do
    local exist = redis.call('sismember', match, KEYS[2])
    if (exist == 1) then
        return 1
    end
end
return 0
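For reference, this is roughly how I call the script from the Python redis client, with the comma-joined set names as KEYS[1] and the userId as KEYS[2] (the userId value below is made up):

import redis

r = redis.Redis(decode_responses=True)

script = """
local vals = KEYS[1]
for match in (vals..","):gmatch("(.-)"..",") do
    local exist = redis.call('sismember', match, KEYS[2])
    if (exist == 1) then
        return 1
    end
end
return 0
"""

set_names = [
    'PUBSUB_USER_SET-1-1668985588478915880',
    'PUBSUB_USER_SET-2-1668985588478915880',
]
user_id = '42'  # made-up userId for the example

# two "keys": the comma-joined set names and the userId
found = r.eval(script, 2, ','.join(set_names), user_id)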
Now, as the number of keys grows to PUBSUB_USER_SET-20 or PUBSUB_USER_SET-30, I see latency increase and throughput drop.
Is this a reasonable way to do it, or is it better to batch the Lua script calls, passing batches of 10 keys instead of all 30 and returning as soon as the user is found? Or is there a better way to do this altogether?
I would propose a different solution instead of spreading keys randomly across sets.
Let's say we have N sets numbered s-0, s-1, s-2, ..., s-19.
You should put each key into exactly one of these sets based on its hash, which means you only need to query that one set instead of checking all of them. You can use any hashing algorithm.
To make it further interesting you can try consistent hashing.
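A minimal sketch of that idea with the Python redis client (the set names and hash function are just assumptions for illustration):

import zlib
import redis

r = redis.Redis(decode_responses=True)

NUM_SETS = 20  # s-0 ... s-19

def set_for(user_id):
    # any stable hash works; crc32 is used here only as an example
    return 's-%d' % (zlib.crc32(user_id.encode()) % NUM_SETS)

def add_user(user_id):
    r.sadd(set_for(user_id), user_id)

def user_exists(user_id):
    # a single SISMEMBER instead of checking every set
    return bool(r.sismember(set_for(user_id), user_id))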
You can also use a Redis pipeline with batching (e.g. 10 keys per iteration) to improve performance.
I have a data structure that looks like the following
data[price] = amount
so if I entered this in redis as
hset mydata 300 500
I can increment the key like so
hincrby mydata 300 500
I am using the Python redis package. From Python I can get the minimum key like so:
min(redis.hgetall(keyname))
But this is an O(N) operation, and the official documentation cautions against using it in production. The other approach I have considered is sorting the keys and taking the minimum value, something like:
redis.sort(keyName)[0] #pseudocode
But I am not sure how to do this. I'd appreciate any ideas on how to either sort the numeric keys of the hash or get the minimum key.
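For what it's worth, here is what the O(N) variant looks like with an explicit numeric conversion; a plain min() over HGETALL compares the keys as strings, so "1000" would sort before "300" (sketch with made-up values):

import redis

r = redis.Redis(decode_responses=True)

r.hset('mydata', mapping={'300': 500, '250': 100})

# HKEYS plus int() gives a numeric minimum; still O(N) over all fields
min_price = min(int(k) for k in r.hkeys('mydata'))
print(min_price)  # -> 250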
I have n keys in my Redis server with some data. Now I want to check which keys were created in the last two months. How can I check this? Is there any way to sort all cache keys in redis-cli by creation time, or anything similar?
Redis doesn't store this information. You need to do this explicitly. There are many ways you can achieve this. Some of them are:
SET a time/date/datetime string when setting the key
ex:
SET key1 data
SET key1:date "12-JULY-2018"
Make the data an object type - add an explicit created_at field and then store it in Redis. Then sort it in your own application.
Create Sets/Lists for each hour/day/month and keep pushing all the keys to those lists. You can then retrieve the keys for each hour/day/month and get the data using those keys.
ex:
SET key1 data // At this point date is "12-JULY-2018"
SADD "JULY-SET" key1
Now you can get all keys of JULY by doing this:
SMEMBERS "JULY-SET"
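A small sketch of that last approach in Python (the month set name and key are only examples):

import datetime
import redis

r = redis.Redis(decode_responses=True)

def set_with_month_index(key, value):
    month_set = datetime.date.today().strftime('%B-%Y').upper()  # e.g. "JULY-2018"
    r.set(key, value)
    r.sadd(month_set, key)  # remember which keys were created this month

set_with_month_index('key1', 'data')

# later: every key created in, say, July 2018, and its data
july_keys = r.smembers('JULY-2018')
july_data = {k: r.get(k) for k in july_keys}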
On my current project I'm implementing an autocompletion service on top of Redis; for it I use the following approach (this article describes it in more detail):
1) for storing a dump of the data I have a hash in which I put the searchable objects as values, for instance
HSET data 1 "{\"name\":\"Kill Bill\",\"year\":2003}"
HSET data 2 "{\"name\":\"King Kong\",\"year\":2005}"
2) for storing all possible sequences of input characters (which I generate in advance) that could be used in a search, I use sorted sets, like
ZADD search:index:k 0 1
ZADD search:index:ki 0 1
ZADD search:index:kil 0 1
ZADD search:index:kill 0 1
Where the value stored in the sorted set (in my example '1') is the key of the data in the hash. So, to search for some data (for example, names starting with 'ki'), we need two steps:
data_keys = REDIS.zrevrange('search:index:ki', 0, -1)
matching_data = REDIS.hmget('data', *data_keys)
The issue I'm trying to solve: how do I automatically remove all the related entries from the sorted sets when I remove a value from the hash? In relational databases I can use cascading deletes for such cases, but how can I handle it in Redis?
Your design appears awkward to me; I'm unsure what you're actually trying to do with Redis, and perhaps that could be the topic of another question.
That said, to address your question, Redis does not offer "cascading delete"-like behavior. Instead, if you're deleting the entry with id "1", iterate over the name's prefixes and ZREM the id from the relevant sorted sets.
Note: do not use a Lua script for this task, as it would have to generate key names (i.e. the sorted sets by prefix), and that is against the recommendations (it will not work on a cluster).
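A rough sketch of that manual "cascade" in Python, assuming the data hash and the search:index:<prefix> sorted sets from the question (the prefix loop must mirror however the prefixes were generated when indexing):

import json
import redis

r = redis.Redis(decode_responses=True)

def delete_entry(entry_id):
    raw = r.hget('data', entry_id)
    if raw is None:
        return
    name = json.loads(raw)['name'].lower()
    # remove the id from every prefix index it was added to
    for i in range(1, len(name) + 1):
        r.zrem('search:index:' + name[:i], entry_id)
    r.hdel('data', entry_id)

delete_entry('1')  # removes "Kill Bill" and all of its prefix entries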
I have key-values like the following example:
KEY VALUE
key1 1
key2 2
key3 3
. .
. .
keyN N
Each of my keys needs to map to a unique number, so I am mapping my keys to auto-incremented numbers and inserting them into Redis via Redis mass insertion, which works very well, and then using the GET command for internal processing of all the key-value mappings.
But I have more than 1 billion keys, so I was wondering: is there an even more efficient (mainly lower memory usage) way of using Redis for this scenario?
Thanks
You can pipeline commands into Redis to avoid the round-trip times like this:
{ for ((i=0;i<10000000;i++)) ; do printf "set key$i $i\r\n"; done ; sleep 1; } | nc localhost 6379
That takes 80 seconds to set 10,000,000 keys.
Or, if you want to avoid creating all those processes for printf, generate the data in a single awk process:
{ awk 'BEGIN{for(i=0;i<10000000;i++){printf("set key%d %d\r\n",i,i)}}'; sleep 1; } | nc localhost 6379
That now takes 17 seconds to set 10,000,000 keys.
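If you are driving this from Python instead of the shell, a client-side pipeline gives a similar round-trip saving (a sketch; the batch size is arbitrary):

import redis

r = redis.Redis()

pipe = r.pipeline(transaction=False)
for i in range(10_000_000):
    pipe.set('key%d' % i, i)
    if i % 10_000 == 0:
        pipe.execute()  # flush in batches to bound client memory
pipe.execute()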
An auto-increment key allows a unique number to be generated whenever a new record is inserted into a table/Redis.
Another way is to use UUIDs.
But I think auto-increment is far better, for reasons such as: a UUID needs four times more space, ordering cannot be done based on the key, etc.
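For example, the counter itself can live in Redis (a sketch with the Python client; the counter key name is made up):

import redis

r = redis.Redis(decode_responses=True)

def next_id():
    # INCR is atomic, so every caller gets a distinct number
    return r.incr('key:id:counter')

r.set('key1', next_id())
r.set('key2', next_id())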
I'm doing exactly the same thing.
Here is a simple example.
If you have a better one, welcome to discuss :)
1. Connect to Redis
import redis
pool = redis.ConnectionPool(host=your_host, port=your_port)
r = redis.Redis(connection_pool=pool)
2. Define a function to incr, using a pipe
def my_incr(pipe):
    # the current hash length is used as the next auto-incremented value
    next_value = pipe.hlen('myhash')
    pipe.multi()
    pipe.hsetnx(
        name='myhash',
        key=newkey, value=next_value
    )
3. Make the function a transaction
pipe = r.pipeline()
newkey = 'key1'
r.transaction(my_incr, 'myhash')
In order to be more memory efficient, you can use a HASH to store these key-value pairs. Redis has a special encoding for small HASHes, which can save you lots of memory.
In your case, you can shard your keys into many small HASHes, each with fewer than hash-max-ziplist-entries fields. See the doc for details.
By the way, with the INCR command you can use Redis to create the auto-incremented numbers.
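As an illustration of the sharding idea (the bucket count is an assumption; size it so each hash stays below hash-max-ziplist-entries, e.g. roughly 10 million buckets for 1 billion keys at ~100 fields each):

import zlib
import redis

r = redis.Redis(decode_responses=True)

NUM_BUCKETS = 10_000_000  # ~100 fields per hash for 1 billion keys

def bucket(key):
    return 'kv:%d' % (zlib.crc32(key.encode()) % NUM_BUCKETS)

def put(key, value):
    r.hset(bucket(key), key, value)

def get(key):
    return r.hget(bucket(key), key)

put('key1', 1)
print(get('key1'))  # -> '1'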
I would like to answer my own question.
If you have sorted key-values, the most efficient way to bulk insert and then read them is to use a B-tree based database.
For instance, with MapDB I am able to insert them very quickly and it takes up less memory.