I'm a newcomer to Redis and I'm looking for some specific help around sets. To give some background: I'm building a web-app which consists of a large number of card decks which each have a set of individual cards with unique ids. I want users to have a set of 5 cards drawn for them at random from a specific deck.
My plan is to have all of the card ids of a given deck stored as a set in Redis; then I want to use the SPOP function to draw individual cards and remove them from the set so that they are not drawn again within that hand. It would seem to make sense to do this by copying the deck's 'master set' of card IDs into a new temporary set, performing the popping on the copy and then deleting the copied set when I'm done.
But: I can't find any Redis command to copy a set. The closest thing I can see would be to create an empty set and then 'join' the empty set and the 'master copy' of the set into a new (if temporary) set with SUNIONSTORE, but that seems hacky. I suppose an alternative would be to copy the set items out into my 'host language' (node.js) and then manually insert them back into a new Redis set, but this also seems clunky. There's probably a better third option that I haven't even thought of.
Am I doing something wrong - am I not 'getting' Redis, or is the command-set still a bit immature?
SUNIONSTORE with a single source set does exactly what you want - it copies the set into the destination key:
redis> sadd mydeck 1
(integer) 1
redis> sadd mydeck 2
(integer) 1
redis> sadd mydeck 3
(integer) 1
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> sunionstore tempdeck mydeck
(integer) 3
redis> smembers mydeck
1) "1"
2) "2"
3) "3"
redis> smembers tempdeck
1) "1"
2) "2"
3) "3"
Have fun with Redis!
Salvatore
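To tie the transcript back to the original question, here is a minimal sketch of the whole draw flow in Python with redis-py, purely for illustration (the question's app uses node.js, but the commands map one-to-one; the temporary key name is an assumption):
import redis

r = redis.Redis(decode_responses=True)

def draw_hand(deck_key, hand_size=5):
    temp_key = deck_key + ':tmp'                          # assumed name for the working copy
    r.sunionstore(temp_key, deck_key)                     # copy the master set
    hand = [r.spop(temp_key) for _ in range(hand_size)]   # draw without replacement
    r.delete(temp_key)                                    # throw the copy away when done
    return hand

print(draw_hand('mydeck', 3))                             # e.g. ['2', '3', '1']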
I am new to Redis and was thinking of implementing some project in order to get familiar with it. One interesting project that came to my mind was using Redis as a cache for real-time GPS locations. The only part which confuses me is the implementation. I read about Redis' support for geospatial data, but let's say I want to keep updating the location points against some key - that doesn't seem to be possible.
One way that I started out with was to use a hash structure for storing the latitude and longitude of a device that needs to be tracked, keep setting its value (which in turn updates it), and keep all these hashes in a set. But that doesn't seem to be a good approach, and it also won't allow me to use the geospatial queries provided by Redis.
Any leads on how it could be implemented in an efficient way?
You can simply use GEOADD repeatedly on the same device id with different coordinates. This will "move" the object's location in the geo set and will immediately affect the next radius queries.
127.0.0.1:6379> GEOADD foo 34 32 bar
(integer) 1
127.0.0.1:6379> GEORADIUS foo 34 32 100 m
1) "bar"
# Let's "move" bar in foo to new coordinates
127.0.0.1:6379> GEOADD foo 35 36 bar
(integer) 0
127.0.0.1:6379> GEORADIUS foo 34 32 100 m
(empty list or set)
127.0.0.1:6379> GEORADIUS foo 35 36 100 m
1) "bar"
And if you want the coordinates, that's also easy:
127.0.0.1:6379> GEORADIUS foo 35 36 100 m WITHCOORD
1) 1) "bar"
   2) 1) "34.99999791383743286"
      2) "35.99999953955607168"
I'm tracking members in multiple Sorted Sets in Redis as a way to do multi-column indexing on the members. As an example, let's say I have two Sorted Sets, lastseen (which is epoch time) and points, and I store usernames as members in these Sorted Sets.
I'm wanting to first sort by lastseen so I can get the users seen within the last day or month, then I'm wanting to sort the resulting members by points so I effectively have the members seen within the last day or month sorted by points.
This would be easy if I could store the result of a call to ZREVRANGEBYSCORE to a new Sorted Set (we'll call the new Sorted Set temp), because then I could sort lastseen with limits, store the result to temp, use ZINTERSTORE against temp and points with a weight of zero for out (stored to result), and finally use ZREVRANGEBYSCORE again on result. However, there's no built-in way in Redis to store the result of ZRANGE to a new Sorted Set.
I looked into using the solution posted here, and while it does seem to order the results correctly, the resulting scores in the Sorted Set can no longer be used to accurately limit results based on time (i.e. I only want ones within the last day).
For example:
redis> ZADD lastseen 12345 "foo"
redis> ZADD lastseen 12350 "bar"
redis> ZADD lastseen 12355 "sucka"
redis> ZADD points 5 "foo"
redis> ZADD points 3 "bar"
redis> ZADD points 9 "sucka"
What I'd like to end up with, assuming my time window is between 12349 and 12356, is the list of members ['sucka', 'bar'].
The solutions I can think of are:
1) Your wish was to ZREVRANGEBYSCORE and somehow save the temporary result. Instead, you could copy the zset (which can be done with a ZINTERSTORE with only one set as an argument), then do a ZREMRANGEBYSCORE on the new copy to get rid of the times you're not interested in, and then do the final ZINTERSTORE (sketched below).
2) Do it in a loop on the client, as Eli suggested.
3) Do the same thing in a Lua script.
These are all potentially expensive operations, so what's going to work best will depend on your data and use case. Without knowing more, I would personally lean towards the Lua solution.
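For concreteness, here is a minimal sketch of option 1 in Python with redis-py, using the example data and the 12349-12356 window from the question (temp and result are the key names the question itself proposes):
import redis

r = redis.Redis(decode_responses=True)

r.zadd('lastseen', {'foo': 12345, 'bar': 12350, 'sucka': 12355})
r.zadd('points', {'foo': 5, 'bar': 3, 'sucka': 9})

# A ZINTERSTORE with a single source set is effectively a copy
r.zinterstore('temp', {'lastseen': 1})

# Trim the copy down to the time window of interest
r.zremrangebyscore('temp', '-inf', '(12349')
r.zremrangebyscore('temp', '(12356', '+inf')

# Intersect with points, weighting the lastseen copy by 0 so the final scores are the point values
r.zinterstore('result', {'temp': 0, 'points': 1})

print(r.zrevrange('result', 0, -1))  # ['sucka', 'bar']

Deleting temp and result afterwards (or giving them a short TTL with EXPIRE) keeps the keyspace clean.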
For queries that get this complex, you want to supplement Redis' built-in commands with another processing language. The easiest way to do that is to call Redis from whatever your backend language is and do the processing there. An example in Python using redis-py:
import redis

finish_time, start_time = 12356, 12349
# decode_responses=True so members come back as plain strings rather than bytes
r = redis.Redis(host='localhost', port=6379, db=0, password='some_pass', decode_responses=True)
entries_in_time_frame = r.zrevrangebyscore('lastseen', finish_time, start_time)
p = r.pipeline()
for entry in entries_in_time_frame:
    p.zscore('points', entry)
scores = zip(entries_in_time_frame, p.execute())
# Highest points first, to match the ordering the question asks for
sorted_entries = [tup[0] for tup in sorted(scores, key=lambda tup: tup[1], reverse=True)]
print(sorted_entries)  # ['sucka', 'bar']
Note the pipeline: we only ever send two round trips to the Redis server, so network latency shouldn't slow us down much. If you need to go even faster (perhaps if what's returned by the first ZREVRANGEBYSCORE is very long), you can rewrite the same logic as a Lua script. Here's a working example (note: my Lua is rusty, so this can be optimized):
local start_time = ARGV[1]
local finish_time = ARGV[2]
local entries_in_time_frame = redis.call('ZREVRANGEBYSCORE', KEYS[1], finish_time, start_time)
local sort_function = function (k0, k1)
    -- ZSCORE returns strings, so convert to numbers before comparing
    local s0 = tonumber(redis.call('ZSCORE', KEYS[2], k0))
    local s1 = tonumber(redis.call('ZSCORE', KEYS[2], k1))
    return (s0 > s1)
end
table.sort(entries_in_time_frame, sort_function)
return entries_in_time_frame
You can call it like so:
redis-cli -a some_pass EVAL "$(cat script.lua)" 2 lastseen points 12349 12356
Returning:
1) "bar"
2) "foo"
I use Redis' Set type to store a number of notification IDs per user. For example:
SADD bookNotify:user:1 "1"
SADD bookNotify:user:1 "2"
SADD bookNotify:user:1 "3"
SADD bookNotify:user:1 "4"
SADD bookNotify:user:1 "8"
How can I remove the last three items? And what is the best structure and data type for CRUD operations on notifications in Redis?
Since Redis' Sets are unordered, the very notion of "last elements" is meaningless for these.
I recommend looking into Sorted Sets (follow the trail of ZADD), perhaps using epoch values as scores.
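A minimal sketch of that approach in Python with redis-py, using a hypothetical sorted-set key alongside the original one and insertion-time epoch scores:
import redis, time

r = redis.Redis(decode_responses=True)

key = 'bookNotify:user:1:z'    # hypothetical sorted-set counterpart of the original key

# Score each notification with the moment it was added
for i, notification_id in enumerate(['1', '2', '3', '4', '8']):
    r.zadd(key, {notification_id: time.time() + i})

print(r.zrange(key, 0, -1))    # ['1', '2', '3', '4', '8'], oldest first

# Remove the three most recently added ("last") notifications
r.zremrangebyrank(key, -3, -1)
print(r.zrange(key, 0, -1))    # ['1', '2']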
I'm trying to add a value to a list but only if it hasn't been added yet.
Is there a command to do this or is there a way to test for the existence of a value within a list?
Thanks!
I need to do the same.
My approach is to remove the element from the list and then add it again; if the element is not in the list, Redis simply returns 0, so there is no error:
lrem mylist 0 myitem
rpush mylist myitem
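If you do this from application code, it is worth wrapping the two commands in a MULTI/EXEC block so nothing can slip in between them; a rough sketch in Python with redis-py, reusing the key and value from the example above (note that this moves an already-present item to the tail, so it reorders the list):
import redis

r = redis.Redis(decode_responses=True)

def push_unique(list_name, item):
    # LREM + RPUSH as a single atomic MULTI/EXEC transaction
    pipe = r.pipeline(transaction=True)
    pipe.lrem(list_name, 0, item)    # 0 = remove every existing copy
    pipe.rpush(list_name, item)      # append it (back) to the tail
    removed, new_length = pipe.execute()
    return removed == 0              # True if the item was genuinely new

push_unique('mylist', 'myitem')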
As Tommaso Barbugli mentioned, you should use a set instead of a list if you only need unique values.
See the Redis documentation for SADD:
redis> SADD myset "Hello"
(integer) 1
redis> SADD myset "World"
(integer) 1
redis> SADD myset "World"
(integer) 0
redis> SMEMBERS myset
1) "World"
2) "Hello"
If you want to check for the presence of a value in the set, you can use SISMEMBER:
redis> SADD myset "one"
(integer) 1
redis> SISMEMBER myset "one"
(integer) 1
redis> SISMEMBER myset "two"
(integer) 0
It looks like you need a set or a sorted set.
Sets have O(1) membership test and enforced uniqueness.
If you can't use sets (for instance because you need the blocking POP/PUSH features of lists), you can use a simple script. Note that LINDEX is zero-based, so the loop runs from 0 to LLEN - 1:
script load 'local exists = false; for idx=0, redis.call("LLEN", KEYS[1]) - 1 do if (redis.call("LINDEX", KEYS[1], idx) == ARGV[1]) then exists = true; break; end end; if (not exists) then redis.call("RPUSH", KEYS[1], ARGV[1]) end; return not exists or 0'
This will return the SHA1 digest of the script you've loaded.
Then just call:
evalsha 3e31bb17571f819bea95ca5eb5747a373c575ad9 1 test-list myval
where
3e31bb17571f819bea95ca5eb5747a373c575ad9 - the SHA1 digest of the script you loaded
1 - the number of keys passed to the script (always 1 for this script)
test-list - the name of your list
myval - the value you want to add
It returns 1 if the new item was added, or 0 if it was already in the list.
Such a check does exist for other types: HEXISTS for hashes and SISMEMBER for sets.
Checking a list to see if a member exists within it is O(n), which can get quite expensive for big lists and is definitely not ideal. That said, everyone else seems to be giving you alternatives. I'll just tell you how to do what you're asking to do, and assume you have good reasons for doing it the way you're doing it. I'll do it in Python, assuming you have a connection to Redis called r, some list called some_list and some new item to add called new_item:
lst = r.lrange(some_list, 0, -1)  # LRANGE 0 -1 returns the whole list
if new_item not in lst:
    r.rpush(some_list, new_item)
I encountered this problem while adding to a task worker queue, because I wanted to avoid adding many duplicate tasks. Using a Redis set (as many people are suggesting) would be nice, but Redis sets don't have a "blocking pop" like BRPOPLPUSH, so they're not good for task queues.
So, here's my slightly non-ideal solution (in Python):
def pushOnlyNewItemsToList(redis, list_name, items):
    """ Adds only the items that aren't already in the list.
    Though if run simultaneously in multiple threads, there's still a tiny chance of adding duplicate items.
    O(n) on the size of the list."""
    existing_items = set(redis.lrange(list_name, 0, -1))
    new_items = set(items).difference(existing_items)
    if new_items:
        redis.lpush(list_name, *new_items)
Note the caveats in the docstring.
If you need to truly guarantee no duplicates, the alternative is to run LREM, LPUSH inside a Redis pipeline, as in 0xAffe's answer. That approach causes less network traffic, but has the downside of reordering the list. It's probably the best general answer if you don't care about list order.
I have just started with Redis. My DB contains about 1 billion records. Using HKEYS * results in an out of memory error.
Is there a way to iterate through keys? Something like HKEYS * but with a limit n?
Edit:
I am now using a loop which matches a pattern
for c in '1234567890abcedf':
    r.keys(c + '*')
Available since Redis 2.8.0 are the cursor-based iteration commands (SCAN, HSCAN, etc.) that let you iterate efficiently over billions of keys.
For your specific case, start using HSCAN instead of HKEYS/HGETALL. It is efficient, cheap on server resources and scales very well. You can even add a MATCH pattern to HSCAN, unlike HKEYS.
e.g.
127.0.0.1:6379> HMSET hash0 key0 value0 key1 value1 entry0 data0 entry1 data1
OK
127.0.0.1:6379> HSCAN hash0 0 MATCH key*
1) "0"
2) 1) "key0"
2) "value0"
3) "key1"
4) "value1"
127.0.0.1:6379> HSCAN hash0 0
1) "0"
2) 1) "key0"
2) "value0"
3) "key1"
4) "value1"
5) "entry0"
6) "data0"
7) "entry1"
8) "data1"
You can't iterate over redis keys directly, but you can accomplish something very similar by transactionally writing the key portion of your key-value pair to a sorted set at the same time you write your key-value pair.
Downstream, you would "iterate" over your keys by reading n keys from the sorted set, and then transactionally removing them from the sorted set at the same time as you remove the associated key-value pair.
I wrote up an example with some C# code here: http://rianjs.net/2014/04/how-to-iterate-over-redis-keys/
You could do this in any language that has a redis library that supports transactions.
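For readers who don't use C#, a rough sketch of the same pattern in Python with redis-py (the index key key_index and the helper names are hypothetical):
import redis, time

r = redis.Redis(decode_responses=True)

def write_indexed(key, value):
    # Write the value and index its key name in a sorted set, atomically
    pipe = r.pipeline(transaction=True)
    pipe.set(key, value)
    pipe.zadd('key_index', {key: time.time()})
    pipe.execute()

def pop_batch(n=100):
    # "Iterate" by taking the next n indexed keys...
    keys = r.zrange('key_index', 0, n - 1)
    if keys:
        # ...then drop them from the index and delete the key-value pairs together
        pipe = r.pipeline(transaction=True)
        pipe.zrem('key_index', *keys)
        pipe.delete(*keys)
        pipe.execute()
    return keys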
For iterating through keys:
SCAN cursor [MATCH pattern] [COUNT count]
http://redis.io/commands/scan
For iterating through the fields and values of a hash:
HSCAN key cursor [MATCH pattern] [COUNT count]
http://redis.io/commands/hscan
Sorry, at the current time (2012) the simple answer is no. However, with Lua scripting you could do it, although that is not direct Redis in the strictest sense.