How to delete a Redis Stream?

I've created a Redis stream:
XADD mystream * foo bar
And I've associated it with a consumer group:
XGROUP CREATE mystream mygroup $
Now I want to delete it, so that Redis acts as though the stream had never existed. How do I delete it?
I've tried using XTRIM:
XTRIM mystream MAXLEN 0
This successfully trims the stream to length zero, but it doesn't fully delete the stream: XREADGROUP still succeeds rather than returning the usual error it gives when the key or group doesn't exist:
XREADGROUP GROUP mygroup myconsumer COUNT 1 STREAMS mystream >
Actual output:
(nil)
Expected output:
NOGROUP No such key 'mystream' or consumer group 'mygroup' in XREADGROUP with GROUP option

Just use the DEL command:
DEL mystream

This one is straightforward - it's the first result of a quick search online. Just execute:
DEL stream_name
XTRIM only removes the entries within the stream; it does not delete the stream itself or any consumer groups associated with it.
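A minimal redis-py sketch of the difference (the names match the question):
import redis

r = redis.Redis()
r.xadd("mystream", {"foo": "bar"})
r.xgroup_create("mystream", "mygroup", id="$")

r.delete("mystream")    # removes the stream and its consumer groups

try:
    r.xreadgroup("mygroup", "myconsumer", {"mystream": ">"}, count=1)
except redis.ResponseError as e:
    print(e)  # NOGROUP No such key 'mystream' or consumer group ...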

Related

Redis XADD: ERR Invalid stream ID specified as stream command argument

Why am I getting this error for XADD?
Redis: 6.2
127.0.0.1:6379> xadd hello 1658902141-* key val
(error) ERR Invalid stream ID specified as stream command argument
127.0.0.1:6379> xadd hello 1658902141000-* key val
(error) ERR Invalid stream ID specified as stream command argument
127.0.0.1:6379> XADD mystream 1526919030474-* message " World!"
(error) ERR Invalid stream ID specified as stream command argument
Explicit IDs in the timestamp-* format (fixed millisecond timestamp, auto-incremented sequence number) are a new feature of Redis 7.0. Check your Redis version; if it's older, you cannot use this syntax.
If you want to achieve the same goal, i.e. fix the timestamp while incrementing the counter, you have to do it on the client side. Alternatively, use '*' as the ID to make Redis automatically generate an ID with an increasing timestamp.
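For older servers, a minimal client-side sketch using redis-py (the stream name and values mirror the question) that keeps the millisecond part fixed and increments the sequence number itself:
# Emulate Redis 7.0's "timestamp-*" IDs on an older server by
# generating explicit <ms>-<seq> IDs on the client side.
import redis

r = redis.Redis()
ts = 1658902141000                       # fixed millisecond timestamp
for seq in range(3):
    entry_id = f"{ts}-{seq}"             # explicit ID: <ms>-<seq>
    r.xadd("hello", {"key": f"val{seq}"}, id=entry_id)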

Redis Streams inconsistent behavior of blocking XREAD after XDEL

Calling XREAD after XDEL does not block on the stream, but returns immediately. The expected behavior is for XREAD to block again.
127.0.0.1:6379> XADD my-stream * field1 string1
"1554300150697-0"
127.0.0.1:6379> XREAD BLOCK 5000 STREAMS my-stream 1554300150697-0
(nil)
(5.07s)
127.0.0.1:6379> XADD my-stream * field2 string2
"1554300285984-0"
127.0.0.1:6379> XREAD BLOCK 5000 STREAMS my-stream 1554300150697-0
1) 1) "my-stream"
   2) 1) 1) "1554300285984-0"
         2) 1) "field2"
            2) "string2"
127.0.0.1:6379> XDEL my-stream 1554300285984-0
(integer) 1
127.0.0.1:6379> XLEN my-stream
(integer) 1
127.0.0.1:6379> XREAD BLOCK 5000 STREAMS my-stream 1554300150697-0
1) 1) "my-stream"
   2) (empty list or set)
127.0.0.1:6379>
As you can see above, the first time XREAD is called it blocks for 5s - expected.
The second call to XREAD returns immediately, giving the new entry - expected.
The third call to XREAD returns immediately with (empty list or set) - not expected!
Expected: The command should block for 5s.
I'm not sure if this is a bug or if there's something that I'm missing out. Please advise.
Thank you
It looks like you're running into this known bug.
See the second comment in particular, which points out that the partial fix supplied does not fix the issue you're running into:
It's not an entire fix for the blocking issue, since it only fixes the blocking behaviour for empty streams.
If the stream still contains some entries, but none with a larger ID than requested by the last-received-id parameter, then the request is still answered synchronously with an empty result list.
Looking through 5.0.4's source code, I found a way to (re)set the ->last_id member through an undocumented command: XSETID.
Although the source code (https://github.com/antirez/redis/blob/f72f4ea311d31f7ce209218a96afb97490971d39/src/t_stream.c#L1837) says the syntax is XSETID <stream> <groupname> <id>, it's in fact XSETID <stream> <id> (there's an open issue on this one: https://github.com/antirez/redis/issues/5519, though I hope they'll add a new command for groups, like XGROUPSETID, and leave this one as it is). This was exactly what I was looking for, so doing:
XSETID my-stream 1554300150697-0
would make:
127.0.0.1:6379> XREAD BLOCK 5000 STREAMS my-stream 1554300150697-0
(nil)
(5.08s)
127.0.0.1:6379>
work as expected - it blocks again.
For anyone using this solution (which is more of a workaround, in my opinion): use it with caution, because on a high-throughput system Redis could generate a new my-stream entry with the same ID as the deleted one (1554300285984-0), leading to possible duplicate data on the client's side.
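A minimal redis-py sketch of that workaround (stream name and IDs match the transcript above; mind the duplicate-ID caveat just mentioned):
import redis

r = redis.Redis()
last_kept = "1554300150697-0"        # newest entry we want to keep

r.xdel("my-stream", "1554300285984-0")
r.xsetid("my-stream", last_kept)     # rewind the stream's last_id
# XREAD with the same last-received ID now blocks again instead of
# returning an empty result. Caution: Redis may regenerate the deleted
# ID for a new entry, so clients can see duplicates.
r.xread({"my-stream": last_kept}, block=5000)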

Redis mass insertion: protocol vs inline commands

For my task I need to load a bulk of data into Redis as soon as possible. It looks like this article is right about my case: https://redis.io/topics/mass-insert
The article starts by giving an example of multiple inline SET commands with redis-cli, then proceeds to generating the Redis protocol and again feeding it to redis-cli. It doesn't explain the reasons or benefits of using the Redis protocol.
Using the Redis protocol is a bit harder and generates a bit more traffic. I wonder: what are the reasons to use the Redis protocol rather than simple one-line commands? Perhaps, despite the data being larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not all the clients are able to parse the replies in an efficient way in order to maximize throughput. For all this reasons the preferred way to mass import data into Redis is to generate a text file containing the Redis protocol, in raw format, in order to call the commands needed to insert the required data.
What I understand is that when you use the Redis protocol directly you are emulating a client, which benefits from the points highlighted above.
Based on the docs you provided, I tried these scripts:
test.rb
def gen_redis_proto(*cmd)
  proto = ""
  proto << "*"+cmd.length.to_s+"\r\n"
  cmd.each{|arg|
    proto << "$"+arg.to_s.bytesize.to_s+"\r\n"
    proto << arg.to_s+"\r\n"
  }
  proto
end

(0...100000).each{|n|
  STDOUT.write(gen_redis_proto("SET","Key#{n}","Value#{n}"))
}
test_no_protocol.rb
(0...100000).each{|n|
  STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
ruby test.rb > 100k_prot.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k_prot.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 file(s), 6.6Mb)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s
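So the raw-protocol file loaded in roughly a third of the time (0.168s vs 0.433s real). For reference, a minimal Python equivalent of the Ruby generator above - a sketch in case you're not using Ruby; it emits the same RESP format:
import sys

# RESP encoding: *<argc>\r\n, then $<byte-length>\r\n<arg>\r\n per argument
def gen_redis_proto(*cmd):
    proto = f"*{len(cmd)}\r\n"
    for arg in cmd:
        arg = str(arg)
        proto += f"${len(arg.encode())}\r\n{arg}\r\n"
    return proto

for n in range(100000):
    sys.stdout.write(gen_redis_proto("SET", f"Key{n}", f"Value{n}"))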

Redis delete all keys except keys that start with

My Redis database contains many keys.
I want to be able to flush them all except the keys that start with:
"configurations::"
is this possible?
You can do this
redis-cli KEYS "*" | grep -v "configurations::" | xargs redis-cli DEL
This lists all the keys in Redis, filters out those that contain "configurations::", and deletes the rest.
Edit
As @Sergio Tulentsev noticed, KEYS is not for use in production. I used the following Python script to remove keys on a production Redis instead; I stopped replication from master to slave before calling the script.
#!/usr/bin/env python
import redis
import time

pattern = "yourpattern*"

# scan for matching keys on the slave, delete them on the master
poolSlave = redis.ConnectionPool(host='yourslavehost', port=6379, db=0)
redisSlave = redis.Redis(connection_pool=poolSlave)
poolMaster = redis.ConnectionPool(host='yourmasterhost', port=6379, db=0)
redisMaster = redis.Redis(connection_pool=poolMaster)

cursor = 0
while True:
    cursor, data = redisSlave.scan(cursor, match=pattern, count=1000)
    print("cursor: " + str(cursor))
    for key in data:
        redisMaster.delete(key)
        print("delete key: " + key.decode())
    # reduce calls per second against the production server
    time.sleep(1)
    if cursor == 0:
        break
The SCAN & DEL approach (as proposed by @khanou) is the best ad-hoc solution. Alternatively, you could keep an index of all your configurations:: key names in a Redis Set (simply SADD the key's name to it whenever you create a new configurations:: key). Once you have this Set you can SSCAN it to get all the relevant key names more efficiently (don't forget to SREM from it whenever you DEL, though).
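For illustration, a minimal redis-py sketch of that index-Set idea (the index key name configurations::index is hypothetical):
import redis

r = redis.Redis()

def create_config(name, value):
    key = "configurations::" + name
    r.set(key, value)
    r.sadd("configurations::index", key)   # maintain the index

def delete_config(name):
    key = "configurations::" + name
    r.delete(key)
    r.srem("configurations::index", key)   # keep the index in sync

# enumerate the indexed keys without resorting to KEYS:
for key in r.sscan_iter("configurations::index"):
    print(key)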
Yes, it's possible. Enumerate all the keys, evaluate each one and delete if it fits the criteria for deletion.
There is no built-in redis command for this, if this is what you were asking.
It might be possible to cook up a Lua script that will do this (to your app it would look like a single command), but it's still the same approach under the hood; see the sketch below.
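A hedged sketch of such a Lua script, run via EVAL from redis-py (the prefix is passed as an illustrative KEYS argument; note it still uses KEYS under the hood and blocks the server while it runs):
import redis

r = redis.Redis()

# Delete every key that does not start with the given prefix.
lua = """
local prefix = KEYS[1]
local deleted = 0
for _, key in ipairs(redis.call('KEYS', '*')) do
    if string.sub(key, 1, string.len(prefix)) ~= prefix then
        redis.call('DEL', key)
        deleted = deleted + 1
    end
end
return deleted
"""
print(r.eval(lua, 1, "configurations::"))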

AccuRev: How to get all files changed?

I am looking to get the list of files changed between two points in time, for example 2013/11/11 11:10:00 and now.
The accurev hist command gives the files changed on that particular stream, but it does not include the changes that came from the parent stream.
Is there a way to get the list of changes that flowed in from parent streams?
Change the basis time of your child stream to 2013/11/11 11:10:00, then perform a file diff across the child and parent streams.
Accurev 6 has added some new arguments for the diff command so the following should do the trick:
accurev diff -a -i -v MyStream -V MyStream -t "2013/11/11 11:10:00-now"
Alternatively you could try the accurev.py script, from the ac2git repo, which will return to you all the transactions that could have affected your stream. Run it like this:
python accurev.py deep-hist -p MyDepot -s MyStream -t "2013/11/11 11:10:00-now"