Why is Redis Pub/Sub independent of the database?

I am a newbie to Redis and am trying to understand the concept of Redis Pub/Sub.
Step 1:
root@01a623a828db:/data# redis-cli -n 1
127.0.0.1:6379[1]> subscribe foo
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "foo"
3) (integer) 1
In the 1st step, I subscribed to channel foo on database 1.
Step 2:
root@01a623a828db:/data# redis-cli -n 4
127.0.0.1:6379[4]> publish foo 2
(integer) 1
In the 2nd step, I published a message on channel foo from database 4.
Step 3:
root@01a623a828db:/data# redis-cli -n 1
127.0.0.1:6379[1]> subscribe foo
Reading messages... (press Ctrl-C to quit)
..........................................
1) "message"
2) "foo"
3) "2"
In the 3rd step, the subscriber on database 1 received the message that was published from database 4 in the 2nd step.
I tried to find out the reason behind this, but I found the same answer everywhere: "Pub/Sub has no relation to the key space. It was made to not interfere with it on any level, including database numbers. Publishing on db 10, will be heard by a subscriber on db 1. If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production)." This is as per the official Redis Pub/Sub documentation.
Questions:
Why is Redis Pub/Sub's architecture independent of the database?
How do I implement "If you need scoping of some kind, prefix the channels with the name of the environment (test, staging, production)"?
"Publishing on db 10, will be heard by a subscriber on db 1." - this does not seem in line with the statement "It was made to not interfere with it on any level, including database numbers."

It's a matter of design choice, really.
If you need scoping, you can always prefix the channel name. E.g., the channel productupdate on the test environment will be watched via test:productupdate, and on the staging environment via staging:productupdate.
The quoted behavior does align with the statement: because Pub/Sub sits outside the key space entirely, the database number simply doesn't matter - publishing and subscribing behave the same no matter which database a client has selected.
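For example, a minimal redis-cli sketch of that prefixing (the channel name productupdate is just an illustration):
# subscriber in the test environment
127.0.0.1:6379> subscribe test:productupdate
# publisher in the same environment
127.0.0.1:6379> publish test:productupdate "price changed"
# a subscriber on staging:productupdate will not receive this message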

Redis memory usage CLI does not work on cluster

I am using the go-redis library to check the memory usage of a specific key on a Redis cluster.
The library fails sporadically with the error "redis: nil", which usually means it queried the wrong Redis instance for the key.
The go-redis library uses the Redis COMMAND command to get the list of arguments for each command and to find the position of the key in the argument list.
Specifically for MEMORY, the output of COMMAND is:
157) 1) "memory"
2) (integer) -2
3) 1) readonly
2) random
4) (integer) 0
5) (integer) 0
6) (integer) 0
According to the Redis documentation (https://redis.io/commands/command), items 4 and 5 are the positions of the first key and the last key in the argument list. But here the values are zero.
According to the MEMORY USAGE documentation (https://redis.io/commands/memory-usage), items 4 and 5 should both have the value 3.
Is this a bug in the output of the Redis COMMAND command, or am I misunderstanding it?
This is a design issue in Redis; see https://github.com/redis/redis/issues/7493
The final action was the merge of a pull request in go-redis:
https://github.com/go-redis/redis/pull/1400

RPUSH not adding a particular key

I am facing a quite strange issue in our deployment.
After a certain time in operation, I cannot add a list with one particular key name to Redis using RPUSH.
Example:
RPUSH client-send-process-servername TEST
I am unable to add that key to the Redis database. The command does return
(integer) 1
but I cannot see the list:
redis-cli keys '*client-send*'
returns an empty list.
However, when this problem appears, I can still successfully execute the following:
RPUSH client-send-process-ser TEST
and
redis-cli keys '*client-send*'
lists the queue:
"client-send-process-ser"
Strangely, though, I still cannot add a list with the specific key
client-send-process-servername
Any ideas how to debug further - where to look and what to look for?
The Redis server version is 2.8.
I tried enabling debug logs in Redis and used the MONITOR command, but nothing relevant could be found explaining this issue. I am eager to find a solution; if someone could help me pursue this further, it would be a great help.
This problem does not seem to be reproducible. I tested it on my local PC and found that it works fine.
Example:
➜ ~ redis-cli
127.0.0.1:6379> RPUSH client-send-process-servername TEST
(integer) 1
127.0.0.1:6379> keys *client-send*
1) "client-send-process-servername"
127.0.0.1:6379> keys client-send*
1) "client-send-process-servername"
127.0.0.1:6379> RPUSH client-send-process-ser TEST
(integer) 1
127.0.0.1:6379> keys client-send*
1) "client-send-process-ser"
2) "client-send-process-servername"
127.0.0.1:6379>
If you have the option, try updating your Redis server: the current version is 5.0.7 and yours is 2.8.
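Since RPUSH returns (integer) 1 but the key then cannot be found, something may be deleting or expiring the key between the write and the read, or the two commands may be hitting different logical databases. A few checks worth running (a sketch using standard redis-cli commands; adjust host, port and database to your deployment):
redis-cli monitor | grep client-send                # watch for a DEL or EXPIRE issued by another client
redis-cli exists client-send-process-servername     # run immediately after the RPUSH
redis-cli ttl client-send-process-servername        # a non-negative value means an expiry was set
redis-cli info keyspace                             # shows which logical db actually holds keys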

CONFIG GET command is not allowed from script

I am working in a scenario with one Redis master and several replicas, distributed across two cities for geographic redundancy. The thing is, for some reason, in a Lua script I need to know in which city the active instance is running. I thought it would be quite simple:
redis.call("config", "get", "bind"), and knowing the server IP I would determine the city. Not quite:
$ cat config.lua
redis.call("config", "get", "bind")
$ redis-cli --eval config.lua
(error) ERR Error running script (call to f_9bfbf1fd7e6331d0e3ec9e46ec28836b1d40ba35): #user_script:1: #user_script: 1: This Redis command is not allowed from scripts
Why is "config" not allowed from scripts? First, even though it's no deterministic, there are no write commands.
Second, I am using version 5.0.4, and from version 5 the replication is supposed to be done by effects and not by script propagation.
Any ideas?
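No answer was recorded here, but a common workaround (a sketch, not an official recommendation) is to look the value up on the client side and pass it into the script as an argument, since EVAL exposes its arguments to the script via ARGV. As for the why: presumably because scripts had to be deterministic for verbatim script replication, commands whose replies differ between master and replica (CONFIG GET bind being a perfect example) were blocked outright, and the block was not relaxed when effect-based replication arrived.
$ redis-cli config get bind        # allowed from the CLI, just not inside a script
$ cat city.lua
-- city.lua (hypothetical): the caller passes the server's bind address as ARGV[1]
if ARGV[1] == "10.0.1.5" then      -- example address for the first city
  return "city-A"
end
return "city-B"
$ redis-cli --eval city.lua , 10.0.1.5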

Redis mass insertion: protocol vs inline commands

For my task I need to load a bulk of data into Redis as quickly as possible. It looks like this article describes exactly my case: https://redis.io/topics/mass-insert
The article starts by giving an example of using multiple inline SET commands with redis-cli, then proceeds to generating the Redis protocol and again feeding it to redis-cli. It doesn't explain the reasons for, or benefits of, using the Redis protocol.
Using the Redis protocol is a bit harder and generates a bit more traffic. So what are the reasons to use the Redis protocol rather than simple one-line commands? Probably, despite the data being larger, it is easier (and faster) for Redis to parse?
Good point.
Only a small percentage of clients support non-blocking I/O, and not
all the clients are able to parse the replies in an efficient way in
order to maximize throughput. For all this reasons the preferred way
to mass import data into Redis is to generate a text file containing
the Redis protocol, in raw format, in order to call the commands
needed to insert the required data.
What I understood is that you emulate a client when you use the Redis protocol directly, which benefits from the points highlighted above.
Based on the docs you provided, I tried these scripts:
test.rb
def gen_redis_proto(*cmd)
  proto = ""
  proto << "*" + cmd.length.to_s + "\r\n"
  cmd.each { |arg|
    proto << "$" + arg.to_s.bytesize.to_s + "\r\n"
    proto << arg.to_s + "\r\n"
  }
  proto
end

(0...100000).each { |n|
  STDOUT.write(gen_redis_proto("SET", "Key#{n}", "Value#{n}"))
}
test_no_protocol.rb
(0...100000).each { |n|
  STDOUT.write("SET Key#{n} Value#{n}\r\n")
}
ruby test.rb > 100k_prot.txt
ruby test_no_protocol.rb > 100k_no_prot.txt
time cat 100k_prot.txt | redis-cli --pipe
time cat 100k_no_prot.txt | redis-cli --pipe
I've got these results:
teixeira: ~/stackoverflow $ time cat 100k_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.168s
user 0m0.025s
sys 0m0.015s
(5 file(s), 6.6 MB)
teixeira: ~/stackoverflow $ time cat 100k_no_prot.txt | redis-cli --pipe
All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 100000
real 0m0.433s
user 0m0.026s
sys 0m0.012s
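For reference, this is what one generated command looks like in raw protocol form. The *3 declares the argument count and each $n gives the byte length of the argument that follows, so the server can read each payload directly instead of scanning for delimiters and handling quoting - which is a large part of why the protocol variant parses faster:
*3
$3
SET
$4
Key0
$6
Value0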

In Lettuce (4.x) for Redis, how to reduce round trips and use the output of one command as input for another, especially for GEORADIUS

I have seen this: pass results to another command in redis. Via the command line this works well:
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce (I started trying out 4.0.2.Final)?
A solution to this is particularly important in the following scenario:
Say we are using the geolocation capabilities and add a set of locations of "my-location-category" using GEOADD:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GEORADIUS to get the locations within a 10 km radius of 8.6582361 49.5285495 for "category-1", and we get back "location-id:1" and "location-id:3".
Given that I have already set values for the keys "location-id:1" and "location-id:3", I want to pipe commands so as to do the GEORADIUS as well as an MGET on all the matching results.
Does Redis provide a feature to do that? And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of GEORADIUS and then doing a manual MGET?
That would be more efficient for the program that uses it. Does anyone know how we can do this?
Update
This is the piped command for the scenario discussed above:
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce.
IMPORTANT: never use KEYS in production; always use SCAN instead if you must.
This isn't really a question about Lettuce or Java, so I can actually answer it :)
What you're trying to do is use the results from a read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined, precisely because of that: pipelining works when you don't need the reply to one operation before sending the next, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well denormalize everything (remember, we're NoSQL) and store the contents of those keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any update to location-id:1's data will then need to be made across all categories.
A note about Lua scripts: while a Lua script could definitely save the back and forth in this case, any such script would be against best practices and not cluster-safe, because the key names fed to MGET are computed at runtime and therefore cannot be declared up front as KEYS.
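With that modeling, a single call returns the members with their payloads included (a sketch - the member layout is whatever you choose to pack in):
127.0.0.1:6379> GEORADIUS category-1 8.6582361 49.5285495 10 km
1) "location-id:1:{...payload...}"
2) "location-id:3:{...payload...}"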
After digging around and studying Lua scripting, my conclusion is that removing round trips in such a way can only be done via Lua scripts, as suggested by Itamar Haber.
I ended up creating a Lua script file (myscript.lua) as below:
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km')
if #locationKeys == 0 then
  return nil
else
  return redis.call('MGET', unpack(locationKeys))
end
(Of course we should be passing the coordinates in as parameters; this is just a PoC.)
Now you can execute it via the command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then, to reduce the network overhead of sending the entire script to Redis for every execution, we have the option of registering the script with Redis.
Redis gives us back a SHA1 digest for the script, which can be used to reference it in future calls.
This is done as follows:
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
This gives back a SHA1 code, something like: 49730aa2ed3034ee48f818e486b2bdf1b500b19e
Subsequent calls can then be made using this code, e.g.:
src/redis-cli EVALSHA 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part, however, is that the SHA1 digest is remembered only as long as the Redis instance is running; if it is restarted, the digest is lost, and you have to do the SCRIPT LOAD once again. (If nothing in the script changes, the SHA1 digest will be the same.)
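You can check up front whether the server still has the script cached; SCRIPT EXISTS returns 1 per digest if it is cached and 0 after a restart or SCRIPT FLUSH:
src/redis-cli SCRIPT EXISTS 49730aa2ed3034ee48f818e486b2bdf1b500b19e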
Ideally, when going through a client API, we should first attempt EVALSHA; if that returns a "No matching script" (NOSCRIPT) error, then as a fallback do a SCRIPT LOAD, procure the SHA1 code once again, keep an internal map of it, and use that SHA1 code for further calls.
This can well be done via Lettuce; I could find the methods for it. Hope this gives a good insight into a solution for the problem.
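To round this off, here is a minimal sketch of that EVALSHA-with-fallback flow in Lettuce 4.x. The com.lambdaworks.redis packages are the 4.x ones; the NOSCRIPT message check and the inlined script are assumptions made to keep the example self-contained:

import com.lambdaworks.redis.RedisClient;
import com.lambdaworks.redis.RedisCommandExecutionException;
import com.lambdaworks.redis.ScriptOutputType;
import com.lambdaworks.redis.api.sync.RedisCommands;

import java.util.List;

public class GeoMgetScript {

    // The Lua script from above, inlined so the example is self-contained.
    private static final String SCRIPT =
        "local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km') "
      + "if #locationKeys == 0 then return nil end "
      + "return redis.call('MGET', unpack(locationKeys))";

    private String sha; // cached SHA1 digest - the "internal map" for this single script

    List<String> run(RedisCommands<String, String> redis) {
        if (sha == null) {
            sha = redis.scriptLoad(SCRIPT); // first use: register the script, cache the digest
        }
        try {
            return redis.evalsha(sha, ScriptOutputType.MULTI);
        } catch (RedisCommandExecutionException e) {
            // Server restarted (or SCRIPT FLUSH): the digest is gone, so re-register and retry once.
            if (e.getMessage() != null && e.getMessage().startsWith("NOSCRIPT")) {
                sha = redis.scriptLoad(SCRIPT);
                return redis.evalsha(sha, ScriptOutputType.MULTI);
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        RedisCommands<String, String> redis = client.connect().sync();
        System.out.println(new GeoMgetScript().run(redis));
        client.shutdown();
    }
}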