How to know connected slaves to a master in redis?

In redis-cli on the master, what command lists the slaves connected to the master?
I only found a command to check the status of a server.
To check the status of the current server, open redis-cli and run:
> role
1) "master"
2) (integer) 196364
3) 1) 1) "192.168.1.90"
      2) "6379"
      3) "196364"
   2) 1) "192.168.1.7"
      2) "6379"
      3) "196364"

The easiest way to list all replicas connected to a Redis master is with the CLIENT LIST command:
CLIENT LIST TYPE replica
Note: the TYPE subcommand was added in Redis 5.
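Each replica appears as one line of client fields. A sample reply might look like this (the addresses, ids, and exact field set vary by version and are illustrative here):

id=7 addr=192.168.1.90:51234 fd=10 name= age=812 idle=0 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=20512 events=r cmd=replconf

The flags=S is what marks the connection as a replica link.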

Related

Watch updates in redis graph

I've recently discovered that Redis has a property graph implementation called RedisGraph, and it's amazing.
One thing that I really miss for my use-case, though, is the ability to "watch" the data. With typical Redis data structures I can enable keyspace notifications or client tracking, be notified of the data mutations I'm interested in, and then pull data from the server or mark my local cache as "dirty".
I don't know how that would work for a property graph, since relations are much more complex (and the key feature, for that matter), but is there a way to watch or synchronize with data stored in RedisGraph?
Keyspace notifications can be enabled for modules like this:
redis.cloud> CONFIG SET notify-keyspace-events AKE
The 'A' class includes module events, provided the module publishes anything. Unfortunately, I tried this with RedisGraph, and it doesn't.
You can reproduce my test below. In one terminal I launched redis-cli and did this:
127.0.0.1:6379> PSUBSCRIBE *
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "*"
3) (integer) 1
In another I did this:
127.0.0.1:6379> GRAPH.QUERY test 'CREATE (n:foo { alfa: 12, bravo: "test" })'
1) 1) "Labels added: 1"
2) "Nodes created: 1"
3) "Properties set: 2"
4) "Cached execution: 0"
5) "Query internal execution time: 0.204701 milliseconds"
127.0.0.1:6379> GRAPH.QUERY test 'MATCH (n) RETURN n'
1) 1) "n"
2) 1) 1) 1) 1) "id"
2) (integer) 0
2) 1) "labels"
2) 1) "foo"
3) 1) "properties"
2) 1) 1) "alfa"
2) (integer) 12
2) 1) "bravo"
2) "test"
3) 1) "Cached execution: 0"
2) "Query internal execution time: 1.106191 milliseconds"
127.0.0.1:6379> GRAPH.DELETE test
"Graph removed, internal execution time: 0.064498 milliseconds"
The first terminal returned nothing in response to this.
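For contrast, a write to a plain key does generate notifications once AKE is set. With the same PSUBSCRIBE * still running, a throwaway write in the second terminal (key name illustrative, db 0 assumed):

127.0.0.1:6379> SET plainkey 1
OK

makes the first terminal print:

1) "pmessage"
2) "*"
3) "__keyspace@0__:plainkey"
4) "set"
1) "pmessage"
2) "*"
3) "__keyevent@0__:set"
4) "plainkey"

So the notification machinery works; RedisGraph simply doesn't publish into it.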
No, unfortunately there is currently no way to trigger a keyspace notification whenever nodes or relationships are created, removed, or updated.
We plan to add such functionality in the future, but there is no specific date we can share right now.

Why do I see SET in slave's slowlog?

My setup is Redis master-slave replication. I am sure the slaves are read-only, because when I connect to a slave and try to write data, "(error) READONLY You can't write against a read only slave." is returned.
However, when I check the slowlog there are SET commands, e.g.:
127.0.0.1:6379> slowlog get 1
1) 1) (integer) 1360
   2) (integer) 1544276677
   3) (integer) 10653
   4) 1) "SET"
      2) "some value"
Can anyone explain this? Thanks in advance.
The Redis replica replays commands sent from the master, so the SET command must have originated there.
It is unclear why that command ended up in the slowlog; it could be any of a number of reasons (I/O or CPU contention). If this happened once I wouldn't worry about it, but if it is pathological you may want to review your replica's infrastructure and configuration.
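To confirm that such writes arrive over the replication link rather than from a client, you can check the replica's replication state (output trimmed; host illustrative):

127.0.0.1:6379> INFO replication
# Replication
role:slave
master_host:10.0.0.1
master_port:6379
master_link_status:up
slave_read_only:1

With slave_read_only:1, the only possible source of a SET on this node is the master.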

Redis keys not available after server shutdown and restart

To add keys to redis I did the following via the redis CLI:
127.0.0.1:6379> KEYS *
1) "key1"
2) "key2"
3) "key3"
127.0.0.1:6379> SET name "rahul"
OK
127.0.0.1:6379> KEYS *
1) "key1"
2) "name"
3) "key2"
4) "key3"
127.0.0.1:6379>
To validate the persistence of the data in my Redis data store, I restarted the server. Upon checking the keys, I found a few keys to be missing:
127.0.0.1:6379> KEYS *
1) "key3"
2) "key2"
3) "key1"
127.0.0.1:6379>
Are there any specific naming conventions for Redis keys? I was using a Windows system. Any idea what has gone wrong? TIA.
If you do a graceful shutdown, values are written to disk before the service stops. If it's an abrupt shutdown or power failure, recent values are lost. To guard against that you can enable persistence (RDB or AOF). By default Redis uses RDB snapshotting, and by default it takes a snapshot when any of three conditions is met:
1) at least 1 key changed within 15 minutes.
2) at least 10 keys changed within 5 minutes.
3) at least 10,000 keys changed within 1 minute.
You can change these values in the redis.conf file under SNAPSHOTTING.
Try reading redis.conf in full; it gives more detailed explanations.
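For reference, those defaults appear in the SNAPSHOTTING section of redis.conf as follows, and you can also force a snapshot by hand before a planned restart (stock values shown; adjust for your install):

# redis.conf, SNAPSHOTTING section (stock defaults)
save 900 1      # snapshot if at least 1 key changed in 900 s
save 300 10     # snapshot if at least 10 keys changed in 300 s
save 60 10000   # snapshot if at least 10,000 keys changed in 60 s

127.0.0.1:6379> BGSAVE
Background saving started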

Spring Session Token

I explored Spring Session with Redis and it looks really good.
I've been trying to solve one question for a long time: how to retrieve the list of session tokens from the Redis DB based on a value in the session hash.
I know it's not a relational database and there is no straightforward way to achieve this, but is there a way to figure it out? It is really important for us.
I read in blogs that we need to keep a set to track them; are there any ways to achieve this when using Spring Session? I am not even sure how to do this.
Any help is highly appreciated.
Thank you
Useful commands:
redis-cli : enters the Redis console.
Example:
root#root> redis-cli
127.0.0.1:6379> _
keys * : shows all keys stored in the Redis DB.
Example:
127.0.0.1:6379> keys *
1) "spring:session:expirations:1440354840000"
2) "spring:session:sessions:3b606f6d-3d30-4afb-bea6-ef3a4adcf56b"
monitor : monitors the Redis DB.
127.0.0.1:6379> monitor
OK
1441273902.701071 [0 127.0.0.1:49137] "PING"
1441273920.000888 [0 127.0.0.1:49137] "SMEMBERS"
hgetall SESSION_ID : shows all the fields stored inside a session.
Example:
127.0.0.1:6379> hgetall spring:session:sessions:3b606f6d-3d30-4afb-bea6-ef3a4adcf56b
flushall : removes all keys from the DB.
Example:
127.0.0.1:6379> flushall
OK
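The spring:session:expirations:* keys in the listing above are plain sets whose members reference the sessions due to expire around that timestamp; you can verify this directly (key taken from the example above):

127.0.0.1:6379> type spring:session:expirations:1440354840000
set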
Open redis-cli then run
127.0.0.1:6379> keys *
1) "spring:session:expirations:1435594380000"
2) "spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee"
127.0.0.1:6379> type spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee
hash
127.0.0.1:6379> hgetall spring:session:sessions:05adb1d7-c7db-4ffb-99f7-47d7bd1867ee
1) "sessionAttr:SPRING_SECURITY_CONTEXT"
2) ""
3) "sessionAttr:javax.servlet.jsp.jstl.fmt.request.charset"
4) "\xac\xed\x00\x05t\x00\x05UTF-8"
5) "creationTime"
6) "\xac\xed\x00\x05sr\x00\x0ejava.lang.Long;\x8b\xe4\x90\xcc\x8f#\xdf\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\x01N?\xfb\xb6\x83"
7) "maxInactiveInterval"
8) "\xac\xed\x00\x05sr\x00\x11java.lang.Integer\x12\xe2\xa0\xa4\xf7\x81\x878\x02\x00\x01I\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\a\b"
9) "lastAccessedTime"
10) "\xac\xed\x00\x05sr\x00\x0ejava.lang.Long;\x8b\xe4\x90\xcc\x8f#\xdf\x02\x00\x01J\x00\x05valuexr\x00\x10java.lang.Number\x86\xac\x95\x1d\x0b\x94\xe0\x8b\x02\x00\x00xp\x00\x00\x01N?\xfb\xb6\xa6"
127.0.0.1:6379>
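If the goal is "all session ids for a given user", newer Spring Session versions also maintain an index set per principal that you can read directly (the exact key layout depends on your Spring Session version, and the user alice is illustrative):

127.0.0.1:6379> smembers "spring:session:index:org.springframework.session.FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME:alice"
1) "05adb1d7-c7db-4ffb-99f7-47d7bd1867ee"

From application code, the same lookup is exposed through FindByIndexNameSessionRepository.findByPrincipalName("alice").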

Redis Sentinel : last node doesn't become master

I'm trying to set up an automatic failover system in a 3-node redis cluster. I installed redis-sentinel on each of these nodes (just like this guy: http://www.symantec.com/connect/blogs/configuring-redis-high-availability).
Everything is fine as long as I have two or three nodes up. The problem is that whenever there's only one node remaining and it's a slave, it does not get elected as master automatically. The quorum is set to 1, so the last node detects the ODOWN of the master but can't vote for the failover since there's no majority.
To overcome this (surprising) issue, I wrote a little script that asks the other nodes for their masters, and if they don't answer, sets the current node as the master. This script is called from the redis-sentinel.conf file as a notification script. However, as soon as the redis-sentinel service is started, this configuration is "erased"! If I look at the configuration file in /etc, the "sentinel notification-script" line has disappeared (redis-sentinel rewrites its configuration file, so why not), BUT the configuration I wrote is no longer in effect:
1) 1) "name"
2) "mymaster"
3) "ip"
4) "x.x.x.x"
5) "port"
6) "6379"
7) "runid"
8) "somerunid"
9) "flags"
10) "master"
11) "pending-commands"
12) "0"
13) "last-ping-sent"
14) "0"
15) "last-ok-ping-reply"
16) "395"
17) "last-ping-reply"
18) "395"
19) "down-after-milliseconds"
20) "30000"
21) "info-refresh"
22) "674"
23) "role-reported"
24) "master"
25) "role-reported-time"
26) "171302"
27) "config-epoch"
28) "0"
29) "num-slaves"
30) "1"
31) "num-other-sentinels"
32) "1"
33) "quorum"
34) "1"
35) "failover-timeout"
36) "180000"
37) "parallel-syncs"
38) "1"
That is the output of the SENTINEL masters command. The only thing is that I had previously set "down-after-milliseconds" to 5000 and "failover-timeout" to 10000...
I don't know if anyone has met anything similar? Well, should someone have a little idea about what's happening, I'd be glad to hear it ;)
This is a reason not to place your sentinels on your redis instance nodes. Think of them as monitoring agents. You wouldn't place your website monitor on the same node running your website and expect to catch the node's death. The same applies to Sentinel.
The proper route to sentinel monitoring is to ideally run the sentinels from the clients, and if that isn't possible or workable, then from dedicated nodes as close to the clients as possible.
As antirez said, you need enough sentinels to hold the election. There are two elections: 1) deciding on the new master and 2) deciding which sentinel handles the promotion. In your failure scenario you only have one sentinel left, but to be elected to handle the promotion, that sentinel needs votes from a majority of all sentinels seen. In your case that means two sentinels must vote before an election can take place. This majority is not configurable and is unaffected by the quorum setting; it is in place to reduce the chances of multiple masters.
I would also strongly advise against setting the quorum to less than half + 1 of your sentinels. This can lead to split-brain operation where you have two masters, or in your case possibly three. If you lost connectivity between your master and the two slaves while clients still had connectivity, your settings could trigger a split brain: a slave would be promoted, new connections would talk to that master, and existing ones would continue talking to the original. You would then have valid data in two masters, likely conflicting with each other.
The author of that Symantec article only considers the Redis daemon dying, not the node, so it really isn't an HA setup.
The quorum is only used to reach the ODOWN state, which is what triggers the failover. For the failover to actually happen, the slave must be voted in by a majority of Sentinels, so a single remaining node can't get elected. If you have such a requirement, and you don't care that only the majority side is able to continue (this means unbounded data loss on the minority side if clients get partitioned together with a minority that contains a master), you can simply add sentinels on your client machines as well. That way the total number of Sentinels is, for example, 5, and even if two Redis nodes are down, the one remaining node plus the two sentinels running client-side are enough to reach the majority of 3. Unfortunately, the Sentinel documentation is not complete enough to explain this: all the information needed to get the picture right is there, but there are no examples for faster reading / deploying.
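As a concrete sketch of that five-Sentinel layout, each of the two client machines would run a Sentinel alongside the three on the Redis nodes, with a config along these lines (master name and address illustrative; the timeouts are the ones from the question):

# sentinel.conf on each client machine (illustrative sketch)
port 26379
sentinel monitor mymaster 10.0.0.1 6379 3
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000

The quorum of 3 here matches the half + 1 advice above: with five Sentinels in total, losing two Redis nodes still leaves a majority of 3, so the surviving slave can be promoted.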