I read in the Redis Streams docs that when you acknowledge a stream entry, Redis just marks it as deleted. But how can I see that deleted flag? Is it possible to show the deleted flag of a Redis Stream entry?
We use Spring Data Redis secondary indexes to fetch some data apart from the key.
We are using Amazon ElastiCache in clustered mode.
The indexed entries are not getting cleared even when the original entries are cleared.
In our Redis configuration we subscribe to keyspace events on startup, but keyspace events don't seem to work reliably, because internally Spring Data Redis subscribes to a random node.
Please check the links below for details:
Spring Redis - Indexes not deleted after main entry expires
https://github.com/spring-projects/spring-data-redis/issues/1111
One recommendation is to subscribe to all the master nodes, but I am not sure how to subscribe to all the nodes from Spring Data Redis.
Best Regards,
Saurav
I think you need the following:
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
Check this question and its answer for further detail.
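For context, here is a minimal sketch of a Spring configuration class using that annotation; the class name is hypothetical, and the EnableKeyspaceEvents enum lives in RedisKeyValueAdapter in current Spring Data Redis versions:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.RedisKeyValueAdapter.EnableKeyspaceEvents;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;

// Subscribes to keyspace expiration events when the application starts,
// so secondary index entries can be cleaned up when the main entry expires.
@Configuration
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
public class RedisConfig {
}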
I am running Redis in Sentinel mode. It has happened many times that I write data to Redis, but when reading the same key I don't get the expected value.
I am wondering if, when I write, the data is written to the master, while the read goes to a slave; since replication in Redis is asynchronous, not all slaves may be updated yet, and hence I get a stale or missing value.
I am using the Redisson client and three servers in the Sentinel configuration.
It's not possible to guarantee this with asynchronous replication. To overcome it you may choose from the following options:
set the readMode config parameter to MASTER
use an RBatch object with the syncSlaves setting defined: BatchOptions.defaults().syncSlaves(2, 10, TimeUnit.SECONDS)
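Here is a minimal Redisson sketch of both options, assuming a Sentinel master named mymaster and sentinel hosts sentinel1, sentinel2 and sentinel3; adjust the names and addresses to your setup:

import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RBatch;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;
import org.redisson.config.BatchOptions;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;

public class SyncSlavesExample {
    public static void main(String[] args) {
        // Option 1: route every read to the master, so reads always see the latest write.
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName("mymaster")                       // assumed master name
              .addSentinelAddress("redis://sentinel1:26379",   // assumed sentinel addresses
                                  "redis://sentinel2:26379",
                                  "redis://sentinel3:26379")
              .setReadMode(ReadMode.MASTER);
        RedissonClient redisson = Redisson.create(config);

        // Option 2: execute writes in a batch that waits until 2 slaves
        // have confirmed the write, with a 10 second timeout.
        BatchOptions options = BatchOptions.defaults()
                .syncSlaves(2, 10, TimeUnit.SECONDS);
        RBatch batch = redisson.createBatch(options);
        batch.getBucket("myKey", StringCodec.INSTANCE).setAsync("myValue");
        batch.execute();

        redisson.shutdown();
    }
}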
redis.conf says:
1) Disk-backed: The Redis master creates a new process that writes the RDB file on disk. Later the file is transferred by the parent process to the slaves incrementally.
I just don't know what "transferred by the parent process to the slaves" means.
Thank you.
It is simple: the parent process reads the RDB file into a buffer and uses a socket write to send it to the slave's socket, which is listening for it.
The implementation is more complex than that, but this is essentially what Redis does. You can refer to replication.c in redis/src for more details.
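As a schematic illustration only (the real implementation is in C, in replication.c), the transfer amounts to streaming a file over a socket, something like this Java sketch with an assumed slave address:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class RdbTransferSketch {
    public static void main(String[] args) throws IOException {
        try (FileInputStream rdb = new FileInputStream("dump.rdb");   // the RDB file written by the child process
             Socket slave = new Socket("slave-host", 6379);           // assumed slave address
             OutputStream out = slave.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = rdb.read(buffer)) != -1) {
                out.write(buffer, 0, n);   // send the file to the slave incrementally
            }
        }
    }
}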
EDITED:
Yes, with the diskless mechanism the child process directly sends the RDB over the wire to the slaves, without using the disk as intermediate storage.
Actually, if you use the disk to save the RDB, the Redis master can serve many slaves at the same time without queuing. With diskless replication, once a transfer to one slave has started, any other slave that arrives and wants a full sync has to wait in a queue until the first transfer finishes. That is why there is another setting, repl-diskless-sync-delay, which makes the master wait for more slaves to arrive so they can be served in parallel.
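For reference, these are the two redis.conf directives involved; the values shown here are illustrative, not recommendations:

repl-diskless-sync yes
repl-diskless-sync-delay 5

The first makes the child process stream the RDB straight to the slaves; the second makes the master wait 5 seconds before starting the transfer, so that several slaves can share it.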
And these two methods only come into play after something goes wrong. In the normal case, the master streams the write commands to the slave over a healthy connection, which keeps master and slave identical. If the connection breaks or the slave goes down, a partial resync is attempted to obtain the part the slave missed. If a partial resync (PSYNC) is not possible, a full resync is performed. The full resync is what we talked about above.
This is how a full synchronization works in more detail:
The master starts a background saving process in order to produce an RDB file. At the same time it starts to buffer all new write commands received from the clients. When the background saving is complete, the master transfers the database file to the slave, which saves it on disk, and then loads it into memory. The master will then send all buffered commands to the slave. This is done as a stream of commands and is in the same format of the Redis protocol itself.
And diskless replication is just a newer feature that supports the full resync in that case, to cope with slow-disk pressure. For more about it, refer to https://redis.io/topics/replication; you can find answers there to questions such as how PSYNC works and why it can fail.
It seems that the only way to sync data between Redis servers is to use the SLAVEOF command, but how can I know whether the data has been replicated successfully? I mean, I want to be notified right after the sync is done.
I've read some of the Redis source code, mainly replication.c, and found nothing official. The only way I know of for now is to poll a specific flag in the output of the INFO command, which looks bad.
Is there any better way to do this?
What you're using, i.e. SLAVEOF, syncs data between a Redis master and a Redis slave continuously: whenever some data is written to the master, it will be synced to the slave. So, technically, the sync will never be DONE.
If what you want is a snapshot of the current data set, you can use the BGSAVE command to save the data set into an RDB file. With the LASTSAVE command, you can check whether the BGSAVE has finished. Then copy the file to another host and load it with Redis.
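A minimal Jedis sketch of that polling loop, assuming Redis runs on localhost:6379; LASTSAVE returns the Unix timestamp of the last successful save, so the BGSAVE is done once that timestamp changes:

import redis.clients.jedis.Jedis;

public class WaitForBgsave {
    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("localhost", 6379)) {   // assumed host and port
            long before = jedis.lastsave();   // timestamp of the previous save
            jedis.bgsave();                   // kick off a background save
            while (jedis.lastsave() == before) {
                Thread.sleep(100);            // poll until the timestamp changes
            }
            System.out.println("RDB snapshot finished, the dump file can be copied now");
        }
    }
}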
I'm using Redis for storing simple key-value pairs, where the value is also of string type. In my Redis cluster I have a master and two slaves. I want to propagate any changes to the data from one of the slaves to another store (an Oracle database, actually). How can I do that reliably? The sink database only needs to be eventually consistent; some delay is allowed.
Strategies I can think of:
a) Read the AOF file written by the slave machine and propagate the changes. (This requires parsing the AOF file and being notified of every change to it.)
b) Use RPOPLPUSH, i.e. the reliable queue pattern it provides. But how do I make the slave insert into that queue whenever it receives a SET event from the master?
Any other possibility?
This is a very common problem faced by Redis developers. In a nutshell, you want to:
know all changes since the last sync, and
keep this change data atomic.
I believe that any solution, one way or another, will revolve around these two issues. So yes, the AOF is one of the best choices in this case, but there are no production-ready tools for it. It is not a very complex solution in the case of one server, but with master/slave or cluster it can become very complex.
Using Keyspace notifications
It looks like the Keyspace Notifications feature may be an alternative. Keyspace notifications have been available since 2.8.0 and are available in Redis Cluster too. From the original documentation:
Keyspace notifications allow clients to subscribe to Pub/Sub channels in order to receive events affecting the Redis data set in some way. Examples of the events that it is possible to receive are the following:
All the commands affecting a given key.
All the keys receiving an LPUSH operation.
All the keys expiring in the database 0.
Events are delivered using the normal Pub/Sub layer of Redis, so clients implementing Pub/Sub are able to use this feature without modifications.
Because Redis Pub/Sub is fire and forget, there is currently no way to use this feature if your application demands reliable notification of events: if your Pub/Sub client disconnects and reconnects later, all the events delivered while it was disconnected are lost. This can be mitigated by duplicating the workers that serve this Pub/Sub channel:
A group of N workers subscribes to the notifications and puts the key names into a SET-based "sync" list. Using a SET lets us control the overhead and avoid writing the same key to our sync list twice. (A sketch of such a worker follows this list.)
The other group of workers pops records with SPOP and writes them to the other store.
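Here is a minimal Jedis sketch of the first group of workers, assuming Redis on localhost:6379, database 0, string SET commands only, and a sync set simply named "sync" (all of these names are assumptions):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class KeyspaceSyncWorker {
    public static void main(String[] args) {
        // enable keyspace notifications for string commands ("K" = keyspace channel, "$" = string commands)
        try (Jedis admin = new Jedis("localhost", 6379)) {
            admin.configSet("notify-keyspace-events", "K$");
        }
        Jedis writer = new Jedis("localhost", 6379);
        // a keyspace event arrives on channel __keyspace@0__:<key> with the command name as the message
        Jedis subscriber = new Jedis("localhost", 6379);
        subscriber.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String message) {
                if ("set".equals(message)) {
                    String key = channel.substring("__keyspace@0__:".length());
                    writer.sadd("sync", key);   // SET semantics deduplicate repeated writes to the same key
                }
            }
        }, "__keyspace@0__:*");
    }
}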
Using a manual update list
The other way is to use a special SET-based "sync" list updated with every write operation (SET/HSET in your case, as I understand it). Something like:
MULTI
SET myKey value
SADD sync myKey
EXEC
Each time you modify a key, you add its name to the sync SET. Then, in another process or worker, you can SPOP a key name from that SET, read the value, and update the target store.
You can also use RPOPLPUSH instead of SPOP, together with some kind of "in progress" list, to protect against losing a key if a worker fails. In that case each worker first moves the key with RPOPLPUSH from the sync list to the in-progress list, pushes the data to storage, and then removes the key from the in-progress list.
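A minimal Jedis sketch of that reliable variant, with the caveat that RPOPLPUSH works on lists, so this assumes the pending key names are kept in a LIST called "syncList" rather than a SET; "inProgress" and writeToOracle are likewise assumed names:

import redis.clients.jedis.Jedis;

public class ReliableSyncWorker {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {   // assumed host and port
            while (true) {
                // atomically move one key name from the sync list to the in-progress list,
                // blocking for up to 5 seconds
                String key = jedis.brpoplpush("syncList", "inProgress", 5);
                if (key == null) {
                    continue;                             // timed out, poll again
                }
                String value = jedis.get(key);
                writeToOracle(key, value);                // hypothetical upsert into the sink database
                jedis.lrem("inProgress", 1, key);         // done: remove from the in-progress list
            }
        }
    }

    static void writeToOracle(String key, String value) {
        // placeholder for the actual JDBC write to the Oracle database
    }
}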