I am writing a script that needs to dequeue items from a live Redis queue and enqueue them on a remote server.
What is the most efficient way of doing this?
I need something like the Redis MIGRATE command, but I cannot lock the incoming queue at the source.
Using the MIGRATE command in Redis solves this requirement.
I now RENAME the queue and fire MIGRATE. There is no lock period because the migration happens on a different key.
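For illustration, here is a minimal sketch of that rename-then-migrate trick, assuming the Jedis client; the host names and the "jobs" key are hypothetical:

```java
import redis.clients.jedis.Jedis;

public class QueueMigrator {
    public static void main(String[] args) {
        try (Jedis source = new Jedis("source-host", 6379)) {
            // RENAME is atomic: producers immediately start filling a
            // fresh "jobs" key, so the live queue is never locked.
            source.rename("jobs", "jobs:migrating");

            // MIGRATE transfers the renamed key to the remote server and
            // deletes it locally once the transfer succeeds; it only
            // blocks on "jobs:migrating", not on the live queue.
            source.migrate("remote-host", 6379, "jobs:migrating", 0, 5000);
        }
    }
}
```

Note that RENAME raises an error if the source key does not exist, so if you run this in a loop you would want to check EXISTS first.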
I have tried the approach below.
Step 1) Took a snapshot of AWS Redis and copied it to Azure.
Step 2) Listened to the keyspace notifications from AWS Redis, put them in a queue, and read these events from the queue and applied them to Azure Redis.
Problems faced:
1) Every operation that affects AWS Redis data needs one extra query to AWS Redis to fetch the affected data, as it is not present in the notification (see the sketch after this list).
2) Some operations, like RENAME, are delivered as two notifications; handling these two notifications to reconstruct the actual operation will be a bit difficult, even with a queue.
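To make problem 1 concrete, here is a rough sketch of this kind of notification listener, assuming the Jedis client and string values; the host names are placeholders:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class NotificationTailer {
    public static void main(String[] args) {
        // Requires notifications enabled on the source, e.g.
        // CONFIG SET notify-keyspace-events "KEA"
        Jedis subscriber = new Jedis("aws-redis-host", 6379);
        subscriber.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String key) {
                // The notification carries only the key name, not the
                // value, so every event costs an extra round trip to
                // the source to fetch the affected data (problem 1).
                try (Jedis reader = new Jedis("aws-redis-host", 6379)) {
                    String value = reader.get(key); // assumes string values
                    // enqueue (key, value) here for replay on Azure Redis
                }
            }
        }, "__keyevent@0__:*");
    }
}
```

The second connection is needed because the subscribing connection is blocked inside psubscribe and cannot issue reads.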
I have a question about a tricky situation in an event-driven system that I want to ask for advice on. Here is the situation:
In our system, I use Redis as an in-memory cache and Kafka as the message queue. To increase the performance of Redis, I use Lua scripting to process data and, at the same time, push events into a blocking list in Redis. A separate process then picks up the events from that blocking list and moves them to Kafka. This process has three steps:
1) Read events from the Redis list
2) Produce them in a batch to Kafka
3) Delete the corresponding events from Redis
Unfortunately, if the process dies between steps 2 and 3, that is, after producing all events to Kafka but before deleting the corresponding events from Redis, then after the process is restarted it will produce duplicate events to Kafka, which is unacceptable. Does anyone have a solution to this problem? Thanks in advance, I really appreciate it.
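For concreteness, here is a minimal sketch of the three-step mover described above, assuming the Jedis client and the Kafka Java producer (key, topic, and host names are hypothetical); the comment marks the crash window:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import redis.clients.jedis.Jedis;

public class EventMover {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (Jedis redis = new Jedis("redis-host", 6379);
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // Step 1: read a batch from the head of the list without
                // removing it (assumes the Lua script appends with RPUSH).
                List<String> batch = redis.lrange("events", 0, 99);
                if (batch.isEmpty()) continue;

                // Step 2: produce the batch to Kafka.
                for (String event : batch) {
                    producer.send(new ProducerRecord<>("events-topic", event));
                }
                producer.flush();

                // Step 3: trim exactly what was produced. A crash between
                // flush() and ltrim() replays this batch on restart; this
                // is the duplicate window the question describes.
                redis.ltrim("events", batch.size(), -1);
            }
        }
    }
}
```

The sketch only makes the window explicit: it cannot be closed from the producer side with plain lists, so deduplication has to happen downstream, as the answer below describes.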
Kafka is prone to reprocessing events, even if they were written exactly once. Reprocessing will almost certainly be caused by rebalancing clients. Rebalancing might be triggered by:
Modification of partitions on a topic.
Redeployment of servers and subsequent temporary unavailability of clients.
Slow message consumption and subsequent recreation of the client by the broker.
In other words, if you need to be sure that messages are processed exactly once, you need to ensure that at the client. You could do so by setting a partition key that ensures related messages are consumed sequentially by the same client. That client can then maintain a database record of what it has already processed.
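A rough sketch of such a deduplicating client, assuming the Kafka Java consumer and a unique event ID carried as the record key (the helper methods stand in for the database lookups):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class DedupingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "dedup-group");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events-topic"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The partition key doubles as the event ID, so any
                    // duplicate of an event lands on the same consumer
                    // and can be detected against the processed record.
                    String eventId = record.key();
                    if (alreadyProcessed(eventId)) continue; // duplicate
                    process(record.value());
                    markProcessed(eventId); // ideally in the same DB transaction
                }
            }
        }
    }

    // Stand-ins for the database record of processed IDs.
    static boolean alreadyProcessed(String id) { return false; }
    static void process(String value) { }
    static void markProcessed(String id) { }
}
```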
It seems that the only way to sync data between Redis servers is to use the SLAVEOF command, but how can I know whether the data has been replicated successfully? I mean, I want to be notified just after the sync is done.
I've read some of the source code of Redis, mainly replication.c, and found nothing official. The only way I know of for now is to use the INFO command and check a specific flag by polling, which looks bad.
Is there any better way to do this?
The approach you're trying, i.e. SLAVEOF, syncs data between a Redis master and a Redis slave. Whenever some data is written to the master, it will be synced to the slave. So, technically, the sync will never be DONE.
If what you want is a snapshot of the current data set, you can use the BGSAVE command to save the data set into an RDB file. With the LASTSAVE command, you can check whether the BGSAVE has finished. Then copy the file to the other host and load it with Redis.
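A minimal sketch of that LASTSAVE polling, assuming the Jedis client (host name is a placeholder):

```java
import redis.clients.jedis.Jedis;

public class SnapshotWaiter {
    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            long before = jedis.lastsave(); // unix time of the previous save
            jedis.bgsave();                 // fork a background RDB dump
            // LASTSAVE only advances once the background save completes,
            // so polling it tells us when the RDB file is ready to copy.
            while (jedis.lastsave() == before) {
                Thread.sleep(500);
            }
            // dump.rdb can now be copied to the other host and loaded.
        }
    }
}
```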
I'm using Redis for storing simple key-value pairs, where the value is also a string. In my Redis cluster, I have a master and two slaves. I want to propagate any changes to the data from one of the slaves to another store (actually, an Oracle database). How can I do that reliably? The sink database only needs to be eventually consistent; some delay is allowed.
Strategies I can think of:
a) Read the AOF file written by the slave machine and propagate the changes. (Requires parsing the AOF file and getting notified of every change to the file.)
b) Use RPOPLPUSH, the reliable queue pattern. But how do I make the slave insert into that queue whenever it receives a set event from the master?
Any other possibility?
This is a very common problem faced by Redis developers. In a nutshell, you:
Want to know all changes since the last sync.
Need to keep this change data atomic.
I believe that any solution, one way or another, will revolve around these issues. So yes, the AOF is one of the best choices in this case, but there are no production-ready tools for it. It is not a very complex solution in the case of one server, but with master/slave or cluster setups it can become very complex.
Using Keyspace notifications
It looks like the Keyspace Notifications feature may be an alternative. Keyspace notifications have been available since 2.8.0 and work in Redis Cluster too. From the original documentation:
Keyspace notifications allow clients to subscribe to Pub/Sub channels in order to receive events affecting the Redis data set in some way. Examples of the events that it is possible to receive are the following:
All the commands affecting a given key.
All the keys receiving an LPUSH operation.
All the keys expiring in database 0.
Events are delivered using the normal Pub/Sub layer of Redis, so clients implementing Pub/Sub are able to use this feature without modifications.
Because Redis Pub/Sub is fire and forget, there is currently no way to use this feature if your application demands reliable notification of events: if your Pub/Sub client disconnects and reconnects later, all the events delivered during the time the client was disconnected are lost. This can be mitigated by duplicating the workers that serve this Pub/Sub channel:
A group of N workers subscribes to the notifications and puts the data into a SET-based "sync" list. This lets us control the overhead and avoid writing the same data to the sync list twice.
Another group of workers pops records with SPOP and writes them to the other store, as sketched below.
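A rough sketch of both worker groups, assuming the Jedis client and string values; the event patterns and the "sync" key are illustrative, and keyspace notifications must first be enabled (e.g. CONFIG SET notify-keyspace-events "KEA"):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class SyncWorkers {
    // Worker group 1: funnel notifications into a SET, so a key that is
    // touched many times is queued only once (the overhead control
    // mentioned above).
    static void notificationWorker() {
        Jedis subscriber = new Jedis("redis-host", 6379);
        subscriber.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String key) {
                try (Jedis writer = new Jedis("redis-host", 6379)) {
                    writer.sadd("sync", key); // SET membership deduplicates
                }
            }
        }, "__keyevent@0__:set", "__keyevent@0__:hset");
    }

    // Worker group 2: drain the sync set and write to the other store.
    static void syncWorker() {
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            while (true) {
                String key = jedis.spop("sync");
                if (key == null) { pause(); continue; }
                String value = jedis.get(key); // assumes string values
                // write (key, value) to the sink store, e.g. Oracle
            }
        }
    }

    static void pause() {
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
    }
}
```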
Using a manual update list
The other way is to use a special SET-based "sync" list with every write operation (SET/HSET in your case, as I understand it). Something like:
MULTI
SET myKey value
SADD sync myKey
EXEC
Each time you modify a key, you add its name to the "sync" set. In another process or worker, you can SPOP that key name, read the value, and update the target store.
You can also use RPOPLPUSH instead of SPOP, together with a kind of "in progress" list, to protect against a key being missed if a worker fails (this requires keeping the sync queue as a list rather than a set, since RPOPLPUSH works on lists). In that case each worker first uses RPOPLPUSH to move the key name from the sync list to the in-progress list, pushes the data to storage, and then removes the key from the in-progress list, as sketched below.
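A sketch of such a worker, assuming the Jedis client and a list-based sync queue; the key names and the sink call are placeholders:

```java
import redis.clients.jedis.Jedis;

public class ReliableSyncWorker {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("redis-host", 6379)) {
            while (true) {
                // Atomically move one key name from the sync list to an
                // in-progress list; it survives this worker crashing.
                String key = jedis.rpoplpush("sync:list", "sync:inprogress");
                if (key == null) { pause(); continue; }

                String value = jedis.get(key); // assumes string values
                writeToTargetStore(key, value);

                // Remove from in-progress only after a successful write;
                // a recovery pass can re-read sync:inprogress and retry
                // anything left behind by a crashed worker.
                jedis.lrem("sync:inprogress", 1, key);
            }
        }
    }

    static void writeToTargetStore(String key, String value) {
        // stand-in for the sink write, e.g. an Oracle INSERT/UPDATE
    }

    static void pause() {
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
    }
}
```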
I would like to set up a JMS queue on a GlassFish v3 server for saving some protocol information to a SQL server.
My first try ended up in lots of deadlocks on the SQL server.
My first question is: are the messages in a queue processed one after another or in parallel? How do I set it up to process the messages one after another? Time does not play a role; I want to put only a minimal load on the SQL server.
The second: where can I see how many messages are waiting in the queue for processing?
I had a look at the monitoring in GlassFish and also at
http://server:adminport/__asadmin/get?monitor=true&pattern=server.applications.ear.test.war.TestMessageDrivenBean.*
but I could not see a "tobeprocessed" value or anything like that.
Many thanks,
Hasan
The listener you bind to the queue will process messages as they arrive; it responds to an onMessage event. You don't have to set anything up.
You do have to worry about what happens if the queue backs up because the listener(s) can't keep up.
You should also configure an error queue where messages that can't be processed go.
Have you thought about making the queue consumption and the database operation transactional? That way the message is put back on the queue if the database INSERT fails. You'll need an XA JDBC driver and a transaction manager to do it.
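A minimal sketch of such a message-driven bean with container-managed transactions, assuming an XA-capable data source; the JNDI names and the table are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.annotation.Resource;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.sql.DataSource;

@MessageDriven(mappedName = "jms/ProtocolQueue")
public class ProtocolLoggerBean implements MessageListener {

    @Resource(mappedName = "jdbc/ProtocolXADataSource") // XA-capable pool
    private DataSource dataSource;

    // With container-managed transactions (the default for MDBs), the
    // JMS receive and the INSERT join one XA transaction: if the INSERT
    // fails, the rollback puts the message back on the queue.
    @Override
    public void onMessage(Message message) {
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO protocol_log (entry) VALUES (?)")) {
            ps.setString(1, ((TextMessage) message).getText());
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e); // forces rollback and redelivery
        }
    }
}
```

If the messages must also be processed strictly one after another, capping the MDB pool at a single instance (max-pool-size in glassfish-ejb-jar.xml) is, as far as I know, the usual way to do that on GlassFish.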