I have a key (hash type) in Redis.
The key is:
service_status:cluster_1
The value looks like this:
{
service_1: normal,
service_2: normal,
service_3: normal,
service_4: normal,
service_5: down
...
}
The system is a monitoring system; this data stores the status of the services in one cluster.
There are thousands of services in the cluster, so thousands of update requests may hit Redis for the same key at the same time.
My concern is how Redis handles this. Will there be some lock, since these updates all point to the same data?
Redis is single-threaded, so there are no "parallel" updates and therefore no need for locking. Operations in general, and updates to a specific hash key in particular, are executed one at a time.
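As an illustration, here is a minimal Java sketch using the Jedis client (the pool size, loop count, and statuses are arbitrary). Many threads fire HSET against the same hash concurrently, yet no client-side locking is needed, because Redis executes the commands one at a time:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class StatusUpdater {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool("localhost", 6379);
        ExecutorService executor = Executors.newFixedThreadPool(16);

        // Simulate thousands of services reporting status at once.
        for (int i = 0; i < 5000; i++) {
            final String service = "service_" + i;
            executor.submit(() -> {
                try (Jedis jedis = pool.getResource()) {
                    // Each HSET touches one field of the same hash key;
                    // Redis serializes these commands internally.
                    jedis.hset("service_status:cluster_1", service, "normal");
                }
            });
        }
        executor.shutdown();
    }
}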
I have multiple keys in Redis for session management, and the lifetime of each key is 10 seconds. Losing these keys is fine for me, as I can recreate a session.
However, I have one key that I cannot afford to lose. I would therefore like that, whenever the server restarts, Redis reads only this key from its persistent storage, and keeps persisting it as and when it changes.
Is there a built-in Redis way to achieve this?
Yes, Redis supports persistence. See the documentation:
https://redis.io/topics/persistence
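Note that persistence in Redis is configured per instance and covers the whole dataset, not individual keys. A minimal redis.conf sketch enabling the append-only file (the fsync policy shown is just a common default):

# Enable the append-only file so writes survive a restart.
appendonly yes
# fsync once per second: a common durability/performance trade-off.
appendfsync everysec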
I'm quite new to Apache Ignite so please be gentle. My question is simple.
Suppose I have a replicated cache using Apache Ignite, and I write key 123 to this cache. My cluster has 10 nodes.
First question is:
Does a replicated cache mean that key 123 must be written to all 10 nodes before the put call returns? Or does the call return immediately, with the replication done behind the scenes?
Second question is:
Let's say key 123 is written on Node 1, and it's now being replicated to all other nodes. However, a few microseconds later, Node 2 tries to write key 123 with a different value. Do I now have a race condition? Or does Ignite somehow handle this situation in such a way that Node 2's attempt to write key 123 won't happen until Node 1's put has replicated across all nodes?
For some context, what I'm trying to build is a de-duplication system across a cluster of API machines. I was hoping that I would be able to create a hash of my API request (with only values that make the request unique) and write it to the Ignite Cache. The API request would only proceed if the cache does not already contain the unique hash (possibly created by a different API instance). Of course the cache would have an eviction policy to evict these cache keys after a few seconds because they won't be needed anymore.
A REPLICATED cache is the same as a PARTITIONED cache with an infinite number of backups, plus some optimizations. So it has primary partitions that are distributed across the nodes according to the affinity function.
Now, when you perform an update, the request goes to the primary node for that key, and the primary node, in its turn, updates all backups. The CacheConfiguration.setWriteSynchronizationMode() property controls the way entries are updated. By default it's PRIMARY_SYNC, which means the thread that calls put() waits only for the primary partition update, and backups are updated asynchronously. If you set it to FULL_SYNC, the thread is released only when all backups have been updated.
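For illustration, a minimal Java sketch of a REPLICATED cache configured with FULL_SYNC (the cache name and key/value types are arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class ReplicatedCacheExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setCacheMode(CacheMode.REPLICATED);
        // FULL_SYNC: put() returns only after all backup copies are updated.
        // The default, PRIMARY_SYNC, waits for the primary copy only.
        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
        cache.put(123, "value");
    }
}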
Answering your second question: there will not be a race condition, because all update requests for a given key go to the same primary node.
Further to your clarification: if a backup node hasn't been updated yet, the get() request will go to the primary node, so even in PRIMARY_SYNC mode you'll never get null if the primary partition has a value.
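For the de-duplication use case you describe, a sketch along these lines might work: putIfAbsent() is atomic cluster-wide, so exactly one API instance wins even when two submit the same hash simultaneously, and a CreatedExpiryPolicy evicts the hashes after a few seconds. The class and method names here are hypothetical:

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.IgniteCache;

public class Deduplicator {
    private final IgniteCache<String, Boolean> seen;

    public Deduplicator(IgniteCache<String, Boolean> cache) {
        // Entries expire shortly after creation, so the cache
        // does not grow without bound.
        this.seen = cache.withExpiryPolicy(
                new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 10)));
    }

    // Returns true if this request hash has not been seen before.
    public boolean firstTime(String requestHash) {
        return seen.putIfAbsent(requestHash, Boolean.TRUE);
    }
}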
I have multiple keys in redis most of which are insignificant and can be lost in case my redis server goes down.
However I have one or two keys, which I cannot afford to lose.
Hence I would like that, whenever the server restarts, Redis reads only these few keys from its persistent storage, and keeps persisting them as and when they change.
Does Redis have this feature? If so, which command persists a key to file, and how do I distinguish persisted from unpersisted keys?
If not, what approach can I use so that I need not maintain my own persistent file before writing to Redis?
Limitations (if the answer is no):
I do not want to change the client code that writes to Redis.
I cannot add more Redis servers (though if such a solution exists, I would like to know about it).
EDIT
Another reason I would not want to persist most keys is the sheer volume of data: hundreds of records per second, most of which expire within 10 minutes.
I'm using Redis for storing simple key/value pairs, where the value is also a string. In my Redis cluster, I have a master and two slaves. I want to propagate any changes to the data from one of the slaves to another store (actually, an Oracle database). How can I do that reliably? The sink database only needs to be eventually consistent; some delay is allowed.
Strategies I can think of:
a) Read the AOF file written by the slave machine and propagate the changes. (Requires parsing the AOF file and getting notified of every change to the file.)
b) Use rpoplpush, the reliable queue pattern. But how do I make the slave insert into that queue whenever it receives a set event from the master?
Any other possibility?
This is a very common problem faced by Redis developers. In a nutshell, you want to:
Know all changes since the last sync
Keep this change data atomic
I believe any solution will revolve around these two issues. So yes, the AOF is one of the best choices here, but there are no production-ready tools for consuming it. It is not a very complex solution for a single server, but with master/slave or cluster setups it can become very complex.
Using Keyspace notifications
It looks like the Keyspace Notifications feature may be an alternative. Keyspace notifications have been available since 2.8.0 and work in Redis Cluster too. From the original documentation:
Keyspace notifications allow clients to subscribe to Pub/Sub channels in order to receive events affecting the Redis data set in some way. Examples of the events that it is possible to receive are the following:
All the commands affecting a given key.
All the keys receiving an LPUSH operation.
All the keys expiring in the database 0.
Events are delivered using the normal Pub/Sub layer of Redis, so clients implementing Pub/Sub are able to use this feature without modifications.
Because Redis Pub/Sub is fire and forget, there is currently no way to use this feature if your application demands reliable notification of events: if your Pub/Sub client disconnects and reconnects later, all the events delivered while the client was disconnected are lost. This can be mitigated by running redundant workers on the Pub/Sub channel:
One group of N workers subscribes to the notifications and adds each key name to a SET-based "sync" list, as in the sketch below. The SET deduplicates, so we control overhead and do not write the same data to the sync list twice.
The other group of workers pops records with SPOP and writes them to the other store.
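A minimal Java sketch of the subscribing worker, using the Jedis client. It assumes notifications were enabled first (e.g. CONFIG SET notify-keyspace-events "E$" for keyevent notifications on string commands), that the data lives in database 0, and that the sync-set name sync:pending is illustrative:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class SyncListener {
    public static void main(String[] args) {
        Jedis subscriber = new Jedis("localhost", 6379);
        Jedis writer = new Jedis("localhost", 6379);

        // psubscribe blocks the calling thread; callbacks run on this
        // thread, so using a single writer connection here is safe.
        subscriber.psubscribe(new JedisPubSub() {
            @Override
            public void onPMessage(String pattern, String channel, String key) {
                // The message payload is the name of the key that was SET.
                // SADD deduplicates: a key updated many times between
                // syncs appears in the sync set only once.
                writer.sadd("sync:pending", key);
            }
        }, "__keyevent@0__:set");
    }
}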
Using a manual update list
The other way is to maintain a special SET-based "sync" list alongside every write operation (SET/HSET in your case, as I understand it). Something like:
MULTI
SET myKey value
SADD sync:pending myKey
EXEC
Each time you modify a key, you add its name to the sync set (named sync:pending above for illustration). In another process or worker you can then SPOP a key name, read the value, and update the target store.
To protect a key from being lost if a worker fails, you can also use RPOPLPUSH instead of SPOP, together with some kind of "in progress" list: each worker first RPOPLPUSHes a key from the sync queue to the in-progress list, pushes the data to storage, and then removes the key from the in-progress list. (Note that RPOPLPUSH operates on lists, so in this variant the pending queue must be a list rather than a set.)
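A sketch of such a worker in Java with Jedis. As noted, the pending queue here is a Redis list (populated with LPUSH) rather than a set; the key names and the writeToOracle() helper are hypothetical placeholders:

import redis.clients.jedis.Jedis;

public class SyncWorker {
    public static void main(String[] args) throws InterruptedException {
        Jedis jedis = new Jedis("localhost", 6379);

        while (true) {
            // Atomically move one key name from the pending list to the
            // in-progress list, so it is never "in flight" unprotected.
            String key = jedis.rpoplpush("sync:queue", "sync:inprogress");
            if (key == null) {
                Thread.sleep(100); // queue empty, poll again shortly
                continue;
            }
            String value = jedis.get(key);
            writeToOracle(key, value);
            // Work is done; drop the key from the in-progress list.
            jedis.lrem("sync:inprogress", 1, key);
        }
    }

    private static void writeToOracle(String key, String value) {
        // Placeholder for the actual JDBC insert/update.
    }
}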
I'm using Redis as a session store in my app. Can I use the same instance (and db) of Redis for my job queue? If it's of any significance, it's hosted with redistogo.
It is perfectly fine to use the same redis for multiple operations.
We had a similar use case where we used Redis as a key value store as well as a job queue.
However, you may want to consider other aspects, like the performance requirements of your application. Redis can ideally handle around 70k operations per second, and if at some point in the future you think you may hit that benchmark, it's much better to split your operations across multiple Redis instances based on the kind of operations you perform. This allows you to make decisions about availability and replication at a finer level, depending on the requirements. As a simple use case: once your key size grows, you could flush your session-app Redis, or shard its keys using Redis Cluster, without affecting the job-queuing infrastructure.
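A tiny sketch of what that split could look like with Jedis: two independent pools, so each store can be flushed, scaled, or replicated without touching the other (the hostnames are illustrative):

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class RedisConnections {
    // Separate pools (and, eventually, separate instances) for
    // sessions and jobs, so each can evolve independently.
    public static final JedisPool SESSIONS =
            new JedisPool(new JedisPoolConfig(), "sessions.redis.example.com", 6379);
    public static final JedisPool JOBS =
            new JedisPool(new JedisPoolConfig(), "jobs.redis.example.com", 6379);
}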