When we have a distributed lock with a TTL, it is possible that the lock expires because of the TTL while the process holding it has not yet finished its computation; that process will then continue to manipulate the object it acquired the lock for, because it doesn't know the lock has already expired. How can we avoid that scenario?
The solution you are looking for is called a "fencing token". Long story short: every mutation command/operation should include the token, and the executor should check whether the token is still valid.
The token is just a number; call it a "term". Every time a new lock is issued, the term is incremented.
The executor follows a simple rule: it never accepts commands carrying an old term.
Arguably, this is the only option that truly avoids lock-related issues. Approaches like timestamps or explicit lock releases are all inherently prone to race conditions.
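A minimal sketch of the executor-side check (the class and method names are made up; the point is only that the store remembers the highest term it has accepted and rejects anything older):

public class FencedExecutor
{
    private readonly object _sync = new object();
    private long _highestTermSeen;

    // Returns false when the command was issued under a lock whose term is older
    // than one we have already accepted, i.e. the caller's lock has been superseded.
    public bool TryApply(long term, System.Action mutation)
    {
        lock (_sync)
        {
            if (term < _highestTermSeen)
                return false;          // stale lock holder - reject the mutation

            _highestTermSeen = term;
            mutation();                // safe: no newer lock holder has written yet
            return true;
        }
    }
}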
Another pointer I recommend looking at is the Redlock algorithm and the issues it has - more on this here: https://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html
I am trying to use Redis as a cache that sits in front of an SQL database. At a high level I want to implement these operations:
Read value from Redis; if it's not there, then generate the value by querying SQL and push it into Redis so we don't have to compute it again.
Write value to Redis, because we just made some change to our SQL database and we know that we might have already cached it and it's now invalid.
Delete value, because we know the value in Redis is now stale, we suspect nobody will want it, but it's too much work to recompute now. We're OK letting the next client who does operation #1 compute it again.
My challenge is understanding how to implement #1 and #3 if I attempt to do it with StackExchange.Redis. If I naively implement #1 with a simple read of the key and a push, it's entirely possible that between me computing the value from SQL and pushing it in, any number of other SQL operations may have happened and also tried to push their values into Redis via #2 or #3. For example, consider this ordering:
Client #1 wants to do operation #1 [Read] from above. It tries to read the key, sees it's not there.
Client #1 calls to SQL database to generate the value.
Client #2 does something to SQL and then does operation #2 [Write] above. It pushes some newly computed value into Redis.
Client #3 comes along, does some other operation in SQL, and wants to do operation #3 [Delete] to Redis, knowing that if there's something cached there, it's no longer valid.
Client #1 pushes its (now stale) value to Redis.
So how do I implement my operation #1? Redis offers a WATCH primitive that makes this fairly easy to do against the bare metal, where I would be able to observe that other things happened on the key from Client #1, but it's not supported by StackExchange.Redis because of how it multiplexes commands. Its conditional operations aren't quite sufficient here, since if I try saying "push only if key doesn't exist", that doesn't prevent the race as I explained above. Is there a pattern/best practice that is used here? This seems like a fairly common pattern that people would want to implement.
One idea I do have is I can use a separate key that gets incremented each time I do some operation on the main key and then can use StackExchange.Redis' conditional operations that way, but that seems kludgy.
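For reference, here is roughly what that version-key idea would look like with StackExchange.Redis transaction conditions (just a sketch; the key names and the ComputeFromSql helper are made up):

using System;
using StackExchange.Redis;

class CacheAside
{
    static string GetOrCompute(IDatabase db, string key)
    {
        var cached = db.StringGet(key);
        if (cached.HasValue)
            return cached;

        // Remember the version seen *before* the slow SQL work starts.
        RedisValue versionBefore = db.StringGet(key + ":version");

        string value = ComputeFromSql(key);   // hypothetical slow SQL query

        var tran = db.CreateTransaction();
        tran.AddCondition(versionBefore.HasValue
            ? Condition.StringEqual(key + ":version", versionBefore)
            : Condition.KeyNotExists(key + ":version"));
        _ = tran.StringSetAsync(key, value, TimeSpan.FromMinutes(10));

        // Execute() returns false when the condition failed, i.e. a writer bumped the
        // version while we were computing; in that case our stale value is NOT written.
        tran.Execute();

        return value;
    }

    static string ComputeFromSql(string key) => "value-for-" + key;
}

Operations #2 and #3 would then increment key + ":version" (e.g. with StringIncrement) whenever they write or delete, which is what invalidates any in-flight computation.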
This looks like a question about the right cache invalidation strategy rather than a question about Redis. Why I think so: Redis WATCH/MULTI is a kind of optimistic locking strategy, and that kind of locking is not suitable for most caching cases, where an expensive DB read query is exactly the problem the cache is meant to solve. In your operation #3 description you write:
It's too much work to recompute now. We're OK letting the next client who does operation #1 compute it again.
So we can continue with the read-update case as the update strategy. Here are some more questions before we continue:
What happens when 2 clients start to perform operation #1? Both of them may fail to find the value in Redis, both perform the SQL query, and then both write the result to Redis. So should we guarantee that only one client updates the cache?
How can we be sure of the right order of writes (operation #3)?
Why not optimistic locking
Optimistic concurrency control assumes that multiple transactions can frequently complete without interfering with each other. While running, transactions use data resources without acquiring locks on those resources. Before committing, each transaction verifies that no other transaction has modified the data it has read. If the check reveals conflicting modifications, the committing transaction rolls back and can be restarted.
You can read about the phases of OCC transactions on Wikipedia, but in a few words:
If there is no conflict, you update your data. If there is a conflict, resolve it, typically by aborting the transaction and restarting it if you still need to update the data.
Redis WATCH/MULTI is a kind of optimistic locking, so it can't help you here - you only find out that your cache key was modified by somebody else at the moment you try to commit your work on it.
What works?
Every time you hear somebody talk about locking, a few words later you hear about compromises: performance, and consistency vs. availability. The last pair is the most important.
In most highly loaded systems, availability is the winner. What does this mean for caching? Usually something like this:
Each cache key holds some metadata about the value: state, version and lifetime. The last one is not the Redis TTL - usually, if your key should stay in the cache for time X, the lifetime in the metadata is X + Y, where Y is some extra time that guarantees the update process can complete.
You never delete a key directly - you just update its state or lifetime.
Each time your application reads data from the cache, it should make a decision: if the data has state "valid", use it. If the data has state "invalid", try to update it or use the obsolete data.
How to update on read (the important part is this "hand-made" mix of optimistic and pessimistic locking); a code sketch of both flows follows after the invalidation steps:
Try to acquire a pessimistic lock (in Redis, with SETEX - read more here).
If it fails, return the obsolete data (remember, we still need availability).
If it succeeds, perform the SQL query and write the result into the cache.
Read the version from Redis again and compare it with the version read previously.
If the version is the same, mark the state as "valid".
Release the lock.
How to invalidate (your operations #2, #3):
Increment the cache version and set the state to "invalid".
Update the lifetime/TTL if needed.
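Here is a rough sketch of both flows with StackExchange.Redis, assuming a hash per cache entry that holds value, state and version fields (all names are made up, the lifetime field is omitted for brevity, and the lock uses SET NX with an expiry rather than SETEX):

using System;
using StackExchange.Redis;

class CacheEntry
{
    // Read path: serve valid data fast, otherwise try to take the lock and refresh.
    static string ReadWithUpdate(IDatabase db, string key)
    {
        HashEntry[] meta = db.HashGetAll(key);
        string state = GetField(meta, "state");
        string value = GetField(meta, "value");

        if (state == "valid")
            return value;                                   // fast path

        // State is "invalid" (or missing): try to take the pessimistic lock.
        bool gotLock = db.StringSet(key + ":lock", "updater-id",
                                    TimeSpan.FromSeconds(30), When.NotExists);
        if (!gotLock)
            return value;                                   // somebody else is updating - serve stale data

        try
        {
            string versionBefore = GetField(meta, "version");
            string fresh = ComputeFromSql(key);             // hypothetical SQL query
            db.HashSet(key, "value", fresh);

            // Only flip the state back to "valid" if nobody invalidated us meanwhile.
            if ((string)db.HashGet(key, "version") == versionBefore)
                db.HashSet(key, "state", "valid");

            return fresh;
        }
        finally
        {
            db.KeyDelete(key + ":lock");                    // release the lock
        }
    }

    // Invalidation path (your operations #2 and #3): bump the version, mark invalid.
    static void Invalidate(IDatabase db, string key)
    {
        db.HashIncrement(key, "version");
        db.HashSet(key, "state", "invalid");
    }

    static string GetField(HashEntry[] meta, string name)
    {
        foreach (var entry in meta)
            if (entry.Name == name) return entry.Value;
        return null;
    }

    static string ComputeFromSql(string key) => "value-for-" + key;
}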
Why so difficult
We can almost always get and return a value from the cache and rarely hit a cache-miss situation. So we avoid the cache-invalidation cascade hell that happens when many processes try to update one key.
We still have ordered key updates.
Only one process at a time can update a key.
I have a queue!
Sorry, you did not say that before - otherwise I would not have written all of the above. If you have a queue, everything becomes simpler:
Each modification operation should push a job to the queue.
Only the async worker should execute SQL and update the key.
You still need to use a "state" (valid/invalid) on the cache key to separate application logic from the cache.
Is this the answer?
Actually, yes and no at the same time. This is one possible solution. Cache invalidation is a much more complex problem with many possible solutions - some of them simple, others complex. In most cases it depends on the real business requirements of the concrete application.
I'm using MSETNX (http://redis.io/commands/msetnx) as a locking system, whereby all keys are locked only if no locks already exist.
If a machine holding a lock dies, that lock will be stuck in the locked state - this is a problem.
My ideal answer would be that all keys expire in 15 seconds by default, so even if a machine dies its held locks will auto-reset in a short time. That way I don't have to call expire on every key I set.
Is this possible in any way?
To build a reliable lock that is highly available, please check this document: http://redis.io/topics/distlock
The algorithm is still in beta but was stress-tested in a few sessions and is likely to be far more reliable than a single-instance approach anyway.
There are reference implementations for a few languages (linked in the doc).
Redis doesn't have a built-in way to do MSETNX and expire all the keys together atomically. Nor can you set a default expiry time for keys.
You could consider instead (a sketch of option 2 follows the list):
1. Using a WATCH/MULTI/EXEC block that wraps multiple 'SET key value EX 15 NX', or
2. Doing this using a Lua server-side script.
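Here is a sketch of option 2, assuming StackExchange.Redis as the client (key names and values are made up). The Lua script emulates MSETNX but also puts a 15-second expiry on every key, and it runs atomically on the server:

using System;
using StackExchange.Redis;

class MSetNxWithExpiry
{
    const string Script = @"
        for i, key in ipairs(KEYS) do
            if redis.call('EXISTS', key) == 1 then
                return 0
            end
        end
        for i, key in ipairs(KEYS) do
            redis.call('SET', key, ARGV[i], 'EX', ARGV[#KEYS + 1])
        end
        return 1";

    static void Main()
    {
        var redis = ConnectionMultiplexer.Connect("localhost");
        var db = redis.GetDatabase();

        var keys = new RedisKey[] { "lock:resource-a", "lock:resource-b" };
        var args = new RedisValue[] { "owner-1", "owner-1", 15 };   // one value per key + TTL in seconds

        bool acquired = (int)db.ScriptEvaluate(Script, keys, args) == 1;
        Console.WriteLine(acquired ? "locked" : "already locked");
    }
}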
I'm using AcquireLock method from ServiceStack Redis when updating and getting the key/value like this:
public virtual void Set(string key, T entity)
{
    using (var client = ClientManager.GetClient())
    {
        using (client.AcquireLock(key + ":locked", DefaultLockingTimeout, DefaultLockExpire))
        {
            client.Set(key, entity);
        }
    }
}
I've extended the AcquireLock method to accept an extra parameter for the expiration of the lock key. So I'm wondering whether I need AcquireLock at all. My class uses AcquireLock in every operation like Get<>, GetAll<>, ExpireAt, SetAll<>, etc.
But this approach doesn't work every time. For example, if the operation inside the lock throws an exception, the key remains locked. For this situation I've added the DefaultLockExpire parameter to the AcquireLock method to expire the "locked" key.
Is there any better solution, or when do we need to acquire locks, like "lock" blocks in multi-threaded programming?
As The Real Bill's answer has said, you don't need locks for Redis itself. What the ServiceStack client offers in terms of locking is not for Redis, but for your application. In a C# application, you can lock things locally with lock(obj) so that something cannot happen concurrently (only one thread can access the locked section at a time), but that only works if you have one webserver. If you want to prevent something happening concurrently, you need a locking mechanism living outside of the webserver. Redis is a good fit for this.
We have a case where we check whether a customer already has a shopping cart and, if not, create it. Between checking and creating it, there's a window in which another request could also have found that the cart doesn't exist and might also proceed to create one. That's a classical case for locking, but a simple lock wouldn't work here as the request may have arrived from an entirely different web server. So for this, we use the ServiceStack Redis client (with some abstraction) to lock using Redis and only allow one request at a time to enter the "create a cart" section.
So to answer your actual question: no, you don't need a lock for getting/setting values to Redis.
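To make the shopping-cart case concrete, here is a rough sketch of that pattern (the method names, key format and Cart type are made up; the lock call is ServiceStack.Redis' IRedisClient.AcquireLock):

using System;
using ServiceStack.Redis;

public class Cart
{
    public string CustomerId { get; set; }
}

public class CartService
{
    private readonly IRedisClientsManager _clientManager;

    public CartService(IRedisClientsManager clientManager)
    {
        _clientManager = clientManager;
    }

    public Cart GetOrCreateCart(string customerId)
    {
        using (var client = _clientManager.GetClient())
        // Only one request across all web servers can hold this lock at a time.
        using (client.AcquireLock("lock:cart:" + customerId, TimeSpan.FromSeconds(10)))
        {
            var cart = client.Get<Cart>("cart:" + customerId);   // re-check inside the lock
            if (cart == null)
            {
                cart = new Cart { CustomerId = customerId };
                client.Set("cart:" + customerId, cart);
            }
            return cart;
        }
    }
}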
I wouldn't use locks for get/set operations. Redis will do those actions atomically, so there is no chance of it getting "changed underneath you" when setting or getting. I've built systems where hundreds of clients are updating/operating on values concurrently and never needed a lock to do those operations (especially an expire).
I don't know how ServiceStack.Redis implements the locking it has, so I can't say why it is failing. However, I'm not sure I'd trust it given there is no true locking needed on the Redis side for data operations. Redis is single-threaded, so locking there doesn't make sense.
If you are doing complex operations where you get a value, operate on things based on it, then update it after a while and can't have the value change in the meantime, I'd recommend reading and grokking http://redis.io/topics/transactions to see if what you want is what Redis is good for, whether your code needs to be refactored to eliminate the problem, or at the least to find a better way to do it.
For example, SETNX may be the route you need to get what you want, but without details I can't say it will work.
As @JulianR says, the locking in ServiceStack.Redis is only for application-level distributed locks (i.e. to replace using a DB or an empty .lock file on a distributed file system) and it only works against other ServiceStack.Redis clients in other processes using the same key/API to acquire the lock.
You would never need to do this for normal Redis operations since they're all atomic. If you want to ensure a combination of Redis operations happens atomically, then you would combine them within a Redis transaction, or alternatively you can execute them within a server-side Lua script - both allow atomic execution of batch operations.
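For example, a minimal sketch of combining two operations atomically with a ServiceStack.Redis transaction (MULTI/EXEC under the hood; the key names are made up):

using ServiceStack.Redis;

class TransactionExample
{
    static void Main()
    {
        using (var client = new RedisClient("localhost"))
        using (var trans = client.CreateTransaction())
        {
            // Both commands are queued and then executed atomically by Commit().
            trans.QueueCommand(r => r.SetValue("order:42:status", "paid"));
            trans.QueueCommand(r => r.IncrementValue("orders:paid:count"));
            trans.Commit();
        }
    }
}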
I have a rather general question; please advise.
I have a servlet.
This servlet has a private field.
The private field is a kind of metadata holder (public class Metadata { //bla-bla-bla }).
When a GET request is processed, this metadata is used to perform some operation.
I want to implement a POST method in the same servlet. The user uploads a file and the Metadata field is updated.
The problem: concurrent access to this private Metadata field, which is shared among several web threads using one servlet instance. The POST operation (updating the Metadata object) can leave the Metadata in an inconsistent state, and concurrent GET requests can fail.
The question: what is the best way to update Metadata object while GET requests are running?
Dummy solution:
During each GET request, at the very beginning:
Synchronize on the Metadata object and clone it in one block, then release it.
Concurrent GET requests work with the cloned version of the Metadata object, which is consistent.
During each POST request:
Synchronize on the Metadata object and update its fields.
Release the Metadata object.
Please advise or criticize.
Using synchronized set and get methods in the Metadata class is fine, but it may slow down your web app in case you have multiple readers and (much) fewer writers:
Java synchronized keyword is used to acquire a exclusive lock on an
object. When a thread acquires a lock of an object either for reading
or writing, other threads must wait until the lock on that object is
released. Think of a scenerio that there are many reader threads that reads a shared
data frequently and only one writer thread that updates shared data.
It’s not necessary to exclusively lock access to shared data while
reading because multiple read operations can be done in parallel
unless there is a write operation.
(Excerpt from that nice post)
So using a multiple-reader, single-writer strategy may be better in terms of performance in some cases, as explained also in the Java 5 ReadWriteLock interface doc:
A read-write lock allows for a greater level of concurrency in
accessing shared data than that permitted by a mutual exclusion lock.
It exploits the fact that while only a single thread at a time (a
writer thread) can modify the shared data, in many cases any number of
threads can concurrently read the data (hence reader threads). In
theory, the increase in concurrency permitted by the use of a
read-write lock will lead to performance improvements over the use of
a mutual exclusion lock. In practice this increase in concurrency will
only be fully realized on a multi-processor, and then only if the
access patterns for the shared data are suitable.
Whether or not a read-write lock will improve performance over the use
of a mutual exclusion lock depends on the frequency that the data is
read compared to being modified, the duration of the read and write
operations, and the contention for the data - that is, the number of
threads that will try to read or write the data at the same time. For
example, a collection that is initially populated with data and
thereafter infrequently modified, while being frequently searched
(such as a directory of some kind) is an ideal candidate for the use
of a read-write lock. However, if updates become frequent then the
data spends most of its time being exclusively locked and there is
little, if any increase in concurrency. Further, if the read
operations are too short the overhead of the read-write lock
implementation (which is inherently more complex than a mutual
exclusion lock) can dominate the execution cost, particularly as many
read-write lock implementations still serialize all threads through a
small section of code. Ultimately, only profiling and measurement will
establish whether the use of a read-write lock is suitable for your
application.
A ready to use implementation is the ReentrantReadWriteLock.
Take a look at the previous post for a nice tutorial on how to use it.
We are trying to implement retry logic to recover from transient errors in Azure environment.
We are using long-running sessions to keep track and commit the whole bunch of changes at the end of application transaction (which may spread over several web-requests). Along the way we need to get additional data from database. Our main problem is that we can't easily recover from db error because we can't "replay" all user actions.
So far we have used a straightforward recovery algorithm (a rough sketch follows the list):
Try to perform operation in long-running session
In case of error, close the session, open a new one and merge entities into it
Retry the operation
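For context, a bare-bones sketch of that recovery algorithm (assuming NHibernate's ISessionFactory/ISession; the work delegate and entity names are placeholders):

using System;
using NHibernate;

class LongSessionRetry
{
    // On failure, throw the broken session away, open a fresh one,
    // merge the tracked root entity into it and retry the operation once.
    static ISession ExecuteWithRetry(ISessionFactory factory, ISession session,
                                     object rootEntity, Action<ISession> work)
    {
        try
        {
            work(session);                      // try the operation in the long-running session
            return session;
        }
        catch (HibernateException)
        {
            session.Dispose();                  // the old session is unusable after an exception
            var freshSession = factory.OpenSession();
            freshSession.Merge(rootEntity);     // expensive for big entity hierarchies
            work(freshSession);                 // retry the operation
            return freshSession;
        }
    }
}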
It's a very expensive approach in terms of time (merge takes really long for big entity hierarchies). So we'd like to optimize things a little.
We'd like to perform query operations in a separate session (to keep the long-running one untouched and safe) and, on success, merge the results back into the long-running session. Retry is relatively simple here - we just need to open a new session and run the query once more. However, with this approach we have an issue with initializing lazy properties/collections:
If we do this in separate session, we need to merge results back (a lot of entities) but merge could fail and break the long-running session
We tried different ways of "moving" original entity to different session, loading details and returning it back, but without success (evict, replicate, etc.)
There is a well-known statement that a session should be discarded in case of an exception. However, the example shows a write operation. Is it still true for read ones? I mean, if I guarantee that no data is written back to the database, can I reuse the same session to run the query again?
Do you have any other suggestions about retry logic with long-running sessions?
IMO there's no way to solve your issue. It's gonna take a lot of time to commit everything or you will have to do a lot of work to break it up into smaller sessions and handle every error that can occur while merging.
To answer your question about using the session after an exception: you cannot trust ANYTHING anymore inside this session, not even loaded entities.
Read this paragraph from Ayende's article about building a simple todo app with a recovery plan in case of an exception in the session:
Then there is the problem of error handling. If you get an exception
(such as StaleObjectStateException, because of concurrency conflict),
your session and its loaded entities are toast, because with
NHibernate, an exception thrown from a session moves that session into
an undefined state. You can no longer use that session or any loaded
entities. If you have only a single global session, it means that you
probably need to restart the application, which is probably not a good
idea.