We are trying to implement caching for our multi-tenant application, and we are planning to create a new Redis DB for each tenant.
We have one scenario where we need to use Redis transactions. While going through this post https://redis.io/topics/transactions, we found the following:
All the commands in a transaction are serialized and executed
sequentially. It can never happen that a request issued by another
client is served in the middle of the execution of a Redis
transaction. This guarantees that the commands are executed as a
single isolated operation.
Does this blocking apply only at the database level, or at the full instance level?
The guarantee you quoted applies to the instance, not the database. A command for DB 2 will not run in the middle of a transaction for DB 1.
You can find more information about multiple databases (including an argument by the creator of Redis against using them at all) in this question.
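For illustration only, here is a minimal sketch with Python and redis-py (not part of the original setup), assuming each tenant maps to a numbered database on the same instance:

import redis

tenant_1 = redis.Redis(host="localhost", port=6379, db=1)  # tenant 1 -> DB 1
tenant_2 = redis.Redis(host="localhost", port=6379, db=2)  # tenant 2 -> DB 2

# Queue two writes for tenant 1 and send them as one MULTI/EXEC block.
# While the EXEC runs, commands from clients selecting any other DB must wait,
# because the guarantee is per instance, not per numbered database.
pipe = tenant_1.pipeline(transaction=True)
pipe.set("cart:42", "three items")
pipe.incr("cart:42:version")
pipe.execute()

print(tenant_2.get("cart:42"))  # None: DB 2 has its own keyspace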
I have three microservices that need to communicate with each other. Microservice-1 is in charge of the data and the database (it reads from and writes to it). I will add a Redis cache store for Microservice-1 to cache data there. I want to put a Redis replica in front of the other two microservices to reduce communication with the actual microservice when the data is already in the cache store. Since all updates to the data have to go through Microservice-1, and it will always update the cache, Redis replication will make sure the other two microservices get the updates too. Of course, if the data is not in the cache, they will call Microservice-1 for the data, which will update the cache.
Am I missing something with this approach?
This will definitely work in the "sunny day" case.
But sometimes there are storms, and in storms there's a chance of losing cache coherency (i.e. the DB and Redis disagree on the data).
For example, let's say that you have Microservice-1 update the DB and then update Redis. What happens if there's a crash between updating the DB and updating Redis?
On the other hand, what if you reverse the ordering (update Redis and then the DB)? Now Redis could be updated and not the DB.
Neither of these is insurmountable, but absent a means of having a transaction which ensures that both or neither of Redis and the DB are updated, there will always be a time window where the change is in one but not the other. In that situation, it's probably worth embracing eventual consistency (e.g. periodically scan the DB and update Redis with recently updated records).
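For example, a minimal reconciliation sketch in Python (the table and columns are hypothetical, and updated_at is assumed to be a Unix timestamp) that periodically copies recently updated rows into Redis:

import sqlite3
import time

import redis

r = redis.Redis()
db = sqlite3.connect("service1.db")
# Hypothetical schema, created here only so the sketch is self-contained.
db.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT, updated_at INTEGER)")

def reconcile(last_sync_ts):
    # Copy every row touched since the last pass into the cache.
    rows = db.execute(
        "SELECT id, payload, updated_at FROM records WHERE updated_at > ?",
        (last_sync_ts,),
    ).fetchall()
    for record_id, payload, updated_at in rows:
        r.set(f"record:{record_id}", payload)
        last_sync_ts = max(last_sync_ts, updated_at)
    return last_sync_ts

last_sync = 0
while True:
    last_sync = reconcile(last_sync)
    time.sleep(30)  # trade-off: how long you tolerate the cache lagging the DB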
As an elaboration on that, a Command Query Responsibility Segregation with Event Sourcing (CQRS/ES) approach may prove useful: Microservice-1 gets split into two services, one which takes commands (requests to update) and another which handles queries. Instead of updating a row in a DB, the command service now appends (in a typical DB, an INSERT) an event which describes what changed. The query service can subscribe to those events and update Redis. Other microservices can also subscribe to the stream of events and update their own views (which can be remixed in any way they want) of Microservice-1's state.
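A rough sketch of that split, again in Python with a hypothetical events table: the command side only appends events, and a projector folds new events into Redis.

import json
import sqlite3

import redis

db = sqlite3.connect("events.db")
db.execute("CREATE TABLE IF NOT EXISTS events (seq INTEGER PRIMARY KEY AUTOINCREMENT, type TEXT, body TEXT)")
r = redis.Redis()

def handle_command(record_id, new_value):
    # Command service: append an event describing the change, nothing else.
    db.execute("INSERT INTO events (type, body) VALUES (?, ?)",
               ("RecordUpdated", json.dumps({"id": record_id, "value": new_value})))
    db.commit()

def project_new_events(last_seq):
    # Query service (or any other subscriber): replay new events into Redis.
    for seq, body in db.execute("SELECT seq, body FROM events WHERE seq > ? ORDER BY seq", (last_seq,)):
        event = json.loads(body)
        r.set(f"record:{event['id']}", event["value"])
        last_seq = seq
    return last_seq

handle_command(7, "three items")
project_new_events(0)
print(r.get("record:7"))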
Two issues
Do lua scripts really solve all cases for redis transactions?
What are best practices for asynchronous transactions from one client?
Let me explain the first issue.
Redis transactions are limited: there is no way to unwatch specific keys, all keys are unwatched upon EXEC, and we are limited to a single ongoing transaction on a given client.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state they may remove transactions in favour of Lua scripts. However, there are cases where this is insufficient, such as the most standard case: using Redis as a cache.
Let's say we want to cache some data from a persistent data store in Redis. Here's a quick process:
Check cache -> miss
Load data from database
Store in redis
However, what if, between step 2 (loading data) and step 3 (storing in Redis), the data is updated by another client?
The data stored in Redis would be stale. So... we use a Redis transaction, right? We watch the key before loading from the DB, and if the key is updated somewhere else before storage, the storage would fail. Great! However, within an atomic Lua script we cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
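For reference, the check/load/store flow described above looks roughly like this with redis-py's optimistic WATCH/MULTI/EXEC; load_from_database is a placeholder for whatever the persistent store is:

import redis

r = redis.Redis()

def load_from_database(key):
    # Stand-in for the real persistent data store (SQL, etc.).
    return b"value loaded from the slow data store"

def get_cached(key):
    with r.pipeline() as pipe:
        pipe.watch(key)                  # 1. watch before reading
        cached = pipe.get(key)
        if cached is not None:           #    cache hit: nothing else to do
            pipe.unwatch()
            return cached
        value = load_from_database(key)  # 2. load from the persistent store
        try:
            pipe.multi()
            pipe.set(key, value, ex=300)
            pipe.execute()               # 3. store; fails if `key` changed meanwhile
        except redis.WatchError:
            pass                         # another client wrote the key first; our copy may be stale
        return value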
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages and requests for a game, for high-speed communication between server and client. This cluster is written in node.js with appropriate use of promises and asynchronous concepts.
Say two requests hit a server in our cluster, both of which require data to be loaded and cached in Redis. Using our transaction from above, multiple keys could be watched, and multiple multi->exec transactions would run in overlapping order on one Redis connection. Once the first exec is run, all watched keys will be unwatched, even if the other transaction is still running. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests hitting the same server, or even sometimes in the same request if multiple data types need to load at the same time.
What is best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if this is the case.
As an alternative, we could use Redlock / mutex locking instead of Redis transactions, but this is slow by comparison.
Any help appreciated!
I have received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
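In redis-py terms (an illustrative sketch only, not the asker's Node.js stack), "one optimistic transaction per connection" can look like the following: each watching pipeline checks its own connection out of the pool, so the WATCH state of overlapping requests on the same server cannot interfere.

from concurrent.futures import ThreadPoolExecutor

import redis

pool = redis.ConnectionPool(host="localhost", port=6379, max_connections=50)

def load_from_database(key):
    # Stand-in for the real persistent data store.
    return b"value loaded from the slow data store"

def fill_cache(key):
    r = redis.Redis(connection_pool=pool)
    with r.pipeline() as pipe:        # holds its own connection while watching
        pipe.watch(key)
        value = load_from_database(key)
        try:
            pipe.multi()
            pipe.set(key, value)
            pipe.execute()
        except redis.WatchError:
            pass                      # only this transaction is affected

# Overlapping requests each get their own WATCH/MULTI/EXEC on their own connection.
with ThreadPoolExecutor(max_workers=8) as ex:
    ex.map(fill_cache, ["user:1", "user:2", "user:1"])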
As you know, in a Redis database, when you run the KEYS * command, Redis will block the database until KEYS returns all the keys.
I want to create 2 separate DBs in Redis and create some keys in each of them, then select one of them and run the KEYS command on that DB.
Will Redis block all available DBs until the answer is ready, or only the selected DB?
TL;DR: yes.
Redis doesn't lock - it blocks on (almost1) all commands because it is single threaded. When the server executes a command, be it a simple GET or the evil KEYS, it is busy serving it and does nothing else. The longer it takes a command to complete, the longer the server is blocked.
KEYS is a long-running command because it always traverses the entire keyspace (regardless of the pattern), not to mention the potentially-huge reply it generates.
That same single thread of execution also handles numbered, a.k.a. shared, databases. Any operation you perform on one of the databases blocks the entire server, all databases included. More information can be found at: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances/
1 BGSAVE, for example, is one of the few commands that do not block. As of v4, there's also UNLINK and more are planned to be added.
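A rough way to see this for yourself with redis-py (key counts and timings are arbitrary): run a slow KEYS against DB 1 and time a GET against DB 0 on the same instance.

import threading
import time

import redis

db0 = redis.Redis(db=0)
db1 = redis.Redis(db=1)

db0.set("probe", "x")
db1.mset({f"key:{i}": "v" for i in range(100_000)})  # enough keys to make KEYS slow

def slow_keys():
    db1.keys("*")  # traverses DB 1's entire keyspace

t = threading.Thread(target=slow_keys)
t.start()
time.sleep(0.05)             # give KEYS a head start
start = time.perf_counter()
db0.get("probe")             # waits until the server has finished serving KEYS
print(f"GET on db 0 took {time.perf_counter() - start:.3f}s")
t.join()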
I am currently implementing a server system which has both an SQL database and a Redis datastore. The data written to Redis heavily depends on the SQL data (cache, objects representing logic entities defined by a number of SQL models and their relationships).
While looking for an error handling methodology to wrap client requests, something similar to SQL's transaction commit & rollback (Redis doesn't support rollbacks), I thought of a mechanism which can serve this purpose and I'd appreciate input regarding it.
Basically, I intend to wrap (with before/after middleware) every client request with an SQL transaction and a Redis MULTI command (which queues commands until EXEC or DISCARD is invoked), and allow both transactions to be committed only if the request was processed successfully.
The problem is that once you start a Redis MULTI command, you are not able to perform any reads/writes and actually use their values while processing a request. I reduced the problem to just reads, since depending on just-written values can be optimized away.
My (simple) solution: split the Redis connection into two - a writer and a reader. The writer connection will be the one to be initialized with the MULTI command and executed/discarded at the end. Of course, all writing will be performed through it, while reading is done using the reader (and executed instantly).
The downside: as opposed to SQL, you can't rely on values written during the same request (transaction). Then again, that's usually quite easy to overcome.
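A minimal redis-py sketch of that reader/writer split (in real middleware you would create the writer pipeline per request; module level here only keeps the example short):

import redis

reader = redis.Redis()                          # executes immediately
writer = reader.pipeline(transaction=True)      # buffers commands until execute()

def handle_request(user_id):
    profile = reader.get(f"profile:{user_id}")  # read mid-request, no problem
    writer.set(f"last_seen:{user_id}", "now")   # queued, not yet visible
    writer.incr(f"visits:{user_id}")            # also queued
    return profile

def commit():
    writer.execute()   # send the queued commands as one MULTI/EXEC

def rollback():
    writer.reset()     # discard the queued commands

handle_request(42)
commit()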
I'm using AcquireLock method from ServiceStack Redis when updating and getting the key/value like this:
public virtual void Set(string key, T entity)
{
    using (var client = ClientManager.GetClient())
    {
        // Take a distributed lock on "<key>:locked" before writing.
        // DefaultLockExpire comes from our extended AcquireLock overload.
        using (client.AcquireLock(key + ":locked", DefaultLockingTimeout, DefaultLockExpire))
        {
            client.Set(key, entity);
        }
    }
}
I've extended the AcquireLock method to accept an extra parameter for the expiration of the lock key. So I'm wondering whether I need AcquireLock at all. My class uses AcquireLock in every operation, like Get<>, GetAll<>, ExpireAt, SetAll<>, etc.
But this approach doesn't work every time. For example, if the operation inside the lock throws an exception, then the key remains locked. For this situation I've added the DefaultLockExpire parameter to the AcquireLock method to expire the "locked" key.
Is there a better solution? And when do we need to acquire locks, like "lock" blocks in multi-threaded programming?
As The Real Bill's answer has said, you don't need locks for Redis itself. What the ServiceStack client offers in terms of locking is not for Redis, but for your application. In a C# application, you can lock things locally with lock(obj) so that something cannot happen concurrently (only one thread can access the locked section at a time), but that only works if you have one web server. If you want to prevent something happening concurrently across servers, you need a locking mechanism living outside of the web server. Redis is a good fit for this.
We have a case where we check whether a customer already has a shopping cart and, if not, create it. Between checking and creating it, there's a window in which another request could also find that the cart doesn't exist and proceed to create one. That's a classical case for locking, but a simple lock wouldn't work here as the request may have arrived at an entirely different web server. So for this, we use the ServiceStack Redis client (with some abstraction) to lock using Redis and only allow one request at a time to enter the "create a cart" section.
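The same idea in miniature, sketched with redis-py's built-in distributed lock rather than the ServiceStack client (key names are made up for the example):

import redis

r = redis.Redis()

def get_or_create_cart(customer_id):
    cart_key = f"cart:{customer_id}"
    # Only one server at a time may run the check-then-create section;
    # the lock expires after 10s in case the holder dies mid-way.
    with r.lock(f"lock:{cart_key}", timeout=10, blocking_timeout=5):
        cart = r.get(cart_key)
        if cart is None:
            cart = b"{}"            # a new, empty cart
            r.set(cart_key, cart)
        return cart

print(get_or_create_cart(7))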
So to answer your actual question: no, you don't need a lock for getting/setting values to Redis.
I wouldn't use locks for get/set operations. Redis will do those actions atomically, so there is no chance of it getting "changed underneath you" when setting or getting. I've built systems where hundreds of clients are updating/operating on values concurrently and never needed a lock to do those operations (especially an expire).
I don't know how ServiceStack.Redis implements the locking it has, so I can't say why it is failing. However, I'm not sure I'd trust it, given that there is no true locking needed on the Redis side for data operations. Redis is single-threaded, so locking there doesn't make sense.
If you are doing complex operations where you get a value, operate on things based on it, then update it after a while, and can't have the value change in the meantime, I'd recommend reading and grokking http://redis.io/topics/transactions to see whether what you want is what Redis is good for, whether your code needs to be refactored to eliminate the problem, or at the least to find a better way to do it.
For example, SETNX may be the route you need to get what you want, but without details I can't say it will work.
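For example, a SETNX-style "create only if missing" looks like this with redis-py (illustrative only, since the details of the original use case aren't known):

import redis

r = redis.Redis()

# SET ... NX writes the key only if it does not exist yet, atomically.
created = r.set("cart:42", "{}", nx=True, ex=3600)
if created:
    print("we created the cart")
else:
    print("someone else created it first; just read it")
    print(r.get("cart:42"))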
As #JulianR says, the locking in ServiceStack.Redis is only for application-level distributed locks (i.e. to replace using a DB or an empty .lock file on a distributed file system), and it only works against other ServiceStack.Redis clients in other processes using the same key/API to acquire the lock.
You would never need to do this for normal Redis operations since they're all atomic. If you want to ensure that a combination of Redis operations happens atomically, then you would combine them within a Redis transaction, or alternatively you can execute them within a server-side Lua script - both allow atomic execution of batch operations.
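For example (a hedged sketch with redis-py, not tied to the original application), here are both options: a MULTI/EXEC transaction and a small server-side Lua script, each of which runs its batch without other clients' commands interleaved.

import redis

r = redis.Redis()

# 1. MULTI/EXEC: the queued commands run back to back on the server.
pipe = r.pipeline(transaction=True)
pipe.hset("user:1", "name", "Jeremy")
pipe.expire("user:1", 3600)
pipe.execute()

# 2. Lua: a read-modify-write performed in one atomic server-side step.
script = """
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
redis.call('SET', KEYS[1], current + tonumber(ARGV[1]))
return current
"""
add_and_get_previous = r.register_script(script)
previous = add_and_get_previous(keys=["counter"], args=[5])
print(previous)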