Do I need a new client connection when using redis transactions?

My application uses a singleton Redis connection everywhere; it's initialized at startup.
My understanding of MULTI.EXEC() is that all my WATCHed keys would be UNWATCHed when MULTI.EXEC() is called anywhere in the application.
This would mean that all keys WATCHed, irrespective of which MULTI block they were WATCHed for, will be unwatched, defeating the whole purpose of WATCHing them.
Is my understanding correct?
How do I avoid this situation, should I create a new connection for each transaction?
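To make the concern concrete, here is a minimal sketch with redis-py against a local instance (the key names are invented): on a single connection, EXEC clears every WATCH issued so far, not just the keys that the MULTI block touches.

import redis

r = redis.Redis()                    # assumed local instance

pipe = r.pipeline()
pipe.watch("order:1", "order:2")     # two keys watched on the same connection
pipe.multi()
pipe.set("order:1", "processed")     # the transaction only touches order:1 ...
pipe.execute()                       # ... but EXEC drops the watch on order:2 as well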

This process happens inside the Redis server and will block all incoming commands. So it doesn't matter if you use a single connection or multiple connections (all connections will be blocked).

Related

Is a redis transaction (WATCH, MULTI, EXEC) possible over a multiplexed connection?

I am using a Redis library that offers connection multiplexing (I am currently using the Rust lib, but I think the question is relevant for any implementation).
According to what I've read about multiplexing (and also what I understand from the lib implementation), it utilizes the same connection for handling db operations from multiple contexts (threads/tasks/etc.).
Now, I'm not sure what will happen if WATCH is called in parallel from 2 different contexts on the same multiplexed connection. Will the EXEC from one context cancel the WATCH in the other thread, or does Redis somehow know how to distinguish between contexts even though they're utilizing the same connection?
No, it's not possible over a multiplexed connection. The Redis transaction context is attached to a specific "client", meaning a specific connection.

Redis: Using lua and concurrent transactions

Two issues:
Do Lua scripts really solve all cases for Redis transactions?
What are best practices for asynchronous transactions from one client?
Let me explain. First issue:
Redis transactions are limited: there is no way to unwatch specific keys, all keys are unwatched upon EXEC, and we are limited to a single ongoing transaction on a given client.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state they may remove transactions in favour of Lua scripts. However, there are cases where this is insufficient, such as the most standard case: using Redis as a cache.
Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:
1. Check cache -> miss
2. Load data from database
3. Store in Redis
However, what if, between step 2 (loading data) and step 3 (storing in Redis), the data is updated by another client?
The data stored in Redis would be stale. So... we use a Redis transaction, right? We WATCH the key before loading from the db, and if the key is updated somewhere else before storage, the store would fail. Great! However, within an atomic Lua script we cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
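As an illustration of that process, a rough redis-py sketch (load_from_db is a placeholder for the database read, and the key/TTL are made up) that retries when the watched key changes:

import redis

r = redis.Redis()                                  # assumed local instance

def cache_fill(key, load_from_db):
    # Optimistic cache fill: the SET fails if the key changed while we were
    # reading from the database, and the whole sequence is retried.
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                    # watch before loading
                cached = pipe.get(key)             # step 1: check cache
                if cached is not None:
                    pipe.unwatch()
                    return cached                  # cache hit
                value = load_from_db()             # step 2: load from database
                pipe.multi()
                pipe.set(key, value, ex=300)       # step 3: store in Redis
                pipe.execute()                     # raises WatchError if key changed
                return value
            except redis.WatchError:
                continue                           # another client touched the key; retry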
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages, and requests for a game, for high speed communication between server and client. This cluster is written in node.js with appropriate use of promises and asynchronous concepts.
Say two requests hit a server in our cluster, both of which require data to be loaded and cached in Redis. Using our transaction from above, multiple keys could be watched, and multiple MULTI->EXEC transactions would run in overlapping order on one Redis connection. Once the first EXEC runs, all watched keys are unwatched, even if the other transaction is still in progress. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests happening on the same server, or even sometimes in the same request if multiple data types need to load at the same time.
What is best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if this is the case.
As an alternative we could use redlock / mutex locking instead of redis transactions, but this is slow by comparison.
Any help appreciated!
I received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
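For what it's worth, a minimal redis-py sketch of the "multiple backend connections" approach (key names invented): each watched pipeline checks its own connection out of a shared pool, so overlapping transactions don't share WATCH state.

import redis

pool = redis.ConnectionPool()            # one shared pool per process
r = redis.Redis(connection_pool=pool)

pipe_a = r.pipeline()
pipe_b = r.pipeline()
pipe_a.watch("player:1:state")           # held on its own connection from the pool
pipe_b.watch("player:2:state")           # held on a second connection
# ... each request loads its data, then calls multi(), queues its writes, and
# calls execute() on its own pipeline; an EXEC on one cannot unwatch the other.
pipe_a.reset()                           # return the connections to the pool
pipe_b.reset()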

How to design a multi-process program using redis in python

I just started using the Redis cache in Python. I read the tutorial but still feel confused about the concepts of "connection pool", "connection", etc.
I'm trying to write a program which will be invoked multiple times from the console, in different processes. They will all get and set the same shared in-memory Redis cache, using the same set of keys.
So, to make it thread (process) safe, should I have one global connection pool and get connections from the pool in the different processes? Or should I have one global connection? What's the right way to do it?
Thanks,
Each instance of the program should spawn its own ConnectionPool. But this has nothing to do with thread safety. Whether or not your code is thread safe will depend on the type of operations you execute; if you have multiple instances which may read and write concurrently, you need to look into using transactions, which are built into Redis.
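A minimal sketch of that setup with redis-py (host, port and key name are assumptions): one module-level pool per process, and a client created from it wherever needed.

import redis

# One pool per process; redis.Redis is safe to share across threads because
# each command checks a connection out of the pool and returns it when done.
POOL = redis.ConnectionPool(host="localhost", port=6379, db=0)

def get_client():
    return redis.Redis(connection_pool=POOL)

client = get_client()
client.incr("shared:counter")            # individual commands are atomic in Redis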

How does StackExchange.Redis use multiple endpoints and connections?

As explained in the StackExchange.Redis Basics documentation, you can connect to multiple Redis servers, and StackExchange.Redis will automatically determine the master/slave setup. Quoting the relevant part:
A more complicated scenario might involve a master/slave setup; for this usage, simply specify all the desired nodes that make up that logical redis tier (it will automatically identify the master):
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("server1:6379,server2:6379");
I performed a test in which I triggered a failover, such that the master would go down for a bit, causing the old slave to become the new master, and the old master to become the new slave. I noticed that in spite of this change, StackExchange.Redis keeps sending commands to the old master, causing write operations to fail.
Questions on the above:
How does StackExchange.Redis decide which endpoint to use?
How should multiple endpoints (as in the above example) be used?
I also noticed that for each connect, StackExchange.Redis opens two physical connections, one of which is some sort of subscription. What is this used for exactly? Is it used by Sentinel instances?
What should happen there is that it uses a number of things (in particular the defined replication configuration) to determine which node is the master, and directs traffic to the appropriate server (respecting the "server" parameter, which defaults to "prefer master", but which always sends write operations to a master).
If a "cannot write to a readonly slave" error (I can't remember the exact text) is received, it will try to re-establish the configuration and should switch automatically to respect this. Unfortunately, Redis does not broadcast configuration changes, so the library can't detect this ahead of time.
Note that if you use the library methods to change master, it can exploit pub/sub to detect that change immediately and automatically.
Re the second connection: that would be for pub/sub; it spins this up ahead of time, as by default it attempts to listen for the library-specific configuration broadcasts.

Acquiring Locks when updating a Redis key/value

I'm using AcquireLock method from ServiceStack Redis when updating and getting the key/value like this:
public virtual void Set(string key, T entity)
{
    using (var client = ClientManager.GetClient())
    {
        using (client.AcquireLock(key + ":locked", DefaultLockingTimeout, DefaultLockExpire))
        {
            client.Set(key, entity);
        }
    }
}
I've extended the AcquireLock method to accept an extra parameter for the expiration of the lock key. So I'm wondering whether I need AcquireLock at all. My class uses AcquireLock in every operation, like Get<>, GetAll<>, ExpireAt, SetAll<>, etc.
But this approach doesn't work every time. For example, if the operation inside the lock throws an exception, the key remains locked. For this situation I've added the DefaultLockExpire parameter to the AcquireLock method to expire the "locked" key.
Is there any better solution, or when do we need to acquire locks, like "lock" blocks in multi-threaded programming?
As The Real Bill's answer says, you don't need locks for Redis itself. What the ServiceStack client offers in terms of locking is not for Redis, but for your application. In a C# application you can lock things locally with lock(obj) so that something cannot happen concurrently (only one thread can access the locked section at a time), but that only works if you have one web server. If you want to prevent something happening concurrently across servers, you need a locking mechanism living outside of the web server. Redis is a good fit for this.
We have a case where we check whether a customer already has a shopping cart and, if not, create one. Between checking and creating it, there's a window where another request could also have found that the cart doesn't exist and might proceed to create one. That's a classical case for locking, but a simple lock wouldn't work here because the requests may have arrived at entirely different web servers. So for this we use the ServiceStack Redis client (with some abstraction) to lock using Redis and only allow one request at a time to enter the "create a cart" section.
So to answer your actual question: no, you don't need a lock for getting/setting values to Redis.
I wouldn't use locks for get/set operations. Redis performs those actions atomically, so there is no chance of the value being "changed underneath you" when setting or getting. I've built systems where hundreds of clients update and operate on values concurrently and never needed a lock for those operations (especially an expire).
I don't know how ServiceStack.Redis implements its locking, so I can't say why it is failing. However, I'm not sure I'd trust it, given that no true locking is needed on the Redis side for data operations. Redis is single-threaded, so locking there doesn't make sense.
If you are doing complex operations where you get a value, operate on things based on it, then update it after a while and can't have the value change in the meantime, I'd recommend reading and grokking http://redis.io/topics/transactions to see whether what you want is what Redis is good for, whether your code needs to be refactored to eliminate the problem, or at least to find a better way to do it.
For example, SETNX may be the route you need to get what you want, but without details I can't say whether it will work.
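A rough sketch of the SETNX route using redis-py (key name, TTL and token handling are invented; modern clients spell it SET ... NX EX):

import uuid
import redis

r = redis.Redis()                        # assumed local instance

def try_lock(name, ttl_seconds=30):
    # Acquire a best-effort lock: SET the key only if it does not exist,
    # with an expiry so a crashed holder cannot leave it locked forever.
    token = str(uuid.uuid4())
    return token if r.set(name, token, nx=True, ex=ttl_seconds) else None

token = try_lock("cart:42:creating")
if token:
    try:
        pass                             # critical section, e.g. create the cart
    finally:
        if r.get("cart:42:creating") == token.encode():
            r.delete("cart:42:creating") # best-effort release; a Lua script would
                                         # make the check-and-delete atomic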
As #JulianR says, the locking in ServiceStack.Redis is only for application-level distributed locks (i.e. to replace using a DB or an empty .lock file on a distributed file system), and it only works against other ServiceStack.Redis clients in other processes using the same key/API to acquire the lock.
You would never need to do this for normal Redis operations since they're all atomic. If you want to ensure a combination of Redis operations happens atomically, you would combine them within a Redis transaction, or alternatively you can execute them within a server-side Lua script; both allow atomic execution of batch operations.
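For illustration only (shown with redis-py rather than ServiceStack, since the idea is the same; keys and fields are made up), both options look roughly like this:

import redis

r = redis.Redis()                        # assumed local instance

# Combine several writes into one MULTI/EXEC transaction:
pipe = r.pipeline(transaction=True)
pipe.hset("user:1", "name", "alice")
pipe.sadd("users", "user:1")
pipe.execute()

# Or run the same combination as a server-side Lua script (also atomic):
add_user = r.register_script("""
    redis.call('HSET', KEYS[1], 'name', ARGV[1])
    redis.call('SADD', KEYS[2], KEYS[1])
""")
add_user(keys=["user:1", "users"], args=["alice"])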