Redis - How to use async MULTI?

We are using hiredis from our C++ application via the redisAsyncCommandArgv interface. What we have not been able to figure out is how to execute a bunch of commands in a MULTI-EXEC transaction, since redisAsyncCommandArgv encodes only one command at a time. Can it be used to send all the commands of a transaction in one go? The synchronous API is straightforward, but we cannot use it.
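For reference, this is the single-command pattern we are using today, as a minimal sketch (the key and value are illustrative):

#include <hiredis/async.h>

// Reply callback invoked by the event loop once the reply arrives.
void replyCallback(redisAsyncContext *ac, void *reply, void *privdata) {
    // inspect the redisReply here
}

// Each redisAsyncCommandArgv call encodes and queues exactly one command.
void sendOne(redisAsyncContext *ac) {
    const char *argv[3] = { "SET", "key", "value" };
    size_t argvlen[3] = { 3, 3, 5 };
    redisAsyncCommandArgv(ac, replyCallback, NULL, 3, argv, argvlen);
}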
Any help?

It is not possible to combine MULTI-EXEC with the Redis asynchronous API; you have to choose one or the other.
The commands inside a MULTI-EXEC transaction must always execute sequentially, while the Redis asynchronous API allows commands to be delivered out of order. A transaction makes no sense if its commands are not in the proper sequence, or worse, if the MULTI and EXEC commands themselves arrive out of order.

Related

Why do we need the EVAL command in Redis, if Redis is single-threaded?

One way to execute commands in Redis is via an EVAL script.
Redis uses the same Lua interpreter to run all the commands. Also Redis guarantees that a script is executed in an atomic way: no other script or Redis command will be executed while a script is being executed.
Since Redis is single-threaded, why do we need EVAL to offer atomicity? I would expect that to be implied by the single running thread.
Am I missing something? Apologies if my question is pretty simple; I am quite new to Redis.
Every (data path) command in Redis is indeed atomic. EVAL allows you to compose an "atomic" command out of a script that can include many Redis commands, not to mention control structures and some other utilities that are helpful for implementing server-side logic. To achieve similar "atomicity" across multiple commands, you can also use MULTI/EXEC blocks (i.e., transactions), by the way.
Without an EVAL or a MULTI/EXEC block, your commands will run one after another, but other clients' commands may interleave between them. Using a script or transaction eliminates that.
Redis uses a single thread to execute commands from many different clients. So if you want a group of commands from one client to be executed in sequence, you need a way to direct Redis to do that. That's what EVAL is for. Without it, Redis could interleave the execution of commands from other clients in with yours.
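As a minimal sketch of that composition (assuming a connected synchronous redisContext *c; the script and key name are made up for illustration), here is a bounded counter increment where no other client can run between the GET and the INCR:

#include <hiredis/hiredis.h>

void boundedIncr(redisContext *c) {
    // The whole script runs atomically: no other script or command can
    // execute between the GET and the INCR below.
    const char *script =
        "local v = tonumber(redis.call('GET', KEYS[1]) or '0') "
        "if v < tonumber(ARGV[1]) then "
        "  return redis.call('INCR', KEYS[1]) "
        "end "
        "return v";
    redisReply *reply = (redisReply *)redisCommand(
        c, "EVAL %s 1 %s %s", script, "counter", "100");
    freeReplyObject(reply);
}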

Does the Redis server execute client commands in queue order?

First of all, my understanding is that Redis is a single-process program and that all commands are executed in first-in-first-out order. If that were the whole story, we wouldn't need the WATCH command; but apparently that is not the whole story.
I want to find out more about the order of execution of Redis commands. Thanks in advance.
You are correct: the Redis server executes commands in the order they are received, independently of the client.
That said, it is worth knowing about features such as transactions and pipelining, which do not directly change the execution order (not entirely true for transactions, as you will see below).
Transactions
In a transaction, "all the commands in a transaction are serialized and executed sequentially". All the commands are executed as a single isolated operation.
So while the commands of a transaction are executing, it is not possible for commands from another client to run in the middle of the transaction.
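For illustration, a minimal sketch of such a transaction with the synchronous hiredis API (assuming a connected redisContext *c; key names are made up):

#include <hiredis/hiredis.h>

void transfer(redisContext *c) {
    freeReplyObject((redisReply *)redisCommand(c, "MULTI"));
    // These two replies are just +QUEUED; nothing has executed yet.
    freeReplyObject((redisReply *)redisCommand(c, "DECRBY %s %d", "acct:a", 10));
    freeReplyObject((redisReply *)redisCommand(c, "INCRBY %s %d", "acct:b", 10));
    // EXEC runs both queued commands back to back; no other client's
    // command can execute between them.
    redisReply *reply = (redisReply *)redisCommand(c, "EXEC");
    freeReplyObject(reply);
}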
Pipelining
As described above, operations are executed in order (FIFO). Pipelining does not change that; what is different is that the client can send multiple commands without waiting for each response.
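A minimal sketch of pipelining with hiredis (assuming a connected redisContext *c; the commands are made up):

#include <hiredis/hiredis.h>

void pipelined(redisContext *c) {
    // The three commands are buffered and written together.
    redisAppendCommand(c, "SET %s %s", "k1", "v1");
    redisAppendCommand(c, "INCR %s", "hits");
    redisAppendCommand(c, "GET %s", "k1");

    redisReply *reply;
    for (int i = 0; i < 3; ++i) {
        // Replies arrive in the same FIFO order the commands were sent,
        // but other clients' commands may run between ours on the server.
        redisGetReply(c, (void **)&reply);
        freeReplyObject(reply);
    }
}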
I'll let you look into the details of all this and test it in your application if needed.

Redis: Using lua and concurrent transactions

Two issues
Do lua scripts really solve all cases for redis transactions?
What are best practices for asynchronous transactions from one client?
Let me explain the first issue.
Redis transactions are limited: there is no way to unwatch specific keys, all keys are unwatched upon EXEC, and we are limited to a single ongoing transaction on a given client.
I've seen threads where many Redis users claim that Lua scripts are all they need. Even the official Redis docs state that transactions may be removed in favour of Lua scripts. However, there are cases where this is insufficient, such as the most standard case of all: using Redis as a cache.
Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:
Check cache -> miss
Load data from database
Store in redis
However, what if, between step 2 (loading the data) and step 3 (storing it in Redis), the data is updated by another client?
The data stored in Redis would then be stale. So... we use a Redis transaction, right? We WATCH the key before loading from the database, and if the key is updated somewhere else before we store, the store fails. Great! However, an atomic Lua script cannot load data from an external database, so Lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
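To make the intended flow concrete, a minimal sketch with the synchronous hiredis API (redisContext *c assumed connected; loadFromDatabase is a hypothetical stand-in for step 2):

#include <hiredis/hiredis.h>
#include <string>

std::string loadFromDatabase(const std::string &key);  // hypothetical helper

bool fillCache(redisContext *c, const char *key) {
    freeReplyObject((redisReply *)redisCommand(c, "WATCH %s", key));
    std::string value = loadFromDatabase(key);  // the slow external read
    freeReplyObject((redisReply *)redisCommand(c, "MULTI"));
    freeReplyObject((redisReply *)redisCommand(c, "SET %s %s", key, value.c_str()));
    // EXEC replies nil if the key changed after WATCH, so a concurrent
    // update aborts the cache write instead of storing stale data.
    redisReply *reply = (redisReply *)redisCommand(c, "EXEC");
    bool stored = (reply != NULL && reply->type != REDIS_REPLY_NIL);
    freeReplyObject(reply);
    return stored;
}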
Moving on to the 2nd issue (asynchronous transactions)
Let's say we have a socket.io cluster which processes various messages, and requests for a game, for high speed communication between server and client. This cluster is written in node.js with appropriate use of promises and asynchronous concepts.
Say two requests hit a server in our cluster, each requiring data to be loaded and cached in Redis. Using the transaction from above, multiple keys could be watched, and multiple MULTI->EXEC transactions would run, overlapping, on one Redis connection. Once the first EXEC runs, all watched keys are unwatched, even if the other transaction is still in progress. This may allow the second transaction to succeed when it should have failed.
These overlaps could happen in totally separate requests happening on the same server, or even sometimes in the same request if multiple data types need to load at the same time.
What is the best practice here? Do we need to create a separate Redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if that is the case.
As an alternative, we could use Redlock / mutex locking instead of Redis transactions, but this is slow by comparison.
Any help appreciated!
I received the following after my query was escalated to Redis engineers:
Hi Jeremy,
Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.
Using LUA is not a good fit for this problem.
Best Regards,
The Redis Labs Team
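In other words, a dedicated connection per optimistic transaction, roughly like the sketch below (connection pooling and error handling are omitted; the names are illustrative):

#include <hiredis/hiredis.h>
#include <string>

std::string loadFromDatabase(const std::string &key);  // hypothetical helper

// Each transaction owns its connection, so one EXEC cannot clear the
// WATCH state of another in-flight transaction.
bool fillCacheIsolated(const char *key) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) { if (c) redisFree(c); return false; }
    freeReplyObject((redisReply *)redisCommand(c, "WATCH %s", key));
    std::string value = loadFromDatabase(key);
    freeReplyObject((redisReply *)redisCommand(c, "MULTI"));
    freeReplyObject((redisReply *)redisCommand(c, "SET %s %s", key, value.c_str()));
    redisReply *r = (redisReply *)redisCommand(c, "EXEC");
    bool ok = (r != NULL && r->type != REDIS_REPLY_NIL);  // nil => WATCH tripped
    freeReplyObject(r);
    redisFree(c);
    return ok;
}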

Approach to Redis transactions with "rollback"

I am currently implementing a server system which has both an SQL database and a Redis datastore. The data written to Redis heavily depends on the SQL data (cache, objects representing logic entities defined by a number of SQL models and their relationships).
While looking for an error-handling methodology to wrap client requests with, something similar to SQL's transaction commit & rollback (Redis doesn't support rollbacks), I came up with a mechanism that may serve this purpose, and I'd appreciate input on it.
Basically, I intend to wrap (with before/after middleware) every client request in both an SQL transaction and a Redis MULTI command (which queues commands until EXEC or DISCARD is invoked), and to allow both transactions to commit only if the request was processed successfully.
The problem is that once you start a Redis MULTI command, you cannot perform any reads/writes and actually use their values while processing a request. I reduced the problem to reads only, since depending on just-written values can be optimized out.
My (simple) solution: split the Redis connection in two, a writer and a reader. The writer connection is the one initialized with the MULTI command and executed/discarded at the end. All writing is performed through it, while reading is done through the reader (and executes instantly).
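A minimal sketch of the split (host, port, and keys are illustrative; error handling is omitted):

#include <hiredis/hiredis.h>

void handleRequest() {
    redisContext *reader = redisConnect("127.0.0.1", 6379);
    redisContext *writer = redisConnect("127.0.0.1", 6379);

    // Open the request-scoped transaction on the writer only.
    freeReplyObject((redisReply *)redisCommand(writer, "MULTI"));

    // Reads go through the reader and return real values immediately.
    redisReply *cached = (redisReply *)redisCommand(reader, "GET %s", "session:42");
    freeReplyObject(cached);

    // Writes are queued on the writer (each reply is just +QUEUED).
    freeReplyObject((redisReply *)redisCommand(writer, "SET %s %s", "session:42", "data"));

    // On request success, commit; on failure, send DISCARD instead.
    freeReplyObject((redisReply *)redisCommand(writer, "EXEC"));

    redisFree(reader);
    redisFree(writer);
}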
The downside: as opposed to SQL, you can't rely on values written within the same request (transaction). Then again, that is usually quite easy to overcome.

Redis Multi/Exec vs Pipelining performance

I understand that, functionally, Multi/Exec and pipelining are designed to serve different purposes and features.
However, considering only the performance of block writes, which would perform better? My understanding is that Multi/Exec creates a single request, while pipelining creates individual requests but avoids the round-trip time (RTT) per command.
Multi/Exec is slower.
Multi/Exec also creates individual requests; they are stored on the server side and executed one by one when the server receives EXEC, and there are two extra requests on top, MULTI and EXEC themselves.
Every request executed in a transaction also checks whether the watched keys have changed, which pipelining never does.
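A rough sketch of the wire-level difference with hiredis (connected redisContext *c assumed; keys are made up):

#include <hiredis/hiredis.h>

void compare(redisContext *c) {
    // Plain pipelining: two commands out, two replies back.
    redisAppendCommand(c, "SET %s %s", "a", "1");
    redisAppendCommand(c, "SET %s %s", "b", "2");

    // The same writes as a transaction cost two extra commands, and the
    // server queues them until EXEC arrives.
    redisAppendCommand(c, "MULTI");
    redisAppendCommand(c, "SET %s %s", "a", "1");
    redisAppendCommand(c, "SET %s %s", "b", "2");
    redisAppendCommand(c, "EXEC");

    redisReply *reply;
    // 2 plain replies, then MULTI's +OK, two +QUEUED, and EXEC's array.
    for (int i = 0; i < 6; ++i) {
        redisGetReply(c, (void **)&reply);
        freeReplyObject(reply);
    }
}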