There's one thing I don't get when reading redis transaction docs:
All the commands in a transaction are serialized and executed
sequentially. It can never happen that a request issued by another
client is served in the middle of the execution of a Redis
transaction. This guarantees that the commands are executed as a
single isolated operation.
From: https://redis.io/topics/transactions
The part that bugs me is the claim that a request issued by another client can never be served in the middle of a transaction. Does it mean that when setting the value of key A in one request, another request that wants to set a value for key B will be blocked until the first one finishes?
No, opening a transaction with MULTI does not block other concurrent connections to Redis.
https://redis.io/topics/transactions#usage
A Redis transaction is entered using the MULTI command. The command always replies with OK. At this point the user can issue multiple commands. Instead of executing these commands, Redis will queue them. All the commands are executed once EXEC is called.
Your commands inside a transaction are collected until you close the transaction; afterwards the commands are executed all at once, without interruption. This means another connection's commands can still execute before the commands of the still-open transaction, as long as they arrive before the transaction is closed, even if they were submitted after it was opened.
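For illustration, a minimal sketch with the redis-py client (library, server location, and key names are assumptions, not from the question) showing that commands are only queued until EXEC:

```python
# Minimal sketch using redis-py (assumed: pip install redis, local server).
# A MULTI/EXEC block only queues commands; other clients are not blocked
# while the queue is being built.
import redis

r = redis.Redis(host="localhost", port=6379)

pipe = r.pipeline(transaction=True)  # will wrap the commands in MULTI ... EXEC
pipe.set("A", 1)
pipe.set("B", 2)
# Nothing has executed yet; another client's SET would be served normally here.
results = pipe.execute()             # sends MULTI, both SETs, then EXEC
print(results)                       # e.g. [True, True]
```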
Related
First of all, my understanding is: Redis is a single-process program, and all commands are executed in first-in-first-out order. If that were the whole story we wouldn't need the WATCH command, but clearly that is not the case.
I want to find out more about the order of execution of Redis commands. Thanks in advance.
You are correct: the Redis server will execute commands in the order they are received, independently of the client.
That said, it is interesting to know that there are features like transactions and pipelining that do not have a direct impact on the execution order (not entirely, in the case of a transaction, as you will see below).
Transactions
In a transaction, "all the commands in a transaction are serialized and executed sequentially". All the commands are executed as a single isolated operation.
So when you are running commands in a transaction, it is not possible for commands from another client to be executed in the middle of that transaction.
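This is also where the WATCH command from your question comes in: serialization alone does not protect a read-modify-write spread over two round trips. A hedged sketch with redis-py (the key name and helper function are illustrative, not from the question):

```python
# Optimistic locking with WATCH (redis-py assumed; names are made up).
# Between GET and SET another client may change the key; WATCH makes
# EXEC fail in that case so we can retry.
import redis

r = redis.Redis()

def add_to_counter(key: str, amount: int) -> int:
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                # EXEC will abort if `key` changes
                current = int(pipe.get(key) or 0)
                pipe.multi()                   # start queuing the write
                pipe.set(key, current + amount)
                pipe.execute()                 # raises WatchError on interference
                return current + amount
            except redis.WatchError:
                continue                       # another client wrote `key`; retry
```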
Pipelining
As described above, operations are executed in order (FIFO). Pipelining does not change that; what is different is that the client can send multiple commands without waiting for each response.
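A small sketch of the difference (redis-py assumed; key names are made up): the same FIFO execution, just one round trip:

```python
# Plain pipelining, no MULTI/EXEC (redis-py assumed; keys are made up).
# Commands still execute in FIFO order; the client just sends them in
# one batch instead of waiting for each reply.
import redis

r = redis.Redis()

pipe = r.pipeline(transaction=False)  # pipelining only, no transaction
for i in range(3):
    pipe.set(f"key:{i}", i)
pipe.get("key:0")
print(pipe.execute())  # replies arrive in the same order the commands were sent
```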
I'll let you look into the details of all this and test it in your application if needed.
We are trying to implement caching for our multi-tenant application. We are planning to create a new Redis DB for each tenant.
We have one scenario where we need to use Redis Transactions. While going through this post https://redis.io/topics/transactions, we found that
All the commands in a transaction are serialized and executed
sequentially. It can never happen that a request issued by another
client is served in the middle of the execution of a Redis
transaction. This guarantees that the commands are executed as a
single isolated operation.
Does this blocking apply only at the database level, or at the full instance level?
The guarantee you quoted applies to the instance, not the database. A command for DB 2 will not run in the middle of a transaction for DB 1.
You can find more information about multiple databases (including an argument by the creator of Redis against using them at all) in this question.
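As an illustration, a minimal sketch (redis-py assumed; db numbers and key names are made up) of the instance-level guarantee with two tenants on different logical databases:

```python
# Two clients on different logical databases of the same instance
# (redis-py assumed; db numbers and keys are made up). The MULTI/EXEC
# block on db 0 runs as one unit on the shared single-threaded server;
# the db 1 command is served before or after it, never in the middle.
import redis

tenant_a = redis.Redis(db=0)
tenant_b = redis.Redis(db=1)

pipe = tenant_a.pipeline(transaction=True)
pipe.set("counter", 1)
pipe.incr("counter")
pipe.execute()               # atomic at the instance level

tenant_b.set("counter", 42)  # never interleaved with the transaction above
```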
I want to be able to access a very recent copy of my master Redis server's keys. It doesn't have to be completely up to date, as I will be polling the read-only copy, but I don't want the transactions and Lua scripts I run on the master instance to block the read-only instance while I SCAN through its keys.
Can anyone confirm/deny this behaviour?
It won't block the slaves from anything, but while the master is busy processing your logic, replication will be stopped. Once the logic ends (possibly generating writes), replication will resume with the previously buffered contents plus any new ones.
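For what it's worth, a hedged sketch of the polling side (redis-py assumed; the replica host is hypothetical). SCAN is cursor-based and non-blocking, so it only suffers the replication lag described above:

```python
# Poll a read-only replica with SCAN (redis-py assumed; host hypothetical).
# SCAN iterates in small batches and never blocks the server for long;
# at worst the data lags while the master is busy, as noted above.
import redis

replica = redis.Redis(host="replica-host", port=6379)

for key in replica.scan_iter(count=100):  # cursor-based, ~100 keys per batch
    print(key)
```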
I want to implement an audit log using triggers that fire on created, changed, and deleted data to store some values. Those triggers should be able to use the user IDs of whoever made the changes, which are managed by the web application. I have some ideas on providing this data, but I don't seem to fully understand the execution context of a trigger. I've read through the PostgreSQL docs Overview of Trigger Behavior and others, but my question doesn't seem to be answered.
What I want to know is how a client session with one running transaction interacts with trigger execution, what the lifetime of each is, and how they depend on each other. From my understanding, triggers are executed within the database independently from the client session which created the event which led to trigger execution. Is that correct? That would mean triggers and their processing wouldn't impact performance of the client request, and the client can close the session at any time. If both are independent, how would a trigger get notified about a client rolling back a transaction, which would logically mean that no data got changed at all? Or are triggers only executed after committing a transaction, because they run independently?
Or are triggers executed asynchronously within the client session which created the events that led to trigger execution? That would mean that if the client closes its session for any reason, the trigger would abort too. Their changes would be directly bound to the client's transaction and could be rolled back, too.
I need to understand the behavior to know what I would like to do in another question.
Thanks for your input!
From my understanding, triggers are executed within the database
independently from the client session which created the event which
led to trigger execution. Is that correct? That would mean triggers
and their processing wouldn't impact performance of the client request,
and the client can close the session at any time
No, they totally depend on the client session: they run as part of the transaction, which is itself tied to the session.
See this excerpt from CREATE TRIGGER (9.1):
They can be fired either at the end of the statement causing the
triggering event, or at the end of the containing transaction; in the
latter case they are said to be deferred
From your other question it appears you're using 8.4, which doesn't have deferred triggers, so it's even simpler: triggers always run at the end of the triggering statement, which means before the acknowledgment of execution is sent by the server to the client.
A COMMIT immediately following would be a new instruction, and could not be executed before the trigger is finished.
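To make the dependency concrete, here is a hedged sketch using psycopg2 (table, trigger, and connection details are all made up) showing that a trigger's writes live and die with the client's transaction:

```python
# Hedged sketch with psycopg2; table, function, and connection details are
# hypothetical. It shows that a trigger fires inside the client's
# transaction: rolling back the transaction discards what the trigger wrote.
import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
cur = conn.cursor()

cur.execute("""
    CREATE TABLE accounts (id serial PRIMARY KEY, balance int);
    CREATE TABLE audit_log (id serial PRIMARY KEY, action text,
                            at timestamptz DEFAULT now());

    CREATE FUNCTION log_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO audit_log (action) VALUES (TG_OP);
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER accounts_audit
        AFTER INSERT OR UPDATE OR DELETE ON accounts
        FOR EACH ROW EXECUTE PROCEDURE log_change();
""")
conn.commit()

# The trigger fires at the end of this INSERT statement, in this transaction.
cur.execute("INSERT INTO accounts (balance) VALUES (100)")
cur.execute("SELECT count(*) FROM audit_log")
print(cur.fetchone()[0])  # 1 -- the audit row is already visible here

conn.rollback()           # roll back the client's transaction...

cur.execute("SELECT count(*) FROM audit_log")
print(cur.fetchone()[0])  # 0 -- ...and the trigger's insert is gone with it
```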
Scenario:
We have a WCF workflow with a client that does NOT use TransactionFlow.
The workflow contains several sequential TransactedReceiveScopes (using content-based correlation).
The TransactedReceiveScopes contain custom db operations.
Observations:
When we run SQL profiler against the first call, we see all the custom db calls, and the SaveInstance call in the profile trace.
We've noticed that, even though the SendReply is at the very end of the TransactedReceiveScope, sometimes the SendReply occurs a good 10 seconds before the transaction gets committed.
We tried changing TimeToPersist and TimeToUnload to zero, but that had no effect. (The trace shows the SaveInstance happening immediately anyway; it is the commit that seems to be delayed.)
Questions:
Are our observations correct?
At what point is the transaction committed? Is this like garbage collection, i.e. does it commit some time later, when the system is not busy?
Is there any way to control the commit delay, or is the only way to use TransactionFlow from the client (and then it should all commit when the client commits, including the persist)?
The TransactedReceiveScope commits the transaction when the body is completed, but as all execution is done through the scheduler, that could be some time later. It is not related to garbage collection, and there is no real way to influence it other than to avoid a busy machine and a lot of other parallel activities that could also be in the execution queue.