I have read in most places that Redis doesn't roll back transactions, and it is also mentioned in their documentation. My questions:
Why does it claim to be atomic then?
Refer to the screenshot below. It does indeed behave like a rollback mechanism (the INCR a did not get processed or executed). What am I missing in my understanding of this behavior?
I am using version 3.0.
Atomic means that either all or none of the commands are executed. If there's an error, a data store with rollback would execute none; one without rollback would still execute the remaining commands, error or no. So in either case it's still an atomic operation.
This is explained in the documentation. There are two kinds of error:
A command may fail to be queued, so there may be an error before EXEC is called. For instance the command may be syntactically wrong...
A command may fail after EXEC is called...
In the first case, matching your example of an invalid command, Redis "will refuse to execute the transaction returning also an error during EXEC, and discarding the transaction automatically."
By contrast, "errors happening after EXEC instead are not handled in a special way: all the other commands will be executed even if some command fails during the transaction." This is the no-rollback-but-still-atomic scenario.
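To see that second case concretely, here is a minimal sketch using the redis-py client (the key names are made up): a command that fails only at execution time does not stop the other queued commands, and nothing is rolled back.

    import redis

    r = redis.Redis(decode_responses=True)
    r.set("a", "not-a-number")            # INCR on this value can only fail at EXEC time

    pipe = r.pipeline(transaction=True)   # queued commands are wrapped in MULTI/EXEC
    pipe.set("b", "1")
    pipe.incr("a")                        # fails only when it actually runs
    pipe.incr("b")
    results = pipe.execute(raise_on_error=False)

    print(results)     # [True, ResponseError(...), 2] -- the failed command is reported,
    print(r.get("b"))  # '2'              -- but the commands around it still ran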
There's one thing I don't get when reading redis transaction docs:
All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
From: https://redis.io/topics/transactions
The part that bugs me is the claim that a request issued by another client can never be served in the middle of a transaction. Does it mean that when one request is setting the value of key A, another request that wants to set a value for key B will be blocked until the first one finishes?
No, opening a transaction with MULTI does not block other concurrent connections to Redis.
https://redis.io/topics/transactions#usage
A Redis transaction is entered using the MULTI command. The command always replies with OK. At this point the user can issue multiple commands. Instead of executing these commands, Redis will queue them. All the commands are executed once EXEC is called.
Your commands inside a transaction are collected until you close the transaction with EXEC. Afterwards the commands are executed all at once, without interruption. This means other connections can still get their commands executed before those of the still-open transaction, even if they were submitted later, as long as they arrive before the transaction is closed.
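For illustration, here is a small sketch with the redis-py client (key names made up): while one client is still queueing commands in an open transaction, a second client is not blocked at all.

    import redis

    client_a = redis.Redis(decode_responses=True)
    client_b = redis.Redis(decode_responses=True)

    tx = client_a.pipeline(transaction=True)   # will be sent as MULTI ... EXEC
    tx.set("A", "1")                           # only queued, nothing executed yet

    client_b.set("B", "2")                     # another connection is served immediately
    print(client_b.get("B"))                   # '2' -- no blocking

    tx.set("A:done", "yes")
    tx.execute()                               # only now do the queued commands run back to back
    print(client_a.get("A"))                   # '1'

(redis-py actually buffers the queued commands client-side and only sends MULTI, the commands and EXEC together when execute() is called, but the point is the same: other clients are never blocked while the transaction is being built up.)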
First of all, my understanding is that Redis is a single-process program and all commands are executed in first-in-first-out order. If that were the case, we wouldn't need the WATCH command, but that is not the case.
I want to find out more about the order of execution of Redis commands. Thanks in advance.
You are correct: the Redis server will execute commands in the order they are received, regardless of which client sent them.
That said, it is interesting to know about features like transactions and pipelining, which do not have a direct impact on the execution order (not entirely true for transactions, as you will see below).
Transactions
In a transaction, "all the commands in a transaction are serialized and executed sequentially". All the commands are executed as a single isolated operation.
So while the commands of a transaction are being executed, it is not possible for commands from another client to be executed before the transaction has finished.
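To connect this to the WATCH command mentioned in the question: FIFO execution makes each individual command atomic, but a read followed later by a write is two separate commands, and another client can slip in between them. Here is a hedged sketch of the usual optimistic-locking pattern with redis-py, using a made-up key name:

    import redis

    r = redis.Redis(decode_responses=True)
    r.set("balance", "100")

    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("balance")               # EXEC will abort if this key changes
                current = int(pipe.get("balance"))  # the read happens outside MULTI
                pipe.multi()                        # start queueing the write
                pipe.set("balance", current - 10)
                pipe.execute()                      # raises WatchError if "balance" changed
                break
            except redis.WatchError:
                continue                            # someone else modified the key; retry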
Pipelining
As described above, the operations will be executed in order (FIFO); using pipelining does not change that. What is different is that the client is able to send multiple commands without waiting for each response.
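As a rough sketch with redis-py (made-up key names): a non-transactional pipeline just batches commands on the socket; they are still executed in FIFO order on the server, the client simply doesn't wait for each individual reply.

    import redis

    r = redis.Redis(decode_responses=True)

    pipe = r.pipeline(transaction=False)   # no MULTI/EXEC, just batched commands
    for i in range(3):
        pipe.set(f"key:{i}", i)            # buffered client-side, nothing sent yet
    replies = pipe.execute()               # one round trip; replies come back in order
    print(replies)                         # [True, True, True]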
I'll let you look into the details of all this and test it in your application if needed.
I am connecting to a SQL Server with autocommit turned off. If everything is successful, I call commit; otherwise, I just exit. Do I need to explicitly call rollback, or will the transaction be rolled back automatically when the connection is closed without committing?
In case it matters, I'm executing the SQL commands from within proc sql in SAS.
UPDATE: It looks like SAS may call commit automatically at the end of the proc sql block if rollback is not called. So in this case, rollback would be more than good practice; it would be necessary.
Final Update: We ended up switching to a new system, which seems to behave the opposite of our previous one: if the transaction ends without an explicit commit or rollback, it rolls back. So the advice given below is definitely correct: always explicitly commit or roll back.
It should roll back on close of connection. Emphasis on should for a reason :-)
Proper transaction and error handling should have you commit when the conditions for commit are met and roll back when they aren't. I think it is a great habit to always commit or roll back explicitly when done and not rely on the disconnect, etc. All it takes is one mistake or one incorrectly closed (or never closed) session to create a blocking-chain nightmare for everyone :-)
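The same habit, sketched in Python with pyodbc rather than SAS proc sql (the connection string and table names are made up): commit when the conditions are met, roll back otherwise, and never rely on the disconnect to clean up.

    import pyodbc

    conn = pyodbc.connect("DSN=mydb", autocommit=False)
    try:
        cur = conn.cursor()
        cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = ?", 1)
        cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = ?", 2)
        conn.commit()        # conditions for commit met: say so explicitly
    except Exception:
        conn.rollback()      # don't leave it to the driver or the server on disconnect
        raise
    finally:
        conn.close()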
Under what circumstances would I see the above message? I have a single call to SQL Server which is wrapped in a call to TransactionScope. In our development and QA environments MSDTC is turned off and the call succeeds fine. However, in our production environment with MSDTC turned on we are failing with this call. Is there something that would cause this when I am sure we are not looking at a distributed transaction call at all?
OK, so the problem was that we had a CreateTransaction call around the call AND a TransactionScope, so we DID have two transactions. I didn't think that would cause this type of problem until I realized that when there was an error we would end up with two ROLLBACK calls; the second one would trigger the above error message and effectively hide the first. We found this by running SQL Profiler and looking for "User Error Messages".
I'm looking for a way to continue execution of a transaction despite errors while inserting low-priority data. It seems like real nested transactions could be a solution, but they aren't supported by SQL Server 2005/2008. Another solution would be logic that decides whether an error is critical or not, but it seems that's not possible either.
Here's more detail on my scenario:
Data is periodically inserted into the database using ADO.NET/C#, and while some of it is vital, some could also be missing without problems. When the inserts are done, some computations are made on the data (both the vital and the non-vital parts). This whole process is inside a transaction so everything stays in sync.
Currently, transaction savepoints are used, and partial rollbacks are made on exceptions which occur during non-vital inserts. However, this doesn't work for "batch-abort" errors, which automatically roll back the entire transaction. I understand some errors are critical, but things like failed casts are considered by SQL Server to be batch-abort errors. (Info on batch errors) I'm trying to prevent these errors from bringing down the whole insert if they occur on low-priority data.
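For reference, here is roughly what the current approach looks like, sketched in Python with pyodbc rather than the actual ADO.NET/C# code (table, procedure and savepoint names are all made up). It handles statement-level errors fine; the problem is that a batch-aborting error dooms the whole transaction before the partial rollback can help.

    import pyodbc

    conn = pyodbc.connect("DSN=mydb", autocommit=False)
    cur = conn.cursor()
    try:
        cur.execute("INSERT INTO vital_data (id, val) VALUES (?, ?)", 1, "must succeed")

        cur.execute("SAVE TRANSACTION before_nonvital")
        try:
            cur.execute("INSERT INTO optional_data (id, val) VALUES (?, ?)", 1, "nice to have")
        except pyodbc.Error:
            # Partial rollback: undo only the non-vital insert and carry on.
            # This works for statement-level errors but not for batch-abort errors,
            # which roll back the entire transaction regardless.
            cur.execute("ROLLBACK TRANSACTION before_nonvital")

        cur.execute("EXEC recompute_totals")   # the computations on whatever made it in
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()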
If what I'm describing isn't possible, I'm willing to consider any alternative way to achieve data integrity but allow the failure of the non-vital inserts.
Thanks for your help.
Unfortunately, this can't be done as you describe (full support for nested transactions would be key here). Here are a couple of things I can think of that have been used to get around this in the past:
The best option would probably be to separate the commands into important and non-important commands that could be executed distinctly; naturally this would require that they not be order-dependent on each other.
Could also use a messaging based approach (see Service Broker) where you would execute the primary commands inline and push the non-primary commands onto a queue for execution later/separately. The push to the queue would be transactional within the batch, but the execution of the command when you pop off the queue would be separate. This too would require they not be order-dependent on each other.
If order-dependent, you could use the messaging approach for everything, which would ensure order and could have separate messages per operation, then grouping them together (via conversation groups) would allow you to pull them off the queue in order as well and use separate transactions for each 'type' of operation (i.e. primary vs. non-primary). This would require some special coding on your part if all the grouped messages must be a single autonomous operation, but could be done.
I hesitate to even mention this option, because it is a terrible option, but for full disclosure I suppose you could consider it at your discretion if you think it fits (but it is definitely not an architecture that would apply to almost any scenario). You could use xp_cmdshell to call out to the command line and execute sqlcmd/osql for the non-critical tasks - this sqlcmd execution would be in a separate transaction from the module you are executing from, and simply ignoring the xp_cmdshell failure should allow the primary batch to continue.
Those are some ideas...
Can you do your import into a temporary location, using transactions only for the important parts? Once the temp location is loaded, having absorbed any non-critical errors, you can copy the data into its final destination in a single transaction. It depends on the nature of the work you are doing, but it's potentially a viable option.
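A sketch of that idea in Python with pyodbc (table names, connection string and sample rows are made up): absorb failures while loading the staging table outside the main transaction, then do the vital inserts and the copy from staging in one transaction.

    import pyodbc

    non_vital_rows = [(1, "ok"), (2, None)]     # made-up sample data; some rows may fail
    vital_rows = [(1, "must succeed")]

    conn = pyodbc.connect("DSN=mydb", autocommit=True)
    cur = conn.cursor()

    # Phase 1: load staging with no surrounding transaction, skipping bad low-priority rows.
    for row in non_vital_rows:
        try:
            cur.execute("INSERT INTO staging_data (id, val) VALUES (?, ?)", *row)
        except pyodbc.Error:
            pass                                 # non-vital, so just drop it

    # Phase 2: one transaction for the vital inserts plus the copy into the final table.
    conn.autocommit = False
    try:
        for row in vital_rows:
            cur.execute("INSERT INTO vital_data (id, val) VALUES (?, ?)", *row)
        cur.execute("INSERT INTO final_data (id, val) SELECT id, val FROM staging_data")
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()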