I have a series of data-modifying operations to perform, such as:
1. update table_a set value=1 where id=1
2. update table_b set value=2 where id=1
3. update table_c set value=3 where id=1
and I want to ensure that all three operations complete. I know that using a transaction can guarantee that either all of them are performed or none of them are, but my point is that all three must eventually be performed. After the first SQL statement runs, the app instance may crash and the other two would be missed.
Note that this is a distributed environment; perhaps another app instance could take over the unfinished SQL, but how can I do that?
Can I use a stored procedure, so the app instance only fires the stored procedure and the database finishes all the SQL?
If the app instance suddenly crashes while the transaction is being performed, will that lead to a deadlock?
A deadlock is not the same thing as a request that crashes before it finishes executing. If your request crashes in the middle of a transaction, it will not lead to a deadlock.
It is generally better to use stored procedures, but that will not help you in this specific case.
What I would suggest is indeed the use of a transaction, with a TRY...CATCH to roll back the transaction in case of failure.
Something like this:
BEGIN TRY -- start of try
    BEGIN TRANSACTION; -- start of transaction
    update table_a set value=1 where id=1
    update table_b set value=2 where id=1
    update table_c set value=3 where id=1
    COMMIT TRANSACTION; -- everything went OK, we commit
END TRY
BEGIN CATCH -- an error happened, we roll back
    PRINT N'Unexpected error';
    ROLLBACK TRANSACTION;
END CATCH
You can check more complete examples here
If an app is performing a transaction on a database server, and the app crashes (abruptly disconnects from the database) before committing the transaction, the database server rolls back the transaction. The disconnection does not leave the database in an unusable (potentially deadlocked) state.
So your database contents won't reflect any of your three UPDATE operations when your app crashes during your transaction. It will just lose the transaction in progress.
How to handle this potential failure mode?
Reduce the probability of a crash during a transaction. Try to avoid doing things in your app that could make it crash while your transaction is in progress. For example, if you get data from some other server or device, get it all before you begin your transaction. This solution is usually good enough for production apps.
Rig up some sort of way for your app, upon restarting, to find out the most recent successful transaction. One good way? Add a column like this to one of your tables: (this is a MySQL thing.)
last_update_timestamp TIMESTAMP DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,
This causes every UPDATE operation on a row to -- automatically -- put NOW() into that row's last_update_timestamp column. Then, when your crashed app restarts, you can do
SELECT MAX(last_update_timestamp) FROM table
and you'll know when the most recent successful update occurred. This automatic update also gets rolled back if a transaction is rolled back. If you know when the last successful update occurred, your app may be able to redo the one that was rolled back by the crash.
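For instance, assuming the column is added to each of the three tables from the first question, the restarted app could check how far the last batch got with something like this (a sketch):
SELECT
    (SELECT MAX(last_update_timestamp) FROM table_a) AS a_updated_at,
    (SELECT MAX(last_update_timestamp) FROM table_b) AS b_updated_at,
    (SELECT MAX(last_update_timestamp) FROM table_c) AS c_updated_at;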
If you choose to build a redo-transaction capability, be sure to build it so you can test it! if (testingAppCrash) crashNow = 1 / 0; might do the trick in your app.
Related
I'm trying to update a table which is In-Memory OLTP (memory-optimized). There is a scenario where we may have to update the same row in parallel. During a concurrent update of the same record I get the error reported below. Here is my sample update statement.
In SQL window 1 I execute the command below; at the same time, in window 2, I execute the second UPDATE command.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRANSACTION
BEGIN TRY
UPDATE [TestInmemory] SET CreatedDate = GETDATE() WHERE Id = 112
WAITFOR DELAY '00:00:30'
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE( )
ROLLBACK TRANSACTION
END CATCH
Window 2:
UPDATE [TestInmemory] SET CreatedDate = GETDATE() WHERE Id = 112
Now I am getting the error reported below. The same works for a normal table: the second window waits for the first window's transaction to complete. How do I get at least the same behavior for a memory-optimized table?
System.Data.SqlClient.SqlException (0x80131904): The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted. The statement has been terminated.
at System.Data.SqlClient.SqlCommand.<>c.b__126_0(Task`1 result)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
Slightly different scenario, but I am seeing this same error as a result. It started to happen right after I reworked the table as a memory-optimized table. This looks buggy to me. We should write up a repro and tell Microsoft, if people haven't done so already.
Kinda sad, because they made a huge deal about Hekaton and this was supposed to be their answer to locking problems.
I found the answer to this. Long story short, an explicit transaction (BEGIN TRAN) against a memory-optimized table is not supported unless you use snapshot isolation.
Solution 1:
ALTER DATABASE CURRENT
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
This is probably the most convenient option. Running the ALTER DATABASE above tells your database to automatically use snapshot isolation for all your memory-optimized tables in one shot. It is a one-time, set-it-and-forget-it option.
Solution 2:
Use WITH (SNAPSHOT) after the table name in every query, much like you would use WITH (NOLOCK). An example follows below.
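For illustration, a minimal sketch of the UPDATE from the question with the table hint applied (the table name and Id value are taken from the question):
BEGIN TRANSACTION;
UPDATE [TestInmemory] WITH (SNAPSHOT)
SET CreatedDate = GETDATE()
WHERE Id = 112;
COMMIT TRANSACTION;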
Read this link top to bottom and it explains everything.
https://learn.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transactions-with-memory-optimized-tables?view=sql-server-ver16
I have the following statement in my PostgreSQL 10.5 database, which I execute in a repeatable read transaction:
delete from task
where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
)
returning
    task.task_id,
    task.created_at
Unfortunately, when I run it, I sometimes get:
[67] ERROR: could not serialize access due to concurrent update
[67] STATEMENT: delete from task
where task.task_id = (
select task.task_id
from task
order by task.created_at asc
limit $1
for update skip locked
)
returning
task.task_id,
task.created_at
which means the transaction rolled back because some other transaction modified the record in the meantime. (I think?)
I don't quite understand this. How could a different transaction modify a record that was selected with for update skip locked, and deleted?
This quote from the manual discusses your case exactly:
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands behave the same as SELECT in terms of searching for target rows: they will only find target rows that were committed as of the transaction start time. However, such a target row might have already been updated (or deleted or locked) by another concurrent transaction by the time it is found. In this case, the repeatable read transaction will wait for the first updating transaction to commit or roll back (if it is still in progress). If the first updater rolls back, then its effects are negated and the repeatable read transaction can proceed with updating the originally found row. But if the first updater commits (and actually updated or deleted the row, not just locked it) then the repeatable read transaction will be rolled back with the message
ERROR: could not serialize access due to concurrent update
Meaning, your transaction was unable to lock the row to begin with, due to concurrent write access that got there first. SKIP LOCKED cannot save you from this completely: there may not be a lock to skip any more, and you still run into a serialization failure if the row has already been changed (and the change committed, hence the lock released) since the transaction started.
The same statement should work just fine with the default READ COMMITTED transaction isolation; see the sketch after the related link below. Related:
Postgres UPDATE … LIMIT 1
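For illustration, a minimal sketch of the same queue-pop running at the default isolation level (the DELETE is the one from the question; only BEGIN and COMMIT are added here):
BEGIN; -- READ COMMITTED unless configured otherwise
delete from task
where task.task_id = (
    select task.task_id
    from task
    order by task.created_at asc
    limit 1
    for update skip locked
)
returning
    task.task_id,
    task.created_at;
COMMIT;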
I have an issue with "could not serialize access due to concurrent update". I checked the logs and I can clearly see that two transactions were trying to update a row at the same time.
My SQL query:
UPDATE sessionstore SET valid_until = %s WHERE sid = %s;
How can I tell Postgres to "try" to update the row without throwing an exception?
There is a caveat here which has been mentioned in the comments: you must be running at REPEATABLE READ transaction isolation or higher to get this error. Why? That level is not typically required unless you really have a specific reason.
Your problem will go away if you use the standard READ COMMITTED level. But it is still better to use SKIP LOCKED, to avoid both lock waits and redundant updates with their wasted WAL traffic.
As of Postgres 9.5, there is a much better way to handle this, which looks like this:
UPDATE sessionstore
SET valid_until = %s
WHERE sid = (
    SELECT sid FROM sessionstore
    WHERE sid = %s
    FOR UPDATE SKIP LOCKED
);
The first transaction to acquire the lock in SELECT FOR UPDATE SKIP LOCKED will cause any conflicting transaction to select nothing, leading to a no-op. As requested, it will not throw an exception.
See SKIP LOCKED notes here:
https://www.postgresql.org/docs/current/static/sql-select.html
Also, the advice about a savepoint is not specific enough. What if the update fails for a reason besides a serialization error, like an actual deadlock? You don't want to just silently ignore all errors. And savepoints are, in general, a bad idea here: an exception handler or a savepoint for every row update is a lot of extra overhead, especially if you have high traffic. That is why you should use both READ COMMITTED and SKIP LOCKED, to simplify the matter; any actual error then would be truly unexpected.
The canonical way to do that would be to set a savepoint before the UPDATE:
SAVEPOINT x;
If the update fails,
ROLLBACK TO SAVEPOINT x;
and the transaction can continue.
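Put together, the sequence the application would issue looks roughly like this (a sketch; the literal values merely stand in for the %s parameters from the question, and the ROLLBACK TO SAVEPOINT is only sent when the UPDATE raises the serialization error):
BEGIN;
SAVEPOINT x;
UPDATE sessionstore SET valid_until = '2024-01-01 12:00' WHERE sid = 'abc';
-- only if the UPDATE failed with "could not serialize access due to concurrent update":
ROLLBACK TO SAVEPOINT x;
-- either way, the surrounding transaction is still usable:
COMMIT;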
The default isolation level is "read committed"; you do not need to change it unless you have a specific use case.
You must be using the "repeatable read" or "serializable" isolation level. At those levels, the current transaction is rolled back if an already running transaction updates a value that the current transaction was also about to update.
This scenario is handled easily at the "read committed" isolation level, where the current transaction simply sees the value updated by the other transaction and performs its work after that transaction has committed:
ALTER DATABASE your_database SET DEFAULT_TRANSACTION_ISOLATION TO 'read committed'; -- your_database is a placeholder for the actual database name
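It can also be set just for the current session or for a single transaction instead of database-wide (a sketch; the literal values stand in for the %s parameters in the question):
SET default_transaction_isolation TO 'read committed'; -- current session only
BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED; -- this transaction only
UPDATE sessionstore SET valid_until = '2024-01-01 12:00' WHERE sid = 'abc';
COMMIT;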
Ref: https://www.postgresql.org/docs/9.5/transaction-iso.html
Say I have a query like this:
BEGIN Transaction
UPDATE Person SET Field=1
Rollback
There are one hundred million people. I stopped the query after twenty minutes. Will SQL Server roll back the records that were updated?
A single UPDATE statement will not update only some of the rows. It will either update all of them or none.
So, if you cancel the query, nothing will be updated.
This is the atomicity property of database systems, which SQL Server follows.
In other words, you don't have to do that rollback at the end, nothing was committed anyway.
When you cancel a query, it will still hold locks until everything is rolled back so no need to panic.
You can test it yourself: execute the long query, cancel it, and you will notice that it takes a while before the process really ends.
While the UPDATE statement will not complete, the transaction will still be open, so make sure you roll back manually or close the window to kill the transaction. You can test this by including two statements in your transaction, where the first one finishes and you cancel the query while it's running the second: you can still commit the transaction after stopping it, and then get the first half of your results.
BEGIN Transaction
UPDATE Person SET Field=1 WHERE Id = 1
UPDATE Person SET Field=1
Rollback
If you start this, give it enough time for the first UPDATE to finish, hit the Stop button in SSMS, and then execute COMMIT TRANSACTION: you'll see that the first change did get applied. Since you obviously don't want part of a transaction to succeed, I'd just kill the whole window after you've stopped it, so you can be sure everything is rolled back.
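After stopping, you can also check from the same window whether a transaction is still open and roll it back explicitly (a small sketch):
SELECT @@TRANCOUNT AS open_transactions; -- 1 or more means a transaction is still open
IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;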
Since you have opened the transaction, stopping the query manually does not complete the transaction. The transaction will still be open, and all subsequent requests against this table will be blocked.
You can do any one of the following:
Kill the connection using the command KILL <SPID> (SPID is the process ID of your connection).
Note: this automatically rolls back the changes you made; you can monitor the rollback status with the command KILL <SPID> WITH STATUSONLY (after issuing the kill).
Run the ROLLBACK command manually.
** SPID is your request ID. You can find it in the sys.sysprocesses table, in the Management Studio query window title (the number within brackets), or at the bottom right corner of Management Studio beside the login name.
Example: SQLQuery2.sql... (161) -- 161 is your SPID.
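For example (a sketch, assuming the stuck session turned out to be SPID 161 as above):
SELECT spid, status, loginame, open_tran
FROM sys.sysprocesses
WHERE open_tran > 0; -- sessions with an open transaction
KILL 161; -- rolls back that session's open transaction
KILL 161 WITH STATUSONLY; -- check the rollback progress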
Which procedure is more performant for an update which affects zero rows?
UPDATE table SET column = value WHERE id = number;
IF SQL%Rowcount > 0 THEN
COMMIT;
END IF;
or
UPDATE table SET column = value WHERE id = number;
COMMIT;
In other words, if an UPDATE affects zero rows and a COMMIT is issued, am I incurring any added expense at all?
I have a system which is being hampered by log file sync waits... and I'm wondering whether issuing a COMMIT against a transaction which affected zero rows will write to the log or not, and thus cause more contention on LGWR.
COMMIT does force a log file sync, so the system will indeed have to wait.
However, ROLLBACK does too, and at some point one of the two will have to happen.
So if you issue neither COMMIT nor ROLLBACK, you are just left with an open transaction which sooner or later will cause a log file sync wait.
Probably, you want to batch your UPDATE operations rather than waiting for the first successful update and committing it.
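For example, a rough sketch of the batching idea (all object names here are illustrative):
BEGIN
  FOR r IN (SELECT id, new_value FROM pending_changes) LOOP
    UPDATE target_table SET target_column = r.new_value WHERE id = r.id;
  END LOOP;
  COMMIT; -- one log file sync per batch instead of one per row
END;
/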
There are risks in skipping the COMMIT when zero rows are affected. Even though the UPDATE may affect zero rows, it can still fire BEFORE and AFTER UPDATE triggers on the table (statement-level, not row-level, triggers). Those triggers could potentially "do something" that requires a commit/rollback.
It is safer to check whether LOCAL_TRANSACTION_ID is set.
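A sketch of that check in an anonymous PL/SQL block (the table, column, and values are placeholders; DBMS_TRANSACTION.LOCAL_TRANSACTION_ID returns NULL when no transaction is active):
DECLARE
  v_txn_id VARCHAR2(200);
BEGIN
  UPDATE some_table SET some_column = 1 WHERE id = 42;
  v_txn_id := DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
  IF v_txn_id IS NOT NULL THEN
    COMMIT; -- something (the UPDATE itself or a trigger) really started a transaction
  END IF;
END;
/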
There are any number of reasons which can underlie waits for log file sync, and it seems unlikely that the main culprit is committing SQL statements which have updated zero rows. It is true that issuing too many commits can cause this problem, for instance if the application is set up to commit after every statement (e.g. by using AUTOCOMMIT=TRUE) instead of designing proper transactions. If that is the cause, there is not much you can do, short of a major rewrite of the application.
If you want to delve deeper into the root causes of your problem I recommend you read this exhaustive (and exhausting) article by Pythian's Riyaj Shamsudeen on Tuning ‘log file sync’ Event Waits.