I have a problem with enforcing immediate transaction commitment. I tried to insert 300 rows, with a new transaction for each command; after inserting the 300 rows and calling the commit command, I try to read the inserted rows and get nothing back. However, if I just call 'Exit Sub', I find that all 300 records are written.
My question is how to force the transaction to commit all changes to the database before calling another select command.
I figured it out: I had nested transactions, which were causing the delay in transaction commitment.
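For anyone who hits the same thing: in SQL Server, a nested BEGIN TRANSACTION only increments @@TRANCOUNT, and an inner COMMIT merely decrements it; nothing becomes visible to other connections until the outermost COMMIT runs. A minimal T-SQL sketch of the behaviour (the table name is made up):

BEGIN TRANSACTION;                               -- outer transaction, @@TRANCOUNT = 1
    BEGIN TRANSACTION;                           -- nested, @@TRANCOUNT = 2
        INSERT INTO dbo.MyRows (Id) VALUES (1);  -- hypothetical table
    COMMIT TRANSACTION;                          -- only drops @@TRANCOUNT back to 1; nothing is committed yet
    -- a SELECT from another connection still cannot see the row here
COMMIT TRANSACTION;                              -- @@TRANCOUNT = 0: the insert is now actually committed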
When we execute a save, update, or delete operation, we open a transaction, and after the operation completes we close the transaction with a commit. If we run an insert query with single or multiple row values, what will happen?
We use BEGIN TRAN with a DELETE or UPDATE statement to make sure that the statement is correct and that it affects the number of rows we expect.
Some developers don't use it in a session or in batches, because they have already tried their statement and know exactly what it will do.
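For example, a common manual-check pattern looks like the sketch below (the table and predicate are made up): start the transaction, run the statement, look at @@ROWCOUNT, and then decide whether to COMMIT or ROLLBACK.

BEGIN TRAN;

DELETE FROM dbo.Orders                -- hypothetical table and predicate
WHERE OrderDate < '2010-01-01';

SELECT @@ROWCOUNT AS RowsAffected;    -- check how many rows the statement touched

-- If the count is what you expected:
-- COMMIT TRAN;
-- If it is not:
-- ROLLBACK TRAN;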
I advise you to visit this URL, it's really useful:
https://www.mssqltips.com/sqlservertutorial/3305/what-does-begin-tran-rollback-tran-and-commit-tran-mean/
I have a stored procedure that deletes records from multiple tables.
I wish for either all of the delete statements to complete successfully, or none. The actual purpose here is to wipe all data related to a particular user.
Note that none of this data is related in any way to any other user's data; e.g. one user's data is not referenced in any way by another user's data. However, it is possible for concurrent client sources to access one user's data simultaneously. I don't know if this is relevant.
So I've wrapped it in BEGIN TRANSACTION ... COMMIT TRANSACTION
like so:
CREATE PROCEDURE [dbo].[spDeleteData]
    @MyID AS INT
AS
BEGIN TRANSACTION
    DELETE FROM [Table1] WHERE myId = @MyID;
    DELETE FROM [Table2] WHERE myId = @MyID;
    ....
COMMIT TRANSACTION
RETURN 0
My question here is what are the implications of wrapping multiple DELETE calls in a transaction? Will it create possible deadlock scenarios, or hurt performance in some way?
From what I am reading, using TRANSACTION ISOLATION LEVEL only applies to read operations, is this true?
What you are guaranteeing is that either all the rows matching the conditions in both tables are successfully deleted, or none of them are (i.e. if there is a problem, the deletes are rolled back). More locks are taken and they are held for a longer period, but if the operation fails you don't have to manually recreate the rows: the deletes are undone for you automatically. You probably want to add the statement:
set xact_abort on
at the beginning of the transaction, and to wrap the whole thing in a BEGIN TRY / BEGIN CATCH block.
Please see sommarskog.se/error-handling-I.html#XACT_ABORT for an excellent discussion of this statement and of error handling in T-SQL.
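Applying that advice to the procedure in the question, a sketch might look like this (same hypothetical tables; THROW requires SQL Server 2012 or later):

CREATE PROCEDURE [dbo].[spDeleteData]
    @MyID AS INT
AS
SET XACT_ABORT ON;            -- any run-time error aborts and rolls back the transaction
BEGIN TRY
    BEGIN TRANSACTION;
        DELETE FROM [Table1] WHERE myId = @MyID;
        DELETE FROM [Table2] WHERE myId = @MyID;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;                    -- re-raise the error to the caller
END CATCH
RETURN 0;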
So given this transaction:
select * from table_a where field_a = 'A' for update;
Assuming this returns multiple rows, will the database lock all the results right off the bat, or will it lock them one row at a time?
If the latter is true, does that mean running this query concurrently can result in a deadlock?
And thus, is adding an ORDER BY to keep the locking order consistent needed to solve this problem?
The documentation explains what happens as follows:
FOR UPDATE
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
The direct answer to your question is that Postgres cannot lock all the rows "right off the bat"; it has to find them first. Remember, this is row-level locking rather than table-level locking.
The documentation includes this note:
SELECT FOR UPDATE modifies selected rows to mark them locked, and so will result in disk writes.
I interpret this as saying that Postgres executes the SELECT query and as it finds the rows, it marks them as locked. The lock (for a given row) starts when Postgres identifies the row. It continues until the end of the transaction.
Based on this, I think it is possible for a deadlock situation to arise using SELECT FOR UPDATE.
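To make that concrete, here is a sketch (the id column and its values are hypothetical) of how two sessions can deadlock, and how a shared ORDER BY avoids it:

-- Session 1
BEGIN;
SELECT * FROM table_a WHERE field_a = 'A' ORDER BY id ASC  FOR UPDATE;

-- Session 2, running at the same time over the same rows
BEGIN;
SELECT * FROM table_a WHERE field_a = 'A' ORDER BY id DESC FOR UPDATE;

-- Each session can lock some rows first and then block on a row the other
-- already holds; Postgres detects the cycle and aborts one session with a
-- deadlock error. If both sessions use the same ORDER BY, the locks are
-- taken in the same order, so the second session simply waits for the first
-- to commit or roll back instead of deadlocking.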
while 1 = 1
begin
    waitfor time @timeToRun
    begin
        /* delete some records from table X */
    end
end
In the code above, will SQL Server lock table X during the wait? I would like to insert records into table X during this wait time. Is it possible?
All write operations acquire X (exclusive) locks on the rows being updated or deleted, and these X locks are held until the transaction commits. Every statement runs in an implicit transaction that commits automatically at the end of the statement, if no transaction is explicitly specified.
So the answer to your question depends on whether you call this in the context of an existing transaction or not. If not (and assuming you do not start a transaction in the inner begin ... end block and leave it open), then no lock will be held during the wait. If the code runs in the context of an existing transaction (e.g. a TransactionScope in the client, started automatically by the WCF service behavior), then any lock placed by the delete will be held while you wait, until the transaction is committed.
Two part question:
1) In the code above, will SQL Server lock table X during the wait?
No. It may lock the rows, but not the table.
2) I would like to insert records into table X during this wait time. Is it possible?
Yes, but they will be locked until you commit.
Note: you will want to wrap any work you are doing in its own transaction with a BEGIN/COMMIT block. This keeps the locks short and avoids the locking issue entirely.
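A sketch of that (assuming @timeToRun is declared elsewhere; the delete predicate is made up): each delete runs in its own short transaction, so its locks are released before the loop goes back to waiting, and inserts into table X are only blocked, if at all, while the delete itself runs.

while 1 = 1
begin
    waitfor time @timeToRun

    begin transaction
        delete from TableX
        where CreatedOn < dateadd(day, -30, getdate())   -- hypothetical predicate
    commit transaction
    -- the delete's locks are released here, before the next waitfor
end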
Which approach is more performant for an update which affects zero rows?
UPDATE table SET column = value WHERE id = number;
IF SQL%Rowcount > 0 THEN
COMMIT;
END IF;
or
UPDATE table SET column = value WHERE id = number;
COMMIT;
In other words, if an UPDATE affects zero rows and a COMMIT is issued, am I incurring any added expense at all?
I have a system which is being hampered by log file sync waits, and I'm wondering whether issuing a COMMIT for a transaction which affected zero rows will write to the redo log and thus cause more contention on LGWR.
COMMIT does force a log file sync, so the session will indeed have to wait.
However, ROLLBACK does too, and at some point one of them has to happen.
So if you issue neither COMMIT nor ROLLBACK, you are just leaving an open transaction, which sooner or later will cause a log file sync wait anyway.
Probably you want to batch your UPDATE operations rather than committing after the first successful update.
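As a sketch of that batching idea (the table names are made up): do all the updates in one transaction and pay for a single log file sync at the end, instead of one per statement.

BEGIN
  FOR r IN (SELECT id, new_value FROM staging_updates) LOOP   -- hypothetical driving table
    UPDATE target_table
       SET some_column = r.new_value
     WHERE id = r.id;
  END LOOP;
  COMMIT;   -- one commit, and therefore one log file sync, for the whole batch
END;
/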
There are risks in this. Technically, even though the UPDATE may affect zero rows, it can still fire BEFORE and AFTER UPDATE triggers on the table (statement-level ones, not row-level). Those triggers could potentially "do something" that requires a commit/rollback.
It is safer to check whether LOCAL_TRANSACTION_ID is set.
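The idea, as a sketch (the statement and names are made up): DBMS_TRANSACTION.LOCAL_TRANSACTION_ID returns NULL when the session has no active transaction, so you only commit when the update, or a trigger it fired, actually did some work.

DECLARE
  v_txn_id VARCHAR2(200);
BEGIN
  UPDATE some_table
     SET some_column = 'value'
   WHERE id = 42;                                   -- may affect zero rows

  v_txn_id := DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;

  IF v_txn_id IS NOT NULL THEN
    COMMIT;   -- only pay for the log file sync when there is an open transaction
  END IF;
END;
/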
There are any number of reasons which can underlie waits for log file sync, and it seems unlikely that the main culprit is committing SQL statements which have updated zero rows. It is true that issuing too many commits can cause this problem, for instance if the application is set up to commit after every statement (e.g. by using AUTOCOMMIT=TRUE) instead of using properly designed transactions. If that is the cause, there is not much you can do short of a major rewrite of the application.
If you want to delve deeper into the root causes of your problem I recommend you read this exhaustive (and exhausting) article by Pythian's Riyaj Shamsudeen on Tuning ‘log file sync’ Event Waits.