First of all, sorry if my English is not good.
I'm facing a problem with the transaction isolation level. My current isolation level is READ COMMITTED, but it sometimes leads the table into a deadlock.
For example:
create table tmp(id int,name varchar(20))
insert into tmp(id,name)
values(1,'Binesh')
,(2,'Bijesh')
,(3,'Bibesh')
begin transaction
update tmp set name ='Harish' where id=2
And in another query window I'm trying to run
select * from tmp where id=1
It locks the table, so it does not return any records until I roll back or commit the first transaction.
I tried
ALTER DATABASE db
SET READ_COMMITTED_SNAPSHOT On
ALTER DATABASE db
SET ALLOW_SNAPSHOT_ISOLATION on
This no longer locks the table, but it gives the old value for id = 2:
select * from tmp where id=2
returns Bijesh, whereas I'm expecting it to block.
I'm expecting behaviour where the query for id = 1 works fine, but the query for id = 2 waits until the other transaction is over.
Hoping for your help.
Thanks in advance,
Binesh Nambiar C
This is not a deadlock you're experiencing.
Your second query is blocked because it has to scan the entire table. During the scan, it encounters the row being updated (exclusively locked), so it waits until the lock is released (i.e. until the transaction ends).
If the engine knew that the id column is unique (primary key or unique constraint), you would not get blocking here, because it wouldn't have to scan the entire table but would stop at the first match.
Keep your transactions short, provide alternative access paths to the data (indexes), and try not to use select *.
Also, think carefully whether you really want to use the READ UNCOMMITTED isolation level.
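As a quick illustration of that advice, here is a sketch against the tmp table from the question (the index name is made up; run it outside of the open transactions):

CREATE UNIQUE CLUSTERED INDEX IX_tmp_id ON tmp (id)

-- With the unique index in place, the query below seeks directly to the row with id = 1,
-- so it is no longer blocked by the uncommitted update of the row with id = 2:
SELECT id, name FROM tmp WHERE id = 1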
How can I do a real LOCK of a table when inserting in Redshift? I think it's done like this, but I'm not sure, and the AWS documentation, as always, gives zero input:
begin;
lock table sku_stocks;
insert into sku_stocks
select facility_alias_id, facility_name, CAST(item_name AS bigint), description,
       CAST(prod_type AS smallint), total_available, total_allocated
from tp_sku_stocks;
LOCK has a documented behavior: it obtains an exclusive lock on the table, so that no other session or transaction can do anything to the table.
If you wish to verify this behavior, here's how you can do it (a sketch of the two sessions follows these steps):
Open a connection to the database and invoke the begin and lock commands against your test table.
Open a second connection to the database, and attempt to do a select against that table.
Wait until either the select returns or you are convinced that LOCK behaves as documented.
Execute a rollback in the first session, so that you're not permanently locking the table.
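Concretely, using the sku_stocks table from the question, the two sessions might look like this (just a sketch, not tested):

-- Session 1: take the exclusive lock and hold it
begin;
lock table sku_stocks;

-- Session 2: this should block until session 1 ends its transaction
select count(*) from sku_stocks;

-- Session 1 again: release the lock without changing anything
rollback;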
Update
Based on your comments, I think you have a misunderstanding of how transactions work:
When you start a transaction, Redshift assigns a transaction ID, and marks every row that was changed by that transaction.
SELECTs that read a table while it is being updated in a transaction will see a "snapshot of the data that has already been committed" (quote from above link), not the rows that are being updated inside the transaction.
INSERT/UPDATE/DELETE that try to update a table that is being updated by a transaction will block until the transaction completes (see doc, and note that this behavior is somewhat different than you would see from, say, MySQL).
When you commit/rollback a transaction, any new SELECTs will use the updated data. Any SELECTs that were started during the transaction will continue to use the old data.
Given these rules, there's almost no reason for an explicit LOCK statement. Your example update, without the LOCK, will put a write-lock on the table being updated (so guaranteeing that no other query can update it at the same time), and will use a snapshot of the table that it's reading.
If you do use a LOCK, you will block any query that tries to SELECT from the table during the update. You may think this is what you want, to ensure that your users only see the latest data, but consider this: any SELECTs that were started prior to the update will still see the old data.
The only reason that I would use a LOCK statement is if you need to maintain consistency between a group of tables (well, also if you're getting into deadlocks, but you don't indicate that):
begin;
lock TABLE1;
lock TABLE2;
lock TABLE3;
copy TABLE1 from ...
update TABLE2 select ... from TABLE1
update TABLE3 select ... from TABLE2
commit;
In this case, you ensure that TABLE1, TABLE2, and TABLE3 will always remain consistent: queries against any of them will show the same information. Beware, however, that SELECTs that started before the lock will succeed, and show data prior to any of the updates. And SELECTs that start during the transaction won't actually execute until the transaction completes. Your users may not like that if it happens in the middle of their workday.
We have a need to (once per month) clear out the contents of a table with 50,000 records, and repopulate, using a Stored Procedure. The SP has a User Defined Table Type parameter which contains all of the new records to be inserted.
The current thought is as follows
ALTER PROCEDURE [ProcName]
@TableParm UserTableType READONLY
AS
[Set lock on table?]
BEGIN TRAN
DELETE FROM [table]
INSERT INTO [table](column, column, column)
SELECT a.column, a.column, a.column FROM @TableParm a
COMMIT TRAN
[Remove lock from table?]
I've read some solutions which suggest setting READ COMMITTED or READ UNCOMMITTED... but figured I'd turn to the pros to steer me in the right direction, based on the situation.
thanks!
I'd use a serializable transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
Both the READ... type levels would allow data of some form to be read from the table, which is probably not what you want.
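A minimal sketch of where that SET statement would sit in the procedure from the question (keeping its placeholder table and column names):

ALTER PROCEDURE [ProcName]
    @TableParm UserTableType READONLY
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRAN
    DELETE FROM [table]

    INSERT INTO [table] (column, column, column)
    SELECT a.column, a.column, a.column
    FROM @TableParm a
COMMIT TRAN

Because the DELETE and INSERT run in a single transaction, readers at the default isolation level see either the old contents or the new contents, never a half-loaded table.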
You may also be able to use TRUNCATE TABLE rather than DELETE, depending on your data structure.
If reducing the unavailability of this table is an issue, you may be able to reduce it by creating a new table, populating it, then renaming the old and new tables.
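A rough sketch of that swap approach, with made-up staging and old table names (note that SELECT ... INTO copies only the columns, so indexes and constraints would have to be recreated on the staging table before the swap):

-- Build and load the replacement table off to the side
SELECT * INTO [table_staging] FROM [table] WHERE 1 = 0   -- empty copy of the columns

INSERT INTO [table_staging] (column, column, column)
SELECT a.column, a.column, a.column
FROM @TableParm a

-- Swap the tables; the old data stays readable right up to the rename
BEGIN TRAN
    EXEC sp_rename '[table]', 'table_old'
    EXEC sp_rename '[table_staging]', 'table'
COMMIT TRAN

DROP TABLE [table_old]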
I have a situation where I need to select some records from a table, store the primary keys of these records in a temporary table, and apply an exclusive lock to the records in order to ensure that no other sessions process these records. I accomplish this with locking hints:
begin tran
insert into #temp
select pk from myTable with(xlock)
inner join otherTables, etc
(Do something with records in #temp, after which they won't be candidates for selection any more)
commit
The problem is that many more records are being locked than necessary. I'd like to lock only the records that are actually inserted into the temporary table. I was initially setting a flag on the table to indicate that the record was in use (instead of using lock hints), but this had problems because the database would be left in an invalid state if something prevented one or more records from being processed.
Maybe setting a different isolation level and using the rowlock table hint is what you're looking for?
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
and
WITH (ROWLOCK)
Or maybe combining XLOCK and ROWLOCK might do the trick?
The documentation says:
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
I haven't tried this myself though.
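For what it's worth, a sketch of how the question's query might look with both hints combined (the join and its key column are placeholders, so treat this as untested):

CREATE TABLE #temp (pk int PRIMARY KEY)

BEGIN TRAN

INSERT INTO #temp (pk)
SELECT m.pk
FROM myTable m WITH (XLOCK, ROWLOCK)
INNER JOIN otherTable o ON o.myTableId = m.pk   -- hypothetical join condition

-- process the rows captured in #temp ...

COMMIT TRAN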
I have a number of tables that get updated through my app which return a lot of data or are difficult to query for changes. To get around this problem, I have created a "LastUpdated" table with a single row and have a trigger on these complex tables which just sets GetDate() against the appropriate column in the LastUpdated table:
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET ListItems = GetDate()
GO
This way, the clients only have to query this table for the last updated value and can then decide whether or not they need to refresh their data from the complex tables. The complex tables are using snapshot isolation to prevent dirty reads.
In busy systems, around once a day we are getting errors writing or updating data in the complex tables due to update conflicts in "LastUpdated". Because this occurs in the statement executed by the trigger, the affected complex table fails to save data. The following error is logged:
Snapshot isolation transaction aborted due to update conflict. You
cannot use snapshot isolation to access table 'dbo.tblLastUpdated'
directly or indirectly in database 'devDB' to update, delete, or
insert the row that has been modified or deleted by another
transaction. Retry the transaction or change the isolation level for
the update/delete statement.
What should I be doing here in the trigger to prevent this failure? Can I use some kind of query hints on the trigger to avoid this - or can I just ignore errors in the trigger? Updating the data in LastUpdated is not critical, but saving the data correctly into the complex tables is.
This is probably something very simple that I have overlooked or am not aware of. As always, thanks for any info.
I would say that you should look into using Change Tracking (http://msdn.microsoft.com/en-gb/library/cc280462%28v=sql.100%29.aspx), which is lightweight built-in SQL Server functionality that you can use to monitor the fact that a table has changed, as opposed to logging each individual change (which you can also do with Change Data Capture). It needs Snapshot Isolation, which you are already using.
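Enabling it is fairly lightweight; a sketch against the database and table names from the question (the retention settings are only examples):

ALTER DATABASE devDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)

ALTER TABLE dbo.tblListItem
ENABLE CHANGE_TRACKING

-- Clients can poll this instead of a LastUpdated row:
SELECT CHANGE_TRACKING_CURRENT_VERSION()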
Because your trigger is running in your parent transaction, and your snapshot has become out of date, your whole transaction would need to start again. If this is a complex workload, maintaining this last updated data in this way would be costly.
Short answer: don't do that! Making the update transactions dependent on one single shared row makes them prone to deadlocks, update conflicts, and a whole gamut of nasty things.
You can either use the system views to determine the last update, e.g.:
SELECT
t.name
,user_seeks
,user_scans
,user_lookups
,user_updates
,last_user_seek
,last_user_scan
,last_user_lookup
,last_user_update
FROM sys.dm_db_index_usage_stats i JOIN sys.tables t
ON (t.object_id = i.object_id)
WHERE database_id = db_id()
Or, if you really insist on the solution with LastUpdated, you can implement its update from the trigger as an autonomous transaction. Even though SQL Server doesn't support autonomous transactions natively, it can be done using linked servers: How to create an autonomous transaction in SQL Server 2008
The schema needs to change. If you have to keep your update table, make a row for every table. That would greatly reduce your locking, because each table would update its very own row instead of competing for the sole row in the table.
LastUpdated
table_name (varchar(whatever)) pk
modified_date (datetime)
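Spelled out, that could look something like this (data types and sizes are a guess):

CREATE TABLE LastUpdated (
    table_name    varchar(128) NOT NULL PRIMARY KEY,
    modified_date datetime     NOT NULL
)

INSERT INTO LastUpdated (table_name, modified_date)
VALUES ('tblListItem', GetDate())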
New Trigger for tblListItem
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET modified_date = GetDate() WHERE table_name = 'tblListItem'
GO
Another option that I use a lot is having a modified_date column in every table. Then people know exactly which records to update/insert to sync with your data rather than dropping and reloading everything in the table each time one record changes or is inserted.
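For example (the column and constraint names are hypothetical, and updates from the app would still need to set the column, since the default only covers inserts):

ALTER TABLE tblListItem
ADD modified_date datetime NOT NULL
    CONSTRAINT DF_tblListItem_modified_date DEFAULT (GetDate())

-- Clients pull only what changed since their last sync:
DECLARE @LastSyncTime datetime = '20240101'   -- whatever timestamp the client last saw
SELECT * FROM tblListItem WHERE modified_date > @LastSyncTime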
Alternatively, you can update the log table inside the same transaction that you use to update your complex tables in your application, and avoid the trigger altogether.
Update
You can also opt for inserting a new row instead of updating the same row in the LastUpdated table, and then query the maximum timestamp for the latest update. However, with this approach the LastUpdated table will grow every day, which you need to take care of if the volume of transactions is high.
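A sketch of that variant, keeping the ListItems column from the original trigger:

-- The trigger appends a row instead of updating the single shared one
-- (inserts don't cause update conflicts under snapshot isolation):
INSERT INTO LastUpdated (ListItems) VALUES (GetDate())

-- Clients read the newest timestamp:
SELECT MAX(ListItems) AS last_updated FROM LastUpdated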
(I've tried this in MySql)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up?
TRUNCATE TABLE cannot be rolled back; it is like dropping and recreating the table.
...just to add some detail.
Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records.
Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with no transaction log to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove.
DELETE FROM table deletes rows one at a time and adds a record to the transaction log for each one so that the operation can be rolled back. The time taken to delete is also proportional to the number of indexes on the table, and to any foreign key constraints (for InnoDB).
TRUNCATE effectively drops the table and recreates it, and cannot be performed within a transaction. It therefore requires fewer operations and executes quickly. TRUNCATE also does not fire any ON DELETE triggers.
Exact details about why this is quicker in MySql can be found in the MySql documentation:
http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html
Your question was about MySQL, and I know little to nothing about MySQL as a product, but I thought I'd add that in SQL Server a TRUNCATE statement can be rolled back. Try it for yourself:
create table test1 (col1 int)
go
insert test1 values(3)
begin tran
truncate table test1
select * from test1
rollback tran
select * from test1
In SQL Server TRUNCATE is logged, it's just not logged in such a verbose way as DELETE is logged. I believe it's referred to as a minimally logged operation. Effectively the data pages still contain the data but their extents have been marked for deletion. As long as the data pages still exist you can roll back the truncate. Hope this is helpful. I'd be interested to know the results if somebody tries it on MySQL.
For MySql 5 using InnoDb as the storage engine, TRUNCATE acts just like DELETE without a WHERE clause: i.e. for large tables it takes ages because it deletes rows one-by-one. This is changing in version 6.x.
see
http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html
for 5.1 info (row-by-row with InnoDB) and
http://blogs.mysql.com/peterg/category/personal-opinion/
for changes in 6.x
Editor's note
This answer is clearly contradicted by the MySQL documentation:
"For an InnoDB table before version 5.0.3, InnoDB processes TRUNCATE TABLE by deleting rows one by one. As of MySQL 5.0.3, row by row deletion is used only if there are any FOREIGN KEY constraints that reference the table. If there are no FOREIGN KEY constraints, InnoDB performs fast truncation by dropping the original table and creating an empty one with the same definition, which is much faster than deleting rows one by one."
TRUNCATE works at the table level, while DELETE works at the row level. If you were to translate this to SQL in another syntax, TRUNCATE would be:
DELETE FROM table
thus deleting all rows at once, while the DELETE statement (in PHPMyAdmin) goes like:
DELETE FROM table WHERE id = 1
DELETE FROM table WHERE id = 2
and so on until the table is empty. Each query takes a number of (milli)seconds, which add up to taking longer than a TRUNCATE.