SQL Table Locked by READPAST - sql

I have a SQL view and I query it with READPAST because I don't want to see dirty data. But READPAST locks the view's underlying table. I don't want table locking; I only want row-level locking.
Which method is correct?

When you select from a table you put a shared lock on it anyway. But if your table is getting locked and you don't want to see dirty data, then besides using READPAST you should make sure the table has an index, set page locks off and row locks on, and of course your query should have a WHERE clause on the indexed column (see the sketch below).
https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table?view=sql-server-2017
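For instance, a minimal sketch of that idea, assuming a hypothetical table dbo.YourTable with an index IX_YourTable_Col on column Col (all names here are placeholders):

-- Turn page locks off and keep row locks on for the index (placeholder names).
ALTER INDEX IX_YourTable_Col ON dbo.YourTable
    SET (ALLOW_PAGE_LOCKS = OFF, ALLOW_ROW_LOCKS = ON);

-- Query through the indexed column so READPAST only skips individual locked rows.
DECLARE @value int = 1;
SELECT t.*
FROM dbo.YourTable AS t WITH (READPAST, ROWLOCK)
WHERE t.Col = @value;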

It seems the problem is the isolation level. You should use read committed snapshot if you are on SQL Server. This ensures that you fetch only committed data, and it does not cause table locks. But you must enable it at the database level. See https://www.google.com/amp/s/www.red-gate.com/simple-talk/sql/performance/read-committed-snapshot-isolation-high-version_ghost_record_count/amp/ for configuration.
Also, READPAST is not a solution here. It skips locked rows, so you get missing results: when a row is being updated, your SELECT simply ignores that row. Under read committed snapshot, by contrast, you get the committed version of that row even if it is locked by another transaction, and your table is not locked.
I assume you are using the default isolation level for the transaction; if you do not set one explicitly, the transaction runs under the default, which is read committed. Read committed snapshot only takes effect under the read committed isolation level without locking; under other levels, for example serializable, locking still happens. So just use the default isolation level for the transaction that calls your view.
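For example, enabling it at the database level would look something like this (the database name is a placeholder; WITH ROLLBACK IMMEDIATE is one way to avoid waiting on open sessions):

ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;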

Related

Is updated in transaction value visible to other transactions

I want to update a status in a DB table row and I want this to be immediately visible to other processes/transactions reading the same row. Do I have to put this statement in a separate transaction that is committed right after the update?
I suppose this depends on the DB engine and isolation level, but as this can change I want my code to handle whatever the DB engine and isolation level is.
You have two solutions:
Use the appropriate transaction isolation level SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED (details)
Other queries can use NOLOCK (details)
Check the links for more details.
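For example, a small sketch of both options (table and column names are placeholders):

-- Option 1: set the isolation level for the reading session.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Status FROM dbo.SomeTable WHERE Id = 42;

-- Option 2: hint the individual query instead.
SELECT Status FROM dbo.SomeTable WITH (NOLOCK) WHERE Id = 42;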

Why does OBJECT_ID(N'tablename') not lock, but name = 'tablename' does lock, on sys.objects?

Experiment details:
I am running this in Microsoft SQL Server management studio.
On one query window I run:
BEGIN TRANSACTION a;
ALTER TABLE <table name>
ALTER COLUMN <column> varchar(1025);
On the other I run:
SELECT 1
FROM sys.objects
WHERE name = ' <other_table name>'
Or this:
SELECT 1
FROM sys.objects
WHERE object_id = OBJECT_ID(N'[<other_table name>]')
For some reason the SELECT with name = does not return until I commit the transaction.
I am using the transaction to simulate a long-running ALTER COLUMN operation that we sometimes have in our DB, and which I don't want to harm other operations.
This is because you are working under the default transaction isolation level, i.e. READ COMMITTED.
Read Committed
Under the READ COMMITTED transaction isolation level, when you explicitly begin a transaction, SQL Server obtains exclusive locks on the resources in order to maintain data integrity and prevent dirty reads. Once you have begun a transaction and are working with some rows, other users will not be able to see those rows until you commit or roll back your transaction. However, this is only SQL Server's default behaviour; it can be changed with a different transaction isolation level. For instance, under READ UNCOMMITTED you will be able to read rows that are being modified/used by other users, but there is a chance of dirty reads: data that you think is still in the database but that another user has already changed.
My Suggestion
If dirty reads are something you can live with, go ahead:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Otherwise, and preferably, stick to SQL Server's default behaviour and let the user wait for a bit rather than give them dirty/wrong data. I hope this helps.
Edit
It does not matter whether both queries are executed under the same or different isolation levels; what matters is the isolation level each query runs under. You have to use the READ UNCOMMITTED transaction isolation level if you want other users to have access to the data during an explicit transaction. That is not recommended and is bad practice; I would personally use snapshot isolation, which makes extensive use of tempdb and shows only the last committed data.
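A rough sketch of the two-session experiment with that suggestion applied (table names are placeholders):

-- Session 1: hold an open transaction that alters the table.
BEGIN TRANSACTION;
ALTER TABLE dbo.SomeTable ALTER COLUMN SomeColumn varchar(1025);
-- (no COMMIT yet)

-- Session 2: per the suggestion above, read under READ UNCOMMITTED so the
-- catalog scan is not blocked by session 1's uncommitted metadata change.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT 1 FROM sys.objects WHERE name = 'SomeOtherTable';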

Confusion about SQL Server snapshot isolation and how to use it

I am a newbie to MS SQL Server, with just a few months' experience maintaining SQL Server stored procedures. I am reading up on transaction isolation levels to optimise an SP and am quite confused. Please help me with the questions below:
If I run DBCC USEROPTIONS in ClientDB, the default value for isolation level is 'read committed'. Does this mean the DB has READ_COMMITTED_SNAPSHOT set to ON?
Is SET TRANSACTION ISOLATION LEVEL (at the transaction level) the same as SET READ_COMMITTED_SNAPSHOT ON (at the DB level)? By this I mean: if my DB has SNAPSHOT enabled, can I set the isolation level in my SP and process data accordingly?
Is ALLOW_SNAPSHOT_ISOLATION similar to the above?
I have an SP which starts off with a very long-running SELECT statement that dumps its results into a temp table, then uses the temp table to UPDATE/INSERT the base table. About 8 million records are selected and dumped into the temp table, and a similar number of rows are then updated/inserted. The problem we are facing is that this SP takes up too much disk space.
It is a client DB and we do not have permission to check disk space, log size, etc. in the DB, so I do not know whether it is tempdb/tempdb's log or ClientDB/ClientDB's log that is taking up the space. But free disk space can drop by as much as 10 GB at one go! This causes the transaction log to run out of disk space (as the disk is full) and the SP errors out.
If I use the SNAPSHOT isolation level, will disk space be affected even more, since it uses tempdb to version the data?
What I really want to do is this:
SET the transaction isolation level to SNAPSHOT, then do the SELECT into the temp table, then BEGIN TRANSACTION and update/insert the base table for, say, 1 million records; do this in a loop until all records are processed, then END TRANSACTION. Do you think this is a good idea? Should the initial SELECT be kept out of the TRANSACTION? Will this help reduce the load on the transaction logs?
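For reference, a rough sketch of that batching pattern (all object and column names below are placeholders; the INSERT side would be batched the same way):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;   -- requires ALLOW_SNAPSHOT_ISOLATION ON

SELECT Id, Val
INTO #work
FROM dbo.SourceTable;                       -- the long-running SELECT, outside the loop

DECLARE @batchSize int = 1000000;
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (@batchSize) b
       SET b.Val = w.Val
      FROM dbo.BaseTable AS b
      JOIN #work AS w ON w.Id = b.Id
     WHERE b.Val <> w.Val;                  -- only rows still needing the change (NULLs need extra handling)

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END;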
An isolation level of "read committed" isn't the same thing as setting READ_COMMITTED_SNAPSHOT ON. Setting READ_COMMITTED_SNAPSHOT ON changes what read committed means for every query in the database: a query or procedure that then runs under "read committed" actually uses row versioning (snapshot-style reads). See Isolation Levels in the Database Engine and SET TRANSACTION ISOLATION LEVEL in Books Online:
When the READ_COMMITTED_SNAPSHOT database option is set ON, read committed isolation uses row versioning to provide statement-level read consistency. Read operations require only SCH-S table level locks and no page or row locks. When the READ_COMMITTED_SNAPSHOT database option is set OFF, which is the default setting, read committed isolation behaves as it did in earlier versions of SQL Server. Both implementations meet the ANSI definition of read committed isolation.
ALLOW_SNAPSHOT_ISOLATION doesn't change the default isolation level. It lets each query or procedure use snapshot isolation if you want it to. Each query that you want to use snapshot isolation needs to SET TRANSACTION ISOLATION LEVEL SNAPSHOT. On big systems, if you want to use snapshot isolation, you probably want this rather than changing the default read committed behaviour with READ_COMMITTED_SNAPSHOT.
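For example, under that setup the two steps would look roughly like this (the database name is a placeholder):

-- Allow snapshot isolation without changing how read committed behaves.
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in each session or procedure that should use it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;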
A database configured to use snapshot isolation does take more disk space.
Think about moving the log files to a bigger disk.

SQL Server 2008 - adding a column to a replicated table fails

We have scenario:
SQL Server 2008
we have replication on the db
we have a simple sproc that performs an ALTER on one of the tables (adds a new column)
isolation level is default (READ COMMITTED)
Stored procedure fails with error:
You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels
Questions:
What causes the problem?
How to fix that?
UPDATE:
I believe this is a very common problem, so I wonder why there is no good explanation of why replication causes this issue.
You can only specify READPAST when reading committed data (the READ COMMITTED or REPEATABLE READ isolation levels).
The reason is that READPAST ignores locked rows, so when you use it you are pretty much telling SQL Server: give me everything that has not been touched by any other transaction. Example from BOL:
For example, assume table T1 contains a single integer column with the values of 1, 2, 3, 4, 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields values 1, 2, 4, 5.
It doesn't make much sense to say that at the read uncommitted isolation level, which brings back uncommitted values anyway; it's kind of requesting two opposite things.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- In the rest of the query, add WITH (READPAST) after the table names, e.g.:
SELECT FOO.* FROM dbo.foobar AS FOO WITH (READPAST);
And/or this should help you.
http://support.microsoft.com/kb/981995
Are you sure isolation level is set to READ COMMITTED?
I've seen this error when the isolation level is set to SERIALIZABLE and you use ALTER TABLE on a table published for replication. This is because some replication stored procedures use the READPAST hint to avoid blocking, and that hint can only be used in the READ COMMITTED or REPEATABLE READ isolation levels.
If you are sure that the isolation level is set to READ COMMITTED, then I would recommend contacting Microsoft PSS about this one, as it should not be happening. They will be able to assist you better.

Would this prevent the row from being read during the transaction?

I remember an example where reading in a transaction and then writing the data back is not safe, because another transaction may read/write it in between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick? And are there any SQL variants on which this will not work?
update tbl set id=id where date>expire_date and id=#id
Note: date>expire_date happens to be my condition; it could be anything. Would this prevent other transactions from reading the row until I commit or roll back?
In a lot of cases, your UPDATE statement will not prevent other transactions from reading the row.
ziang mentioned transaction isolation levels.
Depending on the isolation level, databases use different types of locking. At the highest level, locking can be divided into two categories:
- pessimistic,
- optimistic
MS SQL 2008, for example, has 6 isolation levels; 4 of them are pessimistic, 2 are optimistic. By default, it uses the READ COMMITTED isolation level, which falls into the pessimistic category.
Oracle, on another note, uses optimistic locking by default.
The statement that will lock your record for writing is:
SELECT * FROM tbl WITH (UPDLOCK) WHERE id = #id
From that point on, no other transaction will be able to update your record with id=#id
And only transactions running in isolation level READ UNCOMMITTED will be able to read it.
With the default transaction isolation level, READ COMMITTED, no other transaction will be able to read or write this record until you either commit or roll back your entire transaction.
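A minimal sketch of that pattern, reusing the placeholder names from the question (@id stands in for the application's parameter):

DECLARE @id int = 1;                -- placeholder

BEGIN TRANSACTION;

-- UPDLOCK takes an update lock that is held until the transaction completes,
-- so no other transaction can modify the row in the meantime.
SELECT *
FROM tbl WITH (UPDLOCK)
WHERE id = @id;

-- ... check date > expire_date, do other work ...

UPDATE tbl
SET id = id                         -- stand-in for the real change
WHERE id = @id
  AND date > expire_date;

COMMIT TRANSACTION;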
It depends on the transaction isolation level you set on your transaction control. There are four isolation levels:
READ UNCOMMITTED: this allows dirty reads
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
For more info, you can check MSDN.
You should be able to do this in a normal SELECT using a combination of HOLDLOCK/ROWLOCK.
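A minimal sketch of that hint combination, again with the question's placeholder names:

DECLARE @id int = 1;                -- placeholder

BEGIN TRANSACTION;

-- HOLDLOCK holds the shared lock until the transaction ends;
-- ROWLOCK asks for row-level rather than page/table granularity.
SELECT *
FROM tbl WITH (HOLDLOCK, ROWLOCK)
WHERE id = @id;

-- ... decide, then write back ...

COMMIT TRANSACTION;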
It very well may work. Different platforms offer different services. For instance, in T-SQL, you can simply set the isolation level of the transaction and, as a result, force a lock to be obtained. I don't know what platform you are using so I cannot answer your question definitively.