Locking a Row in SQL Server 2005-2008

Is there a way to lock a row in a SQL Server 2005-2008 database without starting a transaction, so that other processes cannot update the row until it is unlocked?

You can use ROWLOCK or other hints, but you should be careful.
The HOLDLOCK hint instructs SQL Server to hold the lock until you commit the transaction. The ROWLOCK hint locks only this record rather than taking a page or table lock.
The lock will also be released if you close your connection or it times out. I'd be VERY careful doing this, since it will stop any SELECT statements that hit this row dead in their tracks. SQL Server has numerous locking hints; you can see them in Books Online when you search on either HOLDLOCK or ROWLOCK.
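A minimal sketch of combining those hints, assuming a hypothetical Orders table and key:

-- Hold a row-level lock until COMMIT (table and key are hypothetical).
-- UPDLOCK makes the write intent explicit, so competing writers block.
BEGIN TRANSACTION;

SELECT *
FROM Orders WITH (ROWLOCK, UPDLOCK, HOLDLOCK)
WHERE OrderId = 42;

-- ... work with the row while competing writers are blocked ...

COMMIT TRANSACTION; -- releases the lock (closing the connection also would)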

Everything you execute on the server happens in a transaction, either implicit or explicit.
You cannot simply lock a row with no transaction (i.e. make the row read-only). You can make the whole database read-only, but not just one row.
If you explain your purpose, there may be a better solution: isolation levels, lock hints, or row versioning.

Do you need to lock a row, or would SQL Server's application locks do what you need?
An application lock is just a lock with a name that you can lock, unlock, and check; see the link above for details. (They are released if your connection is closed, etc., so they tend to clean themselves up.)
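A minimal sketch, with a made-up resource name:

-- Take a named, transaction-owned application lock.
BEGIN TRANSACTION;

DECLARE @result int;
EXEC @result = sp_getapplock
    @Resource    = 'MyRow:42',      -- arbitrary name (hypothetical here)
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',   -- released automatically at COMMIT/ROLLBACK
    @LockTimeout = 5000;            -- wait up to 5 seconds

IF @result >= 0
BEGIN
    -- ... do the work this lock protects ...
    PRINT 'lock held';
END

COMMIT TRANSACTION; -- the application lock is released here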

Related

Deadlock in SQL Server 2008 | INSERT (from application, EF) & SELECT (from stored procedure) statement working simultaneously

The program that inserts into my two tables is written with Entity Framework, and the data is read via SELECT in a stored procedure at the SQL Server level. At some point the SELECT and INSERT run simultaneously, and when that happens I get the error below:
Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How can I get rid of this deadlock? I need the best way to solve it.
Option 1: Implement NOLOCK? What would be the pros and cons of that here?
Option 2: Is there any way to extend the deadlock wait time, so that a process waits longer for the resource than it usually does? If yes, how?
Option 3: Anything else you would suggest?
Thanks,
Rahuul Dutta
A deadlock cannot be cured by increasing the lock timeout. The resources are locked in such a way that the situation cannot resolve itself, no matter how much time you give it. A special background process in SQL Server, the deadlock monitor, runs periodically (rather often, actually), and if it identifies a deadlock it immediately kills the 'lighter' transaction.
Deadlocks are usually dealt with in one of several ways: by providing an alternative data access path for the SELECT query (i.e. adding a nonclustered index), by minimizing the transaction duration (better indexing again), or by using one of the snapshot isolation levels.
The least-effort solution here is setting the read committed snapshot isolation level. That way the SELECT query will not take any shared locks on data, yet will still read only committed data, which is a huge plus over using the NOLOCK hint (or the read uncommitted isolation level).
You can change your transaction isolation level. The best option for deadlocks would be snapshot isolation, I think. If you cannot turn this option on in your server, or if you run into I/O issues, read committed snapshot should still prevent deadlocks arising from read/write dependencies. Make sure that you don't run into anomalies: read committed will still allow non-repeatable reads and phantom reads.
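A minimal sketch of moving a reader onto snapshot isolation; the database and table names are hypothetical:

-- One-time setup: allow snapshot isolation in the database.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In the reading session: reads see a consistent committed snapshot
-- and take no shared locks, so they cannot deadlock with writers.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders WHERE OrderId = 42;  -- hypothetical table
COMMIT TRANSACTION;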
First of all, thanks a lot for your precious answers!
With the help of your answers, some research and a call with Microsoft DBA team, I have got the following solution.
Solution: we have to change the database property to Read Committed Snapshot. This helps SELECT statements avoid being blocked when other sessions hold locks on the same table.
- To support this, the database keeps row versions in tempdb, so we must have sufficient space in tempdb. If possible we should also move tempdb to a separate disk to split the I/O; this will improve performance.
The following article explains how to enable the Read Committed Snapshot property of the database:
http://technet.microsoft.com/en-us/library/ms175095(v=SQL.105).aspx
Alternatively, we can change this property through SSMS by right-clicking the database > Options > Miscellaneous > Is Read Committed Snapshot On, and setting its value to True.
We do not have to restart the server to enable this property, but note that 'When setting the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The database does not have to be in single-user mode.'
This means we need a small amount of downtime on the application side.
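The equivalent T-SQL is a one-liner; a sketch, assuming a database named MyDatabase:

-- WITH ROLLBACK IMMEDIATE closes other connections rather than waiting forever.
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;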
Hope the notes above help you all too. :)
Thanks,
Rahuul Dutta

Kill delete sessions in Informix

By mistake, I ran this query in Informix from a dbaccess session:
DELETE FROM table;  -- without a WHERE condition
Realizing my mistake (I should have used TRUNCATE), I did something else foolish:
I killed the dbaccess session. But the table is now exclusively locked and I am not able to perform any action on it.
What steps can I take to remove the lock and truncate the table?
1) Restart the Informix server
2) onmode -z <sessionid>  # Does not work.
I see a huge number of sessions created for the DELETE query.
Is there any other easy way to fix this issue?
Assuming that you are not using Informix SE...
Is the database logged? If so, did you run the statement inside an explicit (BEGIN WORK) transaction?
Analysis
If you've got an unlogged database, then each row that the server has deleted is gone. Stopping the DELETE will not undo the partially complete changes; using an unlogged database means that you have chosen not to have guaranteed statement-level recovery.
If you've got a regular logged database and no explicit transaction, then the statement is probably still running even after the DB-Access session is terminated. Because it is running as a singleton statement, it will complete and commit. Until it does that, if you forcibly take the server down, fast recovery will roll back the statement (transaction). Given that I see '5 hours ago', I fear your chances of taking the server down in time are now limited.
If you've got a logged database with an explicit transaction, or a MODE ANSI database (where you're always in a transaction), then when the DELETE statement completes, the server will wait for the COMMIT, realize that the session's connection is terminated, and roll back the uncommitted work.
Recovery
If you've got an unlogged database, you can only recover to your last archive. Because it is unlogged, you can't recover it from the logical logs (but other databases in the same instance that are logged can be recovered up to the last logical log).
If you've got a logged database and you can take the server down — preferably under control, but crashing it if necessary — before the DELETE statement completes, then fast recovery will deal with the issue.
If the DELETE has completed and committed and you have good backups, you can consider a point-in-time restore of the database. The database will be offline while you do that (but if the data from the table is all missing, your DB is not going to be functional for a while anyway).
If none of these scenarios applies, then you should contact IBM Technical Support, who may be able to perform minor (and not so minor) miracles.
But, as you may have noticed, a lot depends on the type of database (unlogged, logged, MODE ANSI) and whether there was an explicit transaction in effect when you ran the statement.
The trouble with DBMSes is that they're trusting creatures. If you're authorized to do an operation, they assume that you intend to do what you say you want to do, and they go ahead and do it to the best of their ability. When you don't ask for what you intended to request, life gets tricky; the DBMS still trusts you and does what you actually asked it to do.

Should I run VACUUM in transaction or after?

I have a mobile application sync process whose transaction does a lot of modification on the database. Since this runs on a mobile device, I need to issue a VACUUM to compact the database.
I am wondering when I should issue the VACUUM:
- in the transaction, as the final statement, or
- after the transaction?
I am currently asking about SQLite, but if it's different for other engines (PostgreSQL, MySQL, Oracle, SQL Server), let me know in the answers.
Like it or not, when using PostgreSQL you can't run VACUUM in a transaction, as stated in the manual:
VACUUM cannot be executed inside a transaction block.
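A minimal sketch of the restriction:

-- PostgreSQL rejects VACUUM inside an explicit transaction block:
BEGIN;
VACUUM;   -- ERROR:  VACUUM cannot run inside a transaction block
ROLLBACK;

-- Run it on its own, after the work has committed:
VACUUM;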
I would say outside of the transaction. Certainly in PostgreSQL, VACUUM is designed to remove the "dead" tuples (i.e. the old row when a record has been changed or deleted.)
If you're running VACUUM in a transaction that has modified records, these dead rows won't have been marked for deletion.
Depending on which type of VACUUM you're doing, it may also require a table lock which will block if there are other transactions running, so you could potentially end up in a deadlock situation (transaction 1 is blocked waiting for a table lock to do its VACUUM, transaction 2 gets blocked waiting for a row to be released that transaction 1 has locked.)
I'd also recommend that this isn't done from the application (perhaps run it as a scheduled task instead), as it can take a while to complete and can negatively affect the speed of other queries.
As for SQL Server, there is no VACUUM; what you're looking for is shrink. You can turn on auto-shrink in 2005, which will automatically reclaim space when the server decides to, or issue a DBCC statement to shrink the database and log file, but this depends on your backup routine and strategy at the per-database level.
VACUUM is like defrag: it's good to run if you've recently deleted a lot of data, or maybe after you've inserted a lot, but by no means should you run it in every transaction. It's slower than almost any other database command and is more of a maintenance task.
We sometimes add/remove the majority of our db file, so a VACUUM is then a good idea, but I still would not consider it part of the same transaction that did the work.
How frequently is the transaction run?
It's really a daily sort of process, not a query-by-query process; but if you use it without FULL, it can be used in a transaction, since it doesn't take an exclusive lock.
If you're going to do it, it should be outside the transaction, since it is independent of the transaction's data integrity.
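In SQLite terms the consensus looks like this; a minimal sketch in which the statement inside the transaction stands in for the sync's real work (table name is hypothetical):

-- Do the sync's modifications inside the transaction...
BEGIN;
DELETE FROM sync_queue WHERE processed = 1;  -- hypothetical table
COMMIT;

-- ...then compact outside it. In SQLite, VACUUM cannot run inside
-- a transaction, and it rewrites the whole database file.
VACUUM;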

Default SQL Server IsolationLevel Changes

We have a customer that's been experiencing some blocking issues with our database application. We asked them to run a Blocked Process Report trace, and the trace they gave us shows blocking occurring between a SELECT and an UPDATE operation. The trace files show the following:
The same SELECT query is being executed at different isolation levels. One trace shows a Serializable isolation level, while a later trace shows a RepeatableRead isolation level. We do not use an explicit transaction while executing the query.
The UPDATE query is being executed at the RepeatableRead isolation level but is being blocked by the SELECT query. This is expected, as our updates are wrapped in an explicit transaction with an isolation level of RepeatableRead.
So basically we're at a loss as to why the isolation level of the SELECT query would not be the default ReadCommitted, and, even more confusingly, why the isolation level of the query would change over time. Only one customer is seeing this behaviour, so we suspect it may be a database configuration issue.
Any ideas?
Thanks in advance,
Graham
In your scenario, I would recommend explicitly setting the isolation level to snapshot. That will prevent reads from getting in the way of writes (inserts and updates) by avoiding locks, yet those reads would still be "good" reads (i.e. no dirty data; it is not the same as NOLOCK).
Generally, where I have locking issues with my queries, I manually control the locks applied, e.g. I do updates with row-level locks to avoid page/table-level locking, and set my reads to READPAST (accepting that I may miss some data; in some scenarios that might be OK).
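A sketch of that manual lock control, with hypothetical table and column names:

-- Writer: ask for row-level locks instead of page/table locks.
UPDATE dbo.Jobs WITH (ROWLOCK)
SET Status = 'Processing'
WHERE JobId = 42;

-- Reader: skip rows another session has locked rather than block on them.
SELECT JobId, Status
FROM dbo.Jobs WITH (READPAST)
WHERE Status = 'Queued';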
EDIT: Combining all the comments into the answer.
As part of the optimisation process, SQL Server avoids taking committed-read locks on a page that it knows hasn't changed, and automatically falls back to a lesser locking strategy. In your case, SQL Server drops from a serializable read to a repeatable read.
Q: Thanks for that useful info regarding dropping isolation levels. Can you think of any reason it would use the Serializable isolation level in the first place, given that we don't use an explicit transaction for the SELECT? Our understanding was that the implicit transaction would use ReadCommitted.
A: By default, SQL Server will use Read Committed if that is your default isolation level, BUT if you do not additionally specify a locking strategy in your query, you are basically saying to SQL Server "do what you think is best, but my preference is Read Committed". Since SQL Server is free to choose, it does so in order to optimise the query. (The optimisation algorithm in SQL Server is very complex, and I do not fully understand it myself.) Not explicitly executing within a transaction does not, AFAIK, affect the isolation level that SQL Server uses.
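As a quick sanity check while debugging this, you can ask a session what it is actually running under (standard T-SQL, no assumptions beyond a connected session):

DBCC USEROPTIONS; -- the 'isolation level' row reports the session's effective level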
Q: One last thing, does it seem reasonable that SQL Server would increase the Isolation Level (and presumably the number of locks required) to optimise the query? I'm also wondering whether the reuse of a pooled connection would affect this if it inherited the last used Isolation Level?
A: Sql server will do that as part of a process called "Lock Escalation". From http://support.microsoft.com/kb/323630, i quote: "Microsoft SQL Server dynamically determines when to perform lock escalation. When making this decision, SQL Server takes into account the number of locks that are held on a particular scan, the number of locks that are held by the whole transaction, and the memory that is being used for locks in the system as a whole. Typically, SQL Server's default behavior results in lock escalation occurring only at those points where it would improve performance or when you must reduce excessive system lock memory to a more reasonable level. However, some application or query designs may trigger lock escalation at a time when it is not desirable, and the escalated table lock may block other users".
Although lock escalation is not exactly the same thing as changing the isolation level a query runs under, this surprises me, because I would not have expected SQL Server to take more locks than the default isolation level permits.
More info on why SQL would take more locks by escalating: this is incorrect; escalating reduces (not increases) the number of locks required. A table lock is a single lock versus all the page or row locks required to do the same at a lower level. Lock escalation is always done for one reason: it's more efficient to take a higher-level lock than to lock all the lower-level objects.
For example, perhaps there is no index available to lock efficiently against. I.e., if you take a count with UPDLOCK of all records with a year of 2010 in some field, and there is no index on that date field, this requires a row lock on each record in 2010, which is not efficient if many records are hit, and a page lock will not help either, since the rows are presumably distributed randomly across pages; therefore SQL takes a table lock. Moreover, SQL MUST also prevent other records from changing to being in the year 2010 while the UPDLOCK is held, and with no index on this field to take a range lock on, SQL has NO CHOICE but to take a table lock to prevent this from happening. This latter point is one often missed by those new to optimization: the realization that SQL must also "protect" the integrity of the queries already executed in the transaction.
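A sketch of that year-2010 example; the table, column, and index names are hypothetical:

-- With no index on OrderYear, this locked scan cannot take an efficient
-- key-range lock, so SQL Server may escalate to a table lock.
SELECT COUNT(*)
FROM dbo.Orders WITH (UPDLOCK, HOLDLOCK)
WHERE OrderYear = 2010;

-- An index gives the engine a range to lock instead of the whole table:
CREATE INDEX IX_Orders_OrderYear ON dbo.Orders (OrderYear);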

How can I get a deadlock in this situation?

In my client application I have a method like this (in practice it's more complex, but I've left the main part):
public void btnUpdate_Click(...)
{
    ...
    dataAdapter.Update(...);
    ...
    dataAdapter.Fill(...); // here I got exception one time
}
The exception I found in the logs says "Deadlock found when trying to get lock; try restarting transaction". I have seen this exception only once; it hasn't recurred.
As I understand it, the DataAdapter.Fill() method executes only a SELECT query. I don't start an explicit transaction, and I have autocommit enabled.
So how can I get a deadlock on a simple SELECT query that is not part of a bigger transaction?
As I understand it, for a deadlock two transactions should be waiting for each other. How is that possible with a single SELECT not inside a transaction? Maybe it's a bug in MySQL?
Thank you in advance.
You are right that it takes two transactions to make a deadlock. That is to say, no statement or statements within a single transaction can deadlock with other statements within the same transaction.
But it only takes one transaction to receive a report of a deadlock. How do you know that the transaction in which you see the deadlock reported is the only transaction being executed in the database? Isn't there other activity going on in this database?
Also, your statements "I don't make an explicit transaction" and "... which is not a part of bigger transaction" suggest that you may not realize that every SQL statement executed is always in an implicit transaction, even if you do not explicitly start one.
Most databases have reporting mechanisms specifically designed to track, report and/or log instances of deadlocks for diagnostic purposes. In SQL Server there is a trace flag that causes a log entry with much detail about each deadlock that occurs, including details about each of the two transactions involved: what SQL statements were being executed, what objects in the database were being locked, and why the lock could not be obtained. I'd guess MySQL has a similar diagnostic tool; find out what it is and turn it on, so that the next time this occurs you can find out exactly what happened.
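For MySQL/InnoDB specifically, a minimal sketch of those diagnostics (the variable requires a reasonably recent server):

-- Shows, among other things, the latest detected deadlock with both
-- transactions' statements and the locks they were waiting on:
SHOW ENGINE INNODB STATUS;

-- Where supported, log every deadlock to the error log instead of
-- keeping only the most recent one:
SET GLOBAL innodb_print_all_deadlocks = ON;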
You can deadlock a simple SELECT against other statements, like an UPDATE. On my blog I have an example explaining a deadlock between two well-tuned statements: Read/Write deadlock. While the example is SQL Server specific, the principle is generic. I don't have enough knowledge of MySQL to claim this is necessarily the case or not, especially in light of the various engines MySQL can deploy, but nonetheless a simple SELECT can be the victim of a deadlock.
I haven't looked into how MySQL transactions work, but this is based on how MSSQL transactions work:
If you are not using a transaction, each query is a transaction by itself. Otherwise you would get a mess every time an update failed in the middle.
The reason for the deadlock might be lock escalation. The database tries to lock as little as possible for each query, so it starts out by locking only the single rows affected. When most of the rows in a page are locked by the query, it might decide that escalating to a lock on the entire page would be better, which may have the side effect of locking rows not otherwise affected by the query.
If a SELECT query and an UPDATE query are both trying to escalate locks on the same table, they may cause a deadlock even though only a single table is involved.
This is unlikely to be the cause in this particular case, but it is supplemental to the other answers in terms of limiting their scope, recorded for posterity in case someone finds it useful.
MySQL can, in rare cases, have single statements periodically deadlock against themselves. This seems to happen particularly on bulk inserts, and the issue is almost certainly a deadlock between different threads relating to the operation; I would expect bulk updates to have the same problem. In the past, when faced with this sort of issue, I have generally just cut down the number of rows being inserted (or updated) in a single statement. You won't usually get a deadlock error when trying to obtain the lock in this case, but other messages.
A colleague of mine and I were discussing similar problems in MS SQL Server (so this is not unique to MySQL!), and he pointed out that the solution there is to tell the server not to parallelize the insert or update. The problems here are spinlock-related deadlocks, not logical lock deadlocks in the RDBMS.