SQL Server 2008 Express locking

OK so I have read a fair amount about SQL Server's locking stuff, but I'm struggling to understand it all.
What I want to achieve is this:
I need to be able to lock a row when user A SELECTs it
If user B then tries to SELECT it, my WinForms .NET app needs to set all the controls on the relevant form to be disabled, so the user can't try to update. It would also be nice if I could show a message box for user B, stating that user A is the person using that row.
So basically User B needs to be able to SELECT the data, but when they do so, they should also get a) whether the record is locked and b) who has it locked.
I know people are going to say I should just let SQL Server deal with the locking, but I need user B to know that the record is in use as soon as they SELECT it, rather than finding out when they UPDATE - by which time they may have entered data into the form, giving me inconsistency.
Also any locks need to allow SELECTs to still happen - so when user B does his SELECT, rather than just being thrown an exception and receiving no/incomplete data, he should still get the data, and be able to view it, but just not be able to update it.
I'm guessing this is pretty basic stuff, but there's so much terminology involved with SQL Server's locking that I'm not familiar with that it makes reading about it pretty difficult at the moment.
Thanks

To create this type of 'application lock', you may want to use a table called Locks and insert key, userid, and table names into it.
When your select comes along, join into the Locks table and use the presence of this value to indicate the record is locked.
I would also recommend adding a 'RowVersion' column to the table you wish to protect. This column will help you identify whether you are updating or querying a row that has changed since you last selected it.
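A minimal sketch of that approach, assuming the protected table has an integer primary key (all names here are illustrative):

CREATE TABLE Locks
(
    TableName sysname NOT NULL,
    RecordId INT NOT NULL,
    LockedBy NVARCHAR(50) NOT NULL,
    LockedAt DATETIME NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_Locks PRIMARY KEY (TableName, RecordId)
);

-- User B's SELECT always returns the row, plus who (if anyone) has it locked
SELECT t.*, l.LockedBy
FROM MyTable t
LEFT JOIN Locks l
    ON l.TableName = 'MyTable' AND l.RecordId = t.Id
WHERE t.Id = @Id;

A non-null LockedBy tells the form to disable its controls and name the other user.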

This isn't really what SQL Server locking is for - ideally you should only be keeping a transaction (and therefore a lock) open for the absolute minimum needed to complete an atomic operation against that database - you certainly shouldn't be holding locks while waiting for user input.
You would be better served keeping track of these sorts of locks yourself by (for example) adding a locked bit column to the table in question along with a locked_by varchar column to keep track of who has the row locked.
The first user should UPDATE the row to indicate that the row is locked and who has it locked:
UPDATE MyTable
SET locked = 1,
    locked_by = @me
WHERE id = @id   -- the row being edited (key column name is illustrative)
  AND locked = 0
The locked = 0 check is there to protect against potential race conditions and make sure that you don't update a record that someone else has already locked.
This first user then does a SELECT to return the data and ensure that they did really manage to lock the row.
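Putting the two steps together, a minimal sketch that uses @@ROWCOUNT to test whether the lock was won (the id key column is illustrative):

UPDATE MyTable
SET locked = 1, locked_by = @me
WHERE id = @id AND locked = 0;

IF @@ROWCOUNT = 1
    SELECT * FROM MyTable WHERE id = @id;          -- we hold the lock; load the row for editing
ELSE
    SELECT locked_by FROM MyTable WHERE id = @id;  -- someone else has it; show who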

How do you handle stale data with multiple threads?

Let's say I have the following pseudocode:
SELECT count(*) FROM users WHERE email = 'bob@gmail.com'
>>>> MARKER A
if (count > 0) return;
else INSERT INTO users (email) VALUES ('bob@gmail.com')
So essentially only insert the email if it doesn't exist already. I understand there's probably some sort of INSERT IF NOT EXISTS query I could use, but let's say we use this example.
So if the code above runs on thread A, and thread B actually inserts 'bob@gmail.com' into users at MARKER A, then thread A has "stale data" and will try to insert 'bob@gmail.com', thinking the count is still 0, but in fact it is now 1. This will error out since we have a unique index on the email.
What is the tool I should use to prevent this issue? From my reading about transactions, they basically make a set of operations atomic, so the code above will execute completely or not at all. It will NOT ensure the users table is locked against updates, correct? So I can't just wrap the code above in a transaction and make it thread-safe?
Should I implement application-level locking? Should I ensure that when this operation occurs, it must acquire the lock to access the users table so that no other thread can make changes to it? I feel that locking the entire table is a performance hit I want to avoid.
Checking before inserting is a known anti-pattern on multi-threaded applications. Do not even try it.
The right way of doing it is letting the database take care of it. Add a UNIQUE constraint on the column, as in:
alter table users add constraint uq1 unique(email);
Just try to insert the row into the database. If it succeeds, all is good; if it fails, then some other thread has already inserted the row.
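If you'd prefer not to treat the duplicate as an application-level exception, a T-SQL sketch that swallows only the duplicate-key errors (2601 and 2627 are SQL Server's duplicate-key error numbers; THROW requires SQL Server 2012+):

BEGIN TRY
    INSERT INTO users (email) VALUES ('bob@gmail.com');
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2601, 2627)
        PRINT 'Some other thread already inserted this email';  -- expected: the row exists
    ELSE
        THROW;  -- anything else is a real failure
END CATCH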
Alternatively, you could issue a LOCK on the whole table. That would also work, but the performance of your application would become horrible.

SQL Server - get last updated time for table

I need to know when the data in a table was last modified (data inserted, created, deleted). The common answer is to extract the data from sys.dm_db_index_usage_stats, but I don't have access to that table and as I understand it, access to that is a server-level permission, and since this is on a corporate shared server, the odds of the IT department granting me access is up there with pigs flying.
Is there a way to get this information, that does not require greatly elevated privileges?
Update #1
Some additional information: what I am doing is caching the contents of this table locally, and I need to know when to update my local cache. This means:
I don't need an actual timestamp, just any value that I can compare against a locally-cached value that tells me "things have changed, update the cache"
If this value changes too often (e.g. it gets reset every time they restart the server, which happens extremely rarely) that's OK; it just means I do an extra cache update that I didn't actually need to do
Update #2
I should have done this early on, but I just assumed I would be able to create tables as needed ... but I'm not allowed to. Permission not granted. So neither of the proposed methods will work :-(
Here's my new thought: I can call
select checksum_agg(binary_checksum(*)) from TheirTable
and get a checksum of the entire remote table. Then, if the checksum changes, I know the table has changed and I can update my local cache. I've seen the problems with CHECKSUM, but using HASHBYTES sounds like it would be much more complicated and much slower.
This is working; the problem is that when the checksum changes, I have to reload the entire cache. My table has so many rows that returning a checksum per row takes an unacceptably long time. Is there a way to use the OVER clause to get, say, 10 checksums - the first checksum for the first tenth of the rows, and so on?
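One way to get chunked checksums without OVER is to bucket rows by their key and aggregate per bucket; a sketch assuming an integer UniqueID key (names are illustrative):

SELECT UniqueID % 10 AS bucket,
       CHECKSUM_AGG(BINARY_CHECKSUM(*)) AS bucket_checksum
FROM dbo.TheirTable
GROUP BY UniqueID % 10;

When only one bucket's checksum changes, you re-fetch just that tenth of the rows. Modulo buckets (rather than contiguous tenths) also keep a row's bucket stable as new rows arrive.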
If you can modify their schema, then you should be able to add a trigger.
Once you add a lastmodified column, you can use this trigger to get the time updated any time the record changes:
CREATE trigger [dbo].[TR_TheirTable_Timestamp]
on [dbo].[TheirTable] for update
as
begin
    update dbo.TheirTable
    set lastmodified = getdate()
    from Inserted
    where dbo.TheirTable.UniqueID = Inserted.UniqueID
end
The reason I do it only for update, and not insert, is that I can see that a record is new without a timestamp, whereas for a modified record I need a time to compare against my last update. If you want it on insert as well, then
on [dbo].[TheirTable] for insert,update
would work
If you just wanted to know when the table was updated, then the trigger could write to another table with the tablename and date, and you wouldn't have to modify their schema
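A sketch of that variant, assuming one small log table may be created (all names are illustrative):

CREATE TABLE dbo.TableChangeLog
(
    TableName sysname NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT getdate()
);
GO
CREATE TRIGGER [dbo].[TR_TheirTable_LogChange]
ON [dbo].[TheirTable] FOR INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO dbo.TableChangeLog (TableName) VALUES ('TheirTable');
END

The cache check then becomes a cheap SELECT MAX(ChangedAt) FROM dbo.TableChangeLog WHERE TableName = 'TheirTable'.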

SQL unique field: concurrency bugs? [duplicate]

I have a DB table with a field that must be unique. Let's say the table is called "Table1" and the unique field is called "Field1".
I plan on implementing this by performing a SELECT to see if any Table1 records exist where Field1 = @valueForField1, and only updating or inserting if no such records exist.
The problem is, how do I know there isn't a race condition here? If two users both click Save on the form that writes to Table1 (at almost the exact same time), and they have identical values for Field1, isn't it possible that the following would happen?
1. User1 makes a SQL call, which performs the select operation and determines there are no existing records where Field1 = @valueForField1.
2. User1's process is preempted by User2's process, which also finds no records where Field1 = @valueForField1, and performs an insert.
3. User1's process is allowed to run again, and inserts a second record where Field1 = @valueForField1, violating the requirement that Field1 be unique.
How can I prevent this? I'm told that transactions are atomic, but then why do we need table locks too? I've never used a lock before and I don't know whether or not I need one in this case. What happens if a process tries to write to a locked table? Will it block and try again?
I'm using MS SQL 2008R2.
Add a unique constraint on the field. That way you won't have to SELECT; you will only have to insert. The first user will succeed; the second will fail.
On top of that, you may make the field auto-incremented so you won't have to worry about filling it, or you may add a default value, again so you don't have to fill it.
Some options would be an auto-incremented INT field, or a unique identifier.
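A sketch combining both suggestions (the column types and names are illustrative):

CREATE TABLE Table1
(
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Field1 NVARCHAR(100) NOT NULL,
    CONSTRAINT UQ_Table1_Field1 UNIQUE (Field1)
);

The INSERT then never needs a preceding SELECT: it either succeeds or fails with a duplicate-key error that your application can catch.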
You can add a unique constraint. Example from http://www.w3schools.com/sql/sql_unique.asp:
CREATE TABLE Persons
(
P_Id int NOT NULL UNIQUE
)
EDIT: Please also read Martin Smith's comment below.
jyparask has a good answer on how you can tackle this specific problem. However, I would like to elaborate on your confusion over locks, transactions, blocking, and retries. For the sake of simplicity, I'm going to assume transaction isolation level serializable.
Transactions are atomic. The database guarantees that if you have two transactions, then all operations in one transaction occur completely before the next one starts, no matter what kind of race conditions there are. Even if two users access the same row at the same time (multiple cores), there is no chance of a race condition, because the database will ensure that one of them will fail.
How does the database do this? With locks. When you select a row, SQL Server will lock the row, so that all other clients will block when requesting that row. Block means that their query is paused until that row is unlocked.
The database actually has a couple of things it can lock. It can lock the row, or the table, or somewhere in between. The database decides what it thinks is best, and it's usually pretty good at it.
There is never any retrying. The database will never retry a query for you; you need to explicitly tell it to retry a query. The reason is that the correct behavior is hard to define: should a query retry with the exact same parameters, or should something be modified? Is it still safe to retry the query? It's much safer for the database to simply throw an exception and let you handle it.
Let's address your example. Assuming you use transactions correctly and do the right query (Martin Smith linked to a few good solutions), then the database will create the right locks so that the race condition disappears. One user will succeed, and the other will fail. In this case, there is no blocking, and no retrying.
In the general case with transactions, however, there will be blocking, and you get to implement the retrying.
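For completeness, a common T-SQL pattern for the check-then-insert case is to hold a key-range lock across both steps; a sketch (table and column names as in the question):

BEGIN TRAN;

IF NOT EXISTS (SELECT 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
               WHERE Field1 = @valueForField1)
    INSERT INTO Table1 (Field1) VALUES (@valueForField1);

COMMIT;

The UPDLOCK and HOLDLOCK hints make the existence check block a concurrent copy of itself until the transaction commits, so only one of two racing users can insert; the other sees the row and skips.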

PostgreSQL and locking

Hopefully some smarter DBAs than I can help me find a good solution for what I need to do.
For the sake of discussion, let's assume I have a table called 'work' with some number of columns, one of which represents ownership of that row of work by a given client. The scenario is that I'll have two clients connected and polling the table for work to be done. When a row (or some number of rows) shows up, the first client that selects the rows will also update them to claim ownership, and that update will remove those rows from being returned to any other client's selects. My question is: in this scenario, what sort of locking can I use to prevent two clients from hitting the table at the same time and both being returned the same rows via the select?
The UPDATE statement with RETURNING clause is the way to do this.
UPDATE work
SET ownership = 'me'
WHERE ownership IS NULL
RETURNING *;  -- or an explicit column list
REFERENCES:
Similar Question
Documentation
My question is, in this scenario, what sort of locking can I use to prevent 2 clients from hitting the table at the same time and both of them being returned the same rows via the select?
No locking needed here.
In the UPDATE, simply specify that you only want the script to take ownership of the task if the owner is still null (assuming that's how you flag unassigned tasks). This should work:
UPDATE foo SET owner = ? WHERE id = ? AND owner IS NULL
If the number of modified rows is equal to the number you expected (or a RETURNING clause returns results, as suggested by @Ketema), then you successfully grabbed ownership.
Fake edit because I noticed your comment mere moments before submitting this answer:
eg: 2 clients issuing that query at the same time, they have no chance of manipulating the same rows?
Correct. You might want to read up on MVCC. Running these statements outside of a transaction will do the right thing. Behavior inside a transaction will be different.
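A sketch of the whole claim step under the default READ COMMITTED level, assuming an integer id key and a nullable owner column (names are illustrative):

UPDATE work
SET owner = 'client-1'
WHERE id IN (
    SELECT id FROM work
    WHERE owner IS NULL
    ORDER BY id
    LIMIT 10
)
AND owner IS NULL   -- re-evaluated after any blocking, so two clients never claim the same row
RETURNING id;

If a competing client grabs some of those rows first, this statement simply returns fewer ids; the client processes whatever it got and polls again.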

SQL, selecting and updating

I am trying to select 100s of rows from a DB that contains 100000s of rows, and then update those rows afterwards.
The problem is that I don't want to go to the DB twice for this purpose, since the update only marks those rows as "read".
Is there any way I can do this in Java using plain JDBC? (Hopefully without using stored procedures.)
Update: OK, here is some clarification.
There are a few instances of the same application running on different servers. They all need to select 100s of "UNREAD" rows sorted by the creation_date column, read the blob data within them, write it to a file, and FTP that file to some server. (I know, prehistoric, but requirements are requirements.)
The read-and-update part is there to ensure each instance gets a different set of data. (They must be taken in order, so tricks like odds and evens won't work. :/)
We select the data FOR UPDATE, the data transfers over the wire (we wait and wait), and then we update the rows as "READ" and release the lock for reading. This entire thing takes too long. By reading and updating at the same time, I would like to reduce the lock time (from the moment we SELECT ... FOR UPDATE to the actual UPDATE) so that using multiple instances would increase the rows read per second.
Still have ideas?
It seems to me there might be more than one way to interpret the question here:
1. You are selecting the rows for the sole purpose of updating them, not reading them.
2. You are selecting the rows to show to somebody, and marking them as read either one at a time or all as a group.
3. You want to select the rows and mark them as read at the time you select them.
Let's take Option 1 first, as that seems to be the easiest. You don't need to select the rows in order to update them, just issue an update with a WHERE clause:
update table_x
set read = 'T'
where date > sysdate-1;
Looking at option 2, you want to mark them as read when a user has read them (or a downstream system has received it, or whatever). For this, you'll probably have to do another update. If you query for the primary key, in addition to the other columns you'll need in the first select, you will probably have an easier time of updating, as the DB won't have to do table or index scans to find the rows.
In JDBC (Java) there is a facility to do a batch update, where you execute a set of updates all at once. That's worked out well when I need to perform a lot of updates that are of the exact same form.
Option 3, where you want to select and update all in one shot. I don't find much use for this, personally, but that doesn't mean others don't. I suppose some kind of stored procedure would reduce the round trips. I'm not sure what db you are working with here and can't really offer specifics.
Going to the DB isn't so bad. If you aren't returning anything 'across the wire', then an update shouldn't do you too much damage, and it's only a few hundred thousand rows. What is your worry?
If you're doing a SELECT in JDBC and iterating over the ResultSet to UPDATE each row, you're doing it wrong. That's an (n+1) query problem that will never perform well.
Just do an UPDATE with a WHERE clause that determines which of those rows needs to be updated. It's a single network round trip that way.
Don't be too code-centric. Let the database do the job it was designed for.
Can't you just use the same connection without closing it?