T-SQL Concurrency Issue: Auction / Bidding System

I am currently developing an online auction system using ASP.NET 3.5 and SQL Server 2008. I have reached the point in development where I need to ensure that my system sensibly handles the concurrency issue which may arise in the following case:
Two people - Geraldine and John - want to bid on the same auction item, which is currently going for £50. Geraldine enters a bid of £55 and John enters a bid of £52. The system now has two copies of the page 'submit_bid.aspx' running; each copy of the page checks that its bid is high enough, they both see that it is, and they submit the bids. If Geraldine's bid goes through first, then the auction item price is £55, and a moment later it is replaced by John's bid of £52.
What I need to do is lock the auction item row until the current bid price has been updated, before allowing any other bidder to check the current bid price and place a new bid.
My question is: what is the best practice way for doing this using T-SQL and / or ADO.NET?
I currently have an AuctionItem table which has the following fields (plus other fields I haven't included for brevity):
AuctionItemID INT
CurrentBidPrice MONEY
CurrentBidderID INT
I have performed some research and come up with the following T-SQL (pseudocode-ish):
DECLARE @Bid MONEY
DECLARE @AuctionItemID INT
DECLARE @CurrentBidPrice MONEY

BEGIN TRANSACTION

SELECT @CurrentBidPrice = CurrentBidPrice
FROM AuctionItem WITH (HOLDLOCK, ROWLOCK)
WHERE AuctionItemID = @AuctionItemID

/* Do checking for end of Auction, etc. */

IF (@Bid > @CurrentBidPrice)
BEGIN
    UPDATE AuctionItem
    SET CurrentBidPrice = @Bid
    WHERE AuctionItemID = @AuctionItemID
END

COMMIT TRANSACTION
I have also read that if I include SET LOCK_TIMEOUT I can reduce the number of failed concurrent updates. For example:
SET LOCK_TIMEOUT 1000
...will make a concurrent update wait for 1000 milliseconds for a lock to be released. Is this best practice?
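For reference, this is roughly how I imagine wiring SET LOCK_TIMEOUT and TRY/CATCH around the transaction above (error 1222 is SQL Server's 'Lock request time out period exceeded'; the retry message is purely illustrative):

SET LOCK_TIMEOUT 1000  -- wait at most 1000 ms for a conflicting lock

BEGIN TRY
    BEGIN TRANSACTION
    -- the SELECT ... WITH (HOLDLOCK, ROWLOCK) and conditional UPDATE shown above
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1222  -- 'Lock request time out period exceeded'
        SELECT 'Could not obtain the lock in time; please retry the bid.'
END CATCH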

Source: "chrisrlong", http://www.dbasupport.com/forums/archive/index.php/t-7282.html
Here are the methodologies used to handle multi-user concurrency issues:
Do Nothing (Undesirable)
User 1 reads a record
User 2 reads the same record
User 1 updates that record
User 2 updates the same record
User 2 has now over-written the changes that User 1 made. They are completely gone, as if they never happened. This is called a 'lost update'.
Pessimistic locking (Lock the record when it is read.)
User 1 reads a record and locks it by putting an exclusive lock on the record (FOR UPDATE clause)
User 2 attempts to read and lock the same record, but must now wait behind User 1
User 1 updates the record (and, of course, commits)
User 2 can now read the record with the changes that User 1 made
User 2 updates the record complete with the changes from User 1
The lost update problem is solved. The problem with this approach is concurrency. User 1 is locking a record that they might not ever update. User 2 cannot even read the record because they want an exclusive lock when reading as well. This approach requires far too much exclusive locking, and the locks live far too long (often across user control - an absolute no-no). This approach is almost never implemented.
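To put that in SQL Server terms, a minimal sketch of the pessimistic pattern might look like the following (the UPDLOCK hint stands in for the FOR UPDATE clause mentioned above, since T-SQL has no FOR UPDATE outside cursors; the table and values are illustrative):

BEGIN TRANSACTION

-- Take an update lock when reading, so a second reader blocks here
-- instead of proceeding with stale data.
SELECT CurrentBidPrice
FROM AuctionItem WITH (UPDLOCK, ROWLOCK)
WHERE AuctionItemID = 10

-- ... validate the new bid against the value just read ...

UPDATE AuctionItem
SET CurrentBidPrice = 55
WHERE AuctionItemID = 10

COMMIT TRANSACTION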
Use Optimistic Locking.
Optimistic locking does not use exclusive locks when reading. Instead, a check is made during the update to make sure that the record has not been changed since it was read. Generally this is done by adding a version column (INT/numeric, holding a value that is incremented every time the row is updated). For example:
UPDATE YOUR_TABLE
SET bid = 52,
    version = version + 1
WHERE id = 10
  AND version = 6
An alternate option is to use a timestamp rather than a numeric column. This column is used for no other purpose than implementing optimistic concurrency. It can be a number or a date. The idea is that it is given a value when the row is inserted. Whenever the record is read, the timestamp column is read as well. When an update is performed, the timestamp column is checked: if it has the same value at UPDATE time as it did when the row was read, then all is well, the UPDATE is performed and the timestamp is changed. If the timestamp value is different at UPDATE time, then an error is returned to the user - they must re-read the record, re-make their changes, and try to update the record again.
User 1 reads the record, including the timestamp of 21
User 2 reads the record, including the timestamp of 21
User 1 attempts to update the record. The timestamp in hand (21) matches the timestamp in the database (21), so the update is performed and the timestamp is updated (to 22).
User 2 attempts to update the record. The timestamp in hand (21) does not match the timestamp in the database (22), so an error is returned. User 2 must now re-read the record, including the new timestamp (22) and User 1's changes, re-apply their changes and re-attempt the update.
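Applied to the auction example, a minimal T-SQL sketch of the version-column pattern might look like this (the RowVersion INT column and the literal values are assumptions for illustration, not part of the original schema):

DECLARE @Bid MONEY = 55
DECLARE @AuctionItemID INT = 10
DECLARE @RowVersion INT = 6  -- version value read together with the current bid

-- Update only if the row is still at the version that was read.
UPDATE AuctionItem
SET CurrentBidPrice = @Bid,
    RowVersion = RowVersion + 1
WHERE AuctionItemID = @AuctionItemID
  AND RowVersion = @RowVersion

IF @@ROWCOUNT = 0
    -- Someone else changed the row since it was read: re-read and retry.
    SELECT 'Row was modified by another user - please retry.'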
Comparison
Optimistic locking is database independent -- no need for mucking with isolation levels or database-specific syntax for them.
I'd use a numeric column over a timestamp -- less data & hassle to manage

You don't need a transaction if you just use one statement, like this:
-- check whether the auction is over (you could also include this in the SQL)
UPDATE AuctionItem
SET CurrentBidPrice = @Bid
WHERE AuctionItemID = @AuctionItemID
  AND CurrentBidPrice < @Bid

IF @@ROWCOUNT = 1
BEGIN
    -- code for an accepted bid
    SELECT 'NEW BID ACCEPTED'
END
ELSE
BEGIN
    -- code for a rejected bid
    SELECT 'NEW BID NOT ACCEPTED'
END
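A sketch of a slightly fuller variant of the same single-statement idea, which also records the bidder and folds the end-of-auction check into the WHERE clause (the @BidderID parameter and the EndDate column are assumptions, not part of the schema shown above):

UPDATE AuctionItem
SET CurrentBidPrice = @Bid,
    CurrentBidderID = @BidderID   -- assumed parameter
WHERE AuctionItemID = @AuctionItemID
  AND CurrentBidPrice < @Bid
  AND EndDate > GETDATE()         -- assumed column: auction still open

IF @@ROWCOUNT = 1
    SELECT 'NEW BID ACCEPTED'
ELSE
    SELECT 'NEW BID NOT ACCEPTED'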

I followed Alex K's suggestion above and implemented a 'Bid History'. Works a treat. Thanks Alex K.

Related

Users updating same row at the same time SQL Server

I want to create a SQL Server table that has Department and Maximum Capacity columns (assume a maximum of 10 for this scenario). When users add themselves to a department, the system will check the current assignment count in the department (assume 9 for this scenario) and compare it to the maximum value. If it is below the maximum, they will be added.
The issue is this: what if two users submit at the same time? When the code retrieves the current assignment count, it will be 9 for both. One user updates the row first, so now it is 10, but the other user has already retrieved the previous value (9) before the update, so both checks pass and we end up with 11 users in the department.
Is this even possible and how can one solve it?
The answer to your problem lies in understanding "Database Concurrency" and then choosing the correct solution to your specific scenario.
It is too large a topic to cover in a single SO answer, so I would recommend doing some reading and coming back with specific questions.
However in simple form you either block the assignments out to the first person who tries to obtain them (pessimistic locking), or you throw an error after someone tries to assign over the limit (optimistic locking).
In the pessimistic case you then need a way to unblock the assignments if the user fails to complete the transaction, e.g. a timeout. A bit like a ticket-booking website that says "These tickets are being held for you for the next 10 minutes; you must complete your booking within that time or you may lose them".
And when you're down to the last few positions you are going to be turning everyone after the first away... no other way around it if you require this level of locking. (Well you could then create a waiting list, but that's another issue in itself).
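As a rough T-SQL illustration (the Department and Assignment table and column names below are assumptions, not taken from the question), the capacity check and the insert can be made atomic so the race described above cannot produce an eleventh assignment:

DECLARE @DepartmentID INT = 1, @UserID INT = 42  -- illustrative values

BEGIN TRANSACTION

-- Insert only if the department is still under capacity. The UPDLOCK/HOLDLOCK
-- hints stop a concurrent transaction from passing the same check before this
-- one commits.
INSERT INTO Assignment (DepartmentID, UserID)
SELECT @DepartmentID, @UserID
FROM Department d
WHERE d.DepartmentID = @DepartmentID
  AND (SELECT COUNT(*)
       FROM Assignment WITH (UPDLOCK, HOLDLOCK)
       WHERE DepartmentID = @DepartmentID) < d.MaximumCapacity

IF @@ROWCOUNT = 0
    SELECT 'Department is full (or does not exist)'

COMMIT TRANSACTION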

How to make the Automatic Record Permission field update itself as quickly as possible?

If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with Rules) does not update itself on recalculating the record. You either have to launch full recalculation or wait for a considerable amount of time for the changes to take place.
I am facing this issue where based on 10 different field values in the record, I have to give read/edit access to 10 different groups respectively.
For instance:
if rule 1 is true, give edit access to 1st group of users
if rule 1 and 2 are true, give edit access to 1st AND 2nd group of
users.
I have selected 'No Minimum' and 'No Maximum' in the Auto RP field.
How can I make the Automatic Record Permission field update itself as quickly as possible? Am I missing something important here?
If you are working with access control, you must have faced the issue where the Automatic Record Permission field (with Rules) does not update itself on recalculating the record. You either have to launch full recalculation or wait for a considerable amount of time for the changes to take place.
Tanveer, in general, this is not a correct statement. You should not face this issue with [a] well-designed architecture (relationships between your applications) and [b] correct calculation order within the application.
About the case you described. I suggest you check and review the following possibilities:
1. Calculation order. Automatic Record Permissions [ARP from here] are treated by the Archer platform in the same way as calculated fields. This means that you can modify the calculation order in which calculated fields and automatic record permissions are updated when you save the record. So it is possible that your ARP field is calculated before certain calculated fields that you use in the ARP rules. For example, let's say you have two rules in the ARP field:
if A>0 then group AAA
if B>0 then group BBB
Now, you will have a problem if calculation order is the following:
"ARP", "A", "B"
ARP will not be updated after you click "Save" or "Apply", but it will be updated after you click "Save" or "Apply" twice on the same record. With calculation order "A", "B", "ARP", your ARP will get recalculated right away.
2. Full recalculation queue.
Since ARPs are treated as calculated fields, this means that every time an ARP needs to be updated, recalculation job(s) are created on the application server on the back end. If for some reason the calculation queue is full, record permissions will not be updated right away. The job engine recalculation queue can fill up if you have a data feed running or a massive number of recalculations triggered by manual data imports. The recalculation job related to the ARP update will be created and added to the queue, and will be processed based on the priorities defined for the job queue. You can monitor the job queue and alter the default job-processing priorities in Archer v5.5 via the Archer Control Panel interface. I suggest you check the job queue state next time you see delays in ARP recalculations.
3. "Avalanche" of recalculations
It is important to design relationships and security inheritance between your applications so recalculation impact is minimal.
For example, let's say we have a Contacts application and a Department application:
- A record in the Contacts application inherits access using an Inherited Record Permission from the Department record.
- The Department record has an automatic record permission, and the Contacts record inherits it.
- Now the best part: Department D1 has 60,000 Contacts records linked to it, and Department D2 has 30,000 Contacts records linked to it.
The problem you described is reproducible in this configuration. If I go to Department record D1 and update it in a way that forces the ARP in that record to recalculate, 60,000 jobs are added to the job engine queue to recalculate the 60k Contacts linked to D1. Now, without waiting, I go to D2 and make a change forcing the ARP in that record to recalculate. After I save record D2, a new job to recalculate D2 and the other 30,000 Contacts records is created in the job engine queue. But record D2 will not be recalculated instantly, because the first set of 60k records has not been recalculated yet and the recalculation of the D2 record is still sitting in the queue.
Unfortunately, there is not a good solution available at this point. However, this is what you can do:
- review and minimize inheritance
- review and minimize relationships between records where one record references 1000+ records
- modify the architecture to break up inheritance and relationships and replace them with Archer-to-Archer data feeds where possible
- add more "recalculation" power to your application server(s). You can configure your web servers to process recalculation jobs as well if they are not utilized past a certain point. Add more job slots.
Tanveer, I hope this helps. Good luck!

Lock Table overflow issue in Progress 4GL

I am facing a lock table overflow issue; the error below is displayed, and as soon as it appears the program crashes.
Lock table overflow, increase -L on server (915)
I have looked up the error number, and it says the -L value needs to be increased before the server starts; it is set to 500 by default. But I doubt I have been given the privilege to change that value, since I am not a database administrator at the company.
What I was trying to do was wipe out roughly 11k member records along with all their linked table records (more than 25 tables are linked to each member record), while backing each table up into a separate file. Roughly, the code takes an EXCLUSIVE-LOCK when entering the member FOR EACH loop, as below:
for each member EXCLUSIVE-LOCK:
    /*
        Then find each linked record in order.
        Extract them.
        Delete them.
    */
    /* Finally it extracts the member. */
    delete member.
end.
When it hits a certain number of member records, the program crashes. So I had to run it in batches, like this:
for each member EXCLUSIVE-LOCK:
    /*
        Increment a member count.
        When count = 1k then RETURN.
    */
    /*
        Then find each linked record in order.
        Extract them.
        Delete them.
    */
    /* Finally it extracts the member. */
    delete member.
end.
So I literally ended up running the same code more than 11 times to get the work done. I hope someone has come across this issue before; it would be a great help if you could share a long-term solution rather than my temporary workaround.
You need a lock for each record that is part of a transaction. Otherwise other users could make conflicting changes before your transaction commits.
In your code you have a transaction that is scoped to the outer FOR EACH. Thus you need 1 lock for the "member" record and another lock for each linked record associated with that member.
(Since you are not showing real code it is also possible that your actual code has a transaction scope that is even broader...)
The lock table must be large enough to hold all of these locks. The lock table is also shared by all users -- so not only must it hold your locks but there has to be room for whatever other people are doing as well.
FWIW -- 500 is very, very low. The default is 8192. There are two startup parameters using the letter "l", one is upper case, -L, and that is the lock table and it is a server startup parameter. Lower case, -l, is the "local buffer size" and that is a client parameter. (It controls how much memory is available for local variables.)
"Batching", as you have sort of done, is the typical way to ensure that no one process uses too many locks. But if your -L is really only 500 a batch size of 1,000 makes no sense. 100 is more typical.
A better way to batch:
define buffer delete_member     for member.
define buffer delete_memberLink for memberLink.  /* for clarity I'll just do a single linked table... */
define variable b as integer no-undo.            /* batch counter */

for each member no-lock:  /* do NOT get a lock */

  batch_loop: do for delete_member, delete_memberLink while true transaction:

    b = 0.
    for each delete_memberLink exclusive-lock where delete_memberLink.id = member.id:
      b = b + 1.
      delete delete_memberLink.
      if b >= 100 then next batch_loop.  /* commit this batch and start another */
    end.

    /* all linked records for this member are gone - now remove the member itself */
    find delete_member exclusive-lock where recid( delete_member ) = recid( member ).
    delete delete_member.

    leave batch_loop.  /* this will only happen if we did NOT execute the NEXT */

  end.

end.
You could also increase your -L database startup parameter to take into account your one-off query / delete.

How do I not waste Generator values when using them server side with Firebird?

Check this simple piece of code that uses a generator to create unique primary keys in a Firebird table:
CREATE OR ALTER TRIGGER ON_BEFOREINSERT_PK_BOOKING_ITEM FOR BOOKING_ITEM BEFORE INSERT POSITION 0
AS
BEGIN
    IF ((NEW.booking_item_id IS NULL) OR (NEW.booking_item_id = 0)) THEN BEGIN
        SELECT GEN_ID(LastIdBookingItem, 1) FROM RDB$DATABASE INTO :NEW.booking_item_id;
    END
END!
This trigger grabs and increments then assigns a generated value for the booking item id thus creating an auto-incremented key for the BOOKING_ITEM table. The trigger even checks that the booking id has not already been assigned a value.
The problem is the auto-incremented value will be lost (wasted) if, for some reason, the BOOKING_ITEM record cannot be posted.
I have a couple of ideas on how to avoid this wasting but have concerns about each one. Here they are:
Decrement the counter if a posting error occurs. Within the trigger I set up a try-except block (do try-except blocks even exist in Firebird PSQL?) and run SELECT GEN_ID(LastIdBookingItem, -1) FROM RDB$DATABASE on post exceptions. Would this work? What if another transaction sneaks in and increments the generator before I decrement it? That would really mess things up.
Use a Temporary Id. Set the id to some unique temporary value that I then change to the generator value I want in an AFTER INSERT trigger. This method feels somewhat contrived and requires a way to ensure that the temporary id is unique. But what if the booking_item_id was supplied client-side; how would I distinguish that from a temporary id? Plus, I need another trigger.
Use Transaction Control. This is like option 1, except that instead of using the try-except block to reset the generator, I start a transaction and then roll it back if the record fails to post. I don't know the syntax for using transaction control; I thought I read somewhere that SAVEPOINT / SET TRANSACTION is not allowed in PSQL. Plus the rollback would have to happen in an AFTER INSERT trigger, so once again I need another trigger.
Surely this is an issue for any Firebird developer that wants to use Generators. Any other ideas? Is there something I'm missing?
Sequences are outside transaction control, and meddling with them to get 'gapless' numbers will only cause troubles because another transaction could increment the sequence as well concurrently, leading to gaps+duplicates instead of no gaps:
start: generator value = 1
T1: increment: value is 2
T2: increment: value is 3
T1: 'rollback', decrement: value is 2 (and not 1 as you expect)
T3: increment: value is 3 => duplicate value
Sequences should primarily be used for generating artificial primary keys, and you shouldn't care about the existence of gaps: it doesn't matter as long as the number uniquely identifies the record.
If you need an auditable sequence of numbers, and the requirement is that there are no gaps, then you shouldn't use a database sequence to generate it. You could use a sequence to assign numbers after creating and committing the invoice itself (so that it is certain to be persisted). An invoice without a number is simply not final yet. However, even here there is a window of opportunity to get a gap, e.g. if an error or other failure occurs between assigning the invoice number and committing.
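A minimal sketch of that 'assign after commit' idea in Firebird SQL, assuming a hypothetical invoices table with a nullable invoice_number column and a generator named invoice_number_gen (none of these names come from the question):

-- Run in a separate, short transaction after the invoice row itself has been
-- created and committed without a number.
UPDATE invoices
SET invoice_number = GEN_ID(invoice_number_gen, 1)
WHERE invoice_id = 123          -- the committed invoice to be numbered
  AND invoice_number IS NULL;   -- never renumber an invoice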
Another way might be to explicitly create a zero-invoice (marked as cancelled/number lost) with the gap numbers, so that the auditor knows what happened to that invoice.
Depending on local law and regulations, you shouldn't 're-use' or recycle lost numbers as that might be construed as fraud.
You might find other ideas in "An Auditable Series of Numbers". This also contains a Delphi project using IBObjects, but the document itself describes the problem and possible solutions pretty well.
What if, instead of using generators, you create a table with as many columns as the number of generators, giving each column the name of a generator? Something like:
create table generators
(
    invoiceNumber integer default 0 not null,
    customerId integer default 0 not null
    /* ... other generators ... */
)
Now you have a table where you can increment the invoice number using SQL inside a transaction, something like:
begin transaction
    update generators set invoiceNumber = invoiceNumber + 1 returning invoiceNumber;
    insert into invoices set ..........
end transaction.
If anything goes wrong, the transaction would be rolled back, together with the new invoice number. I think there would be no more gaps in the sequence.
Enio

Sql server row lock

I have developed an app in C# where a form displays a few details of an employee in a ListView control. When a user clicks on a row, another form opens and shows that employee's record in more detail.
I want it to work so that, while a user is viewing the detail form for an employee record, that record is locked in such a way that another user cannot open it until the original user closes the detail form. Please guide me on how I can design this type of app with SQL Server locks.
A preferred way would be...
You could add a timestamp column to the table and compare it to the incoming update.
If the user tries to update the data and the timestamp is different, alert the user that the data has changed and refresh the screen.
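A rough T-SQL sketch of that check, assuming a hypothetical Employee table with a rowversion column named RowVer (the column, parameters, and message are illustrative; the parameters would be supplied by the application):

-- @OriginalRowVer holds the rowversion value read when the detail form was opened.
UPDATE Employee
SET PhoneNumber = @PhoneNumber
WHERE EmployeeID = @EmployeeID
  AND RowVer = @OriginalRowVer

IF @@ROWCOUNT = 0
    -- The row changed since it was read: alert the user and refresh the screen.
    SELECT 'This record was changed by another user. Please refresh and try again.'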
A non-preferred way would be...
Add a userEditing column to the table, and set it to the user who is working with the row (like a checkout). Hide this row from any user that doesn't have it "checked out", and release it when they are done.
This can become problematic for you in many ways (Joe user locks a row, is out today, and Jane needs it now), but can be the appropriate solution in some cases.
For row lock you could use:
BEGIN TRANSACTION
UPDATE someTable SET SomeThing = 'new value' WHERE someID = 1
-- this locks the affected row in someTable for as long as the transaction is alive

-- in another connection
SELECT * FROM someTable WITH (READPAST)
-- this will skip locked rows
But keep in mind that this is not a proper way to implement it:
- The user might panic and blame the system for losing data, or even call it a bug.
- When you lock a row, it stays locked until the connection/transaction times out or the user commits the row.
- There is no connection pooling.
- It is a bad idea for a web app, because of the remote connection.
- Any query involving someTable without a WITH (READPAST) hint will have to wait.
Just let other users 'view' a row that is being edited, instead of locking it. In ASP.NET, to cache which row is currently being edited:
Application[string.Format("{0}.{1}", tableName, primaryKey)] = true;