I have developed an app in C# where a form displays a few details of an employee in a ListView control. When a user clicks on a row, another form opens and shows that employee's record in more detail.
I want it to work so that while a user has the detail form open for an employee record, that record is locked so that another user cannot see it until the original user closes the detail form. Please guide me on how I can design this kind of app with SQL Server locking.
A preferred way would be...
You could add a timestamp column to the table and compare it to the incoming update.
If the user tries to update the data and the timestamp is different, alert the user that the data has changed and refresh the screen.
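As a minimal sketch of that check from ADO.NET, assuming a SQL Server rowversion column named RowVer on a hypothetical Employee table (all table and column names here are illustrative):

using System.Data.SqlClient;

public static class EmployeeUpdater
{
    // originalRowVer is the rowversion value read when the detail form opened.
    public static bool TryUpdatePhone(SqlConnection conn, int employeeId,
                                      string newPhone, byte[] originalRowVer)
    {
        const string sql =
            @"UPDATE Employee
              SET Phone = @Phone
              WHERE EmployeeID = @ID
                AND RowVer = @OriginalRowVer;";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@Phone", newPhone);
            cmd.Parameters.AddWithValue("@ID", employeeId);
            cmd.Parameters.AddWithValue("@OriginalRowVer", originalRowVer);

            // 0 rows affected means someone changed the row since it was read:
            // alert the user that the data has changed and refresh the screen.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}

Read RowVer together with the rest of the record when the detail form opens, and hand it back unchanged with the update.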
A non-preferred way would be...
Add a userEditing column to the table, and set it to the user who is working with the row (like a checkout). Hide the row from any user who doesn't have it "checked out", and release it when they are done.
This can become problematic for you in many ways (Joe user locks a row, is out today, and Jane needs it now), but can be the appropriate solution in some cases.
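If you do take the checkout route, make the claim atomic so two users cannot check out the same row at once. A sketch, again with hypothetical table and column names:

using System.Data.SqlClient;

public static class EmployeeCheckout
{
    // The WHERE clause makes the claim atomic: only one caller can flip
    // userEditing from NULL to a name.
    public static bool TryCheckOut(SqlConnection conn, int employeeId, string user)
    {
        const string sql =
            @"UPDATE Employee
              SET userEditing = @User
              WHERE EmployeeID = @ID
                AND userEditing IS NULL;";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@User", user);
            cmd.Parameters.AddWithValue("@ID", employeeId);
            return cmd.ExecuteNonQuery() == 1; // false: already checked out
        }
    }

    // Release when the detail form closes. Expose the same statement to an
    // admin tool for the Joe-is-out-today situation above.
    public static void Release(SqlConnection conn, int employeeId, string user)
    {
        const string sql =
            @"UPDATE Employee
              SET userEditing = NULL
              WHERE EmployeeID = @ID
                AND userEditing = @User;";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@ID", employeeId);
            cmd.Parameters.AddWithValue("@User", user);
            cmd.ExecuteNonQuery();
        }
    }
}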
For a row lock you could use:
BEGIN TRANSACTION
UPDATE someTable SET SomeThing = 'new value' WHERE someID = 1
-- this locks the affected row in someTable for as long as the transaction is alive

-- in another connection
SELECT * FROM someTable WITH (READPAST)
-- this will skip locked rows
But keep in mind that this is not a proper way to implement it:
- The user might panic and blame the system for losing data, or even call it a bug.
- When you lock a row, it stays locked until the connection/transaction times out or the user commits the row back.
- No connection pooling.
- Bad idea for a web app, because of the remote connection.
- Any query against someTable without a WITH (READPAST) hint will have to wait.
Just let other users 'view' a row that is being edited. In ASP.NET, to cache which row is being edited:
Application[string.Format("{0}.{1}", tableName, primaryKey)] = true;
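Expanded slightly, that idea might look like the sketch below. Application state is in-process and shared site-wide, so the check and the set have to happen under Lock()/UnLock(); note that it will not survive an application restart or span a web farm:

using System.Web;

public static class EditLock
{
    public static bool TryMarkEditing(HttpApplicationState app,
                                      string tableName, object primaryKey)
    {
        string key = string.Format("{0}.{1}", tableName, primaryKey);
        app.Lock();
        try
        {
            if (app[key] != null)
                return false; // someone is already editing: open read-only
            app[key] = true;
            return true;
        }
        finally
        {
            app.UnLock();
        }
    }

    public static void ReleaseEditing(HttpApplicationState app,
                                      string tableName, object primaryKey)
    {
        app.Remove(string.Format("{0}.{1}", tableName, primaryKey));
    }
}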
Related
We have a split MS Access database. When users log on, they are connected/linked to two separate Access databases (one for the specific project they are working on, and one for record locking and other global settings). The "locking" database is the one I need to find a solution for.
One of the tables, "tblTSS_RecordLocking", simply stores a list of user names and the record ID of the record each is editing. It never has more than 50 records - usually closer to 5-10. But before a user can do anything with a record, the code opens "tblTSS_RecordLocking" to see if the record is in use (so this happens a lot):
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.ProjectID_Lock)=1111) AND ((tblTSS_RecordLocking.RecordID_Lock)=123456));", , dbReadOnly)
If it's in use, the user simply gets a message and the form/record remains locked. If not in use, it will close the recordset and re-open it so that the user name is updated with the Record ID:
Set recIOC = CurrentDb.OpenRecordset("SELECT tblTSS_RecordLocking.* FROM tblTSS_RecordLocking WHERE (((tblTSS_RecordLocking.UserName_Lock)='John Smith'));")
If recIOC.EOF = True Then
recIOC.AddNew
recIOC.Fields![UserName_Lock] = "John Smith"
Else
recIOC.Edit
End If
recIOC.Fields![RecordID_Lock] = 123456
recIOC.Fields![ProjectID_Lock] = 111
recIOC.Update
recIOC.Close: Set recIOC = Nothing
As soon as this finishes, everything relating to the second database is closed down (and the .laccdb file disappears).
So here's the problem. On very rare occasions, a user can get a message:
3027 - Cannot update. Database or object is read-only.
On even rarer occasions, it can flag the db as corrupt, needing to be compressed and re-indexed.
I really want to know the most reliable way to do the check/update. The process may run several hundred times a day, and whilst I only see the issue once every few weeks and for the most part handle it cleanly on the front end, I'm sure there is a better, more reliable way.
I agree with mamadsp that moving to SQL is the best option and am already in the process of doing this. However, whilst I was not able to create a fix for this issue, I was able to find a work-around that has yet to fail.
Instead of having a single lock table in the global database, I found that creating a lock table in the project database solved the problem. The main benefit is that there is much less activity on the table. So, not perfect - but it is stable.
Good evening!
At the moment I'm working on a page in an Oracle APEX application which works the following way. The page contains a big and complex report, with the data differentiated by one feature (call it feature A). On the left side of the page there is a catalog menu through which the user picks the data matching feature A; on the right side the data is shown; and above there is a search bar that lets users find specific data by other features (feature B, feature C, etc.).
I had a view (call it V_REPORT_BIG_DATA) for showing the report, but it was so big and loaded so slowly that I decided to base the page on a table with the same fields as V_REPORT_BIG_DATA (call it T_REPORT_BIG_DATA_TEMP). It has an additional field for a process identifier (call it PID) and is temporary not physically but by purpose. I intended it to work this way: the user enters the page and receives his own PID for the session (it is assigned only if PID is null; otherwise it doesn't change), and then a procedure (P_REPORT_BIG_DATA_RELOAD) deletes the "old" data and uploads the "new" data, both actions running under that one PID for the current user.
But my idea didn't work correctly. The procedure P_REPORT_BIG_DATA_RELOAD itself works fine and is executed from a page process, and PID is a global application item (generated from a database sequence). But my brain nearly exploded when I saw that my table holds duplicates of the data for a single user and a single PID! By adding a log table (filled, in the code of P_REPORT_BIG_DATA_RELOAD, with the number of rows deleted and inserted) I saw a very strange thing: for some users, duplicates were "loaded" as if the upload procedure had been executed several times simultaneously!
Taking into account all I've said, my question is: what am I doing wrong? What should I do so that I don't have to use DISTINCT when querying T_REPORT_BIG_DATA_TEMP?
UPD: Some additional facts. Excuse my inattention - I thought I could not edit my earlier posts. :-/
Well, I'll explain my problem further. :) Firstly, I did my best to make my view V_REPORT_BIG_DATA load faster, but it involves many, many rows. Secondly, the code executed from the page process (that is, during the loading of my page) is this:
begin
  if :PID is null then
    :PID := NEW_PID;
  end if;
  P_REPORT_BIG_DATA_RELOAD(AUTH => :SUSER, PID => :PID);
end;
NEW_PID is a function which generates a new PID, and P_REPORT_BIG_DATA_RELOAD is my procedure which refreshes the data for the given user and PID.
And the code of my procedure is this:
procedure P_REPORT_BIG_DATA_RELOAD
  (AUTH in varchar2, PID in number)
is
  NCOUNT_DELETED number;
  NCOUNT_INSERTED number;
begin
  --first of all I check that both parameters are not null - let me omit this part
  --I find the count of data to be deleted (for debug only)
  select count(*)
    into NCOUNT_DELETED
    from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;
  --I delete the "old" data
  delete from T_REPORT_BIG_DATA_TEMP T
   where T.AUTHID = AUTH
     and T.PID = P_REPORT_BIG_DATA_RELOAD.PID;
  --I upload the "new" data (note the V alias on the subquery)
  insert into T_REPORT_BIG_DATA_TEMP
  select V.*, P_REPORT_BIG_DATA_RELOAD.PID
    from (select S.* from V_REPORT_BIG_DATA S
           where S.AUTHID = AUTH) V;
  --I record the count of uploaded rows (for debug only)
  NCOUNT_INSERTED := SQL%ROWCOUNT;
  --I write the log (for debug only)
  insert into T_REPORT_BIG_DATA_TEMP_LG(AUTHID, PID, INS_CNT, DLD_CNT, WHEN)
  values (AUTH, PID, NCOUNT_INSERTED, NCOUNT_DELETED, sysdate);
end P_REPORT_BIG_DATA_RELOAD;
And one more fact: I tried to turn :PID into a page item, but it was cleared on every refresh despite the Maintain session state option being set to Per session, so I couldn't even hope to reuse the same PID for a given user within a session.
We use a split database; everyone gets a front end with a local table that I use as a 'cart', as in online shopping.
I'm copying records from stock to a local table. I don't want a record that has already been copied across to be transferable again, creating duplicates. I also don't want to delete the original record, just modify it.
So I want users to edit the record copies locally, then hit a button that updates the corresponding record in the back-end database. If they don't hit the button and close the front end, no changes are made. Assume the temp table is wiped on start-up.
To stop duplicate records, I want to hide selected records from that particular front end's user only, so that if the Access app crashes the record isn't hidden from all users.
Idea: what if I add a (hidden) Stock_ID field to the local table? Then I could poll that column, and if any Stock_ID matches the ID of the record about to be copied, show a message box saying "Error, record already exists" and cancel the copy?
I think you're saying you want to show the front end user only those stock records whose Stock_ID values are not present in the local table.
If that is correct, you can use an "unmatched query" to display those stock records.
SELECT s.*
FROM
stock AS s
LEFT JOIN [local] AS l
ON s.Stock_ID = l.Stock_ID
WHERE l.Stock_ID Is Null;
The Access query designer has a wizard for this task (the Find Unmatched Query Wizard). It should be worth a look.
When you say "hide select records", which combinations do you mean: hide all records of a certain type from ALL users, or hide certain records from SOME users? In your split database, does EACH user have a copy of the front end, or do all users share the same front end? There must be some criterion that determines who sees which records; once that is identified, a solution can follow.
I have the following table, "users" (simplified for the purposes of this question), with non-incremental IDs (not my design, but I have to live with it):
userUUID email
----------------------------------------------------
44384582-11B1-4BB4-A41A-004DFF1C2C3 dabac#sdfsd.com
3C0036C2-04D8-40EE-BBFE-00A9A50A9D81 sddf#dgfg.com
20EBFAFE-47C5-4BF5-8505-013DA80FC979 sdfsd#ssdfs.com
...
We are running a program that loops through the table and sends emails to registered users. Using a try/catch block, I record the UUID of a failed record in a text file. The program then stops and needs restarting.
When the program is restarted, I don't want to resend emails to those that were successful, but to begin at the failed record. I assume this means I need to process records that were created AFTER the failed record.
How do I do this?
Why not keep track somewhere (e.g. another table, or even in a BIT column of the original table, called "WelcomeEmailSent" or something) which UUIDs have already been mailed? Then no matter where your program dies or what state it was in, it can start up again and always know where it left off.
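A sketch of that resumable loop, assuming the suggested WelcomeEmailSent BIT column (defaulting to 0) has been added to the users table; the mail-sending delegate stands in for whatever the real program does:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class WelcomeMailer
{
    public static void SendPending(string connectionString,
                                   Action<Guid> sendWelcomeEmail)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Only rows that have never been flagged as sent.
            var pending = new List<Guid>();
            using (var cmd = new SqlCommand(
                "SELECT userUUID FROM users WHERE WelcomeEmailSent = 0", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    pending.Add(reader.GetGuid(0));
            }

            foreach (var uuid in pending)
            {
                sendWelcomeEmail(uuid); // may throw; the flag then stays 0

                // Flag the row only after the send succeeds, so a restart
                // resumes exactly where the program left off.
                using (var update = new SqlCommand(
                    "UPDATE users SET WelcomeEmailSent = 1 WHERE userUUID = @ID",
                    conn))
                {
                    update.Parameters.AddWithValue("@ID", uuid);
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}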
Sort by a column (in this case I would recommend userUUID) and add a WHERE userUUID > 'your-last-uuid' clause.
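That might look like the sketch below. One caveat, since the IDs are non-incremental: SQL Server orders uniqueidentifier values by its own byte-group rules, not plain string order, so this only resumes correctly if the sending loop processes rows in the same ORDER BY userUUID order. Use >= rather than > if the failed record itself still needs to be sent:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;

public static class ResumeHelper
{
    // lastFailedUuid is the UUID read back from the text file.
    public static List<KeyValuePair<Guid, string>> ReadRemaining(
        SqlConnection conn, Guid lastFailedUuid)
    {
        var rows = new List<KeyValuePair<Guid, string>>();
        using (var cmd = new SqlCommand(
            @"SELECT userUUID, email FROM users
              WHERE userUUID >= @LastUUID
              ORDER BY userUUID;", conn))
        {
            cmd.Parameters.AddWithValue("@LastUUID", lastFailedUuid);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    rows.Add(new KeyValuePair<Guid, string>(
                        reader.GetGuid(0), reader.GetString(1)));
            }
        }
        return rows;
    }
}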
I am currently developing an online auction system using ASP.NET 3.5 and SQL Server 2008. I have reached the point in development where I need to ensure that my system sensibly handles the concurrency issue which may arise when:
Two people - Geraldine and John - want to bid on the same auction item which is currently going for £50. Geraldine enters a bid of £55 and John enters a bid of £52. The system now has two copies of the page 'submit_bid.aspx' running; each copy of the page checks to see that their bid is high enough, they both see that it is, and they submit the bids. If John's bid goes through first then the auction item price is currently £55 and a moment later it's being replaced by a bid of £52.
What I need to do is lock the auction item row until the current bid price has been updated, before allowing any other bidder to check the current bid price and place a new bid.
My question is: what is the best practice way for doing this using T-SQL and / or ADO.NET?
I currently have an AuctionItem table which has the following fields (plus other fields I haven't included for brevity):
AuctionItemID INT
CurrentBidPrice MONEY
CurrentBidderID INT
I have performed some research and come up with the following T-SQL (pseudocode-ish):
DECLARE @Bid MONEY
DECLARE @AuctionItemID INT
DECLARE @CurrentBidPrice MONEY

BEGIN TRANSACTION

SELECT @CurrentBidPrice = CurrentBidPrice
FROM AuctionItem
WITH (HOLDLOCK, ROWLOCK)
WHERE AuctionItemID = @AuctionItemID

/* Do checking for end of Auction, etc. */

IF (@Bid > @CurrentBidPrice)
BEGIN
    UPDATE AuctionItem
    SET CurrentBidPrice = @Bid
    WHERE AuctionItemID = @AuctionItemID
END

COMMIT TRANSACTION
I have also read that if I include the SET LOCK_TIMEOUT I can also reduce the number of failed concurrent updates. For example:
SET LOCK_TIMEOUT 1000
...will make a concurrent update wait up to 1000 milliseconds for a lock to be released. Is this best practice?
Source: "chrisrlong", http://www.dbasupport.com/forums/archive/index.php/t-7282.html
Here are the methodologies used to handle multi-user concurrency issues:
Do Nothing (Undesirable)
User 1 reads a record
User 2 reads the same record
User 1 updates that record
User 2 updates the same record
User 2 has now over-written the changes that User 1 made. They are completely gone, as if they never happened. This is called a 'lost update'.
Pessimistic locking (Lock the record when it is read.)
User 1 reads a record and locks it by putting an exclusive lock on the record (FOR UPDATE clause)
User 2 attempts to read and lock the same record, but must now wait behind User 1
User 1 updates the record (and, of course, commits)
User 2 can now read the record with the changes that User 1 made
User 2 updates the record complete with the changes from User 1
The lost update problem is solved. The problem with this approach is concurrency. User 1 is locking a record that they might not ever update. User 2 cannot even read the record because they want an exclusive lock when reading as well. This approach requires far too much exclusive locking, and the locks live far too long (often across user control - an absolute no-no). This approach is almost never implemented.
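As an aside (this is not from the quoted source): in SQL Server the pessimistic pattern is usually expressed with an UPDLOCK hint held inside a transaction. A sketch from ADO.NET, reusing the AuctionItem names from the question:

using System.Data.SqlClient;

public static class AuctionPessimistic
{
    public static void PlaceBid(SqlConnection conn, int auctionItemId, decimal bid)
    {
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            // UPDLOCK keeps an update lock on the row from the read until the
            // transaction ends; a second session doing the same read blocks here.
            decimal current;
            using (var read = new SqlCommand(
                @"SELECT CurrentBidPrice
                  FROM AuctionItem WITH (UPDLOCK, ROWLOCK)
                  WHERE AuctionItemID = @ID;", conn, tx))
            {
                read.Parameters.AddWithValue("@ID", auctionItemId);
                current = (decimal)read.ExecuteScalar();
            }

            if (bid > current)
            {
                using (var write = new SqlCommand(
                    @"UPDATE AuctionItem
                      SET CurrentBidPrice = @Bid
                      WHERE AuctionItemID = @ID;", conn, tx))
                {
                    write.Parameters.AddWithValue("@Bid", bid);
                    write.Parameters.AddWithValue("@ID", auctionItemId);
                    write.ExecuteNonQuery();
                }
            }

            tx.Commit(); // the row lock is released here
        }
    }
}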
Use Optimistic Locking.
Optimistic locking does not use exclusive locks when reading. Instead, a check is made during the update to make sure the record has not been changed since it was read. Generally this is done by adding a version column (INT/numeric, holding a value that is incremented every time the row is updated). For example:
UPDATE YOUR_TABLE
SET bid = 52,
    version = version + 1
WHERE id = 10
AND version = 6
An alternate option is to use a timestamp rather than a numeric column. This column is used for no other purpose than implementing optimistic concurrency; it can be a number or a date. The idea is that it is given a value when the row is inserted. Whenever the record is read, the timestamp column is read as well. When an update is performed, the timestamp column is checked. If it has the same value at UPDATE time as it did when the record was read, then all is well: the UPDATE is performed and the timestamp is changed. If the timestamp value is different at UPDATE time, then an error is returned to the user - they must re-read the record, re-make their changes, and try the update again.
User 1 reads the record, including the timestamp of 21
User 2 reads the record, including the timestamp of 21
User 1 attempts to update the record. The timestamp in hand (21) matches the timestamp in the database (21), so the update is performed and the timestamp is updated (to 22).
User 2 attempts to update the record. The timestamp in hand (21) does not match the timestamp in the database (22), so an error is returned. User 2 must now re-read the record (including the new timestamp (22) and User 1's changes), re-apply their changes, and re-attempt the update.
Comparison
Optimistic locking is database independent -- no need for mucking with isolation levels and database specific syntax for isolation levels.
I'd use a numeric column over a timestamp -- less data & hassle to manage
You don't need a transaction if you just use one statement, like this:
-- check if the auction is over (you could also include this in the SQL)
UPDATE AuctionItem
SET CurrentBidPrice = @Bid
WHERE AuctionItemID = @AuctionItemID
AND CurrentBidPrice < @Bid

IF @@ROWCOUNT = 1
BEGIN
    -- code for an accepted bid
    SELECT 'NEW BID ACCEPTED'
END
ELSE
BEGIN
    -- code for a rejected bid
    SELECT 'NEW BID NOT ACCEPTED'
END
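Since the question also asked about ADO.NET: issuing that single statement from C# might look like this sketch, with acceptance decided by the rows-affected count:

using System.Data.SqlClient;

public static class AuctionAtomic
{
    public static bool TryPlaceBid(SqlConnection conn, int auctionItemId, decimal bid)
    {
        const string sql =
            @"UPDATE AuctionItem
              SET CurrentBidPrice = @Bid
              WHERE AuctionItemID = @AuctionItemID
                AND CurrentBidPrice < @Bid;";

        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@Bid", bid);
            cmd.Parameters.AddWithValue("@AuctionItemID", auctionItemId);
            return cmd.ExecuteNonQuery() == 1; // 0 rows: bid was not high enough
        }
    }
}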
I followed Alex K's suggestion above and implemented a 'Bid History'. Works a treat. Thanks Alex K.