Suppose I need to select the max value as the order number. So I'll select MAX(number), assign the number to the order, and save the changes to the database. However, how do I prevent others from messing with the number? Will transactions do? Something like:
ordersRepository.StartTransaction();
order.Number = ordersRepository.GetMaxNumber() + 1;
ordersRepository.Commit();
Will the code above "lock" changes so that order numbers are read and written by only one DB client at a time? Given that the transactions are plain NHibernate ones, and GetMaxNumber just does SELECT MAX(Number) FROM Orders.
Using an ITransaction with IsolationLevel.Serializable should do the job. Be careful of table contention, though. If you've got high frequency updates on the table, things could slow down big time. You might want to profile the hit on the db when using GetMaxNumber().
I had to do something similar to generate custom IDs for high concurrency usage. My solution moved the ID generation into the database, and used a separate Counter table to hold the max values.
Using a separate Counter table has a couple of plus points:
It removes the contention on the Order table
It's usually faster
If it's small enough, it can be pinned into memory
I also used a stored proc to return the next available ID:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
    DECLARE @NewValue INT;
    UPDATE [COUNTER]
    SET @NewValue = Value = Value + 1   -- increment and capture the new value in one statement
    WHERE COUNTER_ID = @counterId;
COMMIT TRAN;
RETURN @NewValue;
Hope that helps.
Related
I have an Operations table with the columns sourceId, destinationId, amount, status. Whenever a user makes a transfer, the API inserts a new row into that table, after checking the user's balance by calculating the sum of credit operations minus the sum of debit operations. Only when the balance is greater than or equal to the transfer amount is the operation inserted with a successful status.
The issue is concurrency since a user performing multiple transfers at the same time might end up with a negative balance.
There are multiple ways of handling this concurrency issue with PostgreSQL:
Serializable transaction isolation level
Table locking
Row versioning
Row locking
etc.
Our expected behavior is that, instead of failing with a unique violation on (sourceId, version), the database should wait for the previous transaction to finish and get the latest inserted version, without setting the transaction isolation level to SERIALIZABLE.
However, I am not completely sure about the best approach. Here's what I tried:
1. Serializable transaction isolation level
This is the easiest approach, but the problem is lock escalation: if the database engine is under heavy load, one transaction can end up locking the whole table, which is documented behavior.
Pseudo-code:
newId = INSERT INTO "Operations" ("SourceId", "DestinationId", "Amount", "Status", "OccuredAt") VALUES (2, 3, 100, 'PENDING', NULL) RETURNING "Id";
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM "Operations" WHERE ("SourceId" = 2 or "DestinationId"=2) and "Status" = 'SUCCESSFUL';
'''API: check if balance > transfer amount'''
UPDATE "Operations" SET "Status" = 'SUCCESSFUL' where id = newId
COMMIT;
2. Table locking
This is what we want to avoid by NOT using serializable transaction level
3. Row versioning
This approach seems best so far performance-wise. We added a version int column and a unique index on (sourceId, version); when a transaction is inserted, it is inserted with the next version. If two transactions are concurrent, the database throws an error:
duplicate key value violates unique constraint "IX_Transactions_SourceWalletId_Version"
Pseudo-code:
newId = INSERT INTO "Operations" ("SourceId", "DestinationId", "Amount", "Status", "OccuredAt") VALUES (2, 3, 100, 'PENDING', NULL) RETURNING "Id";
BEGIN;
lastVersion = SELECT o."Version"
FROM "Operations" o
WHERE (o."SourceId" = 2) AND (o."Version" IS NOT NULL)
ORDER BY o."Version" DESC
LIMIT 1;
SELECT * FROM "Operations" WHERE ("SourceId" = 2 or "DestinationId"=2)
and "Status" = 'SUCCESSFUL';
'''API: check if balance > transfer amount'''
UPDATE "Operations" SET "Status" = 'SUCCESSFUL', "Version" = lastVersion + 1 where id = newId;
COMMIT;
4. Row locking
Before calculating the user balance, lock all transaction rows with sourceWalletId = x (where x is the user making the transfer). But I can't find a way of doing this in PostgreSQL: using for update does the trick, but after a concurrent transaction waits on the first one, the result does not return the newly inserted row, which is the documented behavior for PostgreSQL.
using for update does the trick, but after a concurrent transaction waits on the first one, the result does not return the newly inserted row, which is the documented behavior for PostgreSQL.
Kind of true, but also not a show-stopper.
Yes, in default READ COMMITTED transaction isolation each statement only sees rows that were committed before the query began. The query, mind you, not the transaction. See:
Can concurrent value modification impact single select in PostgreSQL 9.1?
Just start the next query in the same transaction after acquiring the lock.
Assuming a table holding exactly one row per (relevant) user (like you should have), which I'll call "your_wallet_table", based on the cited "sourceWalletId":
BEGIN;
SELECT FROM "your_wallet_table" WHERE "sourceWalletId" = x FOR UPDATE;
-- x is the user making the transfer
-- check the user's balance (separate query!)
INSERT INTO "Operations" ... -- 'SUCCESSFUL' or 'PENDING'
COMMIT;
The lock is only acquired once no other transaction is working on the same user, and only released at the end of the transaction.
The next transaction will see all committed rows in its next statement.
If all transactions stick to this modus operandi, all is fine.
Of course, transactions cannot be allowed to change rows affecting the balance of other users.
Related:
How to use RETURNING with ON CONFLICT in PostgreSQL?
Number one in your example, the serializable variant, is the only one that will guarantee correct behaviour without the code having to retry transactions if the update count is zero or the transaction was rolled back. It is also the simplest to reason about. By the way, REPEATABLE READ would also be good enough for already existing rows.
Number 3 in your example, which looks a little like optimistic locking, might look more performant, but that depends on the type of load. Updating an index, in your case the unique index, can also be a performance hit. And generally, you have less control over locks on indexes, which makes the situation less deterministic and more difficult to reason about. Also, it is still possible to suffer from different read values in any transaction isolation level below REPEATABLE READ.
Another thing is that your implementation behaves incorrectly in the following scenario:
Process 1 starts and reads version 1
Process 2 starts and reads version 1
Process 2 succeeds and writes version 2
Process 3 starts and reads version 2
Process 3 succeeds and writes version 3
Process 1 succeeds and writes version 2 // NO VIOLATION!
What does work is optimistic locking, which looks somewhat like your number 3. Pseudo code / SQL:
BEGIN
SELECT "version", "amount" FROM "operations" WHERE "id" = identifier
// set variable oldVersion to version
// set variable newVersion to version + 1
// set variable newAmount to amount + amountOfOperation
IF newAmount < 0
ROLLBACK
ELSE
UPDATE "operations" SET "version" = newVersion, "amount" = newAmount WHERE "id" = identifier AND "version" = oldVersion
COMMIT
ENDIF
This does not require a unique index containing version. And in general, the query in the WHERE condition of the UPDATE and the update itself are correct even with the READ COMMITTED transaction isolation level. I am not certain about PostgreSQL: do verify this in the documentation!
In general, I would start with the serializable number 1 example, until measurements in real use-cases show that it is a problem. Then try optimistic locking and see if it improves, also with actual use-cases.
Do remember that the code must always be able to replay the transaction from BEGIN to END if the UPDATE reports zero updated rows, or if the transaction fails.
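For concreteness, a rough PostgreSQL rendering of the pseudocode above might look like this (the id 42, the amounts, and the version numbers are made-up placeholders standing in for the application's variables):
BEGIN;
-- read the row and remember its version and amount
SELECT "version", "amount" FROM "operations" WHERE "id" = 42;
-- application: oldVersion = 1, newVersion = 2, newAmount = amount + amountOfOperation
-- if newAmount < 0, issue ROLLBACK instead of the UPDATE
UPDATE "operations"
SET "version" = 2, "amount" = 900
WHERE "id" = 42 AND "version" = 1;  -- matches 0 rows if someone else changed the row first
-- if the UPDATE reports 0 updated rows, roll back and replay the transaction from BEGIN
COMMIT;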
Good luck and have fun!
You will have to take a performance hit. SERIALIZABLE isolation would be the easiest way. You can increase max_pred_locks_per_relation, max_pred_locks_per_transaction or max_pred_locks_per_page (depending on which limit you hit) so that predicate locks get escalated to coarser locks later.
Alternatively, you could have a table that stores the balance per user, which is updated by a deferred trigger. Then you can have a check constraint on that table. This will serialize operations only per user.
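A rough PostgreSQL sketch of that alternative (the table, function, and trigger names are invented here; EXECUTE FUNCTION assumes PostgreSQL 11+, use EXECUTE PROCEDURE on older versions):
CREATE TABLE user_balances (
    user_id bigint PRIMARY KEY,
    balance numeric NOT NULL CHECK (balance >= 0)  -- rejects overdrafts
);

CREATE FUNCTION apply_operation() RETURNS trigger AS $$
BEGIN
    -- debiting the source row-locks that user's balance row, serializing operations per user
    UPDATE user_balances SET balance = balance - NEW."Amount" WHERE user_id = NEW."SourceId";
    UPDATE user_balances SET balance = balance + NEW."Amount" WHERE user_id = NEW."DestinationId";
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER operations_apply_balance
    AFTER INSERT ON "Operations"
    DEFERRABLE INITIALLY DEFERRED          -- runs at commit time
    FOR EACH ROW EXECUTE FUNCTION apply_operation();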
This question is related to, and came from the discussion about, another question:
What is the correct isolation level for Order header - Order lines transactions?
Imagine a scenario where we have the usual Orders_Headers and Orders_LineItems tables. Let's also say that we have special business rules that say:
Each order has a Discount field which is calculated based on the time passed since the last order was entered
Each next order's Discount field is calculated specially if there have been more than X orders in the last Y hours.
Each next order's Discount field is calculated specially if the average frequency of the last 10 orders was higher than X per minute.
Each next order's Discount field is calculated specially
The point here is to show that every Order is dependent on previous ones, so the isolation level is crucial.
We have a transaction (just logic of the code shown):
BEGIN TRANSACTION
INSERT INTO Order_Headers...
SET @Id = SCOPE_IDENTITY()
INSERT INTO Order_LineItems...(using @Id)
DECLARE @SomeVar INT
--just an example to show selecting the previous X orders
--needed to calculate the Discount value for the new Order
SELECT @SomeVar = COUNT(*) FROM Order_Headers
WHERE ArbitraryCriteria
UPDATE Order_Headers
SET Discount = UDF(@SomeVar)
WHERE Id = @Id
COMMIT TRANSACTION
We also have another transaction to read orders:
SELECT TOP 10 * FROM Order_Headers
ORDER BY Id DESC
QUESTIONS
Is SNAPSHOT isolation level for the first transaction and READ COMMITTED for the second appropriate?
Is there a better way of approaching CREATE/UPDATE transaction or is this the way to do it?
The problem with snapshot (which I assume you decided to use) is not about inserting/reading. It's the updates you should be concerned about.
Snapshot isolation levels use row versioning. That means any time you insert/update/delete a row, those rows get duplicated in tempdb (the version store, the location for those kinds of rows), and each row grows by 14 bytes for a versioning tag so that your newly started transaction can read the row as of the last committed transaction. Keep in mind that these resized rows will stay as they are until you rebuild the index.
This should be an indicator that if your table is really busy, your indexes will become fragmented much faster and there will be a certain percentage of overhead on tempdb. So keep that in mind.
What is an even bigger concern here are updates, as I mentioned.
Any time you insert/delete/update a row, you get exclusive locks on those rows (and possibly on the object later), and since snapshot uses row versioning, inserts from another transaction add exclusive locks on a NEW row, and that is not a problem. However, if you try to update an existing row and session 2 tries to acquire an X lock on that row, it will fail because session 1 already has an X lock on it, and this is where you will get the update-conflict message ("Snapshot isolation transaction aborted due to update conflict...").
READ COMMITTED and SERIALIZABLE handle these issues well, so you might want to take that approach and test all solutions before you actually implement one. Remember that all transactions will cause blocking on updates, while snapshot/read committed snapshot will simply fail.
Personally, I would have used READ COMMITTED SNAPSHOT and altered the procedure to rerun in a CATCH block N times, but hey, that has flaws as well!
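For what it's worth, the rerun-in-a-CATCH-block idea could look roughly like this (the retry limit and the placeholder body are illustrative, not the actual procedure):
DECLARE @retries int = 0;
WHILE @retries < 3
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the order insert/update logic from the question goes here ...
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        SET @retries += 1;  -- e.g. after a snapshot update-conflict error
    END CATCH
END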
The serializable option:
Using a pessimistic locking strategy by way of the updlock and serializable table hints to acquire a key range lock specified by the where criteria (backed by a supporting index to lock only the range necessary for the query):
declare @Id int, @SomeVar int;
begin tran;
select @SomeVar = count(OrderDate)
from Order_Headers with (updlock,serializable)
where OrderDate >= '20170101';
insert into Order_Headers (OrderDate, SomeVar)
select sysdatetime(), @SomeVar;
set @Id = scope_identity();
insert into Order_LineItems (id,cols)
select @Id, cols
from @TableValuedParameter;
commit tran;
A good guide to the why and how of using the updlock and serializable table hints to lock a key range with a select, and why you need both, is covered in Sam Saffron's upsert (update/insert) patterns.
Reference:
Documentation on serializable and other Table Hints - MSDN
Key-Range Locking - MSDN
SQL Server Isolation Levels: A Series - Paul White
Questions About T-SQL Transaction Isolation Levels You Were Too Shy to Ask - Robert Sheldon
Isolation Level references curated by Brent Ozar
I would like to do something like this
begin tran
declare @maxpriceprice int;
Select @maxpriceprice = MAX(price) from products where typeId = 5
and later on
Insert into products (price) values(@maxpriceprice + RAND() * 10)
commit tran
It is rather strange, but it is a strange application.
Between the SELECT and the INSERT I cannot afford to have people inserting stuff into the database.
Would it be OK to do a SELECT MAX(price) WITH (XLOCK) to prevent other people from getting the max price until I complete my transaction?
You never have a race condition in the database if you let the DBMS handle the locking for you. To do that, you have to give it enough information:
insert into products (price)
select MAX(price) + RAND() * 10
from products where typeId = 5
Would it be ok to do a select max price with (XLOCK)
Basically, no. You never want to leave a transaction open and hand control back to an application or a user. You want to get your ducks in a row, and execute your transaction. If you want to "lock" something (while, say, the user updates the information) you usually resort to an optimistic concurrency strategy whereby at time of update you verify that the row has not changed since it was read. You might want to read up on the timestamp datatype, which supports that style.
SQL also defines cursors, row-level locking, and serializable isolation. Those are all available too in SQL server, at some cost to concurrency.
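As a sketch of the optimistic concurrency strategy with the timestamp/rowversion datatype mentioned above (the table and column names are invented for the example):
CREATE TABLE products_demo (
    id     int IDENTITY PRIMARY KEY,
    price  money NOT NULL,
    rowver rowversion  -- changes automatically on every update
);

DECLARE @oldver binary(8), @price money;
SELECT @oldver = rowver, @price = price FROM products_demo WHERE id = 1;

-- ... hand the values to the user, who edits the price ...

UPDATE products_demo
SET price = @price + 5
WHERE id = 1 AND rowver = @oldver;  -- succeeds only if the row is unchanged since the read

IF @@ROWCOUNT = 0
    PRINT 'Row was changed by someone else; re-read and retry.';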
Once in a while, I need to clear out the anonymous user profiles from the database. A colleague has suggested I use this procedure because it allows a little breathing space from time to time for other procedures to run.
WHILE EXISTS (SELECT * FROM aspnet_users WITH (NOLOCK)
WHERE userID IN (SELECT UserID FROM #AspnetUsersToDelete))
BEGIN
SET ROWCOUNT 1000
DELETE FROM aspnet_users WHERE userID IN (SELECT UserID FROM #AspnetUsersToDelete )
print 'aspnet_Users deleted: ' + CONVERT(varchar(255), @@ROWCOUNT)
SET ROWCOUNT 0
WAITFOR DELAY '00:00:01'
END
This is the first time I've seen the NOLOCK keyword used and the logic for the rowcount seems backwards to me. Does anyone else use a similar sort of technique for providing windows in long running procedures and is this the best way of doing things?
Any time I anticipate deleting a very large number of rows, I'll do something similar to this to keep transaction batch sizes reasonable.
For SQL Server 2005+, you could use DELETE TOP (1000)... instead of the SET ROWCOUNT statements. I usually do:
SELECT NULL; /* Fudge @@ROWCOUNT value for first time in loop */
WHILE (@@ROWCOUNT <> 0) BEGIN
DELETE TOP (1000)
...
END /* WHILE */
The SET ROWCOUNT 1000 means it will only process one thousand rows in the following statements (i.e., DELETE statement). SET ROWCOUNT 0 means each statement processes however many rows are relevant.
So basically, over all it deletes one thousand rows, waits a second, deletes another thousand, and continues that until there are no more to delete.
The WITH (NOLOCK) prevents the data from being locked, meaning that multiple queries running simultaneously can access the data. This allows your query to be a little faster. For more information about NOLOCK, consult the following link:
http://www.mollerus.net/tom/blog/2008/03/using_mssqls_nolock_for_faster_queries.html
(NOLOCK) allows dirty reads. Basically, there is a chance that if you are reading data out of the table while it is in the process of being updated, you could read the wrong data. You can also read data that has been modified by transactions that have not yet been committed, along with a slew of other problems.
Best practice is not to use NOLOCK unless you are reading from tables that really don't change (such as a table containing states) or from a data warehouse type DB that is not constantly updated.
I want to use a database table as a queue. I want to insert in it and take elements from it in the inserted order (FIFO). My main consideration is performance because I have thousands of these transactions each second. So I want to use a SQL query that gives me the first element without searching the whole table. I do not remove a row when I read it.
Does SELECT TOP 1 ..... help here?
Should I use any special indexes?
I'd use an IDENTITY field as the primary key to provide the uniquely incrementing ID for each queued item, and stick a clustered index on it. This would represent the order in which the items were queued.
To keep the items in the queue table while you process them, you'd need a "status" field to indicate the current status of a particular item (e.g. 0=waiting, 1=being processed, 2=processed). This is needed to prevent an item being processed twice.
When processing items in the queue, you'd need to find the next item in the table NOT currently being processed. This would need to be in such a way so as to prevent multiple processes picking up the same item to process at the same time as demonstrated below. Note the table hints UPDLOCK and READPAST which you should be aware of when implementing queues.
e.g. within a sproc, something like this:
DECLARE @NextID INTEGER
BEGIN TRANSACTION
-- Find the next queued item that is waiting to be processed
SELECT TOP 1 @NextID = ID
FROM MyQueueTable WITH (UPDLOCK, READPAST)
WHERE Status = 0
ORDER BY ID ASC
-- if we've found one, mark it as being processed
IF @NextID IS NOT NULL
UPDATE MyQueueTable SET Status = 1 WHERE ID = @NextID
COMMIT TRANSACTION
-- If we've got an item from the queue, return it to whatever is going to process it
IF @NextID IS NOT NULL
SELECT * FROM MyQueueTable WHERE ID = @NextID
If processing an item fails, do you want to be able to try it again later? If so, you'll need to either reset the status back to 0 or something. That will require more thought.
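If retries are wanted, the reset could be as simple as something like this (RetryCount is a hypothetical extra column, not part of the table above):
UPDATE MyQueueTable
SET Status = 0,                   -- back to 'waiting'
    RetryCount = RetryCount + 1   -- hypothetical column, lets you give up after N attempts
WHERE ID = @NextID;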
Alternatively, don't use a database table as a queue, but something like MSMQ - just thought I'd throw that in the mix!
If you do not remove your processed rows, then you are going to need some sort of flag that indicates that a row has already been processed.
Put an index on that flag, and on the column you are going to order by.
Partition your table over that flag, so the dequeued transactions are not clogging up your queries.
If you really would get 1,000 messages every second, that would result in 86,400,000 rows a day. You might want to think of some way to clean up old rows.
Everything depends on your database engine/implementation.
For me, simple queues on tables with the following columns:
id / task / priority / date_added
usually works.
I used priority and task to group tasks, and in the case of a duplicated task I chose the one with the bigger priority.
And don't worry - for modern databases "thousands" is nothing special.
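A rough sketch of that layout (generic SQL; the table name and the duplicate-handling query are just illustrations):
CREATE TABLE task_queue (
    id         bigint       NOT NULL PRIMARY KEY,  -- IDENTITY/SERIAL/AUTO_INCREMENT per engine
    task       varchar(200) NOT NULL,
    priority   int          NOT NULL DEFAULT 0,
    date_added timestamp    NOT NULL
);

-- for duplicated tasks, keep the copy with the bigger priority, oldest first
SELECT task, MAX(priority) AS priority, MIN(date_added) AS first_added
FROM task_queue
GROUP BY task
ORDER BY MAX(priority) DESC, MIN(date_added);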
This will not be any trouble at all as long as you use something to keep track of the datetime of the insert. See here for the mysql options. The question is whether you only ever need the absolute most recently submitted item or whether you need to iterate. If you need to iterate, then what you need to do is grab a chunk with an ORDER BY statement, loop through, and remember the last datetime so that you can use that when you grab your next chunk.
Perhaps adding LIMIT 1 to your select statement would help, forcing the return after a single match.
Since you don't delete the records from the table, you need to have a composite index on (processed, id), where processed is the column that indicates if the current record had been processed.
The best thing would be creating a partitioned table for your records and make the PROCESSED field the partitioning key. This way, you can keep three or more local indexes.
However, if you always process the records in id order, and have only two states, updating the record would mean just taking the record from the first leaf of the index and appending it to the last leaf.
The currently processed record would always have the least id of all unprocessed records and the greatest id of all processed records.
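A sketch of the composite index and a matching dequeue query (queue_table and its columns are placeholders; use LIMIT 1 instead of TOP (1) outside SQL Server):
CREATE INDEX ix_queue_processed_id
    ON queue_table (processed, id);

-- the oldest unprocessed record comes straight off the index, no full scan
SELECT TOP (1) id
FROM queue_table
WHERE processed = 0
ORDER BY id;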
Create a clustered index over a date (or autoincrement) column. This will keep the rows in the table roughly in index order and allow fast index-based access when you ORDER BY the indexed column. Using TOP X (or LIMIT X, depending on your RDMBS) will then only retrieve the first x items from the index.
Performance warning: you should always review the execution plans of your queries (on real data) to verify that the optimizer doesn't do unexpected things. Also try to benchmark your queries (again on real data) to be able to make informed decisions.
I had the same general question of "how do I turn a table into a queue" and couldn't find the answer I wanted anywhere.
Here is what I came up with for Node/SQLite/better-sqlite3.
Basically just modify the inner WHERE and ORDER BY clauses for your use case.
const crypto = require("crypto"); // needed for randomBytes below

module.exports.pickBatchInstructions = (db, batchSize) => {
const buf = crypto.randomBytes(8); // Create a unique batch identifier
const q_pickBatch = `
UPDATE
instructions
SET
status = '${status.INSTRUCTION_INPROGRESS}',
run_id = '${buf.toString("hex")}',
mdate = datetime(datetime(), 'localtime')
WHERE
id IN (SELECT id
FROM instructions
WHERE
status is not '${status.INSTRUCTION_COMPLETE}'
and run_id is null
ORDER BY
length(targetpath), id
LIMIT ${batchSize});
`;
db.prepare(q_pickBatch).run(); // Change the status and set the run id (better-sqlite3 executes via prepared statements)
const q_getInstructions = `
SELECT
*
FROM
instructions
WHERE
run_id = '${buf.toString("hex")}'
`;
const rows = db.prepare(q_getInstructions).all(); // Get all rows with this batch id
return rows;
};
A very easy solution for this, in order not to have transactions, locks, etc., is to use the change tracking mechanism (not change data capture). It utilizes versioning for each added/updated/removed row, so you can track what changes happened after a specific version.
So, you persist the last version and query the new changes.
If a query fails, you can always go back and query data from the last version.
Also, if you do not want to get all changes with one query, you can get the top N ordered by version and store the greatest version you received, to use in the next query.
See this for an example: Using Change Tracking in SQL Server 2008
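A bare-bones T-SQL sketch of that approach (database, table, and column names are placeholders; the tracked table must have a primary key, here assumed to be ID):
-- enable change tracking once, at the database and table level
ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.MyQueueTable ENABLE CHANGE_TRACKING;

-- each poll: read everything that changed since the version seen last time
DECLARE @last_version bigint = 0;  -- persisted from the previous poll
SELECT ct.ID, ct.SYS_CHANGE_OPERATION, ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.MyQueueTable, @last_version) AS ct
ORDER BY ct.SYS_CHANGE_VERSION;

-- remember how far we got, for the next poll
SELECT CHANGE_TRACKING_CURRENT_VERSION();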