Prevent two connections from reading the same row - sql

I am looking at building a database-based message queue implementation. I will essentially have a database table which will contain an autogenerated id (bigint), a message id, and message data. I will be writing a pull-based consumer which will query for the oldest record (min(id)) from the table and hand it over for processing.
Now my doubt is how to handle the querying of the oldest record when there are multiple consumer threads. How do I lock the first record read to the first consumer and, ideally, not even make it visible to the next one?
One idea that I have is to add another column, called locked_by, where I will store, let's say, the thread name: select the record for update, immediately update the locked_by column, and then continue processing it, so that the next query does not select rows that are already locked.
Is this a workable solution?
Edit:
Essentially, this is what I want.
Connection one queries the database table for a row, reads the first row, and locks it for update while reading.
Connection two queries the database table for a row. It should not be able to read the first row; it should read the second row if available and lock it for update.
Similar logic applies for connections 3, 4, etc.
Connection one updates the record with its identifier, processes it, and subsequently deletes the record.

TL;DR, see Rusanu's Using Tables as Queues. The example DDL below is gleaned from that article.
CREATE TABLE dbo.FifoQueueTable (
    Id bigint NOT NULL IDENTITY(1,1)
        CONSTRAINT pk_FifoQueue PRIMARY KEY CLUSTERED
    ,Payload varbinary(MAX)
);
GO
CREATE PROCEDURE dbo.usp_EnqueueFifoTableMessage
    @Payload varbinary(MAX)
AS
SET NOCOUNT ON;
INSERT INTO dbo.FifoQueueTable (Payload) VALUES (@Payload);
GO
CREATE PROCEDURE dbo.usp_DequeueFifoTableMessage
AS
SET NOCOUNT ON;
WITH cte AS (
    SELECT TOP(1) Payload
    FROM dbo.FifoQueueTable WITH (ROWLOCK, READPAST)
    ORDER BY Id
)
DELETE FROM cte
OUTPUT deleted.Payload;
GO
This implementation is simple, but handling the unhappy path can be complex depending on the nature of the messages and the cause of the error.
When message loss is acceptable, one can simply use the default autocommit transaction and log errors.
In cases where messages must not be lost, the dequeue must be done in a client-initiated transaction that is committed only after successful processing (or when no message was read). The transaction also ensures messages are not lost if the application or the database service crashes. A robust error-handling strategy depends on the type of error, the nature of the messages, and any message-ordering implications.
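For illustration, a minimal sketch of such a client-initiated transaction around the dequeue, assuming the procedures defined above (exactly how the application consumes the returned Payload row depends on the client library):
-- Sketch only: dequeue inside an explicit transaction so the message is
-- removed permanently only after processing succeeds.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
    -- READPAST inside the proc means concurrent consumers skip this locked row.
    EXEC dbo.usp_DequeueFifoTableMessage;

    -- ... process the returned message in the application here ...

    COMMIT TRANSACTION;        -- the row is deleted for good only on success
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- the row becomes visible to other consumers again
    THROW;
END CATCH;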
For a poison message (i.e. an error in the payload that prevents the message from ever being processed successfully), one can insert the bad message into a dead letter table for subsequent manual review and commit the transaction.
A transient error, such as a failure calling an external service, can be handled with techniques like:
Roll back the transaction so the message stays first in the FIFO queue and is retried on the next iteration.
Re-enqueue the failed message and commit, so the message goes to the end of the FIFO queue for a later retry.
Enqueue the failed message in a separate retry queue along with a retry count, and insert it into a dead letter table once a retry limit is reached (see the sketch after this list).
The app code can also include retry logic during message processing, but it should avoid long-running database transactions and fall back to one of the techniques above after some retry threshold.
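A minimal sketch of the retry-count technique; the table names, columns, and limit below are assumptions for illustration, not part of the original design:
-- Hypothetical retry and dead letter tables.
CREATE TABLE dbo.RetryQueueTable (
    Id bigint NOT NULL IDENTITY(1,1) CONSTRAINT pk_RetryQueue PRIMARY KEY CLUSTERED
    ,Payload varbinary(MAX)
    ,RetryCount int NOT NULL CONSTRAINT df_RetryQueue_RetryCount DEFAULT (0)
);
CREATE TABLE dbo.DeadLetterTable (
    Id bigint NOT NULL IDENTITY(1,1) CONSTRAINT pk_DeadLetter PRIMARY KEY CLUSTERED
    ,Payload varbinary(MAX)
    ,ErrorMessage nvarchar(4000)
);
GO

-- On a transient failure, requeue the message for another attempt, or
-- dead-letter it once the (assumed) retry limit is reached.
DECLARE @Payload varbinary(MAX), @RetryCount int = 0, @RetryLimit int = 5;

IF @RetryCount < @RetryLimit
    INSERT INTO dbo.RetryQueueTable (Payload, RetryCount)
    VALUES (@Payload, @RetryCount + 1);
ELSE
    INSERT INTO dbo.DeadLetterTable (Payload, ErrorMessage)
    VALUES (@Payload, N'retry limit exceeded');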
These same concepts can be implemented with Service Broker to facilitate a T-SQL-only solution (internal activation), but that adds complexity when it isn't a requirement (as in your case). Note that SB queues intrinsically implement the "READPAST" requirement but, because all messages within the same conversation group are locked, the implication is that each message will need to be in a separate conversation.

Related

SQL Trigger when queued insert transactions are finished

In my application, which is implemented using SoftwareAG Webmethods, I'm using JMS to process more than 2 million records in parallel. Each JMS thread gets a batch of records (let's say 1000) to process and insert into a database table (let's call it table A), and after each thread inserts its batch it sends a result message on JMS, which I aggregate later to update the process status.
The problem I'm facing is that a thread processes its batch, inserts it, and puts the result message on the JMS queue, but the insert transactions get queued up in the MSSQL database; the application itself doesn't wait for them, considers the work done, and continues with the next line of logic.
Therefore the process on each thread completes and the main process is marked as completed while there are still a lot of records waiting to be inserted into the database.
So my question is: is there any trigger in MSSQL that can be used for when the queued transactions on a table are finished?
I suggest that instead of plain INSERT batches, each batch creates a job with two steps. The first step inserts the data, and the second step inserts a row into a results table recording that the batch is complete. After that, you can check the results table.
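A minimal sketch of the results-table idea (the batch_results table and its columns are assumptions for illustration):
-- Hypothetical results table that the second job step writes to.
CREATE TABLE dbo.batch_results (
    batch_id     int       NOT NULL PRIMARY KEY,
    row_count    int       NOT NULL,
    completed_at datetime2 NOT NULL DEFAULT (SYSUTCDATETIME())
);
GO

-- Step 1 of the job: insert the batch into table A (as today).
-- Step 2 of the job: record completion; it only runs after step 1 succeeds.
DECLARE @batch_id int = 1, @row_count int = 1000;   -- supplied per batch in practice
INSERT INTO dbo.batch_results (batch_id, row_count)
VALUES (@batch_id, @row_count);

-- The aggregating process can then treat the main process as complete only
-- when every expected batch has a row in dbo.batch_results.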

Redshift: Serializable isolation violation on table

I have a very large Redshift database that contains billions of rows of HTTP request data.
I have a table called requests which has a few important fields:
ip_address
city
state
country
I have a Python process running once per day, which grabs all distinct rows which have not yet been geocoded (do not have any city / state / country information), and then attempts to geocode each IP address via Google's Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
    country, state, city = geocode_ip_address(ip_address)
    execute_transaction('''
        UPDATE requests
        SET ip_country = %s, ip_state = %s, ip_city = %s
        WHERE ip_address = %s
    ''', (country, state, city, ip_address))
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP requests into my table, so when I attempt to execute my UPDATE statement, it is unable to select all rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that will stop failing regularly?
Your code is violating the serializable isolation level of Redshift. You need to make sure that your code is not trying to open multiple transactions on the same table before closing all open transactions.
You can achieve this by locking the table in each transaction so that no other transaction can access the table for updates until the open transaction is closed. I'm not sure how your code is architected (synchronous or asynchronous), but this will increase the run time, as each lock forces the others to wait until the transaction finishes.
Refer: http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html
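For example, a sketch of taking the lock explicitly at the start of each transaction (the column values here are placeholders):
-- Sketch: serialize writers to the requests table by locking it per transaction.
BEGIN;
LOCK requests;
UPDATE requests
SET ip_country = 'US', ip_state = 'CA', ip_city = 'Mountain View'
WHERE ip_address = '203.0.113.7';   -- placeholder values
COMMIT;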
Just got the same issue on my code, and this is how I fixed it:
First things first, it is good to know that this error code means you are trying to do concurrent operations in Redshift. Issuing a second query against a table before a query you ran moments ago has finished, for example, is a case where you would get this kind of error (that was my case).
Good news: there is a simple way to serialize Redshift operations! You just need to use the LOCK command. Here is the Amazon documentation for the Redshift LOCK command. It basically works by making the next operation wait until the previous one is closed. Note that using this command will naturally make your script a little bit slower.
In the end, the practical solution for me was to insert the LOCK command before the query statements (in the same string, separated by a ';'). Something like this:
LOCK table_name; SELECT * from ...
And you should be good to go! I hope it helps you.
Since you are doing a point update in your geo codes update process, while the other processes are writing to the table, you can intermittently get the Serializable isolation violation error depending on how and when the other process does its write to the same table.
Suggestions
One way is to use a table lock like Marcus Vinicius Melo has suggested in his answer.
Another approach is to catch the error and re-run the transaction.
For any serializable transaction, it is said that the code initiating the transaction should be ready to retry the transaction in the face of this error. Since all transactions in Redshift are strictly serializable, all code initiating transactions in Redshift should be ready to retry them in the face of this error.
Explanations
The typical cause of this error is that two transactions started and proceeded in their operations in such a way that at least one of them cannot be completed as if they executed one after the other. So the db system chooses to abort one of them by throwing this error. This essentially gives control back to the transaction initiating code to take an appropriate course of action. Retry being one of them.
One way to prevent such a conflicting sequence of operations is to use a lock. But then it restricts many of the cases from executing concurrently which would not have resulted in a conflicting sequence of operations. The lock will ensure that the error will not occur but will also be concurrency restricting. The retry approach lets concurrency have its chance and handles the case when a conflict does occur.
Recommendation
That said, I would still recommend that you don't update Redshift with point updates in this manner. The geocode update process should write to a staging table, and once all records are processed, perform one single bulk update, followed by a vacuum if required.
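A minimal sketch of that staging-table approach, assuming a geo_staging table that the geocoding process fills (the names and types are illustrative):
-- Hypothetical staging table populated by the daily geocoding run.
CREATE TABLE geo_staging (
    ip_address varchar(45),
    city       varchar(256),
    state      varchar(256),
    country    varchar(256)
);

-- After the whole run is geocoded, apply everything in one bulk UPDATE.
BEGIN;
UPDATE requests
SET ip_city    = s.city,
    ip_state   = s.state,
    ip_country = s.country
FROM geo_staging s
WHERE requests.ip_address = s.ip_address;
COMMIT;

-- TRUNCATE commits implicitly in Redshift, so run it after the update commits.
TRUNCATE geo_staging;

-- Reclaim space / re-sort if required.
VACUUM requests;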
Either start a new session when you do the second update on the same table, or 'commit' once your transaction is complete.
You can run set autocommit=on before you start updating.

How does NServiceBus stop a message from being handled by two consumers?

The NServiceBus documentation lists a benefit of SQL transport as:
Queues support competing consumers (multiple instances of same endpoint feeding off of same queue) so there is no need for distributor in order to scale out the processing
http://docs.particular.net/nservicebus/sqlserver/design
How does NServiceBus prevent a message from being handled by multiple consumers if multiple consumers have subscribed to the same queue?
Does NServiceBus lock the entire table until the message is handled? Or is the message marked as 'being processed'?
The SQL Transport uses very specific lock hints in order to lock a row and cause other competing threads to ignore any row currently locked.
From NServiceBus.SqlServer 2.2.0 (current version at the time I'm writing this) the SQL used, but reformatted by me, is:
WITH message AS
(
    SELECT TOP(1) *
    FROM [{Schema}].[{Queue}] WITH (UPDLOCK, READPAST, ROWLOCK)
    ORDER BY [RowVersion] ASC
)
DELETE FROM message
OUTPUT deleted.Id, deleted.CorrelationId, deleted.ReplyToAddress,
       deleted.Recoverable, deleted.Expires, deleted.Headers, deleted.Body;
It uses a Common Table Expression to limit the source data to the one row to return, then uses the following lock hints:
UPDLOCK - Hold the data under lock with intent to update it.
READPAST - Ignore locked rows and fetch the next unlocked one.
ROWLOCK - Force row-level locks and don't escalate to page locks or table locks.
By executing the entire thing as a delete but then outputting the data about to be deleted, we can read the data, and if the transaction commits, the row is removed. Otherwise, if the transaction rolls back, then the lock is released and the row waits to be picked up by the next competing consumer.

Why is an implicit table lock being released prior to end of transaction in RedShift?

I have an ETL process that is building dimension tables incrementally in RedShift. It performs actions in the following order:
Begins transaction
Creates a table staging_foo like foo
Copies data from external source into staging_foo
Performs mass insert/update/delete on foo so that it matches staging_foo
Drops staging_foo
Commits the transaction
Individually this process works, but in order to achieve continuous streaming refreshes to foo and redundancy in the event of failure, I have several instances of the process running at the same time. And when that happens I occasionally get concurrent serialization errors. This is because both processes are replaying some of the same changes to foo from staging_foo in overlapping transactions.
What happens is that the first process creates the staging_foo table, and the second process is blocked when it attempts to create a table with the same name (this is what I want). When the first process commits its transaction (which can take several seconds) I find that the second process gets unblocked before the commit is complete. So it appears to be getting a snapshot of the foo table before the commit is in place, which causes the inserts/updates/deletes (some of which may be redundant) to fail.
I am theorizing based on the documentation http://docs.aws.amazon.com/redshift/latest/dg/c_serial_isolation.html where it says:
Concurrent transactions are invisible to each other; they cannot detect each other's changes. Each concurrent transaction will create a snapshot of the database at the beginning of the transaction. A database snapshot is created within a transaction on the first occurrence of most SELECT statements, DML commands such as COPY, DELETE, INSERT, UPDATE, and TRUNCATE, and the following DDL commands:
ALTER TABLE (to add or drop columns)
CREATE TABLE
DROP TABLE
TRUNCATE TABLE
The documentation quoted above is somewhat confusing to me because it first says a snapshot will be created at the beginning of a transaction, but subsequently says a snapshot will be created only at the first occurrence of some specific DML/DDL operations.
I do not want to do a deep copy where I replace foo instead of incrementally updating it. I have other processes that continually query this table so there is never a time when I can replace it without interruption. Another question asks a similar question for deep copy but it will not work for me: How can I ensure synchronous DDL operations on a table that is being replaced?
Is there a way for me to perform my operations in a way that I can avoid concurrent serialization errors? I need to ensure that read access is available for foo so I can't LOCK that table.
OK, Postgres (and therefore Redshift [more or less]) uses MVCC (Multi Version Concurrency Control) for transaction isolation instead of a db/table/row/page locking model (as seen in SQL Server, MySQL, etc.). Simplistically every transaction operates on the data as it existed when the transaction started.
So your comment "I have several instances of the process running at the same time" explains the problem. If Process 2 starts while Process 1 is running then Process 2 has no visibility of the results from Process 1.

Design a Lock for SQL Server to help relax the conflict between INSERT and SELECT

The SQL Server is SQL Azure, which is basically SQL Server 2008 for normal purposes.
I have a table called TASK that constantly has new data coming in (new tasks) and removed (completed tasks).
For new data coming in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, UPDATE to let other threads know this task has started processing, then DELETE once finished.
Deadlocks sometimes happen on the SELECT, but most of the time they happen on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK to ask the SELECT not to read rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving locking problems like this is easier.
For the conflict:
1. You can use Service Broker.
2. Use the isolation level.
Run dbcc useroptions; in the last row you can see that the default isolation level is read committed (this is at the session level).
We can change the level to READ_COMMITTED_SNAPSHOT to reduce the conflict. In SQL Server this is not exactly the same row-level behavior as Oracle, but we can use this method to achieve a similar effect:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
Enabling this feature requires the database to have no other active connections (e.g. single-user mode).
And then you can test it.
With sessions A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: can still update other rows and select all the data from the table.
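A sketch of that test, under the assumption that READ_COMMITTED_SNAPSHOT has been enabled as above:
-- Session A: hold an exclusive lock on one row inside an open transaction.
BEGIN TRANSACTION;
UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1;

-- Session B: does not block; it reads the last committed version of row 1
-- and can still modify other rows.
SELECT * FROM table1;
UPDATE table1 SET name = 'other' WHERE id = 2;

-- Session A: release the lock.
COMMIT TRANSACTION;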
As for locks, in SQL Server there are three approaches in practice:
1. Optimistic locking, using a timestamp (rowversion) column for concurrency control.
2. Pessimistic locking, forcing locks when accessing the data, using hints such as UPDLOCK and XLOCK.
3. Application ('virtual') locks, using the sp_getapplock procedure.
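For the third option, a minimal sketch using sp_getapplock (the resource name and timeout are arbitrary):
-- Sketch: an application lock scoped to the transaction.
BEGIN TRANSACTION;

DECLARE @result int;
EXEC @result = sp_getapplock
        @Resource    = 'task-processing',   -- any application-defined name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 5000;                -- milliseconds

IF @result >= 0
BEGIN
    -- ... do the work that must not run concurrently ...
    COMMIT TRANSACTION;    -- releases the application lock with the transaction
END
ELSE
    ROLLBACK TRANSACTION;  -- lock not granted (timeout, deadlock, etc.)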
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
Consider using Service Broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed when the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
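For example, a sketch of such an index; the TaskId and Status column names are assumptions about how a task is identified and tracked:
-- Hypothetical supporting index for the statements that locate a single task.
CREATE NONCLUSTERED INDEX IX_TASK_TaskId
    ON dbo.TASK (TaskId)
    INCLUDE (Status);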
All of these issues could be solved by moving to Service Broker if it is appropriate.