Using Commit in a SQL Server Trigger

Currently I am working on a requirement which requires logging the input record into a staging table.
The requirement summary is as follows:
A web service sends a request which inserts a record into the level 1 staging table (StagingTable1).
A database trigger on StagingTable1 checks the validity of the record and raises an error if it is invalid.
This error message is acknowledged in the web service response to the other end system.
My challenge is to track these failed records in another staging table, StagingTable2, but RAISERROR in the trigger causes the entire transaction to roll back, so anything written to StagingTable2 is rolled back as well.
How can this be achieved?
Note: I tried temporary tables but that does not work, and autonomous transactions (Oracle's PRAGMA AUTONOMOUS_TRANSACTION) are not possible in SQL Server as far as I know.

You can make the trigger put the record into StagingTable2 - or not, so that the absence of a row indicates an unsuccessful validation. You can even put the error description, along with the row's key, into some other table, so that you always know which rows failed to pass.
Or you can use Service Broker for that, if your servers are far enough away from each other.
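A minimal sketch of recording failures from the trigger instead of raising an error; everything beyond StagingTable1/StagingTable2 (the StagingId and SomeColumn columns, the validation rule, the ErrorDescription column) is an assumption for illustration:
-- Sketch only: record validation failures rather than raising an error that rolls everything back.
CREATE TRIGGER dbo.trg_StagingTable1_Validate
ON dbo.StagingTable1
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows failing the (hypothetical) validation rule are copied to StagingTable2
    -- together with an error description.
    INSERT INTO dbo.StagingTable2 (StagingId, ErrorDescription)
    SELECT i.StagingId, N'Validation failed: SomeColumn is NULL'
    FROM inserted AS i
    WHERE i.SomeColumn IS NULL;

    -- No RAISERROR here, so nothing is rolled back; the caller checks StagingTable2
    -- to build the web service response for the failed records.
END;
GO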

Related

Prevent two connections from reading same row

I am looking at building a database-based message queue implementation. I will essentially have a database table which will contain an autogenerated id (bigint), a message id, and message data. I will be writing a pull-based consumer which will query for the oldest record (min(id)) from the table and hand it over for processing.
Now my doubt is how I would handle querying the oldest record when there are multiple consumer threads. How do I lock the first record read to the first consumer and basically not even make it visible to the next one?
One idea I have is to add another column called locked_by where I will store, let's say, the thread name: select the record for update, immediately update the locked_by column, and then continue processing it, so that locked rows are not selected by the next query.
Is this a workable solution?
Edit:
Essentially, this is what I want.
Connection one queries the database table for a row. It reads the first row and locks it for update while reading.
Connection two queries the database table for a row. It should not be able to read the first row; it should read the second row if available and lock it for update.
Similar logic applies for connections 3, 4, etc.
Connection one updates the record with its identifier, processes it, and subsequently deletes the record.
TL;DR: see Rusanu's Using Tables as Queues. The example DDL below is gleaned from that article.
CREATE TABLE dbo.FifoQueueTable (
      Id bigint NOT NULL IDENTITY(1,1)
        CONSTRAINT pk_FifoQueue PRIMARY KEY CLUSTERED
    , Payload varbinary(MAX)
);
GO
CREATE PROCEDURE dbo.usp_EnqueueFifoTableMessage
    @Payload varbinary(MAX)
AS
SET NOCOUNT ON;
INSERT INTO dbo.FifoQueueTable (Payload) VALUES (@Payload);
GO
CREATE PROCEDURE dbo.usp_DequeueFifoTableMessage
AS
SET NOCOUNT ON;
WITH cte AS (
    SELECT TOP(1) Payload
    FROM dbo.FifoQueueTable WITH (ROWLOCK, READPAST)
    ORDER BY Id
)
DELETE FROM cte
OUTPUT deleted.Payload;
GO
This implementation is simple, but handling the unhappy path can be complex depending on the nature of the messages and the cause of the error.
When message loss is acceptable, one can simply use the default autocommit transaction and log errors.
In cases where messages must not be lost, the dequeue must be done in a client-initiated transaction that is committed only after successful processing (or when no message was read). The transaction also ensures messages are not lost if the application or database service crashes. A robust error handling strategy depends on the type of error, the nature of the messages, and message processing order implications.
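A minimal sketch of that pattern is below; in practice the transaction would usually be opened from the application code that processes the payload, and THROW requires SQL Server 2012 or later.
-- Sketch only: dequeue inside an explicit transaction and commit after processing succeeds.
BEGIN TRY
    BEGIN TRANSACTION;

    -- The payload is returned to the caller as a result set; the deleted row stays
    -- locked (and skipped by other READPAST consumers) until this transaction ends.
    EXEC dbo.usp_DequeueFifoTableMessage;

    -- ... the application processes the payload here ...

    COMMIT TRANSACTION;   -- the destructive read becomes permanent only now
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- the message goes back to the head of the queue
    THROW;
END CATCH;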
A poison message (i.e. one with an error in the payload that prevents the message from ever being processed successfully) can be inserted into a dead-letter table for subsequent manual review, after which the transaction is committed.
A transient error, such as a failure calling an external service, can be handled with techniques like:
Roll back the transaction so the message stays first in the FIFO queue and is retried on the next iteration.
Requeue the failed message and commit, so the message moves to the end of the FIFO queue for retry.
Enqueue the failed message in a separate retry queue along with a retry count, and move it to a dead-letter table once a retry limit is reached (see the sketch below).
The app code can also include retry logic during message processing, but it should avoid long-running database transactions and fall back to one of the techniques above after some retry threshold.
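A rough sketch of the retry-queue / dead-letter idea; the table names, columns, and retry limit are assumptions, and in practice the retry count would have to travel with the message (in the payload or in an extra queue column):
-- Assumed dead-letter table for messages that exhausted their retries.
CREATE TABLE dbo.FifoQueueDeadLetter (
      Id bigint NOT NULL IDENTITY(1,1) CONSTRAINT pk_FifoQueueDeadLetter PRIMARY KEY CLUSTERED
    , Payload varbinary(MAX)
    , ErrorMessage nvarchar(2048)
    , DeadLetteredAt datetime2 NOT NULL CONSTRAINT df_DeadLetteredAt DEFAULT (SYSUTCDATETIME())
);
GO
-- Re-enqueue a failed message at the end of the FIFO queue, or dead-letter it
-- once the retry limit is reached.
CREATE PROCEDURE dbo.usp_RequeueOrDeadLetter
      @Payload varbinary(MAX)
    , @RetryCount int
    , @ErrorMessage nvarchar(2048)
AS
SET NOCOUNT ON;
IF @RetryCount < 5   -- retry limit chosen arbitrarily for the sketch
    INSERT INTO dbo.FifoQueueTable (Payload) VALUES (@Payload);
ELSE
    INSERT INTO dbo.FifoQueueDeadLetter (Payload, ErrorMessage) VALUES (@Payload, @ErrorMessage);
GO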
These same concepts can be implemented with Service Broker to facilitate a T-SQL-only solution (internal activation), but that adds complexity when it's not a requirement (as in your case). Note that SB queues intrinsically implement the "READPAST" requirement but, because all messages within the same conversation group are locked, the implication is that each message will need to be in a separate conversation.

Redshift: Serializable isolation violation on table

I have a very large Redshift database that contains billions of rows of HTTP request data.
I have a table called requests which has a few important fields:
ip_address
city
state
country
I have a Python process running once per day which grabs all distinct rows that have not yet been geocoded (i.e. do not have any city / state / country information) and then attempts to geocode each IP address via Google's Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
    country, state, city = geocode_ip_address(ip_address)
    execute_transaction('''
        UPDATE requests
        SET ip_country = %s, ip_state = %s, ip_city = %s
        WHERE ip_address = %s
    ''', (country, state, city, ip_address))
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP requests into my table, so when I attempt to execute my UPDATE statement, it is unable to select all rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that will stop failing regularly?
Your code is violating the serializable isolation level of Redshift. You need to make sure that your code is not trying to open multiple transactions on the same table before closing all open transactions.
You can achieve this by locking the table in each transaction so that no other transaction can access the table for updates until the open transaction is closed. I'm not sure how your code is architected (synchronous or asynchronous), but this will increase the run time, as each lock forces the others to wait until the transaction is over.
Refer: http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html
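For example, each update transaction could look like the sketch below (the literal values are placeholders):
-- Sketch: take an explicit table lock inside each transaction so concurrent writers queue up.
BEGIN;
LOCK requests;
UPDATE requests
SET ip_country = 'US', ip_state = 'CA', ip_city = 'San Francisco'
WHERE ip_address = '203.0.113.10';
END;   -- END (or COMMIT) releases the lock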
Just got the same issue on my code, and this is how I fixed it:
First things first: it is good to know that this error code means you are trying to run concurrent operations in Redshift. Issuing a second query against a table before the first query you ran moments ago has finished, for example, is a case where you would get this kind of error (that was my case).
The good news is that there is a simple way to serialize Redshift operations: the LOCK command. See the Amazon documentation for the Redshift LOCK command. It basically works by making the next operation wait until the previous one is closed. Note that, using this command, your script will naturally get a little bit slower.
In the end, the practical solution for me was to insert the LOCK command before the query statements (in the same string, separated by a ';'). Something like this:
LOCK table_name; SELECT * from ...
And you should be good to go! I hope it helps you.
Since you are doing point updates in your geocode update process while the other processes are writing to the table, you can intermittently get the serializable isolation violation error, depending on how and when the other processes write to the same table.
Suggestions
One way is to use a table lock like Marcus Vinicius Melo has suggested in his answer.
Another approach is to catch the error and re-run the transaction.
The code initiating any serializable transaction should be ready to retry it when this error occurs; since all transactions in Redshift are strictly serializable, all code initiating transactions in Redshift should be prepared to retry them.
Explanations
The typical cause of this error is that two transactions started and proceeded in their operations in such a way that at least one of them cannot be completed as if they executed one after the other. So the db system chooses to abort one of them by throwing this error. This essentially gives control back to the transaction initiating code to take an appropriate course of action. Retry being one of them.
One way to prevent such a conflicting sequence of operations is to use a lock. But then it restricts many of the cases from executing concurrently which would not have resulted in a conflicting sequence of operations. The lock will ensure that the error will not occur but will also be concurrency restricting. The retry approach lets concurrency have its chance and handles the case when a conflict does occur.
Recommendation
That said, I would still recommend that you do not update Redshift with point updates in this manner. The geocode update process should write to a staging table and, once all records are processed, perform one single bulk update, followed by a VACUUM if required.
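A sketch of that pattern; the staging table name and column types are assumptions:
-- Load the day's geocoding results into a staging table, then apply one bulk update.
CREATE TEMP TABLE geocode_staging (
    ip_address varchar(45),
    ip_country varchar(64),
    ip_state   varchar(64),
    ip_city    varchar(64)
);

-- COPY (or INSERT) the geocoded results into geocode_staging here.

UPDATE requests
SET ip_country = s.ip_country,
    ip_state   = s.ip_state,
    ip_city    = s.ip_city
FROM geocode_staging s
WHERE requests.ip_address = s.ip_address;

VACUUM requests;   -- optional: reclaim space left behind by the updated rows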
Either start a new session when you do the second update on the same table, or 'commit' once your transaction is complete.
You can write set autocommit=on before you start updating.

SQL Replication error - row was not found at the Subscriber

We are trying to set up replication on a SQL Server 2005 database. We have followed the same set of instructions for the past year, and all has been fine. Recently, it started failing (it's a development environment, so every week we rebuild the database and re-apply replication).
We follow a set of steps, the snapshot gets generated and applied to the replicated database. All fine. No errors.
We then add a new row to the source database, and bang! Error.
Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x000004BE00000558000100000000, Command ID: 1)
Error messages:
The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)
We're inserting a row, but it's complaining that the row isn't on the subscriber. That's right, though. We want it to replicate the insert to the subscriber...
When we do a SELECT COUNT(*) on both the source and the destination, the row count is the same, until we do the INSERT, at which point the source increments but the destination remains the same.
Any ideas where we can start looking?
Ugh... this error sucks. When you say that you inserted a row, I assume that you inserted it at the publisher. That's not going to work; replication delivers commands serially. That is, it won't replicate the fact that you inserted the missing row until it gets past your current error.
So, here's where we start. In the error message, we see a transaction sequence number. We can use that to determine the primary key of the missing row. At the distributor, there's a stored procedure called sp_browsereplcmds. You can plug the transaction sequence number in for both the @xact_seqno_start and @xact_seqno_end parameters. You'll also see a command_id parameter in the stored procedure; this corresponds to the Command ID in your error message. Try executing the procedure with just those parameters specified. It should give you the command that it's trying to execute at the subscriber. From there, you can tell the primary key of the row that it's either trying to update or delete. You can then insert a row with that primary key at the subscriber and replication will move on.
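For example, with the sequence number from the error above, the call would look something like this (a sketch, run at the distributor):
-- Run in the distribution database; the sequence number comes from the error message.
EXEC sp_browsereplcmds
      @xact_seqno_start = '0x000004BE00000558000100000000'
    , @xact_seqno_end   = '0x000004BE00000558000100000000';
-- The Command ID from the error (1 here) tells you which of the returned commands is failing.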
Alternatively, you could drop this article from this subscriber, re-add it, and re-initialize that article. It's a bit more intense on the server, but is a lot less fiddly.
This can be caused by data corruption in the publisher database; we faced the same replication errors when we ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS.
For root cause analysis we tried:
1.) Checking for storage errors by running CHKDSK in offline mode.
2.) Purging the table if it has a lot of data; in our case we had 40 million rows.
In our case the issue was gone after purging the data.

Is a commit needed on a select query in DB2?

I have a vendor reporting product executing queries to pull report data: no inserts, no updates, just reading data.
We have doubled our heap size three times and are now at 1024 4K pages. The app will run fine for a week, then we begin to see DB2 SQL error: SQLCODE: -954, SQLSTATE: 57011, indicating the transaction log is not able to accommodate the request.
It's not the size of the reports, since they run fine after a recycle. I spoke with another DBA about this. He believes the problem is a difference between Oracle and DB2: the vendor code is crappy and does not issue commits on the selects. This causes the references not to be cleaned up, and they slowly accumulate as garbage in the heap.
I wanted to know if this is accurate as I thought only inserts and updates needed to have commits included. Is there any IBM documentation on this?
We are currently recycling on a weekly basis to alleviate the problem but I would like to have a good handle on the issue before going back to the vendor asking them to alter their code.
Any transaction needs to be properly terminated -- why did you think that only applies to inserts and updates? Consider running transactionally a "select a from b where c > 12" and then "select a from b where c <= 12"; within a transaction the DB has to guarantee that every a gets returned exactly once either from the first or second select, not both (assuming c is never null;-). Without transactionality, some a's might fall between the cracks or be returned twice if their corresponding c was changed by a different transaction, and that's just not ACID!-)
So when you do not need separate SELECT queries to be transactional wrt each other, tell the DB! And the way you tell, is by terminating the transaction after each select (normally commit is what you use for the purpose, though I guess you could, indifferently, choose to use rollback here;-).
Per Alex's response, the first SQL activity after any CONNECT, COMMIT, or ROLLBACK initiates a transaction.
To get a handle on your resource issue (transaction logs full), you should investigate your application that issues the reports - ensure that transactions are being closed out explicitly in code. I've seen cases where application developers rely upon the Garbage Collector to clean up database objects - while those objects are waiting for cleanup, the database resources (transactions) are held open.
It's always good practice to explicitly COMMIT or ROLLBACK your transactions as soon as you are done with the data - regardless of the programming methodology you use.
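For example, in a DB2 session with autocommit turned off, even a read-only unit of work should be ended explicitly; a sketch using the table and columns from the example above:
-- With autocommit off (e.g. "db2 +c"), even a read-only statement starts a unit of work.
SELECT a FROM b WHERE c > 12;

-- Explicitly end the unit of work so its locks and log resources are released.
COMMIT;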
I get this error when committing a transaction on a SELECT query, but despite the error it does return a result set that includes the queried data.
tran.Commit();
error [hy011] [ibm] cli0126e the operation is invalid sqlstate=hy011
I changed my code to tran.Rollback(); and the error disappeared.
Can anyone explain this behavior?

Uncommittable transaction is detected at the end of the batch. The transaction is rolled back

We are having a problem with a server migration. We have one application that performs a lot of transactions.
It works fine on one database server, but when we transfer the same database to another server, we are facing the following error.
Server: Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is
detected at the end of the batch. The
transaction is rolled back.
The same database is copied to the other server with all the data. If we change the connection string back to the old server, then it works fine.
Can anybody suggest on this?
This message means one of the other participants in the transaction voted to roll back. After that, the transaction must fail.
So this message is a consequence, rather than a cause. Are you receiving any earlier / other error messages?
What happens when you run the query from Management Studio?
What you seem to have is a problem where the record is acceptable in one database but not the other. I suggest you look at the differences between the two database structures (yes, I know they are supposed to be the same, but clearly they are not). I suspect you will find either a collation difference, a data type difference, or a data length difference between the two. You might also have a table where the identity definition is missing, so the insert fails because a required field has no value. Tools like SQL Compare make it easy to find differences.
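If you don't have a comparison tool handy, a rough starting point (sketch only) is to pull the column metadata from both databases and diff the results:
-- Run against each of the two databases and compare the output.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
       IS_NULLABLE, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;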