Using TSQLUnit to test INSTEAD OF triggers - sql-server-2005

I have an INSTEAD OF trigger on a table in my SQL Server 2005 database that checks several incoming values. If an incoming value is invalid, an error is raised and the transaction is rolled back. Otherwise the record is inserted.
I would like to include a TSQLUnit test of this trigger where, if an invalid value is inserted, having the transaction rolled back is the successful outcome of the test. I have created a test procedure to do this, but rolling back the transaction aborts execution of the whole suite of tests.
Has anyone had success with this? If so, how did you accomplish it?
If this is not possible with TSQLUnit, how do you test your triggers? Or do you test them at all?

I don't know if it's TSQLUnit or just standard SQL Server behavior you're seeing.
In SQL Server 2000 and earlier, an exception in a trigger aborts the batch on ROLLBACK because @@TRANCOUNT drops to 0.
With TRY/CATCH in SQL Server 2005 the behavior changes, and the client should handle it correctly. That said, I'd wrap the outer call in TRY/CATCH anyway.
Suggestions (a sketch follows the list):
check the state of the data before and after to see what you have
use a stored procedure (which is what you're doing anyway)
use TRY/CATCH
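A minimal sketch of such a test, assuming a hypothetical dbo.Orders table guarded by the trigger and TSQLUnit's tsu_failure reporting procedure. Note that if the trigger issues its own ROLLBACK, the framework's wrapping transaction is still lost; that is why the test verifies the state of the data rather than relying on the rollback alone:

    -- Hypothetical TSQLUnit test: the trigger rejecting the row is success.
    CREATE PROCEDURE ut_Orders_InvalidValueRejected
    AS
    BEGIN
        DECLARE @errored bit;
        SET @errored = 0;

        BEGIN TRY
            -- This INSERT should be rejected by the INSTEAD OF trigger.
            INSERT INTO dbo.Orders (OrderId, Amount) VALUES (999, -1);
        END TRY
        BEGIN CATCH
            SET @errored = 1;   -- the raised error is the expected outcome
        END CATCH;

        -- Verify by state, not by the rollback itself.
        IF @errored = 0
           OR EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderId = 999)
            EXEC tsu_failure 'Trigger accepted an invalid value';
    END;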

There is another T-SQL unit testing framework called TST (http://tst.codeplex.com/) that supposedly allows testing code that has its own transactions. From the TST docs:
"When the tested code has its own transactions, TST uses a reliable way of detecting the cases where its own rollback mechanism becomes ineffective. The TST rollback can be disabled at the test, suite or global level."
I have not used this framework, but I came across it (and your questions) while researching this topic.

Related

Reason for MSDTC promotion

The question "Reason for System.Transactions.TransactionInDoubtException" mentions three reasons for transactions being promoted to MSDTC. The first two are fairly well known; however, the third reason is the following:
3. If you have "try/catch {retry if timeout/deadlock}" logic that is running within your code, then this can cause issues when the transaction is within a System.Transactions.TransactionScope, because of the way SQL Server automatically rolls back transactions when a timeout or deadlock occurs.
I am seeing this behavior in one of my server apps when it is under severe load (SQL 2012). I've tried Googling extensively, but I'm not finding any more info. Does anyone have any references to additional information on this topic?
thanks,
Larry
I guess we're really running one transaction over one connection with a ref count of two, which I agree is bogus and should be re-coded.
This is not a problem by itself.
However, there are times when the inner transaction rolls back and retries.
The problem is that a rollback rolls back everything. You can't retry the "inner" work in isolation. (Yes, this would be super useful but SQL Server does not support it.)
This looks like it fires up a second connection for the inner transaction, which would cause MSDTC to be invoked to try to coordinate.
This is moot because at this point the "outer" work has been destroyed.
There is no great solution for this problem. The best strategy is likely to have only one transaction and retry the outer and inner work as one unit. Retries must always retry the entire transaction. You can use transaction "ref counting" if you want, but you can't use it to roll back (see the sketch below).
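To make the ref-counting point concrete, here is a minimal sketch; the two tables are hypothetical and none of the names come from the original post:

    -- A nested BEGIN TRAN only increments @@TRANCOUNT;
    -- it does not create an independent inner transaction.
    BEGIN TRAN;                                    -- @@TRANCOUNT = 1 ("outer" work)
    INSERT INTO dbo.OuterWork (Id) VALUES (1);     -- hypothetical table

    BEGIN TRAN;                                    -- @@TRANCOUNT = 2 (just a ref count)
    INSERT INTO dbo.InnerWork (Id) VALUES (1);     -- hypothetical table

    ROLLBACK;                                      -- rolls back EVERYTHING; @@TRANCOUNT = 0
    -- Both inserts are gone: the "inner" work cannot be retried in isolation.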
A particularly nasty feature of SQL Server is that it is unpredictable whether any particular error causes the transaction to roll back. The cleanest approach is therefore to never try to recover from errors with SQL Server and to always consider the transaction lost. (There is no technical reason SQL Server has to do this. It's just a stupid design choice.)

A transactional rollback of a liquibase changeset

I am currently using liquibase with SQL-based changesets, most of which contain INSERT statements. According to the documentation, the tool does not produce automatic rollback statements for this type of operation.
My question is (I might be missing something): since we can declare that a certain changeset runs in the context of a DB transaction, if an error occurs while applying the changeset, can the tool (liquibase) just issue a transaction rollback for that specific changeset?
My case is that all these scripts are currently part of a development process and are not yet final, meaning that someone changes the content and we re-apply them from scratch. If there is an error in the SQL of a 2000-line insert script, I would like the tool to automatically roll back the current transaction and not commit the changes to the DB.
Many thanks for any tips
During the update process, liquibase runs each changeSet in a transaction and rolls it back if there are any errors. So if you have a 2000-line insert with an error halfway through, it will fail as expected and roll back automatically.
What is not generated automatically is the SQL to "roll back" (in this case, delete) the inserts after they have been committed. If you specify a <rollback> block in your changeSet, then after the update successfully executes and has been committed, you can later run "liquibase rollback v2.3" and it will undo the changes made since the v2.3 tag. It cannot rely on the database's rollback functionality because the changes are already committed.
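For illustration, in a SQL-formatted changeset the undo statements are supplied as --rollback comments; the table, author, and id below are made up:

    --liquibase formatted sql

    --changeset dev:insert-users
    INSERT INTO app_user (id, name) VALUES (1, 'alice');
    INSERT INTO app_user (id, name) VALUES (2, 'bob');
    --rollback DELETE FROM app_user WHERE id IN (1, 2);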
It is probably a bit confusing, since "rollback" here means something different from the normal database use of the term in the context of a transaction.

What happens to modification queries when a JDBC application exits abnormally or the connection drops (Oracle, in case that is essential)?

I run extensive SQL UPDATE statements and PL/SQL procedures.
What happens to the data when my application loses its connection to the DB, the server halts, etc.?
For a SQL UPDATE command, I think it will be rolled back.
For a PL/SQL procedure, I assume code execution stops at some point; any previous COMMIT will have been applied, but the rest of the code's work will not.
Am I right?
Yes, it should roll back to the last rollback/commit call.
This became too long for a comment.
DDL statements (TRUNCATE, CREATE, DROP, ...) implicitly commit. So if you execute one in your stored procedure calls, everything before that statement is committed whether you want it or not. If the JDBC session is lost after the TRUNCATE, the changes made before it are still committed.
And yes, if you are inserting large volumes without intermediate commits, things can slow down. This is typically because you are building up rollback segments. There is a sweet spot with large inserts where you insert a batch of, say, 1,000 records at a time, committing after each batch (sketched below).
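A minimal PL/SQL sketch of that batching pattern, assuming hypothetical src_table and dst_table with matching (id, payload) columns:

    DECLARE
      CURSOR c IS SELECT id, payload FROM src_table;
      TYPE t_rows IS TABLE OF c%ROWTYPE;
      v_rows t_rows;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO v_rows LIMIT 1000;  -- one batch of 1,000 rows
        EXIT WHEN v_rows.COUNT = 0;
        FORALL i IN 1 .. v_rows.COUNT
          INSERT INTO dst_table VALUES v_rows(i);
        COMMIT;  -- commit per batch, keeping the undo/rollback segments small
      END LOOP;
      CLOSE c;
    END;
    /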
What you are describing does not sound like normal transactional activity but more like bulk loading. If you are bulk loading, maintain state so that you can restart the load, or discard the records already loaded if you replay it. Consider shipping the data as a file and importing it (or using an external table) rather than inserting via a client connection. The APPEND hint and the table's NOLOGGING attribute can speed up inserts (but note that the DB will not be in a typically recoverable state afterward and should be backed up again).

How to handle errors in a trigger?

I'm writing some SQL code that needs to be executed when rows are inserted in a database table, so I'm using an AFTER INSERT trigger; the code is quite complex, thus there could still be some bugs around.
I've discovered that, if an error happens when executing a trigger, SQL Server aborts the batch and/or the whole transaction. This is not acceptable for me, because it causes problems to the main application that uses the database; I also don't have the source code for that application, so I can't perform proper debugging on it. I absolutely need all database actions to succeed, even if my trigger fails.
How can I code my trigger so that, should an error happen, SQL Server will not abort the INSERT action?
Additionally, how can I perform proper error handling so that I can actually know the trigger has failed? Sending an email with the error data would be ok for me (the trigger's main purpose is actually sending emails), but how do I detect an error condition in a trigger and react to it?
Edit:
Thanks for the tips about optimizing performance by using something other than a trigger, but this code is not "complex" in the sense of being long-running or performance-intensive; it simply builds and sends a mail message. To do so, however, it must retrieve data from various linked tables, and since I am reverse-engineering this application, I don't have the database schema available and am still trying to find my way around it; this is why conversion errors or unexpected/null values can still creep up and crash the trigger execution.
Also, as stated above, I absolutely can't debug the application itself, nor modify it to do what I need in the application layer; the only way to react to an application event is to fire a database trigger when the application writes to the DB that something has just happened.
If the operations in the trigger are complex and/or potentially long running, and you don't want the activity to affect the original transaction, then you need to find a way to decouple the activity.
One way might be to use Service Broker. In the trigger, just create message(s) (one per row) and send them on their way, then do the rest of the processing in the service.
If that seems too complex, the older way to do it is to insert the rows needing processing into a work/queue table, and then have a job continuously pulling rows from there and doing the work (sketched below).
Either way, you're now not preventing the original transaction from committing.
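A minimal sketch of the work/queue-table variant, with hypothetical dbo.Orders (source) and dbo.MailQueue (queue) tables:

    -- The trigger only records what needs processing; a job does the work.
    CREATE TABLE dbo.MailQueue
    (
        QueueId     int IDENTITY(1,1) PRIMARY KEY,
        SourceId    int       NOT NULL,              -- key of the row to process
        EnqueuedAt  datetime  NOT NULL DEFAULT GETDATE(),
        ProcessedAt datetime  NULL                   -- set by the job when done
    );
    GO

    CREATE TRIGGER trg_Orders_QueueMail
    ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Only enqueue; the fragile mail-building logic runs in a job,
        -- so an error there can no longer abort the application's INSERT.
        INSERT INTO dbo.MailQueue (SourceId)
        SELECT OrderId FROM inserted;
    END;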
Triggers are part of the transaction. You could do try/catch/swallow around the trigger code, or the somewhat more professional try/catch/log/swallow, but really you should let it go bang and then fix the real problem, which can only be in your trigger.
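If you do decide to swallow, a minimal try/catch/log/swallow sketch might look like this, assuming a hypothetical dbo.SendOrderMail procedure and dbo.TriggerErrorLog table; note the caveat in the comments:

    CREATE TRIGGER trg_Orders_SendMail
    ON dbo.Orders            -- hypothetical source table
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        BEGIN TRY
            EXEC dbo.SendOrderMail;   -- hypothetical fragile mail logic
        END TRY
        BEGIN CATCH
            -- Caveat: if the error doomed the transaction (XACT_STATE() = -1),
            -- swallowing cannot save the INSERT; this only helps for non-fatal
            -- errors such as conversion failures.
            IF XACT_STATE() <> -1
                INSERT INTO dbo.TriggerErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
                VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());
            -- no re-raise: the error is swallowed
        END CATCH;
    END;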
If none of the above are acceptable, then you can't use a trigger.

Do I need to call rollback if I never commit?

I am connecting to a SQL Server using no autocommit. If everything is successful, I call commit. Otherwise, I just exit. Do I need to explicitly call rollback, or will it be rolled back automatically when we close the connection without committing?
In case it matters, I'm executing the SQL commands from within proc sql in SAS.
UPDATE: It looks like SAS may call commit automatically at the end of the proc sql block if rollback is not called. So in this case, rollback would be more than good practice; it would be necessary.
Final Update: We ended up switching to a new system, which seems to me to behave the opposite of our previous one. On ending the transaction without specifying committing or rolling back, it will roll back. So, the advice given below is definitely correct: always explicitly commit or rollback.
It should roll back on close of connection. Emphasis on should for a reason :-)
Proper transaction and error handling should have you always committing when the conditions for commit are met and rolling back when they aren't. I think it is a great habit to always commit or roll back when done and not rely on disconnect etc. All it takes is one mistake or one incorrectly closed (or unclosed) session to create a blocking-chain nightmare for everyone :-)
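A minimal T-SQL sketch of that habit (the table and logic are hypothetical):

    BEGIN TRY
        BEGIN TRAN;

        UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountId = 1;

        COMMIT;            -- conditions for commit were met
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK;      -- never rely on the disconnect to clean up
        -- surface the error to the caller (RAISERROR on 2005/2008, THROW on 2012+)
    END CATCH;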