C# SQL transaction

I am using a C# class that is calling a SQL stored procedure in a serializable transaction.
So if something goes wrong in the stored procedure, everything is rolled back.
I have one statement in the SQL stored procedure that should be always executed (even if the stored procedure fails at some point and a rollback occurs). The statement is an update of a record.
I cannot change the C# library, so I need to do this in my stored procedure.
Is there some way I can execute that one statement outside the transaction?

You could perhaps use SAVE TRANSACTION. It is not supported in distributed transactions, and the statement that must persist has to be executed before the savepoint, so it might not be what you are looking for.
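A rough sketch of how that could look (the table and procedure names here are made up): the "always" update runs first, and a savepoint guards everything after it.

UPDATE dbo.AuditRecord SET Touched = 1 WHERE Id = @Id  -- the statement that must persist

SAVE TRANSACTION AfterAudit

BEGIN TRY
    -- ... the rest of the procedure's work ...
    EXEC dbo.usp_DoRiskyWork @Id
END TRY
BEGIN CATCH
    -- Undo everything after the savepoint, but keep the audit update
    ROLLBACK TRANSACTION AfterAudit
END CATCH

Note that if the C# caller later rolls back the outer transaction, the update is lost anyway; a savepoint cannot survive that.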

I have found the solution.
I didn't realize that SQL Server knew it was called in a transactional manner by the C# class.
The update statement that should always be executed is the first step (and also last step) in the procedure. Let me clarify:
I have an IF condition. If it is true, the update should occur no matter what. If it is false, some transactional logic should be executed.
Now the C# class expects a result from the stored proc. If the proc doesn't return a result (as in the update branch), it rolls back the transaction.
So by just adding the following lines right after the update statement, the update occurs :)
IF @@TRANCOUNT > 0
BEGIN
    COMMIT TRANSACTION
END

Your solution does not sound like a good one. For example, if your stored procedure becomes part of a bigger transaction, it will commit all changes made before it. Also, I believe no one would guess that your proc has such behaviour without first seeing the code.
The need to always execute some part of a proc sounds like a need for a security audit, so maybe you should use a trace, Extended Events, or SQL Server Audit instead.
If you really need what you say you need, you can use the method described here: How to create an autonomous transaction in SQL Server 2008
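For reference, the method in that link boils down to a loopback linked server with transaction promotion disabled, so calls through it run in their own transaction. A hedged sketch; the server, database, and procedure names are placeholders:

DECLARE @self SYSNAME = @@SERVERNAME

-- One-time setup: a linked server pointing back at this instance
EXEC sp_addlinkedserver @server = N'loopback', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = @self
EXEC sp_serveroption 'loopback', 'rpc out', 'true'
-- Keep the loopback call out of the caller's (distributed) transaction
EXEC sp_serveroption 'loopback', 'remote proc transaction promotion', 'false'

-- Inside your proc: this call commits independently, even if the
-- surrounding transaction later rolls back
EXEC loopback.MyDatabase.dbo.usp_AlwaysUpdateRecord @RecordId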

Related

Trigger calls Stored Procedure and if we do a select will the return values be the new or old?

Using MS SQL Server, if a trigger calls a stored procedure which internally makes a select, will the returned values be the new or old ones?
I know that inside the trigger I can access them via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse (I cannot change it) an existing stored procedure that internally makes a select on the triggered table and processes some logic with them. I just want to know whether I can be sure that the existing logic will work (by accessing the NEW values).
I could simply try to simulate it with one update... but maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and would never test, which could behave differently.
I decided to ask someone else that might know better. Thank you.
AFTER triggers (the default) fire after the DML action. When the proc is called within the trigger, the tables will reflect changes made by the statement that fired the trigger, as well as changes made within the trigger before calling the proc.
Note that the changes are uncommitted until the trigger completes (or until an explicit transaction is later committed).
Since the procedure is running in the same transaction as the (presumably, "after") trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger executes as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction. A "complicated" stored procedure means that transaction stays open longer, holding locks longer, making responses back to users slower, and so on.
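A minimal sketch of that behaviour (all names here are hypothetical): a proc called from an AFTER trigger already sees the rows inserted by the firing statement.

CREATE TABLE dbo.Orders (OrderId INT PRIMARY KEY, Amount MONEY)
CREATE TABLE dbo.OrderTotals (LoggedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME(), Total MONEY)
GO
CREATE PROCEDURE dbo.usp_LogOrderTotal AS
    -- Runs inside the firing statement's transaction, so this SELECT
    -- already includes the new (still uncommitted) rows
    INSERT INTO dbo.OrderTotals (Total)
    SELECT SUM(Amount) FROM dbo.Orders
GO
CREATE TRIGGER trg_Orders_Ins ON dbo.Orders AFTER INSERT AS
    EXEC dbo.usp_LogOrderTotal
GO
INSERT INTO dbo.Orders VALUES (1, 10.00)  -- the logged total includes this row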
Also, you said
internally makes a select on the triggered table and processes some logic with them.
If you just mean that the procedure selects the data in order to do some complex processing and then writes it somewhere else inside the database, OK: that's not great (for the reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning that back to the client application: don't do that.
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
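For completeness, the server option that quote mentions can be turned on like this (it is an advanced option, hence the first call):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'disallow results from triggers', 1
RECONFIGURE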

How to pass values from a table parameter in a Stored Procedure to another Stored Procedure?

I've written a stored procedure called FooUpsert that inserts and updates data in various tables. It takes a number of numeric and string parameters that provide the data. This procedure is in good shape and I don't want to modify it.
Next, I'm writing another stored procedure that serves as a sort of bulk insert/update.
It is of paramount importance that the procedure does its work as an atomic transaction. It would be unacceptable for some data to be inserted/updated and some not.
It seemed to me that the appropriate way of doing this would be to set up a stored procedure with a table-valued parameter, say FooUpsertBulk. I began to write it with a table parameter that holds data similar to what is passed to FooUpsert, the idea being that I can read it one row at a time and invoke FooUpsert for the values in each row. I realize that this may not be the best practice, but once again, FooUpsert is already written, plus FooUpsertBulk will be run at most a few times a day.
The problem is that in FooUpsertBulk, I don't know how to iterate over the rows and pass the values in each row as parameters to FooUpsert. I do realize that I could change FooUpsert to accept a table-valued parameter as well, but I don't want to rewrite it.
Can one of you SQL ninjas out there please show me how to do this?
My SQL server is MS-SQL 2008.
Wrapping various queries into an explicit transaction (i.e. BEGIN TRAN ... COMMIT or ROLLBACK) makes all of it an atomic operation. You can:
start the transaction from the app code (assuming that FooUpsert is called by app code) and hence deal with the commit and rollback there as well. This still leaves lots of small operations, but a single transaction and no code changes needed.
start the transaction in a proc, calling FooUpsert in a loop that is contained in a TRY / CATCH so that you can handle the ROLLBACK if any call to FooUpsert fails (see the sketch after this list).
copy the code from FooUpsert into a new FooUpsertBulk that accepts a TVP from the app code and handles everything as set-based operations. Adapt each of the queries in FooUpsertBulk from handling various input params to getting fields from the TVP table variable once the TVP is joined into the query. Keep FooUpsert in place until FooUpsertBulk is working.
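A sketch of the second option, assuming a TVP type and FooUpsert's parameter list (the type name and columns here are invented, since the question doesn't show them):

CREATE TYPE dbo.FooRow AS TABLE (Id INT, Name NVARCHAR(100), Amount MONEY)
GO
CREATE PROCEDURE dbo.FooUpsertBulk @Rows dbo.FooRow READONLY
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Id INT, @Name NVARCHAR(100), @Amount MONEY

    BEGIN TRY
        BEGIN TRANSACTION

        DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT Id, Name, Amount FROM @Rows
        OPEN cur
        FETCH NEXT FROM cur INTO @Id, @Name, @Amount
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC dbo.FooUpsert @Id, @Name, @Amount  -- one row at a time
            FETCH NEXT FROM cur INTO @Id, @Name, @Amount
        END
        CLOSE cur
        DEALLOCATE cur

        COMMIT TRANSACTION  -- all rows succeeded
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION  -- all or nothing
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE()
        RAISERROR(@msg, 16, 1)  -- re-raise; SQL Server 2008 has no THROW
    END CATCH
END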

Can we implement an autonomous transaction in a Netezza stored procedure?

Netezza has a single commit-or-rollback per stored procedure, so if a Netezza SP fails, control goes to the EXCEPTION block. Does anyone know whether, if I put an INSERT into an error table there (or call another SP), that work in the exception block gets committed even though the transaction in the main block is rolled back? In other words, can we implement an autonomous transaction in Netezza?
In short, no.
Netezza (v7.x and earlier versions) does not support subtransactions, which are necessary for that to work. To make this worse, the only way to use NZPLSQL is by wrapping it in a stored procedure (it does not support anonymous NZPLSQL blocks).
This only applies to custom exception handling though. Branching with IF or CASE works fine.
In my view, this tradeoff is one of the major differences compared to Oracle. One way to solve it is by putting the exception handling logic in an external script or application.
There is not much emphasis on this in the documentation, but there are hints and footnotes scattered throughout:
From the documentation on transaction control:
Some SQL commands are prohibited within the BEGIN/COMMIT transaction block. For example:
BEGIN [CREATE | DROP] DATABASE (+ some other DDL commands
like ALTER TABLE, ...)
These SQL commands are also prohibited within the body of a Netezza
stored procedure. If you use one of these commands within a
transaction block or stored procedure, the system displays an error.
From the documentation on NZPLSQL:
This section describes the NZPLSQL language, its structure, and how to
use the language to create stored procedures.
From the documentation on stored procedures:
Important: Be careful not to confuse the use of BEGIN/END for grouping
statements in NZPLSQL with the BEGIN/END SQL database commands for
transaction control. The NZPLSQL BEGIN/END keywords are used only for
grouping; they do not start or end a transaction. Procedures always
run within a transaction established by an outer query; they cannot
start or commit transactions, since IBM® Netezza® SQL does not have
nested transactions.

In SQL Server, how can I separate a large number of T-SQL statements into batches?

In SQL Server, how can I separate a large number of T-SQL statements into batches? Should I use the GO statement in stored procedures or functions? Should I use the GO statement in explicit transaction management situations (between BEGIN TRANSACTION and ROLLBACK TRANSACTION or COMMIT TRANSACTION)?
Are there any best practices about this topic?
Many thanks in advance.
GO is not actually a SQL keyword: it's a batch separator interpreted by client tools such as SQL Server Management Studio and sqlcmd. So you can't use it inside stored procedures.
If you're writing a script for SSMS, you can use GO inside a transaction, but be careful about error handling - if an error occurs, the transaction will be rolled back, but only the current batch will be aborted, and then execution will continue to the next batch. See this question.
As for best practices, personally I use GO only when I have to (for example, when creating multiple stored procedures, since each has to be in its own batch). The fewer GO statements, the less work to handle errors.
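A small illustration of both points (dbo.Accounts is a made-up table): the open transaction spans the batch boundaries, but each batch fails independently.

BEGIN TRANSACTION
GO

-- If this batch raises an error, the batch is aborted, but execution
-- continues with the next batch
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1
GO

UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE Id = 2
GO

-- The transaction survived the batch boundaries above
IF @@TRANCOUNT > 0 COMMIT TRANSACTION
GO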

Is having a stored procedure that calls other stored procedures bad?

I'm trying to make a long stored procedure a little more manageable. Is it wrong to have stored procedures that call other stored procedures? For example, I want a sproc that inserts data into a table and, depending on the type, inserts additional information into a table for that type. Something like:
BEGIN TRANSACTION

INSERT INTO dbo.ITSUsage (
    Customer_ID,
    [Type],
    Source
) VALUES (
    @Customer_ID,
    @Type,
    @Source
)

SET @ID = SCOPE_IDENTITY()

IF @Type = 1
BEGIN
    EXEC usp_Type1_INS @ID, @UsageInfo
END

IF @Type = 2
BEGIN
    EXEC usp_Type2_INS @ID, @UsageInfo
END

IF (@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
Or is this something I should be handling in my application?
We call procs from other procs all the time. It's hard/impossible to segment a database-intensive (or database-only) application otherwise.
Calling a procedure from inside another procedure is perfectly acceptable.
However, in Transact-SQL relying on @@ERROR is prone to failure. Case in point: your code. It will fail to detect an insert failure, as well as any error produced inside the called procedures. This is because @@ERROR is reset with each statement executed and only retains the result of the very last statement. I have a blog entry that shows a correct template for error handling in Transact-SQL and transaction nesting. Also, Erland Sommarskog has an article that has long been the reference read on error handling in Transact-SQL.
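For reference, a bare-bones TRY / CATCH version of the posted code, the sort of template those articles recommend (not quoted from either one):

BEGIN TRY
    BEGIN TRANSACTION

    INSERT INTO dbo.ITSUsage (Customer_ID, [Type], Source)
    VALUES (@Customer_ID, @Type, @Source)
    SET @ID = SCOPE_IDENTITY()

    IF @Type = 1 EXEC usp_Type1_INS @ID, @UsageInfo
    IF @Type = 2 EXEC usp_Type2_INS @ID, @UsageInfo

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- Any failing statement above lands here, unlike @@ERROR,
    -- which only reflects the very last statement executed
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION
    DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE()
    RAISERROR(@msg, 16, 1)
END CATCH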
No, it is perfectly acceptable.
Definitely, no.
I've seen ginormous stored procedures doing 20 different things that would have really benefited from being refactored into smaller, single-purpose ones.
As long as it is within the same DB schema it is perfectly acceptable in my opinion. It is reuse which is always favorable to duplication. It's like calling methods within some application layer.
Not at all. I would even say it's recommended, for the same reasons that you create methods in your code.
One stored procedure calling another stored procedure is fine. Just note that there is a limit on the level of nesting you can go to.
In SQL Server the current nesting level is returned by the @@NESTLEVEL function.
Please check the Stored Procedure Nesting section here http://msdn.microsoft.com/en-us/library/aa258259(SQL.80).aspx
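For illustration, a tiny pair of hypothetical procs showing what @@NESTLEVEL returns (the maximum nesting depth is 32):

CREATE PROCEDURE dbo.usp_Level2 AS
    SELECT @@NESTLEVEL AS Level2  -- returns 2 when called from usp_Level1
GO
CREATE PROCEDURE dbo.usp_Level1 AS
BEGIN
    SELECT @@NESTLEVEL AS Level1  -- returns 1 when called directly
    EXEC dbo.usp_Level2
END
GO
EXEC dbo.usp_Level1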
cheers
No. It promotes reuse and allows for functionality to be componentized.
As others have pointed out, this is perfectly acceptable and necessary to avoid duplicating functionality.
However, in Transact-SQL watch out for transactions in nested stored procedure calls: you need to check @@TRANCOUNT before issuing ROLLBACK TRANSACTION, because it rolls back all nested transactions. Check this article for an in-depth explanation.
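The usual guard looks something like this (a sketch with invented names): start a transaction only if the caller hasn't, otherwise mark a savepoint, and in the CATCH block roll back only what this proc owns.

CREATE PROCEDURE dbo.usp_InnerWork AS
BEGIN
    DECLARE @startedTran BIT = 0
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION  -- no caller transaction: start our own
        SET @startedTran = 1
    END
    ELSE
        SAVE TRANSACTION InnerWork  -- caller transaction exists: mark a savepoint

    BEGIN TRY
        -- ... the procedure's actual work ...
        IF @startedTran = 1 COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @startedTran = 1
            ROLLBACK TRANSACTION  -- undo only the transaction we started
        ELSE IF XACT_STATE() = 1
            ROLLBACK TRANSACTION InnerWork  -- undo our work, keep the caller's transaction
        RAISERROR('usp_InnerWork failed', 16, 1)
    END CATCH
END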
Yes, it is bad. While SQL Server does support and allow one stored procedure to call another, I would generally try to avoid this design if possible. My reason?
The single responsibility principle.
In our IT area we use stored procedures to consolidate common code for both stored procedures and triggers (where applicable). It's also virtually mandatory for avoiding SQL source duplication.
The general answer to this question is, of course, no: it's a normal and even preferred way of coding SQL stored procedures.
But it could be that in your specific case it is not such a good idea.
If you maintain a set of stored procedures that support the data access tier (DAO) in your application (Java, .NET, etc.), then keeping the database tier (let's call the stored procedures that) streamlined and relatively thin benefits your overall design. Thus, an extensive graph of stored procedure calls may indeed be bad for maintaining and supporting the overall data access logic in such an application.
I would lean toward a more uniform distribution of logic between the DAO and database tiers, so that the stored procedure code fits inside a single functional call.
Adding to the correct comments of other posters: there is nothing wrong in principle, but you need to watch the execution time when the procedure is called, for instance, by an external application that enforces a specific timeout.
A typical example is calling the stored procedure from a web application: when the default timeout kicks in because your chain of executions takes longer, you get a failure in the web application even though the stored procedure commits correctly.
The same happens if you call it from an external service.
This can lead to inconsistent behaviour in your application, triggering error-management routines in external services, etc.
If you are in a situation like this, what I do is break the chain of calls, redirecting the long-running child calls to different processes using Service Broker.