SQL TRY/CATCH: getting the exact values of the variables that caused the error

Query:
BEGIN TRY
    SELECT @AccountNumber,
           @AccountSuffix,
           @Sedat,
           @Dedo,
           @Payalo,
           @Artisto
    FROM SWORDBROS
    WHERE AMAZING = 'HAPPENS'
END TRY
BEGIN CATCH
    PRINT @Sedat
END CATCH
How can I get the value of @Sedat? Is it possible?
This is SQL Server 2005, and it will be in a stored procedure.

Like this, no?
BEGIN TRY
    SELECT @AccountNumber,
           @AccountSuffix,
           @Sedat,
           @Dedo,
           @Payalo,
           @Artisto
    FROM SWORDBROS
    WHERE AMAZING = 'HAPPENS'
END TRY
BEGIN CATCH
    -- error handling only
END CATCH
-- There is no FINALLY block like in .NET
PRINT @Sedat
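For completeness, to have the value available to print, the columns must first be assigned to declared local variables; a minimal sketch, assuming SWORDBROS really has a column named Sedat and that varchar is an acceptable type for it:

DECLARE @Sedat varchar(50);

BEGIN TRY
    SELECT @Sedat = Sedat   -- assign the column value to the variable
    FROM SWORDBROS
    WHERE AMAZING = 'HAPPENS'
END TRY
BEGIN CATCH
    -- @Sedat keeps whatever was assigned before the error occurred
    PRINT COALESCE(@Sedat, 'not assigned');
    PRINT ERROR_MESSAGE();  -- available inside CATCH blocks in SQL Server 2005
END CATCH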

In a proc, when I want to trap the exact values that caused an error, this is what I do. I declare a table variable (very important: it must be a table variable, not a temp table) that has the fields I want information on, and I populate it with records as I go. In a multistep proc, I would add one record for each step if I wanted to see the whole process, or only a record when I hit an error (which I would typically populate in the CATCH block).

Then, in the CATCH block, I roll back the transaction and insert the contents of the table variable into a permanent exception-processing table. You could also just SELECT from this table if you wanted, but if I'm going to this much trouble it is usually for an automated process where I need to research the problem at a later time, not see the problem when it hits, because I'm not running it on my machine or anywhere I could see a SELECT or PRINT statement. Because the table variable stays in scope even after the rollback, my information is still available to log in my exception-logging table. But it is important that you do the logging to any permanent table after the rollback, or the logging will be rolled back with everything else.
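A condensed sketch of that pattern (the table and column names are invented for illustration; dbo.ExceptionLog stands in for the permanent exception-processing table):

DECLARE @ErrorLog TABLE (StepName varchar(100), KeyValue int, LoggedAt datetime DEFAULT GETDATE());

BEGIN TRANSACTION;
BEGIN TRY
    INSERT INTO @ErrorLog (StepName, KeyValue) VALUES ('Step 1', 42);
    -- the real work for step 1 goes here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION;                  -- undoes the work...
    INSERT INTO dbo.ExceptionLog (StepName, KeyValue, LoggedAt)
    SELECT StepName, KeyValue, LoggedAt
    FROM @ErrorLog;                        -- ...but the table variable survives the rollback
END CATCH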

which database are you using?
also, which programming language is this?
usually there would be an INTO clause and some local variables declared.
your query should also have a FROM clause at a minimum

It is not clear whether you are expecting the returned values to be placed into the @ variables or whether you are trying to dynamically specify which columns you want selected. In a SQL Server stored procedure you usually return a result set, not a bunch of individual variables. The syntax you have will not work if you want column values returned, since it would dynamically specify which columns are wanted based on the names passed into the stored procedure; that cannot work, because the stored procedure must know which columns you are going after when it is analyzed as it is stored. Note also that the CATCH block will be triggered if there is a problem reading from the database (communication down, disk error, etc.), in which case none of the column values will be known.
Use the SQL Query Analyzer tool (under the "Tools" menu in SqlManager after you have selected a database) to define your stored procedure and test it. If you installed the documentation when you installed SqlManager, go to Start > Programs > Microsoft SQL Server > Books Online and open the "Transact-SQL Reference" node for documentation on what can be done.

Related

Will a stored procedure fail if one of the queries inside it fails?

Let's say I have a stored procedure with a SELECT, INSERT and UPDATE statement.
Nothing is inside a transaction block. There are no Try/Catch blocks either.
I also have XACT_ABORT set to OFF.
If the INSERT fails, is there a possibility for the UPDATE to still happen?
The reason the INSERT failed is because I passed in a null value to a column which didn't allow that. I only have access to the exception the program threw which called the stored procedure, and it doesn't have any severity levels in it as far as I can see.
Potentially. It depends on the severity level of the fail.
User code errors are normally 16.
Anything over 20 is an automatic fail.
Duplicate key blocking insert would be 14 i.e. non-fatal.
Inserting a NULL into a column which does not support it - this is counted as a user code error (16) - and consequently will not cause the batch to halt. The UPDATE will go ahead.
The other major factor is whether the batch has XACT_ABORT set to ON. This will cause any failure to abort the whole batch.
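A quick way to see the difference (dbo.T here is a throwaway table invented for the demonstration):

-- Assume: CREATE TABLE dbo.T (Col int NOT NULL);
SET XACT_ABORT OFF;
INSERT INTO dbo.T (Col) VALUES (NULL);   -- error 515, severity 16
UPDATE dbo.T SET Col = 1;                -- still runs: the batch was not aborted

SET XACT_ABORT ON;
INSERT INTO dbo.T (Col) VALUES (NULL);   -- same error...
UPDATE dbo.T SET Col = 2;                -- ...but this time the statement never executes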
Here's some further reading:
List of errors and severity level in SQL Server with catalog view sysmessages
Exception/error handling in SQL Server
And for the XACT_ABORT
https://www.red-gate.com/simple-talk/sql/t-sql-programming/defensive-error-handling/
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql
In order to understand the outcome of any of the steps in the stored procedure, someone with appropriate permissions (e.g. an admin) will need to edit the stored proc and capture the error message. This will give feedback as to the progress of the stored proc. An unstructured error (i.e. not in try/catch) code of 0 indicates success, otherwise it will contain the error code (which I think will be 515 for NULL insertion). This is non-ideal as mentioned in the comments, as it still won't cause the batch to halt, but it will warn you that there was an issue.
The most simple example:
DECLARE @errnum AS int;
-- Run the insert code
SET @errnum = @@ERROR;
PRINT 'Error code: ' + CAST(@errnum AS varchar(10));
Error handling can be a complicated issue; it requires significant understanding of the database structure and expected incoming data.
Options can include using an intermediate step (as mentioned by HLGEM), amending the INSERT to include ISNULL / COALESCE to purge nulls, checking the data on the client side to remove troublesome issues, etc. If you know the number of rows you are expecting to insert, the stored proc can return SET @Rows = @@ROWCOUNT in the same way as SET @errnum = @@ERROR.
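For instance (table and column names invented; note that the two system functions should be captured in the same statement, because every statement resets them):

DECLARE @Rows int, @errnum int;

-- Purge NULLs on the way in instead of letting the NOT NULL constraint reject rows
INSERT INTO dbo.SomeTable (SomeColumn)
SELECT ISNULL(SourceColumn, 0)
FROM dbo.SourceTable;

-- One SELECT captures both; a second SET would already have reset @@ERROR
SELECT @Rows = @@ROWCOUNT, @errnum = @@ERROR;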
If you have no authority over the stored proc and no ability to persuade the admin to amend it ... there's not a great deal you can do.
If you have access to run your own queries directly against the database (instead of only through stored proc or views) then you might be able to infer the outcome by running your own query against the original data, performing the stored proc update, then re-running your query and looking for changes. If you have permission, you could also try querying the transaction log (fn_dblog) or the error log (sp_readerrorlog).
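For completeness, this is the kind of query meant here; both are undocumented, admin-level features, so treat this as a sketch that may vary by version:

-- Recent transaction log records for the current database
SELECT TOP (100) [Current LSN], [Operation], [Transaction ID], [AllocUnitName]
FROM fn_dblog(NULL, NULL);

-- Search error log 0 (the current SQL Server error log) for a given string
EXEC sp_readerrorlog 0, 1, N'515';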

Debug Insert and temporal tables in SQL 2012

I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal ... SELECT statements.
Is there any way to view the data selected in the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temp table where the insert made the changes?
It doesn't matter if it is all the rows at once rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy restrict what modifications can be made during testing, and it is probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their own desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows from the INSERT statement that were inserted into the temp table, which were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
    SELECT * FROM #temp
END
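Wired into a procedure, the whole pattern might look something like this (the proc name, table names, and filter are placeholders, not from the question):

CREATE PROCEDURE dbo.DoComplexWork
    @CutoffDate datetime,
    @debug      bit = 0          -- last parameter, defaulted: existing callers are unaffected
AS
BEGIN
    CREATE TABLE #temp (Id int, Amount money);

    BEGIN TRANSACTION;

    INSERT INTO #temp (Id, Amount)   -- stand-in for the real INSERT ... SELECT
    SELECT Id, Amount FROM dbo.Orders WHERE OrderDate < @CutoffDate;

    IF @debug = 1
    BEGIN
        SELECT * FROM #temp;         -- inspect the intermediate results
        ROLLBACK TRANSACTION;        -- leave no trace while testing
    END
    ELSE
        COMMIT TRANSACTION;
END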

PL/SQL Dynamic Table Names

I'm pretty new to Oracle so not entirely sure this is possible, or if perhaps I'm going about it the wrong way but here goes ...
Part of an old feeder script I'm fixing is looping through ~ 20 tables (could change anytime) to populate relevant staging tables. This part is currently very basic:
...
INSERT INTO staging_tbl_1(
SELECT *
FROM source_tbl_1
);
INSERT INTO staging_tbl_2(
SELECT *
FROM source_tbl_2
);
...
Some of the fields in the source database have different constraints etc., which means that every now and then it will throw an exception and the feeder will stop. What I'm hoping to do is create a procedure within the existing feeder package that loops through each row of each source table before it is inserted and simply wraps the insert in an exception block. This way failures can be logged without causing the feeder to stop.
Essentially I'm chasing something like this:
BEGIN procedure_x(source_record, staging_record)
-- Perform validation to ensure records exist
-- Loop through all record rows
FOR row IN (SELECT * FROM source_record) LOOP
-- Wrap in exception block
-- Insert into staging record
-- Log exception if it occurs
END LOOP;
END
I've attempted ref cursors; however, to get them to work I would also need to know the rowtype in advance (from my limited understanding). I've also tried EXECUTE IMMEDIATE, but I cannot find a way to loop over it in an appropriate way. Are there any other ways to tackle this?
Additional:
I realise that we really should be fixing the source of the problem rather than going about it like this, unfortunately it is far outside my area of influence.
It is possible to do this without making a separate procedure and just wrap all of the table references in a loop, however I'd like to leave this as a last resort.
Oracle has functionality for logging of DML errors. Use it with single SQL statements. Don't go row-by-row and make your processes crawl.
http://docs.oracle.com/cd/B19306_01/server.102/b14231/tables.htm#ADMIN10261
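In outline, and reusing the staging/source names from the question, it might look like this (DBMS_ERRLOG generates the ERR$_ table; the tag string is arbitrary):

-- One-time setup: create an error-logging table for the target
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'STAGING_TBL_1');
END;
/

-- The insert now keeps going past bad rows; offenders land in the error table
INSERT INTO staging_tbl_1
SELECT * FROM source_tbl_1
LOG ERRORS INTO err$_staging_tbl_1 ('feeder run 1')
REJECT LIMIT UNLIMITED;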

SQL Server 2005 stored procedure with logging and error handling

I have a (SQL 2005) stored procedure that processes a giant table and groups old data from the past (up until one year ago). It has these main steps:
copy old data grouped to a new table
copy recent data as is to the new table
rename table
Now I want to log every run and every step in logging tables. However, I start a transaction at the beginning so that I can roll back the whole batch if something goes wrong, and that would also roll back my logging, which isn't what I want.
How can I resolve this?
Log to a table variable, as this doesn't get rolled back with the transaction; then, at the end of the procedure, after the commit or rollback, insert the contents of the table variable into your permanent logging table.
Use SET XACT_ABORT ON to force rollback
To catch all errors (where code runs), use TRY/CATCH blocks.
Then, you can simply log the errors in your CATCH blocks.
Example here (can add your own logging): Nested stored procedures containing TRY CATCH ROLLBACK pattern?
Personally, I find this more elegant than using table variables.
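Putting the two answers together, a skeleton of the procedure body might look like this (dbo.RunLog is a made-up permanent logging table; the step bodies are reduced to comments):

SET XACT_ABORT ON;

DECLARE @Log TABLE (Step varchar(4000), LoggedAt datetime DEFAULT GETDATE());

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO @Log (Step) VALUES ('copy old data grouped');
    -- step 1 here
    INSERT INTO @Log (Step) VALUES ('copy recent data');
    -- step 2 here
    INSERT INTO @Log (Step) VALUES ('rename table');
    -- step 3 here

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
    INSERT INTO @Log (Step) VALUES ('failed: ' + ERROR_MESSAGE());
END CATCH

-- After the commit or rollback, persist the log: the table variable survived either way
INSERT INTO dbo.RunLog (Step, LoggedAt)
SELECT Step, LoggedAt FROM @Log;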

How to troubleshoot a stored procedure?

What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statements (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test Inserts and Updates outside of your SP so that you can check it gives the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive troubleshooting abilities. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, and not run it. Almost every time, this leads you right to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it returns the records you expect.
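For instance, the dynamic SQL variant might be sketched like this (the proc and table names are invented):

CREATE PROCEDURE dbo.SearchOrders
    @ColumnName sysname,
    @debug      bit = 0
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT ' + QUOTENAME(@ColumnName) + N' FROM dbo.Orders';

    IF @debug = 1
        PRINT @sql;              -- show the generated statement instead of running it
    ELSE
        EXEC sp_executesql @sql;
END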
Now, with complex procs that have many steps that affect data, I always use an @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data; second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table, or been deleted from the tables, without affecting the data permanently. Sometimes I might start with a SELECT of the data as it was before any actions and then compare it to a SELECT run afterwards.
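The before-and-after comparison with @test might be sketched like this (dbo.Orders and the date filter are invented):

BEGIN TRANSACTION;

IF @test = 1   -- snapshot before any action
    SELECT 'before' AS Snapshot, * FROM dbo.Orders WHERE OrderDate < '20000101';

DELETE FROM dbo.Orders WHERE OrderDate < '20000101';   -- the real work

IF @test = 1
BEGIN
    SELECT 'after' AS Snapshot, * FROM dbo.Orders WHERE OrderDate < '20000101';
    ROLLBACK TRANSACTION;   -- the results were displayed; the data stays untouched
END
ELSE
    COMMIT TRANSACTION;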
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable, or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run; then, when something strange fails on prod, you have a record of which statement was run when it failed. You do this in a table variable because table variables do not go out of scope on a rollback.
In SSMS, you can simply start by opening the proc., and clicking on the check mark button (Parse) next to the Execute button on the menu bar. It reports any errors it finds.
If there are no errors there and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table, for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x), declare all the parameters by copying that part, and then define them with valid values. Then run it to see what happens.
Maybe this is simple, but it's a place to start.