TSQL Cursor error in FETCH NEXT - Fixed when executed second time

I am running a script in SQL Server Management Studio 2012 that involves a cursor. The script will go through a database table row-by-row, and combine data into single rows inserted into a second database table. I am confident that a cursor is the best way to go about this.
I have an issue that does not seem to be related to my code, because when it occurs, my only solution is to simply re-execute the script, and the error will not occur again.
I have pinpointed the error to the following:
FETCH NEXT FROM @theCursor INTO @variable, @anotherVariable, @etc...
The following error occurs:
Could not complete cursor operation because the set options have changed since the cursor was declared.
Again, the extremely confusing part is that the error will not occur if I re-run the script after getting it. When I create a new query on the database, I get the error every time; when I then re-run the script, it works every time.
Another strange thing is that the error will only occur after this FETCH line has been executed about five times!
I have tried catching the error and simply re-invoking the entire procedure, but the error keeps occurring. The only solution I know of is to run the script again via the 'Execute' command (F5) in Management Studio.
I know there is not much here to go on; I do not have many specifics myself.

So, it turns out that I didn't know what types of things could be regarded as 'SET options', and what impact they could have.
In my script, I play around with the dateformat to deal with some inconsistent data in the database tables. It goes something like this:
set dateformat mdy
/* Do some DATETIME stuff with current format... */
set dateformat ymd
-- ...
set dateformat dmy
-- ...
set dateformat ymd
-- ...
The solution was to not leave the dateformat as ymd, as I do above. Simply adding a set dateformat mdy at the end of all my date format shenanigans prevents the error from ever occurring. Strangely enough, ending with set dateformat dmy does not work, but set dateformat mdy does.
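For anyone hitting the same error, here is a minimal sketch of the fix described above (the cursor, table, and column names are hypothetical): restore the DATEFORMAT that was in effect when the cursor was declared before each FETCH, so the session's SET options match those at DECLARE time.

SET DATEFORMAT mdy  -- format in effect when the cursor is declared

DECLARE @someDate datetime

DECLARE theCursor CURSOR FOR
    SELECT SomeDateColumn FROM dbo.SomeTable

OPEN theCursor
FETCH NEXT FROM theCursor INTO @someDate

WHILE @@FETCH_STATUS = 0
BEGIN
    SET DATEFORMAT ymd  -- temporary change while handling inconsistent data
    /* ... do some DATETIME stuff ... */
    SET DATEFORMAT mdy  -- restore before the next FETCH

    FETCH NEXT FROM theCursor INTO @someDate
END

CLOSE theCursor
DEALLOCATE theCursor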

Related

Is there a SQL equivalent of return?

Consider the following bit of SQL
SET DATEFORMAT ymd
SET ARITHABORT, ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, NOCOUNT ON
SET NUMERIC_ROUNDABORT, IMPLICIT_TRANSACTIONS, XACT_ABORT OFF
GO
USE master
GO
IF DB_NAME() <> N'master' SET NOEXEC ON
--
-- Create database [myDatabaseName]
--
PRINT (N'Create database [myDatabaseName]')
GO
CREATE DATABASE myDatabaseName
There then follows a very long script setting up tables, views, stored procedures, etc.
I would like to know if SQL would allow something along the lines of the following pseudo code:
If (myDatabaseName Exists)
Return // in other words, abort the script here but don't throw an error
Else
//Carry on and install the database
I am aware of the EXISTS function in SQL, but I can't seem to find anything that would simply abort the remainder of the script straight away.
This script will end up in an installation routine. In theory it should never be in an installer where the database is already present, however I would prefer not to take chances and prepare properly for a potential mistake. It is also crucial that the script does not throw any error as that will just cause the installer to roll back and install nothing.
I'm hoping that something exists in SQL that will just exit a script cleanly if particular conditions are met. By exit I really do mean exit as opposed to simply breaking out of the condition being currently evaluated.
The problem is that your client tool (SSMS, SQLCMD, etc.) splits your script into batches based on the location of the GO keyword (batching is a client-tool convention, not SQL Server at all).
It then sends the first batch. After the first batch completes (no matter what the outcome), it sends the second batch, then the third, and so on.
If you're running with sufficient permissions, a high-severity RAISERROR (severity 20-25) should stop the client tool in its tracks (because it forces the connection closed). It's not that clean, though.
Another option is to set NOEXEC ON, which still does some work for each subsequent batch (compilation) but won't run any of the code [1]. This gives you a slightly better recovery option if you want some batches at the end to always run, since you can turn it OFF again.
[1] Which means you will still see compilation errors for later batches that rely on database structures that would have been created in earlier batches, had those not been skipped.
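A rough sketch of that approach, using the database name from the question (this exact script is not part of the original answer):

IF DB_ID(N'myDatabaseName') IS NOT NULL
    SET NOEXEC ON  -- subsequent batches are compiled but not executed
GO

CREATE DATABASE myDatabaseName
GO

-- ... tables, views, stored procedures ...

SET NOEXEC OFF  -- restore execution for any batches that must always run
GO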
You can use GOTO as follows :
If (myDatabaseName Exists)
    GOTO QUIT; // in other words, abort the script here but don't throw an error
Else
    // Carry on and install the database
QUIT:
SELECT 0;
There are several methods for this kind of request:
raiserror('Oh no a fatal error', 20, -1) with log
OR
print 'Fatal error, script will not continue!'
set noexec on
Both should stop the script from continuing; the high-severity RAISERROR also closes the connection.

Will a stored procedure fail if one of the queries inside it fails?

Let's say I have a stored procedure with a SELECT, INSERT and UPDATE statement.
Nothing is inside a transaction block. There are no Try/Catch blocks either.
I also have XACT_ABORT set to OFF.
If the INSERT fails, is there a possibility for the UPDATE to still happen?
The reason the INSERT failed is because I passed in a null value to a column which didn't allow that. I only have access to the exception the program threw which called the stored procedure, and it doesn't have any severity levels in it as far as I can see.
Potentially. It depends on the severity level of the fail.
User code errors are normally 16.
Anything over 20 is an automatic fail.
Duplicate key blocking insert would be 14 i.e. non-fatal.
Inserting a NULL into a column which does not support it - this is counted as a user code error (16) - and consequently will not cause the batch to halt. The UPDATE will go ahead.
The other major factor is whether the batch runs with XACT_ABORT set to ON. That setting causes any failure to abort the whole batch.
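A quick way to see this behaviour (a throwaway sketch with a temp table, not part of the original answer): with XACT_ABORT OFF, the severity-16 error from inserting a NULL terminates only that statement, and the following UPDATE still runs.

SET XACT_ABORT OFF

CREATE TABLE #demo (id int NOT NULL, note varchar(20) NULL)
INSERT INTO #demo (id, note) VALUES (1, 'original')

-- Fails with error 515 (severity 16): cannot insert NULL into a NOT NULL column
INSERT INTO #demo (id, note) VALUES (NULL, 'bad')

-- Still executes: the batch was not aborted
UPDATE #demo SET note = 'updated' WHERE id = 1

SELECT note FROM #demo  -- returns 'updated'
DROP TABLE #demo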
Here's some further reading:
List of errors and severity levels in SQL Server with the catalog view sysmessages
Exception/error handling in SQL Server
And for XACT_ABORT:
https://www.red-gate.com/simple-talk/sql/t-sql-programming/defensive-error-handling/
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql
In order to understand the outcome of any of the steps in the stored procedure, someone with appropriate permissions (e.g. an admin) will need to edit the stored proc and capture the error code; this gives feedback on the stored proc's progress. Outside of structured handling (i.e. not in TRY/CATCH), an error code of 0 indicates success; otherwise it contains the error number (which I think will be 515 for a NULL insertion). This is non-ideal, as mentioned in the comments, because it still won't cause the batch to halt, but it will warn you that there was an issue.
The most simple example:
DECLARE @errnum AS int;
-- Run the insert code
SET @errnum = @@ERROR;
PRINT 'Error code: ' + CAST(@errnum AS varchar(10));
Error handling can be a complicated issue; it requires significant understanding of the database structure and expected incoming data.
Options can include using an intermediate step (as mentioned by HLGEM), amending the INSERT to include ISNULL / COALESCE to purge NULLs, checking the data on the client side to remove troublesome values, etc. If you know the number of rows you are expecting to insert, the stored proc can return SET @Rows = @@ROWCOUNT in the same way as SET @errnum = @@ERROR.
If you have no authority over the stored proc and no ability to persuade the admin to amend it ... there's not a great deal you can do.
If you have access to run your own queries directly against the database (instead of only through stored proc or views) then you might be able to infer the outcome by running your own query against the original data, performing the stored proc update, then re-running your query and looking for changes. If you have permission, you could also try querying the transaction log (fn_dblog) or the error log (sp_readerrorlog).
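For illustration, the last two options look roughly like this (fn_dblog is undocumented and both require elevated permissions; this code is not part of the original answer):

-- Inspect recent transaction-log activity
SELECT TOP (100) [Current LSN], Operation, [Transaction Name]
FROM fn_dblog(NULL, NULL)

-- Read the current SQL Server error log
EXEC sp_readerrorlog 0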

SQL Server 2000 DTS Invalid pointer

Morning folks,
I get this annoying error every time I execute the query in my DTS step. It returns "Invalid Pointer", while the query executes successfully in Query Analyzer.
I tried the following:
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
No success. Worse, with SET ANSI_WARNINGS OFF I get a new error.
Does anyone have an idea about this problem, please?
As it turned out, the problem came from another query, and SET NOCOUNT ON did make it work in the end.
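For reference, the fix amounts to starting the step's SQL like this (a sketch; the actual query in the DTS step is not shown in the original post):

-- Suppress the "N rows affected" messages, which can confuse older clients such as DTS
SET NOCOUNT ON

-- ... the original query for the DTS step ...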

How can I ignore 'Arithmetic Overflow' related errors from within a data view?

I have a complex data view that recursively links and summarizes information.
Each night a scheduled task runs a stored procedure that selects all of the data from the data view, and inserts it into a table so that users can query and analyze the data much more quickly than running a select statement on the data view.
The parent table consists of a few hundred thousand records and the result set from the export is well over 1,000,000 records in size.
For most nights the export process works without any trouble; however, if a user enters an incorrect value in our master ERP system, it crashes the nightly process, because one of the decimal fields will contain a value that doesn't fit within some of the conversions that I have to make on the data. Debugging and finding the specific errant field can be very hard and time-consuming.
With that said, I've read about the two SQL settings NUMERIC_ROUNDABORT and ARITHABORT. These sound like the perfect options for solving my problem; however, I can't seem to get them to work with either my data view or my stored procedure.
My stored procedure is nothing more than a TRUNCATE and INSERT statement. I appended...
SET NUMERIC_ROUNDABORT OFF
SET ARITHABORT OFF
... to the beginning of the SP and that didn't help. I assume this is because the error is technically taking place from within the code associated with the data view.
Next, I tried adding two extended properties to the Data View, hoping that that would work. It didn't.
Is there a way that I can set these SQL properties to ignore rounding errors so that I can export my data from my data view?
I know for most of us, as SO answerers, our first inclination is to ask for code. In this case, however, the code is both extremely complex and proprietary. I know fixing the definitions that cause the occasional overflow is the most ideal solution, but in this circumstance, it is much more efficient to just ignore these types of errors, because they happen so rarely and are so difficult to troubleshoot.
What can I do to ignore this behavior?
UPDATE
By chance, I believe I might have found the root cause of the issue; however, I have no idea why this would be occurring. It just doesn't make sense.
Throughout my table view, I have various fields that are calculated. Since these fields need to fit in table columns defined as decimal(12, 5), I always wrap the view field expressions in CAST(... AS DECIMAL(12, 5)) clauses.
By chance, I stumbled upon an oddity. I decided to see how SSMS "saw" my data view. In the SSMS Object Explorer, I expanded Views -> [My View] -> Columns and saw that one of the fields was defined as decimal(13, 5).
I assumed that I must have made a mistake in one of my casting statements, but after searching throughout the code for the table view, there is no definition for a decimal(13, 5) field anywhere?! My only guess is that SSMS must derive the view column's definition from the resulting data. However, I have no clue how this could happen, since I cast each field to decimal(12, 5).
I would like to know why this is happening but, again, my original question still stands: how and with what SET statement can I make a table view ignore all of these arithmetic overflows and write a NULL value into the fields with errant data?
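One plausible explanation for the decimal(13, 5) column (my assumption, not something confirmed in the post): T-SQL widens the result type of decimal arithmetic, so an expression that combines two decimal(12, 5) values before the outer CAST is applied can surface as decimal(13, 5). The rule is easy to demonstrate:

-- Adding two decimal(12, 5) values yields decimal(13, 5):
-- precision = max(s1, s2) + max(p1 - s1, p2 - s2) + 1 = 5 + 7 + 1 = 13
DECLARE @a decimal(12, 5) = 1.0, @b decimal(12, 5) = 2.0

SELECT SQL_VARIANT_PROPERTY(@a + @b, 'Precision') AS result_precision,  -- 13
       SQL_VARIANT_PROPERTY(@a + @b, 'Scale') AS result_scale           -- 5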
FINAL COMMENTS
I've marked HeavenCore's response as the answer because it does address my question but it hasn't solved my underlying problem.
After a bit of troubleshooting and attempts at trying to get my export to work, I'm going to have to try a different approach. I still can't get the export to work, even if I set the NUMERIC_ROUNDABORT and ARITHABORT properties to OFF.
I think ARITHABORT is your friend here.
For instance, using SET ARITHABORT OFF and SET ANSI_WARNINGS OFF will NULL the values it fails to cast (instead of throwing exceptions).
Here is a quick example:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_OverflowExample](
[Value] [decimal](12, 2) NULL
) ON [PRIMARY]
GO
INSERT [dbo].[tbl_OverflowExample] ([Value]) VALUES (CAST(9999999999.00 AS Decimal(12, 2)))
GO
INSERT [dbo].[tbl_OverflowExample] ([Value]) VALUES (CAST(1.10 AS Decimal(12, 2)))
GO
--#### Select data without any casting - works
SELECT VALUE
FROM dbo.tbl_OverflowExample
--#### With ARITHABORT and ANSI_WARNINGS disabled - returns NULL for 9999999999.00 but 1.10 as expected
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
SELECT CONVERT(DECIMAL(3, 2), VALUE)
FROM dbo.tbl_OverflowExample
GO
--#### With defaults - Fails with overflow exception
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
SELECT CONVERT(DECIMAL(3, 2), VALUE)
FROM dbo.tbl_OverflowExample
Personally, though, I'd prefer to debug the view and employ some CASE ... END expressions to return NULL if the underlying value is larger than the target data type can hold; this would ensure the view works regardless of the connection options.
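A sketch of that CASE approach, guarding the cast so values that cannot fit in decimal(12, 5) become NULL (the table and column names here are made up):

SELECT CASE
           -- decimal(12, 5) holds at most 9999999.99999
           WHEN ABS(SomeComputedValue) < 10000000
               THEN CAST(SomeComputedValue AS decimal(12, 5))
           ELSE NULL
       END AS SafeValue
FROM dbo.SomeSourceTable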
EDIT: Corrected some factual errors

Why does Microsoft SQL Server Implicitly Rollback when a CREATE statement fails?

I am working on pymssql, a Python MSSQL driver. I have encountered an interesting situation that I can't seem to find documentation for. It seems that when a CREATE TABLE statement fails, the transaction it was run in is implicitly rolled back:
-- shows 0
select @@TRANCOUNT
BEGIN TRAN
-- will cause an error
INSERT INTO foobar values ('baz')
-- shows 1 as expected
select @@TRANCOUNT
-- will cause an error
CREATE TABLE badschema.t1 (
test1 CHAR(5) NOT NULL
)
-- shows 0, this is not expected
select @@TRANCOUNT
I would like to understand why this is happening and know if there are docs that describe the situation. I am going to code around this behavior in the driver, but I want to make sure that I do so for any other error types that implicitly rollback a transaction.
NOTE
I am not concerned here with typical transactional behavior. I specifically want to know why an implicit rollback occurs for the failed CREATE statement but not for the INSERT statement.
Here is the definitive guide to error handling in Sql Server:
http://www.sommarskog.se/error-handling-I.html
It's long, but in a good way, and it was written for Sql Server 2000 but most of it is still accurate. The part you're looking for is here:
http://www.sommarskog.se/error-handling-I.html#whathappens
In your case, the article says that Sql Server is performing a Batch Abortion, and that it will take this measure in the following situations:
Most conversion errors, for instance conversion of non-numeric string to a numeric value.
Superfluous parameter to a parameterless stored procedure.
Exceeding the maximum nesting-level of stored procedures, triggers and functions.
Being selected as a deadlock victim.
Mismatch in number of columns in INSERT-EXEC.
Running out of space for data file or transaction log.
There's a bit more to it than this, so make sure to read the entire section.
It is often, but not always, the point of a transaction to roll back the entire thing if any part of it fails:
http://www.firstsql.com/tutor5.htm
One of the most common reasons to use transactions is when you need the action to be atomic:
"An atomic operation in computer science refers to a set of operations that can be combined so that they appear to the rest of the system to be a single operation with only two possible outcomes: success or failure." (en.wikipedia.org/wiki/Atomic_(computer_science))
It's probably not documented because, if I understand your example correctly, it is assumed you intended that functionality by beginning a transaction with BEGIN TRAN.
If you run it as one batch (which I did the first time), the transaction stays open, because the INSERT aborts the batch and the CREATE TABLE is never run. Only if you run it line by line does the transaction get rolled back.
You can also generate an implicit rollback for the INSERT by setting SET XACT_ABORT ON.
My guess (I just had a light-bulb moment as I typed the sentence above) is that CREATE TABLE uses SET XACT_ABORT ON internally, which means an implicit rollback in practice.
Some more stuff from me on SO about SET XACT_ABORT (we use it in all our code because it releases locks and rolls back transactions on a client CommandTimeout).
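To illustrate the XACT_ABORT point, here is a sketch reusing the question's foobar table (run the two parts as separate batches):

SET XACT_ABORT ON
BEGIN TRAN
-- the same failing INSERT as above; with XACT_ABORT ON the error
-- aborts the batch AND rolls back the open transaction
INSERT INTO foobar values ('baz')
GO
-- shows 0: the transaction was implicitly rolled back this time
select @@TRANCOUNT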