How can I ignore 'Arithmetic Overflow' related errors from within a data view? - sql

I have a complex data view that recursively links and summarizes information.
Each night a scheduled task runs a stored procedure that selects all of the data from the data view, and inserts it into a table so that users can query and analyze the data much more quickly than running a select statement on the data view.
The parent table consists of a few hundred thousand records and the result set from the export is well over 1,000,000 records in size.
Most nights the export process works without any trouble. However, if a user enters an incorrect value in our master ERP system, the nightly process crashes because one of the decimal fields contains a value that doesn't fit within some of the conversions I have to make on the data. Debugging and finding the specific errant field can be very hard and time consuming.
With that said, I've read about the two SQL settings NUMERIC_ROUNDABORT and ARITHABORT. These sound like the perfect options for solving my problem; however, I can't seem to get them to work with either my data view or my stored procedure.
My stored procedure is nothing more than a TRUNCATE and INSERT statement. I appended...
SET NUMERIC_ROUNDABORT OFF
SET ARITHABORT OFF
... to the beginning of the SP, and that didn't help. I assume this is because the error is technically taking place within the code associated with the data view.
Next, I tried adding two extended properties to the data view, hoping that would work. It didn't.
Is there a way that I can set these SQL properties to ignore rounding errors so that I can export my data from my data view?
I know for most of us, as SO answerers, our first inclination is to ask for code. In this case, however, the code is both extremely complex and proprietary. I know fixing the definitions that cause the occasional overflow is the ideal solution, but in this circumstance it is much more efficient to just ignore these types of errors, because they happen so rarely and are so difficult to troubleshoot.
What can I do to ignore this behavior?
UPDATE
By chance, I believe I might have found the root cause of the issue; however, I have no idea why this would be occurring. It just doesn't make sense.
Throughout my table view, I have various fields that are calculated. Since these fields need to fit in fields within the table that are defined as decimal(12, 5), I always wrap the view field statements in CAST(... AS DECIMAL(12, 5)) clauses.
By chance, I stumbled upon an oddity. I decided to see how SSMS "saw" my data view. In the SSMS Object Explorer, I expanded the Views -> [My View] -> Columns section and saw that one of the fields was defined as decimal(13, 5).
I assumed that I must have made a mistake in one of my casting statements, but after searching throughout the code for the table view, there is no definition for a decimal(13, 5) field?! My only guess is that the definition SSMS sees for the view field must be derived from the resulting data. However, I have no clue how this could happen, since I cast each field to a decimal(12, 5).
I would like to know why this is happening but, again, my original question still stands. How and what SET statement can I define on a table view that will ignore all of these arithmetic overflows and write a null value in the fields with errant data?
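One plausible explanation, sketched below with a hypothetical view: SQL Server derives the precision of a decimal expression from its operands, so an expression built from decimal(12, 5) values (a sum, for instance) is inferred as decimal(13, 5) unless the outermost expression is itself wrapped in a CAST.
CREATE VIEW dbo.vw_PrecisionExample AS
SELECT
    -- Both operands are decimal(12, 5), but the sum is inferred as decimal(13, 5):
    -- the result precision grows by one digit to make room for a carry.
    CAST(1 AS DECIMAL(12, 5)) + CAST(2 AS DECIMAL(12, 5)) AS WidenedField,
    -- Casting the whole expression pins it back to decimal(12, 5).
    CAST(CAST(1 AS DECIMAL(12, 5)) + CAST(2 AS DECIMAL(12, 5)) AS DECIMAL(12, 5)) AS PinnedField;
Inspecting the view's columns in SSMS (or sys.columns) should show WidenedField as decimal(13, 5) and PinnedField as decimal(12, 5).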
FINAL COMMENTS
I've marked HeavenCore's response as the answer because it does address my question but it hasn't solved my underlying problem.
After a bit of troubleshooting and attempts at trying to get my export to work, I'm going to have to try a different approach. I still can't get the export to work, even if I set the NUMERIC_ROUNDABORT and ARITHABORT properties to OFF.

I think ARITHABORT is your friend here.
For instance, using SET ARITHABORT OFF & SET ANSI_WARNINGS OFF will NULL the values it fails to cast (instead of throwing exceptions).
Here is a quick example:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_OverflowExample](
[Value] [decimal](12, 2) NULL
) ON [PRIMARY]
GO
INSERT [dbo].[tbl_OverflowExample] ([Value]) VALUES (CAST(9999999999.00 AS Decimal(12, 2)))
GO
INSERT [dbo].[tbl_OverflowExample] ([Value]) VALUES (CAST(1.10 AS Decimal(12, 2)))
GO
--#### Select data without any casting - works
SELECT [Value]
FROM dbo.tbl_OverflowExample
--#### With ARITHABORT and ANSI warnings disabled - Returns NULL for 9999999999.00 but 1.10 as expected
SET ARITHABORT OFF;
SET ANSI_WARNINGS OFF;
SELECT CONVERT(DECIMAL(3, 2), [Value])
FROM dbo.tbl_OverflowExample
GO
--#### With defaults - Fails with overflow exception
SET ARITHABORT ON;
SET ANSI_WARNINGS ON;
SELECT CONVERT(DECIMAL(2, 2), [Value])
FROM dbo.tbl_OverflowExample
Personally though, I'd prefer to debug the view and employ some CASE ... END statements to return NULL if the underlying value is greater than the target data type - this would ensure the view works regardless of the connection options.
EDIT: Corrected some factual errors
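To sketch that CASE approach (the column and source names below are hypothetical), guard the value before casting so anything too large for the target type becomes NULL, independent of any SET options:
SELECT CAST(CASE
                WHEN ABS(SomeCalculation) < 10000000 -- decimal(12, 5) holds at most 7 digits before the point
                THEN SomeCalculation
                ELSE NULL
            END AS DECIMAL(12, 5)) AS SafeField
FROM dbo.SomeSourceTable
On SQL Server 2012 and later, TRY_CONVERT(DECIMAL(12, 5), SomeCalculation) gives the same NULL-on-overflow behavior without the hand-written guard.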

Related

How to "SET NOCOUNT ON" for Nhibernate generated select statements

We have "SET NOCOUNT OFF" by default on our database server.We use "SET NOCOUNT ON" for store procedures.
As reported by dba's that all nhibernate generated select statements are using "SET NOCOUNT OFF". Which is taking long for queries to execute.
We are trying to increase the performance.I can not figure out the way to set "SET NOCOUNT ON" for specific nhibernate session or query. Can someone have some opinion about that.
Regards
You probably misunderstand what SET NOCOUNT ON does and why.
SET NOCOUNT does not have such a significant effect that it becomes a concern. Setting it to ON for statements that DON'T return data is simply an optimization.
On the other hand, setting it to ON for queries, where you very much want to know how many results were returned, makes no sense. Instead of quickly detecting how many results there are, your client would have to enumerate all the data to see how many rows were returned.
Your server will return the data in any case, so telling it to NOT return the number of rows it returns makes no sense.
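A minimal sketch of what the setting actually changes (the table is hypothetical):
SET NOCOUNT ON;
UPDATE dbo.SomeTable SET Flag = 1 WHERE Flag = 0; -- the "(n rows affected)" message is suppressed
SELECT @@ROWCOUNT AS RowsUpdated;                 -- but the count is still available in the session
SET NOCOUNT OFF;
SELECT Flag FROM dbo.SomeTable;                   -- the result set itself is identical either way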
You probably have other performance issues. You should check which queries are executed, whether your tables have the proper indexes, and whether you force NHibernate to execute more queries than you expect (the dreaded N+1 problem).
I can't give you an option in NHibernate to set NOCOUNT OFF; however, I do know NHibernate depends on the count to check whether the query was successful. When the wrong count (0) is returned, NHibernate will think something is wrong and throw an exception.
Besides that, I do not think you will gain a lot from setting NOCOUNT OFF.

Debug Insert and temporal tables in SQL 2012

I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal ... SELECT statements.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temporary table where the insert made the changes?
It doesn't matter if it's the total set of rows rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy govern what modifications can be made during testing, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs at their desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
Will give you all the rows from the INSERT statement that was inserted into the temporal table, which were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer, is that you get a bit more control here. And, if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change the current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
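A minimal sketch of the whole pattern (the procedure and source table are hypothetical):
CREATE PROCEDURE dbo.spLoadSomething
    @someParam int,
    @debug bit = 0              -- last parameter, default 0: existing callers need no change
AS
BEGIN
    CREATE TABLE #temp (Col1 int);
    BEGIN TRANSACTION;
    INSERT INTO #temp (Col1)
    SELECT Col1
    FROM dbo.SourceTable        -- hypothetical source
    WHERE KeyCol = @someParam;
    IF @debug = 1
        SELECT * FROM #temp;    -- inspect intermediate results here
    IF @debug = 1
        ROLLBACK TRANSACTION;   -- debug runs leave the data untouched
    ELSE
        COMMIT TRANSACTION;
END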

SQL Server turn off ANSI_WARNING ON A stored procedure

I need to know how to turn off ANSI warnings in my stored procedure, please. I keep getting the error
String or binary data would be truncated.
However, I would rather have this turned off, as I expect it and would rather allow it.
I added the statement
SET ANSI_WARNINGS OFF
GO
right before the stored procedure; however, doing this does not seem to suppress the error at all.
As for why I have this truncation error to begin with: one of my stored procs executes dynamic SQL to retrieve values (see the SQLFiddle showing the code), and I had to set the length of all my fields to the maximum length (NVARCHAR(3072)). When my query is executed, however, I need them back at the right size when printing them to the client.
Would appreciate info on how to best deal with this please. Thanks in advance.
I agree with @marc_s -- fix the problem, not the symptom, especially if your intent is to truncate. What will another developer think when he comes along and a proc is throwing these errors and a non-standard flag was used to suppress the issue?
Write code that makes your intent to truncate clear.
Identifying your Problem
The fiddle doesn't display the behavior you describe, so I'm still a little confused as to the issue.
Also, your SQL fiddle is way too dense for a question like this. If I don't answer your question below, work to isolate the problem to the simplest use case possible. Don't just dump 500 lines of your app into a window.
Note: the maximum nvarchar length is either 4000 (SQL Server 7 and 2000) or 2 GB with nvarchar(max) (SQL Server 2005 and later). I have no idea where you came up with 3072.
My Test
If you're truncating at the sproc parameter level, the ANSI_WARNINGS flag is ignored, as this MSDN page warns. If it's happening inside your procedure, the little test proc below shows how the ANSI_WARNINGS flag controls truncation:
CREATE PROC DoSomething (@longThing varchar(50)) AS
DECLARE @T1 TABLE ( shortThing VARCHAR(20) );
SET ANSI_WARNINGS OFF
PRINT 'I don''t even whimper when truncating'
INSERT INTO @T1 (shortThing) VALUES ( @longThing );
SET ANSI_WARNINGS ON
PRINT 'I yell when truncated'
INSERT INTO @T1 (shortThing) VALUES ( @longThing );
Then calling it as follows works as expected:
exec DoSomething 'Text string longer than 20 characters'
FIXING THE PROBLEM
Nevertheless, why not just write the code so that your intent to (potentially) truncate data is clear? You can avoid the warning rather than turn it off. I would do one of the following:
Make your procedure parameters long enough to accommodate the input.
If you need to shorten string data, use SUBSTRING() to trim it.
Use CAST or CONVERT to format the data to your requirements. This page (the section headed "Implicit Conversions") details how CAST and CONVERT work.
My simple example above can be modified as follows to avoid the need to set any flag.
CREATE PROC DoSomethingBETTER (@longThing varchar(50)) AS
SET ANSI_WARNINGS ON
DECLARE @T1 TABLE ( shortThing VARCHAR(20) );
--try one of these 3 options...
INSERT INTO @T1 (shortThing) VALUES ( CONVERT(varchar(20), @longThing) );
INSERT INTO @T1 (shortThing) VALUES ( SUBSTRING(@longThing, 1, 20) );
INSERT INTO @T1 (shortThing) VALUES ( CAST(@longThing AS varchar(20)) );
PRINT('ANSI warnings can be on when truncating data');
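Calling it the same way then succeeds without touching any session flags:
exec DoSomethingBETTER 'Text string longer than 20 characters'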
An Aside - Clustered Guids
Looking at your fiddle, I noticed that you use uniqueidentifier as the key in your clustered indexes. In almost every scenario this is a pretty inefficient option: the randomness of GUIDs means your data is constantly being fragmented and re-shuffled.
Hopefully you can convert to an int identity, use NEWSEQUENTIALID(), or use COMB GUIDs as described in Jimmy Nilsson's article.
You can see more about the problem here, here, here, and here.
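As a sketch of the sequential-GUID option (the table and constraint names are made up):
CREATE TABLE dbo.tbl_Orders (
    OrderId uniqueidentifier NOT NULL
        CONSTRAINT DF_tbl_Orders_OrderId DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_tbl_Orders PRIMARY KEY CLUSTERED,
    OrderDate datetime NOT NULL
);
-- New rows get ever-increasing GUIDs, so inserts append to the end of the
-- clustered index instead of splitting pages at random positions.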

NHibernate INSERT TO SQL Calculated field SET NOCOUNT

I need to suppress messages output from a SQL function, such as "1 row affected". I can't use SET NOCOUNT as it's invalid in a function.
Anyone know a way to do this?
Thanks.
EDIT
I was trying to limit the background information in an attempt to boil the problem down to its essence, but I'll expand. I'm using MSSQL 2005 and NHibernate to insert a record into a SQL table. The table has a computed column that runs the function, which is reporting back "1 row affected".
I didn't really want to edit the NHibernate part of the process but it may be unavoidable.
A function that returns "(1 row affected)" will be part of a bigger query in a batch. It makes no sense to have SET NOCOUNT ON in the function
You need to do this:
SET NOCOUNT ON;
SELECT * FROM MyUDFTVF();
Note that a stored procedure is simply a wrapper for this:
CREATE PROC Whatever
AS
SET NOCOUNT ON;
SELECT * FROM MyUDFTVF();
GO
SET NOCOUNT ON is normally needed to stop triggers etc. from breaking client code: why do you need it here?
The nocount setting is not available in functions.
Stored procedures allow you to set nocount. So converting the function to a stored procedure would solve the problem.
Otherwise, the calling code will have to set nocount. That shouldn't be hard, but might be tedious if the function is used in many places.
P.S. If you post the reason why suppressing the count messages is required, perhaps we can offer some more solutions.

SQL Server silently truncates varchar's in stored procedures

According to this forum discussion, SQL Server (I'm using 2005, but I gather this also applies to 2000 and 2008) silently truncates any varchars you specify as stored procedure parameters to the declared length of the parameter, even if inserting that string directly using an INSERT would actually cause an error. E.g., if I create this table:
CREATE TABLE testTable(
[testStringField] [nvarchar](5) NOT NULL
)
then when I execute the following:
INSERT INTO testTable(testStringField) VALUES(N'string which is too long')
I get an error:
String or binary data would be truncated.
The statement has been terminated.
Great. Data integrity preserved, and the caller knows about it. Now let's define a stored procedure to insert that:
CREATE PROCEDURE spTestTableInsert
@testStringField [nvarchar](5)
AS
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO
and execute it:
EXEC spTestTableInsert @testStringField = N'string which is too long'
No errors, 1 row affected. A row is inserted into the table, with testStringField as 'strin'. SQL Server silently truncated the stored procedure's varchar parameter.
Now, this behaviour might be convenient at times but I gather there is NO WAY to turn it off. This is extremely annoying, as I want the thing to error if I pass too long a string to the stored procedure. There seem to be 2 ways to deal with this.
First, declare the stored proc's @testStringField parameter as size 6, and check whether its length is over 5. This seems like a bit of a hack and involves irritating amounts of boilerplate code.
Second, just declare ALL stored procedure varchar parameters to be varchar(max), and then let the INSERT statement within the stored procedure fail.
The latter seems to work fine, so my question is: is it a good idea to use varchar(max) ALWAYS for strings in SQL Server stored procedures, if I actually want the stored proc to fail when too long a string is passed? Could it even be best practice? The silent truncation that can't be disabled seems stupid to me.
It just is.
I've never noticed a problem though because one of my checks would be to ensure my parameters match my table column lengths. In the client code too. Personally, I'd expect SQL to never see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
If you do feel the need for varchar(max), beware of a massive performance issue caused by datatype precedence. varchar(max) has higher precedence than varchar(n) (longest is highest), so in this type of query you'll get a scan, not a seek, and every varchar(100) value is CAST to varchar(max):
UPDATE ... WHERE varchar100column = @varcharmaxvalue
Edit:
There is an open Microsoft Connect item regarding this issue.
And it's probably worthy of inclusion in Erland Sommarskog's Strict settings (and the matching Connect item).
Edit 2, after Martin's comment:
DECLARE @sql VARCHAR(MAX), @nsql NVARCHAR(MAX);
SELECT @sql = 'B', @nsql = 'B';
SELECT
LEN(@sql),
LEN(@nsql),
DATALENGTH(@sql),
DATALENGTH(@nsql)
;
DECLARE @t table(c varchar(8000));
INSERT INTO @t values (replicate('A', 7500));
SELECT LEN(c) from @t;
SELECT
LEN(@sql + c),
LEN(@nsql + c),
DATALENGTH(@sql + c),
DATALENGTH(@nsql + c)
FROM @t;
Thanks, as always, to StackOverflow for eliciting this kind of in-depth discussion. I have recently been scouring through my stored procedures to make them more robust using a standard approach to transactions and try/catch blocks. I disagree with Joe Stefanelli that "My suggestion would be to make the application side responsible", and fully agree with Jez: "Having SQL Server verify the string length would be much preferable".
The whole point for me of using stored procedures is that they are written in a language native to the database and should act as a last line of defence. On the application side the difference between 255 and 256 is just a meaningless number, but within the database environment, a field with a maximum size of 255 will simply not accept 256 characters.
The application validation mechanisms should reflect the backend db as best they can, but maintenance is hard, so I want the database to give me good feedback if the application mistakenly allows unsuitable data. That's why I'm using a database instead of a bunch of text files with CSV or JSON or whatever.
I was puzzled why one of my SPs threw the 8152 error and another silently truncated. I finally twigged: The SP which threw the 8152 error had a parameter which allowed one character more than the related table column. The table column was set to nvarchar(255) but the parameter was nvarchar(256). So, wouldn't my "mistake" address gbn's concern: "massive performance issue"? Instead of using max, perhaps we could consistently set the table column size to, say, 255 and the SP parameter to just one character longer, say 256. This solves the silent truncation problem and doesn't incur any performance penalty.
Presumably there is some other disadvantage that I haven't thought of, but it seems a good compromise to me.
Update:
I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
Update 2:
Please see Pyitoechito's answer on this page.
The same behavior can be seen here:
declare @testStringField [nvarchar](5)
set @testStringField = N'string which is too long'
select @testStringField
My suggestion would be to make the application side responsible for validating the input before calling the stored procedure.
Update: I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.
This is probably occurring because the 256th character in the string is white-space. VARCHARs will truncate trailing white-space on insertion and just generate a warning. So your stored procedure is silently truncating your strings to 256 characters, and your insertion is truncating the trailing white-space (with a warning). It will produce an error when said character is not white-space.
Perhaps a solution would be to make the stored procedure's VARCHAR a suitable length to catch a non-white-space character. VARCHAR(512) would probably be safe enough.
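A minimal sketch of that white-space wrinkle (with ANSI_WARNINGS at its default of ON):
CREATE TABLE #ws (c varchar(5));
INSERT INTO #ws VALUES ('abcde   '); -- succeeds: the excess characters are only trailing spaces
INSERT INTO #ws VALUES ('abcdefgh'); -- fails with error 8152: non-space data would be truncated
DROP TABLE #ws;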
One solution would be to:
Change all incoming parameters to be varchar(max).
Declare a private variable in the SP of the correct data length for each parameter (simply copy and paste the parameter list and append "Int" to each name).
Declare a table variable with column names matching the variable names.
Insert into the table a row where each parameter goes into the column with the same name.
Select from the table into the internal variables.
This way your modifications to the existing code are going to be very minimal like in the sample below.
This is the original code:
create procedure spTest
(
@p1 varchar(2),
@p2 varchar(3)
)
This is the new code:
create procedure spTest
(
@p1 varchar(max),
@p2 varchar(max)
)
as
declare @p1Int varchar(2), @p2Int varchar(3)
declare @test table (p1 varchar(2), p2 varchar(3))
insert into @test (p1, p2) values (@p1, @p2)
select @p1Int = p1, @p2Int = p2 from @test
Note that if the length of an incoming parameter is greater than the limit, SQL Server will throw an error instead of silently chopping off the string.
You could always throw an IF statement into your SPs that checks the length of the parameters and raises an error if they're greater than the specified length. This is rather time consuming though, and would be a pain to update if you change the data size.
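As a sketch of that guard (reusing the question's testTable; the procedure name and error text are made up), declare the parameter wide and raise the error yourself:
CREATE PROCEDURE spTestTableInsertChecked
@testStringField nvarchar(max)
AS
IF LEN(@testStringField) > 5 -- note: LEN ignores trailing spaces
BEGIN
    RAISERROR('@testStringField must be 5 characters or fewer.', 16, 1)
    RETURN
END
INSERT INTO testTable(testStringField) VALUES(@testStringField)
GO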
This isn't the Answer that'll solve your problem today, but it includes a Feature Suggestion for MSSQL to consider adding, that would resolve this issue.
It is important to call this out as a shortcoming of MSSQL, so we may help them resolve it by raising awareness of it.
Here's the formal Suggestion if you'd like to vote on it:
https://feedback.azure.com/forums/908035-sql-server/suggestions/38394241-request-for-new-rule-string-truncation-error-for
I share your frustration.
The whole point of setting Character-Size on Parameters is so other Developers will instantly know
what the Size Limits are (via Intellisense) when passing in Data.
This is like having your documentation baked right into the Sproc's Signature.
Look, I get it, Implicit-Conversion during Variable Assignments is the culprit.
Still, there is no good reason to expend this amount of energy battling scenarios
where you are forced to work around this feature.
If you ask me, Sprocs and Functions should have the same engine-rules in place,
for Assigning Parameters, that are used when Populating Tables. Is this really too much to ask?
All these suggestions to use Larger Character-Limits
and then adding Validation for EACH Parameter in EVERY Sproc is ridiculous.
I know it's the only way to ensure Truncation is avoided, but really MSSQL?
I don't care if it's ANSI/ISO Standard or whatever, it's dumb!
When Values are too long - I want my code to break - every time.
It should be: Do not pass go, and fix your code.
You could have multiple truncation bugs festering for years and never catch them.
What happened to ensuring your Data-Integrity?
It's dangerous to assume your SQL Code will only ever be called after all Parameters are Validated.
I try to add the same Validation to both my Website and in the Sproc it calls,
and I still catch Errors in my Sproc that slipped past the website. It's a great sanity-check!
What if you want to re-use your Sproc for a WebSite/WebService and also have it called from other
Sprocs/Jobs/Deployment/Ad-Hoc Scripts (where there is no front-end to Validate Parameters)?
MSSQL Needs a "NO_TRUNC" Option to Enforce this on any Non-Max String Variable
(even those used as Parameters for Sprocs and Functions).
It could be Connection/Session-Scoped:
(like how the "TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" Option affects all Queries)
Or focused on a Single Variable:
(like how "NOLOCK" is a Table Hint for just 1 Table).
Or a Trace-Flag or Database Property you turn on to apply this to All Sproc/Function Parameters in the Database.
I'm not asking to upend decades of Legacy Code.
Just asking MS for the option to better manage our Databases.