SQL insert-select debugging

I would like to find a quick method for debugging an insert-select statement.
Example:
Create table tbl_int (
tabid int identity,
col1 bigint)
Create table tbl_char(
tabid int identity,
col2 nvarchar(255))
insert into tbl_char(col2)
select '1' union
select '2' union
select 'a'
insert into tbl_int(col1)
select col2
from tbl_char
Of course, the insert-select above fails to run, and it is obvious that 'a' cannot be converted to bigint. But what happens when I have 1 million records in tbl_char? Is there any way of finding the source value of the error:
"Error converting data type nvarchar to bigint."
P.S. Using a CONVERT or CAST function and scanning the table with TOP until I find the offending value is a little too expensive.

Why don't you wrap the SQL that throws the exception in a TRY/CATCH block to get more information about it?
BEGIN TRY
SELECT *
FROM sys.messages
WHERE message_id = 21;
END TRY
-- Do not put a GO here: it would break the script into two batches
-- and generate syntax errors.
BEGIN CATCH
SELECT ERROR_NUMBER() AS ErrorNumber;
END CATCH;
GO
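Applied to the insert from the question, the wrapped statement might look like this (a sketch; the CATCH block confirms the conversion failure but does not point to the offending row, which is where the query further below comes in):
BEGIN TRY
    insert into tbl_int(col1)
    select col2
    from tbl_char;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage,
           ERROR_LINE()    AS ErrorLine;
END CATCH;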
Below is all the error information you can check:
ERROR_NUMBER() returns the error number.
ERROR_MESSAGE() returns the complete text of the error message. The text includes the values supplied for any substitutable parameters such as lengths, object names, or times.
ERROR_SEVERITY() returns the error severity.
ERROR_STATE() returns the error state number.
ERROR_LINE() returns the line number inside the routine that caused the error.
ERROR_PROCEDURE() returns the name of the stored procedure or trigger where the error occurred.
http://msdn.microsoft.com/en-us/library/ms179296.aspx
If this is not enough, you can even write a query to select all the records where a certain field is not convertible to a number (in your case, an nvarchar value that is not convertible).
The following example uses ISNUMERIC to return all the postal codes that are not numeric values.
SELECT City, PostalCode
FROM Person.Address
WHERE ISNUMERIC(PostalCode)<> 1
http://msdn.microsoft.com/en-us/library/ms186272.aspx
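For the tables in the question, the same idea finds the offending rows directly (keeping in mind that ISNUMERIC is only a rough filter: it also accepts values such as '1e5' or '1.5' that still fail to convert to bigint):
SELECT tabid, col2
FROM tbl_char
WHERE ISNUMERIC(col2) <> 1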

Let's start with why you are trying to send data of a different data type to another table. If you do not want non-integer data in the column, why are you allowing it to be string data to begin with? So first you need to look at whether you have a design issue that is affecting data integrity.
If you are doing imports from some other system and have no control over the data types it uses, then you need to clean the data before doing the insert. There is no one-size-fits-all fix for this. You will have to write code to find all the data that won't meet the requirements of the table you are moving it to, data type by data type and field by field. This may be much more complex than using ISNUMERIC (which can have false positives, especially if there is decimal information in there as well) and ISDATE(), and you may need to write functions specific to your needs. It can take a long time to get the cleaning correct. You might need to restrict the values to a certain subset, or have a conversion table that maps what they put in the table to what your system will accept. I suggest you identify the bad data first, move it to an exception table, and then do the insert based on the records not in the exception table.
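A rough sketch of that identify-then-insert approach, using the tables from the question; the exception table and the digits-only rule are made up for illustration (per the point above about ISNUMERIC false positives, the rule here is deliberately stricter than ISNUMERIC):
-- park anything that is not a plain digit string of a safe length
-- (deliberately strict; loosen the rule to match your data)
SELECT tabid, col2
INTO tbl_char_exceptions
FROM tbl_char
WHERE col2 LIKE '%[^0-9]%'
   OR col2 = ''
   OR LEN(col2) > 18;

-- then insert only the rows that are not in the exception table
INSERT INTO tbl_int (col1)
SELECT col2
FROM tbl_char
WHERE tabid NOT IN (SELECT tabid FROM tbl_char_exceptions);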

As you understand, the char value is converted to bigint when you execute the insert..select.
So we cannot avoid the conversion, but we can try to avoid scanning the table a second time.
I propose creating an INSTEAD OF INSERT trigger on the table tbl_int.
In the body of this trigger you can convert the char value to bigint, and if you receive an error, you can insert that char value into a staging table. If there is no error when converting, you can insert the bigint value into your tbl_int table.

Related

SQL Server stored procedure: how to return column names/values of type failures in variable?

Ambiguous thread name, I apologize. I am not new to SQL, but I'm new to coding longer stored procedures, so I don't deal with variables much outside of passing in maybe a table name or returning a row count, etc.
I have a stored procedure that is executing an insert from a staging table to a fact table. There are a couple of type casts in the insert.
If the insert fails due to a type cast, is there any way to return the name of the column that failed, along with the value that failed? How would I code that? I know that TRY_PARSE would keep the stored procedure from failing on a type cast failure, but I want to be able to pass back exactly which column and value failed.
I show an example here:
Create Procedure dbo.Example_Insert
    @UpdateUser varchar(255)
As
Begin
    Insert Into dbo.Energy_Costs (Energy_Cost_Id, Project_Id, Propane_Cost_Dollars,
        Electricity_Cost_Dollars, Fuel_Savings_Evaluator)
    Select
        Next Value For energy_cost_id,
        r.project_id,
        Cast(r.propane_cost_dollars As Decimal(18,2)),
        Cast(r.electricity_cost_dollars As Decimal(18,2)),
        @UpdateUser As Fuel_Savings_Evaluator
    From
        staging_table r
    return @@ROWCOUNT
end
You can use a CURSOR in SQL and insert one row at a time. When an insert fails, return the values of the currently processed row as the error.
I hope this idea works for you.
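A row-by-row sketch of that idea, reusing the staging and fact tables from the example above (the staging column types are assumed to be strings here, the sequence name follows the example, and Fuel_Savings_Evaluator is left out for brevity); it is slow on large tables, but the CATCH block tells you exactly which row and values failed:
DECLARE @project_id int,
        @propane nvarchar(255),
        @electricity nvarchar(255);

DECLARE staging_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT project_id, propane_cost_dollars, electricity_cost_dollars
    FROM staging_table;

OPEN staging_cur;
FETCH NEXT FROM staging_cur INTO @project_id, @propane, @electricity;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Energy_Costs (Energy_Cost_Id, Project_Id,
                                      Propane_Cost_Dollars, Electricity_Cost_Dollars)
        VALUES (NEXT VALUE FOR energy_cost_id,
                @project_id,
                CAST(@propane AS decimal(18,2)),
                CAST(@electricity AS decimal(18,2)));
    END TRY
    BEGIN CATCH
        -- report which row and which values failed, then keep going
        SELECT @project_id      AS FailedProjectId,
               @propane         AS FailedPropaneValue,
               @electricity     AS FailedElectricityValue,
               ERROR_MESSAGE()  AS ErrorMessage;
    END CATCH;

    FETCH NEXT FROM staging_cur INTO @project_id, @propane, @electricity;
END

CLOSE staging_cur;
DEALLOCATE staging_cur;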

msg 8152 level 16 state 2 string or binary data would be truncated

I am getting this error while executing a procedure
Msg 8152, Level 16, State 2, Procedure SP_xxxxx, Line 92 String or
binary data would be truncated.
I have created a temp table into which I load the data from the main table once in the procedure, and I then use this temp table rather than the main table, since the main table has a huge volume of data and many unnecessary columns.
When I run the code below from SQL Server Management Studio there is no error, but when I run the same code from a procedure I get the above error message.
Insert into abc_TMP -- temp table for the procedure, with the required columns
Select
Item,
Description,
size,
qty,
stock,
Time ,
Measure
from abc -- main table, has many columns
One way to check this issue is to look at the length of each value.
Assume you have table like below
create table t
(
col1 varchar(10),
col2 varchar(10)
)
Inserts into the table will fail if you try to insert more than 10 characters, and if you insert the rows in a batch you will not get the offending value.
So you need to check the lengths like below, prior to the insert:
;with cte
as
(
select
len(col1) as col1,
len(col2) as col2
from t
)
select * from cte where col1 > 10 or col2 > 10
There have been a number of requests raised with Microsoft to enhance the error message, and they have finally fixed this issue in SQL Server 2019.
Now you can get the exact value causing the issue.
References:
https://voiceofthedba.com/2018/09/26/no-more-mysterious-truncation/
I suspect that you are looking at the wrong line of code in the stored proc.
When you open the proc, i.e. script it as ALTER..., there is a header on the stored proc that will throw the line numbers out.
If you run this, replacing proc_name with your procedure name:
sp_helptext proc_name
That will give you the code that the procedure will actually run, with accurate line numbers if you paste it into a new window.
Then you'll see where the actual error is happening.
If you want a simple way to prove this theory, put a bunch of Print 'some sql 1', Print 'some sql 2' lines in around the code you think is causing the error and see what is output when the error is thrown.
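For example, around the insert from the question above:
PRINT 'about to insert into abc_TMP';
Insert into abc_TMP -- temp table for the procedure, with the required columns
Select Item, Description, size, qty, stock, Time, Measure
from abc;
PRINT 'insert into abc_TMP completed';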

StoredProc manipulating Temporary table throws 'Invalid column name' on execution

I have a number of SPs that create a temporary table #TempData with various fields. Within these SPs I call a processing SP that operates on #TempData. The temp data processing depends on the SP's input parameters. The SP code is:
CREATE PROCEDURE [dbo].[tempdata_proc]
    @ID int,
    @NeedAvg tinyint = 0
AS
BEGIN
    SET NOCOUNT ON;
    if @NeedAvg = 1
        Update #TempData set AvgValue = 1
    Update #TempData set Value = -1;
END
Then this SP is called in an outer SP with the following code:
USE [BN]
--GO
--DBCC FREEPROCCACHE;
GO
Create table #TempData
(
tele_time datetime
, Value float
--, AvgValue float
)
Create clustered index IXTemp on #TempData(tele_time);
insert into #TempData(tele_time, Value ) values( GETDATE(), 50 ); --sample data
declare
    @ID int,
    @UpdAvg int;
select
    @ID = 1000,
    @UpdAvg = 1
;
Exec dbo.tempdata_proc @ID, @UpdAvg;
select * from #TempData;
drop table #TempData
This code throws an error: Msg 207, Level 16, State 1, Procedure tempdata_proc, Line 8: Invalid column name "AvgValue".
But if I just uncomment the AvgValue float declaration, everything works OK.
The question: is there any workaround that lets the stored proc code remain the same and gives the optimizer a hint to skip this statement, because the AvgValue column will not be used by the SP given the parameters passed?
Dynamic SQL is not a welcome solution, BTW. Using an alternative table name to #TempData is also an undesirable solution given the existing T-SQL code (huge modifications would be necessary for that).
I tried SET FMTONLY, tempdb.sys.columns, and TRY/CATCH wrapping, without any success.
The way that stored procedures are processed is split into two parts - one part, checking for syntactical correctness, is performed at the time that the stored procedure is created or altered. The remaining part of compilation is deferred until the point in time at which the stored procedure is executed. This is referred to as Deferred Name Resolution and allows a stored procedure to include references to tables (not just limited to temp tables) that do not exist at the point in time that the procedure is created.
Unfortunately, when it comes to the point in time that the procedure is executed, it needs to be able to compile all of the individual statements, and it's at this time that it will discover that the table exists but that the column doesn't - and so at this time, it will generate an error and refuse to run the procedure.
The T-SQL compiler is unfortunately very simplistic and doesn't take runtime control flow into account when attempting to perform the compilation. It doesn't analyse the control flow or attempt to defer the compilation in conditional paths - it just fails the compilation because the column doesn't (at this time) exist.
Unfortunately, there aren't any mechanisms built in to SQL Server to control this behaviour - this is the behaviour you get, and anything that addresses it is going to be perceived as a workaround - as evidenced already by the (valid) suggestions in the comments - the two main ways to deal with it are to use dynamic SQL or to ensure that the temp table always contains all columns required.
One way to workaround your concerns about maintenance if you go down the "all uses of the temp table should have all columns" is to move the column definitions into a separate stored procedure, that can then augment the temporary table with all of the required columns - something like:
create procedure S_TT_Init
as
alter table #TT add Column1 int not null
alter table #TT add Column2 varchar(9) null
go
create procedure S_TT_Consumer
as
insert into #TT(Column1,Column2) values (9,'abc')
go
create procedure S_TT_User
as
create table #TT (tmp int null)
exec S_TT_Init
insert into #TT(Column1) values (8)
exec S_TT_Consumer
select Column1 from #TT
go
exec S_TT_User
Which produces the output 8 and 9. You'd put your temp table definition in S_TT_Init, S_TT_Consumer is the inner query that multiple stored procedures call, and S_TT_User is an example of one such stored procedure.
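For completeness, the dynamic SQL route mentioned earlier (which the asker has already ruled out) would look roughly like this; the statement that references AvgValue is only compiled when the branch actually runs, and dynamic SQL executed inside the procedure still sees the caller's #TempData:
ALTER PROCEDURE [dbo].[tempdata_proc]
    @ID int,
    @NeedAvg tinyint = 0
AS
BEGIN
    SET NOCOUNT ON;
    if @NeedAvg = 1
        exec sp_executesql N'Update #TempData set AvgValue = 1';
    Update #TempData set Value = -1;
END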
Create the table with the column initially. If you're populating the temp table with sproc output, just make it an INT IDENTITY(1,1) so the columns line up with your output.
Then drop the column and re-add it as the appropriate data type later on in the SPROC.
The only (or maybe the best) way I can think of, beyond dynamic SQL, is to use checks against the database structure.
if exists (Select 1 From tempdb.sys.columns Where object_id=OBJECT_ID('tempdb.dbo.#TTT') and name = 'AvgValue')
begin
--do something AvgValue related
end
Maybe create a simple function that takes a table name and a column (or only a column, if it's always #TempTable) and returns 1/0 depending on whether the column exists; that would be useful in the long run, I think (see the sketch after the example below).
if dbo.TempTableHasField('AvgValue')=1
begin
-- do something AvgValue related
end
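A sketch of such a helper, hard-coded to the single-column variant against #TempData from the question (the implementation here is an assumption):
create function dbo.TempTableHasField (@ColumnName sysname)
returns bit
as
begin
    -- look the temp table up in tempdb's catalog; resolves to the caller's #TempData
    return case
        when exists (select 1
                     from tempdb.sys.columns
                     where object_id = OBJECT_ID('tempdb.dbo.#TempData')
                       and name = @ColumnName)
        then 1 else 0 end
end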
EDIT1: Dang, you are right, sorry about that, I was sure I had ... this.... :( Let me think a bit more.

Help with SQL Server Trigger to truncate bad data before insert

We consume a web service that decided to alter the max length of a field from 255. We have a legacy vendor table on our end that is still capped at 255. We are hoping to use a trigger to address this issue temporarily until we can implement a more business-friendly solution in our next iteration.
Here's what I started with:
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [mySchema].[myTable]
SELECT SubType, type, substring(description, 1, 255)
FROM inserted
END
However, when I try to insert on myTable, I get the error:
String or binary data would be
truncated. The statement has been
terminated.
I tried experimenting with SET ANSI_WARNINGS OFF which allowed the query to work but then simply didn't insert any data into the description column.
Is there any way to use a trigger to truncate the too-long data, or is there another alternative that I can use until a more elegant solution can be designed? We are fairly limited in table modifications (i.e. we can't) because it's a vendor table, and we don't control the web service we're consuming, so we can't ask them to fix it either. Any help would be appreciated.
The error cannot be avoided because the error is happening when the inserted table is populated.
From the documentation:
http://msdn.microsoft.com/en-us/library/ms191300.aspx
"The format of the inserted and deleted tables is the same as the format of the table on which the INSTEAD OF trigger is defined. Each column in the inserted and deleted tables maps directly to a column in the base table."
The only really "clever" idea I can think of is to take advantage of schemas and the default schema used by a login. If you can get the login that the web service is using to reference another table, you can increase the column size on that table and use the INSTEAD OF INSERT trigger to perform the INSERT into the vendor table. A variation of this is to create the table in a different database and set the default database for the web service login.
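The wider intermediate table might look something like this (the column names come from the trigger below; the types, other than the widened description, are guesses), with the INSTEAD OF INSERT trigger then forwarding truncated rows into the vendor table:
-- wider copy of the vendor table that the web service login actually writes to
CREATE TABLE [mySchema].[myTable]
(
    SubType     nvarchar(255) NULL,  -- assumed type
    type        nvarchar(255) NULL,  -- assumed type
    description nvarchar(max) NULL   -- wide enough for the new web service field
)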
-- run this in the database that holds the wider table (myDB in this example);
-- CREATE TRIGGER does not accept a database name prefix
CREATE TRIGGER [mySchema].[TruncDescription]
ON [mySchema].[myTable]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [VendorDB].[VendorSchema].[VendorTable]
    SELECT SubType, type, substring(description, 1, 255)
    FROM inserted
END
With this setup everything works OK for me.
Not to state the obvious, but are you sure there is data in the description field when you are testing? It is possible they changed one of the other fields you are inserting as well, and maybe one of those is throwing the error.
CREATE TABLE [dbo].[DataPlay](
[Data] [nvarchar](255) NULL
) ON [PRIMARY]
GO
and a trigger like this
Create TRIGGER updT ON DataPlay
Instead of Insert
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
(Select substring(Data, 1, 255) from inserted)
END
GO
then inserting with
Declare @d as nvarchar(max)
Select @d = REPLICATE('a', 500)
SET ANSI_WARNINGS OFF
INSERT INTO [tempdb].[dbo].[DataPlay]
([Data])
VALUES
(@d)
GO
I am unable to reproduce this issue on SQL 2008 R2 using:
Declare @table table ( fielda varchar(10) )
Insert Into @table ( fielda )
Values ( Substring('12345678901234567890', 1, 10) )
Please make sure that your field is really defined as varchar(255).
I also strongly suggest you use an Insert statement with an explicit field list. While your Insert is syntactically correct, you really should be using an explicit field list (like in my sample). The problem is when you don't specify a field list you are at the mercy of SQL and the table definition for the field order. When you do use a field list you can change the order of the fields in the table (or add new fields in the middle) and not care about your insert statements.

How to report an error from a SQL Server user-defined function

I'm writing a user-defined function in SQL Server 2008. I know that functions cannot raise errors in the usual way - if you try to include the RAISERROR statement SQL returns:
Msg 443, Level 16, State 14, Procedure ..., Line ...
Invalid use of a side-effecting operator 'RAISERROR' within a function.
But the fact is, the function takes some input, which may be invalid and, if it is, there is no meaningful value the function can return. What do I do then?
I could, of course, return NULL, but it would be difficult for any developer using the function to troubleshoot this. I could also cause a division by zero or something like that - this would generate an error message, but a misleading one. Is there any way I can have my own error message reported somehow?
You can use CAST to throw a meaningful error:
create function dbo.throwError()
returns nvarchar(max)
as
begin
return cast('Error happened here.' as int);
end
Then SQL Server will show some helpful information:
Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value 'Error happened here.' to data type int.
The usual trick is to force a divide by 0. This will raise an error and interrupt the current statement that is evaluating the function. If the developer or support person knows about this behavior, investigating and troubleshooting the problem is fairly easy as the division by 0 error is understood as a symptom of a different, unrelated problem.
As bad as this looks from any point of view, unfortunately the design of SQL functions at the moment allows no better choice. Using RAISERROR should absolutely be allowed in functions.
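A minimal sketch of that trick (the function name and the validation rule are made up for illustration):
create function dbo.GetNonNegative(@input int)
returns int
as
begin
    declare @zero int = 0;
    if @input < 0
        return 1 / @zero;  -- deliberately raises "Divide by zero error encountered"
    return @input;
end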
Following on from Vladimir Korolev's answer, the idiom to conditionally throw an error is
CREATE FUNCTION [dbo].[Throw]
(
    @error NVARCHAR(MAX)
)
RETURNS BIT
AS
BEGIN
    RETURN CAST(@error AS INT)
END
GO
DECLARE @error NVARCHAR(MAX)
DECLARE @bit BIT
IF `error condition` SET @error = 'My Error'
ELSE SET @error = '0'
SET @bit = [dbo].[Throw](@error)
I think the cleanest way is to just accept that the function can return NULL if invalid arguments are passed. As long as this is clearly documented, this should be okay?
-- =============================================
-- Author: AM
-- Create date: 03/02/2010
-- Description: Returns the appropriate exchange rate
-- based on the input parameters.
-- If the rate cannot be found, returns NULL
-- (RAISERROR can't be used in UDFs)
-- =============================================
ALTER FUNCTION [dbo].[GetExchangeRate]
(
@CurrencyFrom char(3),
@CurrencyTo char(3),
@OnDate date
)
RETURNS decimal(18,4)
AS
BEGIN
DECLARE @ClosingRate as decimal(18,4)
SELECT TOP 1
@ClosingRate=ClosingRate
FROM
[FactCurrencyRate]
WHERE
FromCurrencyCode=@CurrencyFrom AND
ToCurrencyCode=@CurrencyTo AND
DateID=dbo.DateToIntegerKey(@OnDate)
RETURN @ClosingRate
END
GO
A few folks were asking about raising errors in Table-Valued functions, since you can't use "RETURN [invalid cast]" sort of things. Assigning the invalid cast to a variable works just as well.
CREATE FUNCTION fn()
RETURNS @T TABLE (Col CHAR)
AS
BEGIN
DECLARE @i INT = CAST('booooom!' AS INT)
RETURN
END
This results in:
Msg 245, Level 16, State 1, Line 14
Conversion failed when converting the varchar value 'booooom!' to data type int.
RAISERROR or @@ERROR is not allowed in UDFs. Can you turn the UDF into a stored procedure?
From Erland Sommarskog's article Error Handling in SQL Server – a Background:
User-defined functions are usually invoked as part of a SET, SELECT, INSERT, UPDATE or DELETE statement. What I have found is that if an error appears in a multi-statement table-valued function or in a scalar function, the execution of the function is aborted immediately, and so is the statement the function is part of. Execution continues on the next line, unless the error aborted the batch. In either case, @@error is 0. Thus, there is no way to detect that an error occurred in a function from T-SQL.
The problem does not appear with inline table-functions, since an inline table-valued function is basically a macro that the query processor pastes into the query.
You can also execute scalar functions with the EXEC statement. In this case, execution continues if an error occurs (unless it is a batch-aborting error). @@error is set, and you can check the value of @@error within the function. It can be problematic to communicate the error to the caller though.
The top answer is generally best, but does not work for inline table valued functions.
MikeTeeVee gave a solution for this in his comment on the top answer, but it required use of an aggregate function like MAX, which did not work well for my circumstance.
I messed around with an alternate solution for the case where you need an inline table valued udf that returns something like select * instead of an aggregate. Sample code solving this particular case is below. As someone has already pointed out... "JEEZ wotta hack" :) I welcome any better solution for this case!
create table foo (
ID nvarchar(255),
Data nvarchar(255)
)
go
insert into foo (ID, Data) values ('Green Eggs', 'Ham')
go
create function dbo.GetFoo(@aID nvarchar(255)) returns table as return (
select *, 0 as CausesError from foo where ID = @aID
--error checking code is embedded within this union
--when the ID exists, this second selection is empty due to where clause at end
--when ID doesn't exist, invalid cast with case statement conditionally causes an error
--case statement is very hack-y, but this was the only way I could get the code to compile
--for an inline TVF
--simpler approaches were caught at compile time by SQL Server
union
select top 1 *, case
when ((select top 1 ID from foo where ID = @aID) = @aID) then 0
else 'Error in GetFoo() - ID "' + IsNull(@aID, 'null') + '" does not exist'
end
from foo where (not exists (select ID from foo where ID = @aID))
)
go
--this does not cause an error
select * from dbo.GetFoo('Green Eggs')
go
--this does cause an error
select * from dbo.GetFoo('Yellow Eggs')
go
drop function dbo.GetFoo
go
drop table foo
go
I can't comment under davec's answer regarding the table-valued function, but in my humble opinion this is an easier solution:
CREATE FUNCTION dbo.ufn_test (@a TINYINT)
RETURNS @returns TABLE(Column1 VARCHAR(10), Value1 TINYINT)
AS
BEGIN
    IF @a > 50 -- if @a > 50, raise an error
    BEGIN
        INSERT INTO @returns (Column1, Value1)
        VALUES('error','@a is bigger than 50!') -- Value1 is TINYINT, so this string fails to convert and raises the error
    END
    INSERT INTO @returns (Column1, Value1)
    VALUES('Something',@a)
    RETURN;
END
SELECT Column1, Value1 FROM dbo.ufn_test(1) -- this is okay
SELECT Column1, Value1 FROM dbo.ufn_test(51) -- this will raise an error
One way (a hack) is to have a function/stored procedure that performs an invalid action. For example, the following pseudo SQL
create procedure throw_error ( in err_msg varchar(255))
begin
insert into tbl_throw_error (id, msg) values (null, err_msg);
insert into tbl_throw_error (id, msg) values (null, err_msg);
end;
Here the table tbl_throw_error has a unique constraint on the column err_msg. A side effect of this (at least on MySQL) is that the value of err_msg is used as the description of the exception when it gets back up into the application-level exception object.
I don't know if you can do something similar with SQL Server, but worth a shot.
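A sketch of how the same idea might look on SQL Server (the table, constraint and procedure names here are illustrative); on reasonably recent versions the duplicate key value, i.e. the message, is included in the constraint-violation error text. Note that, like the MySQL version, this is a procedure, so it still cannot be called from inside a function.
create table tbl_throw_error
(
    msg varchar(255) not null,
    constraint UQ_tbl_throw_error_msg unique (msg)
);
go
create procedure dbo.throw_error (@err_msg varchar(255))
as
begin
    -- the second insert violates the unique constraint, and the error text
    -- carries the duplicate key value, i.e. the message we want to surface
    insert into tbl_throw_error (msg) values (@err_msg);
    insert into tbl_throw_error (msg) values (@err_msg);
end;
go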