For a 10-minute period yesterday, a SQL stored procedure kept throwing the error "string or binary data would be truncated" when it was executed via my web server. However, when I ran the exact same SQL command via Microsoft SQL Server Management Studio, there was no error.
In the SP there is only one insert statement; here's an abstraction of it:
DECLARE #TempTable table (Row1 varchar(25), Row2 varchar(4), Row3 int)
INSERT INTO #TempTable (Row1,Row2,Row3)
SELECT DISTINCT
A.Value
,RIGHT(A.Text,4)
,CAST(ISNULL(A.Thing,'0') as int)
FROM ActivityTable A
In the database ActivityTable, each of those source columns is defined as varchar(25), though Thing is only ever used for integers (stored as varchar, yes it's stupid). On the face of it I can't see how any of them could exceed the insert column's size.
I tried commenting them out one by one, inserting an empty string instead. First I replaced A.Value with '' and refreshed the webpage that executes the procedure; there was no error. I assumed this was the problem column, so I put it back to the original value, expecting that to bring back the error. Except it didn't, and the error hasn't recurred since.
This SP has run without issues for months, and only broke for those 10 minutes yesterday. Last week I raised the compatibility level on my SQL Server from 100 to 130, so I'm assuming that is somehow connected. But the behaviour also seemed to be affected by my altering the procedure, in addition to being time-specific and user-specific.
1) In your code you are declaring the table #TempTable but inserting into #Temp.
2) A.Value is in some cases longer than 25 characters. Try using LEFT(A.Value, 25) instead, or extend the column size.
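If the error comes back, it may help to check which source rows would actually overflow the insert — a diagnostic sketch using the table and column names from the question:

```sql
-- Find source rows that would overflow or fail the INSERT.
SELECT A.Value, A.Thing
FROM ActivityTable A
WHERE LEN(A.Value) > 25                     -- too long for varchar(25)
   OR ISNUMERIC(ISNULL(A.Thing, '0')) = 0; -- CAST(... AS int) would fail
```

Note that LEN() ignores trailing spaces, so a value padded with blanks can still slip past this check.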
Related
I am running an INSERT statement to populate records into a table using SQL Server 2012. In Table 1, which holds all the records, the datatype is VARCHAR(5000), and I have used MAX(LEN(...)) to determine that the longest value it contains is about 3000 characters.
In Table 2, which the records should go into, the field's datatype is also VARCHAR(5000), mirroring Table 1.
I am getting the dreaded "string or binary data would be truncated" message, but my destination table field is large enough to store this data.
When I remove this field from the insert statement, the insert statement works fine and my data moves from Table 1 to Table 2 as expected, but including the field causes this error.
Has anyone come across this peculiar case before? Is it possible that the string field contains some sort of weird characters that could be causing this error?
Thanks
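One way to check for hidden characters is to compare LEN() with DATALENGTH() — a sketch, with Field1 standing in for the real column name:

```sql
-- LEN() ignores trailing spaces; DATALENGTH() counts the bytes actually
-- stored. A large gap between the two can expose trailing blanks (or,
-- for nvarchar data, the 2-bytes-per-character storage) that a plain
-- MAX(LEN(...)) check misses.
SELECT MAX(LEN(Field1))        AS MaxLen,
       MAX(DATALENGTH(Field1)) AS MaxBytes
FROM Table1;
```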
EDIT:
I'm changing the column's datatype to VARCHAR; if a suggestion works, the answer will be upvoted.
Full Story:
I receive data for each person along with an associated temporary number that is 5 digits long. I process this information and then send variables to a stored procedure that handles inserting the data. When sending the variables to the stored procedure, I appear to be losing any leading zeros.
For example:
Number sent to stored proc - number actually inserted into the column:
12345 - 12345
01234 - 1234
12340 - 12340
This only appears to be happening for numbers with leading zeros. For example, if I received 00012, it would insert as 12.
Is there a way to update the column so it always left-pads with zeros to a fixed length, so that if we got 12 it would automatically become 00012?
OR
Or is there a way to do this to the variable when it's received by the stored procedure, before it is inserted into the table?
Something along the lines of:
SET @zeroPaddedFixedNum = LeftPad(@numberReceived, '0', 5);
Additionally, I now need to stop any more such numbers from being inserted and update all existing numbers of incorrect length. Any suggestions?
Perhaps it's just my Google ability that has failed but I have tried searching numerous pages.
For this, the column should be of VARCHAR datatype. You can then do this:
INSERT INTO [table](col)
SELECT RIGHT('00000' + CAST(@var AS varchar(5)), 5)
EDIT: To update the existing data
UPDATE [table]
SET col = RIGHT('00000' + CAST(col AS varchar(5)), 5)
WHERE LEN(col) < 5
As pointed out, you'll have to use VARCHAR(5) for your needs... but I would not change the column's type if the values stored are actually numbers. Rather, use one of the following whenever you pass these values to your SP (you might use a computed column or a VIEW, though).
Try
SELECT REPLACE(STR(YourNumber,5),' ','0');
The big advantage: in cases where your number exceeds 5 digits, this returns *****. It is better to get an obvious error marker than to get wrong numbers... other approaches with RIGHT() might truncate your result unpredictably.
With SQL Server 2012 you should use FORMAT()
SELECT FORMAT(YourNumber,'00000')
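To see the difference in overflow behaviour between the two approaches (the literal values are only illustrative):

```sql
-- STR() right-justifies the number in a field of the given width and
-- returns asterisks when it does not fit, so overflow is visible:
SELECT REPLACE(STR(12, 5), ' ', '0');      -- '00012'
SELECT REPLACE(STR(123456, 5), ' ', '0');  -- '*****'
-- RIGHT() silently drops the leading digit instead:
SELECT RIGHT('00000' + CAST(123456 AS varchar(10)), 5);  -- '23456'
```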
I have written a query in my stored procedure, something like:
INSERT INTO Table1
(UniqueStr, Col1, Col2)
SELECT UniqueStr, Col1, Col2
FROM Table2
WHERE ...
It gives me error:
string or binary data would be truncated.
Here are the statistics.
Table1.UniqueStr is VARCHAR(11)
Table2.UniqueStr is VARCHAR(20)
Table2 has records with UniqueStr values of 11 characters and of 15 characters.
The WHERE clause of the query is written in such a way that the SELECT statement will never return records with a UniqueStr longer than 11 characters.
The first weird scenario is that even though the SELECT statement returns nothing (when run separately), the truncation error occurs when it is run along with the INSERT (i.e. INSERT...SELECT).
The second weird scenario is that it gives the error only in the Production environment; it gave no error in UAT. Even in Production, it ran fine for a day.
Can anyone tell me what the issue could be?
Note: I fixed this error using the SUBSTRING function, but I could not work out why SQL Server gives it in the first place.
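The SUBSTRING fix mentioned above can be written directly into the INSERT — a sketch, keeping the names and the elided WHERE clause from the question:

```sql
-- Truncate explicitly so the statement cannot fail, regardless of the
-- order in which the optimizer evaluates the WHERE clause.
INSERT INTO Table1 (UniqueStr, Col1, Col2)
SELECT SUBSTRING(UniqueStr, 1, 11), Col1, Col2
FROM Table2
WHERE ...  -- original filter unchanged
```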
I have a query like :
select * from table where varchar_column=Numeric_value
That is fine until I run an insert script. After the new data is inserted, I must use this query:
select * from table where varchar_column='Numeric_value'
Can inserting a certain kind of data cause the column to no longer implicitly convert?
After the insert script, the error is: Data conversion fails. OLEDB Status = 2.
And the second query does work.
I'm not certain of this... the first query may be doing an implicit conversion of varchar_column to a numeric value, not the other way around. But once you insert values into that column that are no longer convertible, it fails. With the second query, you're doing a varchar-to-varchar comparison and all is right again with the world. My guess.
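If the server is SQL Server 2012 or later, TRY_CONVERT can locate the rows that break the implicit conversion — a sketch using the placeholder names from the question:

```sql
-- Rows whose varchar_column can no longer be converted to a number;
-- these are presumably the rows the insert script introduced.
SELECT *
FROM [table]
WHERE varchar_column IS NOT NULL
  AND TRY_CONVERT(numeric(18, 0), varchar_column) IS NULL;
```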
Background:
Previously, my company was using a user-defined function to HTML-encode some data in the WHERE clause of a stored procedure. Example below:
DECLARE @LName --HTML encoded last name as input parameter from user
SELECT *
FROM (SELECT LName
      FROM SomeView xtra
      WHERE ((@LName <> ''
              AND dbo.EncodingFunction(dbo.DecodingFunction(xtra.LName)) = @LName)
            OR @LName = '')) AS sub
I simplified this for clarity's sake.
The problem is, when the stored procedure with this query was called 45 times in quick succession, the average performance on a table with 62,000 records was about 85 seconds. When I removed the UDF, the performance improved to just over 1 second to run the sproc 45 times.
So, we consulted and decided on a solution that included a computed column in the table accessed by the view, SomeView. The computed column was written into the table definition like this:
[LNameComputedColumn] AS (dbo.EncodingFunction(dbo.DecodingFunction([LName])))
I then ran a process that updated the table and automatically populated that computed column for all 62,000 records. Then I changed the stored procedure query to the following:
DECLARE @LName --HTML encoded last name as input parameter from user
SELECT * FROM
    (SELECT LNameComputedColumn
     FROM SomeView xtra
     WHERE ((@LName <> '' AND xtra.LNameComputedColumn = @LName) OR @LName = '')) AS sub
When I ran that stored procedure, the average run time for 45 executions increased to about 90 seconds. My change actually made the problem worse!
What am I doing wrong? Is there a way to improve the performance?
As a side note, we are currently using SQL Server 2000 and are planning to upgrade to 2008 R2 very soon, but all code must work in SQL Server 2000.
Adding a computed column creates a virtual column, still computed at runtime for every row selected. What you want is a persisted computed column, which is computed when the row is inserted or updated and stored physically in the table:
[LNameComputedColumn]
AS (dbo.EncodingFunction(dbo.DecodingFunction([LName]))) PERSISTED
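Note that PERSISTED computed columns require SQL Server 2005 or later (and the UDFs must be deterministic and created WITH SCHEMABINDING), so this won't run on SQL Server 2000 itself. Assuming the underlying table is named SomeTable, the change could be applied like this:

```sql
-- Drop the runtime-computed column and re-add it as PERSISTED
-- (SQL Server 2005+; "SomeTable" is a placeholder for the real name).
ALTER TABLE SomeTable DROP COLUMN LNameComputedColumn;
ALTER TABLE SomeTable
    ADD LNameComputedColumn
        AS (dbo.EncodingFunction(dbo.DecodingFunction([LName]))) PERSISTED;
```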
Q: MS SQL Computed Column is slowing down performance...
A: Horse hockey ;)
... where @LName <> '' ...
Q: Can you say "full table scan"?
I'm not saying your function isn't expensive. But you've really got to make a more selective "where" clause before you point fingers...
IMHO...
SUGGESTION:
Query the data (get all relevant "Lname's" first)
Run your function on the result (only the selected "Lnames" - which, I presume, aren't every row in the entire view or table)
Do both operations (query-with-filter, then post-process) in your stored procedure
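The steps above could be sketched like this, assuming some selective, index-friendly filter exists (the @LNamePrefix filter here is hypothetical):

```sql
-- Step 1: narrow the candidate rows with a sargable filter first.
SELECT LName
INTO #Candidates
FROM SomeView
WHERE LName LIKE @LNamePrefix + '%';  -- hypothetical selective filter

-- Step 2: apply the expensive UDF only to the small result set.
SELECT LName
FROM #Candidates
WHERE dbo.EncodingFunction(dbo.DecodingFunction(LName)) = @LName;
```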