using decimal in where clause - Arithmetic overflow error converting nvarchar to data type numeric - sql

I got a SQL Server error and am not sure how to fix it. I have a column 'NAME' in a view 'Products' with a type of nvarchar(30); the query is generated dynamically in code, so I cannot easily change it.
I get 'Arithmetic overflow error converting nvarchar to data type numeric.' for the following query:
select * FROM Products WHERE NAME=12.0
however the following query works fine:
select * FROM Products WHERE NAME=112.0
I am quite confused by the error. I know I should put quotes around the number, but I just want to know why the second query works, and whether there is any setting that could make the first query work.
Update: also
select * FROM Products WHERE NAME=cast('12.0' as decimal(4,2))
doesn't work, but
select * FROM Products WHERE NAME=cast('12.0' as decimal(5,2))
works. Any particular reason?
Many thanks!

SQL Server is trying to convert the values in your table to match the perceived data type of the value coded into your WHERE clause. If you have data values with more digits (e.g., DECIMAL(5,2)) and you try to convert them to match a value with fewer (e.g., DECIMAL(3,1)), you will get an overflow.
Consider the following SQL, which will throw an error:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = 12.0
Now try this, which will work:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = CAST(12.0 AS DECIMAL(5,2))
The difference between these is that SQL Server now accounts for cases where the table contains a number with a higher precision and/or scale than the one specified in the WHERE clause.
EDIT: further reading. Books Online states in the data type definition for DECIMAL and NUMERIC that:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
Therefore, when you issue a query with the constant 12.0, it is converted to the data type NUMERIC(3,1), and SQL Server then tries to convert the NVARCHAR values to match.
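You can see the type SQL Server infers for a literal by using SQL_VARIANT_PROPERTY (a quick illustration added here, not part of the original answer):
-- Reports the base type, precision and scale inferred for the literal 12.0
SELECT SQL_VARIANT_PROPERTY(12.0, 'BaseType')  AS BaseType,  -- numeric
       SQL_VARIANT_PROPERTY(12.0, 'Precision') AS Prec,      -- 3
       SQL_VARIANT_PROPERTY(12.0, 'Scale')     AS Scale      -- 1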

Related

Redshift, casting of a decimal value is not rounding off

I have a Redshift table which has a decimal column of (38,29), but the original data's maximum integer part is 6 and its scale is 12, i.e. DECIMAL(18,12). The table, however, was created with the maximum precision and scale, so all the data in it has 0's at the end of the scale part as padding.
For Example:
12345.123456789112300000000000000000000
All the data in the table is like the above example.
Now I'm retrieving the data from the table using the below query.
select cast(column as decimal(30,6)) from table;
The output I'm getting is
12345.123456
But when I try the below query
select cast(12345.123456789112300000000000000000000 as decimal(30,6)) from table;
The output I'm getting is
12345.123457
I want to know why this is happening: when I cast the column in the table, it is not rounding off, it is just truncating.
But when I try it with the decimal literal itself, it is not truncating; it is rounding off.
I also want to know how to achieve the second query's result on the table itself.
So this comes down to when a cast is not a cast. If I cast an integer to an int, it does nothing. Casting a varchar to a shorter varchar is nearly as simple, as long as the data fits. Casting a decimal to a lower-scale decimal is also a simplistic operation, as it is not changing the data type, just some attribute of it (scale). What you desire is that Redshift implicitly ROUNDS the values when you make this conversion, and it does not. (I'll let the database philosophers debate whether this is a bug or not.)
Here's a simple example to highlight this:
drop table if exists goo;
create table goo (rownum int, num decimal(30,6));
insert into goo select 1, 12345.123456789112300000000000000000000::text;
insert into goo select 2, 12345.123456789112300000000000000000000::decimal(38,29);
insert into goo select 3, 12345.123456789112300000000000000000000::double precision;
select rownum, num::text from goo;
In all 3 of these examples there is an implicit cast to the data type of the column 'num' in the table. However, you can see that what gets into the table is different. Lots of experiments can be set up like this. (Note that I'm casting the result to text to avoid any further precision changes.)
The answer in your case is to explicitly ROUND() the value.
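For example, keeping the question's placeholder column and table names (Redshift's ROUND() takes the value and the target number of decimal places):
-- Round to 6 decimal places first, then cast; the cast alone would truncate
select cast(round(column, 6) as decimal(30,6)) from table;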

How to cast varchar value to decimal?

I have a column that is currently set as nvarchar(max) and I want to convert the column to decimal(38,2). The problem I am running into is that a few of the millions of rows are in the format -1.943E12, and because the E is present I get an error ('Cannot convert nvarchar into numeric') when I try to cast the column to a decimal. Is there any way to do this?
CREATE TABLE practice (cost nvarchar(max))
INSERT INTO practice values ('123'),('44232.99'),('43.4432'),('1.943E12')
SELECT CAST(cost as decimal(32,2)) FROM practice
What I'm ultimately trying to do is insert data from a staging table (where all columns are nvarchar(max)) into a table with appropriate datatypes. I kept getting an error, so after digging through each of the 40 columns, I found 5 columns where this scientific notation occurs. Any advice on how to do this at scale without having to check each column?
INSERT INTO practice_corrected_datatypes
SELECT * FROM practice
Numeric cannot convert from exponential format, but float can. Therefore you could do this by going via float. For example:
print cast(cast('1.943E12' as float) as decimal(38,2))
Assuming you have try_parse() and try_cast() (available in SQL Server 2012 and later):
SELECT try_cast(try_parse(cost as float) as decimal(32,2))
FROM practice
The advantage is that they return NULL instead of failing. You could probably get by with just the inner try_parse():
SELECT cast(try_parse(cost as float) as decimal(32,2))
FROM practice
Try this...
update p
set cost = cast(cast(cost as float) as decimal(32,2))
from practice p
where cost like '%E%'
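Applied to the staging-table load from the question, the float-then-decimal conversion can go in the INSERT itself. A sketch assuming the target column is also named cost and typed decimal(38,2); repeat the same pattern for the other affected columns:
-- TRY_CAST returns NULL where parsing fails, so stray text loads as NULL
-- instead of aborting the whole insert (assumes the digits fit in float's precision)
INSERT INTO practice_corrected_datatypes (cost)
SELECT CAST(TRY_CAST(cost AS float) AS decimal(38,2))
FROM practice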

Converting varchar to numeric type in SQL Server

I have a column in a table with a varchar datatype. It has 15 digits after the decimal point. Now I am having a hard time converting it to a numeric format (float, double, etc.).
Does anyone have any suggestions?
Example:
Table1
Column1
-------------------
-28.851540616246499
-22.857142857142858
-26.923076923076923
76.19047619047619
I tried using the following statement and it doesn't seem to work:
update table1
set Column1 = Convert(float, column1)
Any suggestions?
You can use the decimal data type and specify the precision to state how many digits are after the decimal point. So you could use decimal(28,20) for example, which would hold 28 digits with 20 of them after the decimal point.
Here's a SQL Fiddle, showing your data in decimal format.
Fiddle sample:
create table Table1(MyValues varchar(100))
insert into Table1(MyValues)
values
('-28.851540616246499'),
('-22.857142857142858'),
('-26.923076923076923'),
('76.19047619047619')
So the values are held as varchar in this table, but you can cast them to decimal as long as they are all valid values, like so:
select cast(MyValues as decimal(28,20)) as DecimalValues
from table1
Your Sample
Looking at your sample update statement, you wouldn't be able to convert the values from varchar to a numeric type and insert them back into the same column, as the column is of type varchar. You would be better off adding a new column with a numeric data type and updating that.
So if you had 2 columns:
create table Table1(MyValues varchar(100), DecimalValues decimal(28,20))
You could do the below to update the numeric column with the varchar values that have been cast to decimal:
update Table1
set DecimalValues = cast(MyValues as decimal(28,20))
I think you're trying to actually change the data type of that column?
If that is the case you want to ALTER the table and change the column type over to float, like so:
alter table table1
alter column column1 float
See fiddle: http://sqlfiddle.com/#!6/637e6/1/0
You would use CONVERT if you're changing the text values to numbers for temporary use within a query (not to actually permanently change the data).
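For example, against the Table1 fiddle schema above (an illustrative query only; the stored varchar data is untouched):
-- Converts for the duration of the query; nothing is written back
SELECT CONVERT(decimal(28,20), MyValues) AS DecimalValues
FROM Table1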

TSQL Arithmetic overflow using BIGINT

Can someone clarify for me why I get an error when I try to set the variable @a in the example below?
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*31
/*
ERROR:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
*/
What I could figure out so far is that, internally, SQL Server starts doing the math by evaluating the multiplication and placing the temporary result into an INT, then casting it to a BIGINT.
However, if I add a 1.0 * to my list of numbers, there is no error; hence I believe that this time SQL Server uses float for the temporary result, then casts it to BIGINT:
DECLARE @b BIGINT
SET @b = 1.0 * 7*11*13*17*19*23*29*31
/*
NO ERROR
*/
Frankly, I don't see anything wrong with the code... it's so simple...
[ I am using SQL 2008 ]
[EDIT]
Thanks Nathan for the link.
That's good information I didn't know about, but I still don't understand why I get the error and why I have to do "tricks" to get a simple script like this working.
Is this something that I should know how to deal with as a programmer?
Or is this a bug? If so, I will consider this question closed.
When you're doing calculations like this, the individual numbers are stored in a type just large enough to hold the number, i.e. numeric(1,0). Check this out:
Caution
When you use the +, -, *, /, or % arithmetic operators to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, the rules that SQL Server applies when it calculates the data type and precision of the expression results differ depending on whether the query is autoparameterized or not. Therefore, similar expressions in queries can sometimes produce different results. When a query is not autoparameterized, the constant value is first converted to numeric, whose precision is just large enough to hold the value of the constant, before converting to the specified data type. For example, the constant value 1 is converted to numeric(1, 0), and the constant value 250 is converted to numeric(3, 0).
When a query is autoparameterized, the constant value is always converted to numeric(10, 0) before converting to the final data type. When the / operator is involved, not only can the result type's precision differ among similar queries, but the result value can differ also. For example, the result value of an autoparameterized query that includes the expression SELECT CAST (1.0 / 7 AS float) will differ from the result value of the same query that is not autoparameterized, because the results of the autoparameterized query will be truncated to fit into the numeric(10, 0) data type. For more information about parameterized queries, see Simple Parameterization.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
Edit
This isn't a bug in SQL Server. From that same page, it states:
The int data type is the primary integer data type in SQL Server.
and
SQL Server does not automatically promote other integer data types (tinyint, smallint, and int) to bigint.
This is defined behavior. As a programmer, if you have reason to believe that your data will overflow the data type, you need to take precautions to avoid that situation. In this case, simply converting one of those numbers to a BIGINT will solve the problem.
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*CONVERT(BIGINT, 31)
In the first example, SQL Server multiplies a list of INTs together, discovers the result is too big to be an INT, and generates the error. In the second example, it notices there's a float, so it converts all the INTs to floats first and then does the multiplication.
Similarly, you can do this:
DECLARE @a BIGINT,
        @b BIGINT
SET @b = 1
SET @a = @b*7*11*13*17*19*23*29*31
This works fine because it notices there's a BIGINT, so it converts all the INTs to BIGINTs and then does the multiplication.

Error converting data type varchar

I currently have a table with a column as varchar. This column can hold numbers or text. During certain queries I treat it as a bigint column (I do a join between it and a bigint column in another table).
As long as there were only numbers in this field I had no trouble, but the minute even one row had text instead of numbers in this field, I got an "Error converting data type varchar to bigint." error, even if in the WHERE part I made sure none of the text rows came up.
To solve this I created a view as follows:
SELECT TOP (100) PERCENT ID, CAST(MyCol AS bigint) AS MyCol
FROM MyTable
WHERE (isnumeric(MyCol) = 1)
But even though the view shows only the rows with numeric values and casts MyCol to bigint, I still get "Error converting data type varchar to bigint" when running the following query:
SELECT * FROM MyView where mycol=1
When doing queries against the view, it shouldn't know what is going on behind it! It should simply see two bigint fields! (See attached image; even SQL Server Management Studio shows the view's fields as bigint.)
OK. I finally created a view that works:
SELECT TOP (100) PERCENT id, CAST(CASE WHEN IsNumeric(MyCol) = 1 THEN MyCol ELSE NULL END AS bigint) AS MyCol
FROM dbo.MyTable
WHERE (MyCol NOT LIKE '%[^0-9]%')
Thanks to AdaTheDev and CodeByMoonlight. I used your two answers to get to this. (Thanks to the other repliers too of course)
Now when I do joins with other bigint columns, or do something like 'SELECT * FROM MyView where mycol=1', it returns the correct result with no errors. My guess is that the CAST in the query itself causes the query optimizer not to look at the original table, as Christian Hayter said may be going on with the other views.
Ideally, you want to try to avoid storing the data in this form - it would be worth splitting the BIGINT data out into a separate column for both performance and ease of querying.
However, you can do a JOIN like this example. Note, I'm not using ISNUMERIC() to determine if it's a valid BIGINT because that would validate incorrect values which would cause a conversion error (e.g. decimal numbers).
DECLARE @MyTable TABLE (MyCol VARCHAR(20))
DECLARE @OtherTable TABLE (Id BIGINT)
INSERT @MyTable VALUES ('1')
INSERT @MyTable VALUES ('Text')
INSERT @MyTable VALUES ('1 and some text')
INSERT @MyTable VALUES ('1.34')
INSERT @MyTable VALUES ('2')
INSERT @OtherTable VALUES (1)
INSERT @OtherTable VALUES (2)
INSERT @OtherTable VALUES (3)
SELECT *
FROM @MyTable m
JOIN @OtherTable o ON CAST(m.MyCol AS BIGINT) = o.Id
WHERE m.MyCol NOT LIKE '%[^0-9]%'
Update:
The only way I can find to get it to work with a WHERE clause for a specific integer value, without doing another CAST() on the supposedly bigint column in the WHERE clause too, is to use a user-defined function:
CREATE FUNCTION [dbo].[fnBigIntRecordsOnly]()
RETURNS @Results TABLE (BigIntCol BIGINT)
AS
BEGIN
INSERT @Results
SELECT CAST(MyCol AS BIGINT)
FROM MyTable
WHERE MyCol NOT LIKE '%[^0-9]%'
RETURN
END
SELECT * FROM [dbo].[fnBigIntRecordsOnly]() WHERE BigIntCol = 1
I don't really think this is a great idea performance-wise, but it's a solution.
To answer your question about the error message: when you reference a view name in another query (assuming it's a traditional view not a materialised view), SQL Server effectively does a macro replacement of the view definition into the consuming query and then executes that.
The advantage of doing this is that the query optimiser can do a much better job if it sees the whole query, rather than optimising the view separately as a "black box".
A consequence is that if an error occurs, error descriptions may look confusing because the execution engine is accessing the underlying tables for the data, not the view.
I'm not sure how materialised views are treated, but I would imagine that they are treated like tables, since the view data is cached in the database.
Having said that, I agree with previous answers - you should re-think your table design and separate out the text and integer data values into separate columns.
Try changing your view to this :
SELECT TOP 100 PERCENT ID,
Cast(Case When IsNumeric(MyCol) = 1 Then MyCol Else null End AS bigint) AS MyCol
FROM MyTable
WHERE (IsNumeric(MyCol) = 1)
Have you tried converting the other table's bigint field to varchar instead? To me it makes sense to perform the conversion in the more robust direction. It shouldn't affect your performance too much if the varchar field is indexed.
Consider creating a redundant bigint field to hold the integer value of MyCol.
You may then index the new field to speed up the join.
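A minimal sketch of that idea, assuming the MyTable schema from the question (the column and index names here are hypothetical):
-- A persisted computed column keeps the bigint copy in sync automatically;
-- the CASE guard yields NULL for non-numeric rows (and assumes digit-only values fit in bigint)
ALTER TABLE MyTable ADD MyColBigInt AS
    CASE WHEN MyCol NOT LIKE '%[^0-9]%' THEN CAST(MyCol AS bigint) END PERSISTED;
CREATE INDEX IX_MyTable_MyColBigInt ON MyTable (MyColBigInt);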
Try using this:
SELECT
ID,
CAST(MyCol AS bigint) as MyCol
FROM
(
SELECT TOP (100) PERCENT
ID,
MyCol
FROM
MyTable
WHERE
(isnumeric(MyCol) = 1)
) as tmp
This should work since the inner select only returns numeric values, and the outer select can therefore convert all values from the first select into a numeric. It seems that in your own code SQL Server tries to cast before executing the isnumeric function (maybe it has something to do with optimization).
Try doing the select in 2 stages, as sketched below.
First create a view that selects all rows where MyCol is numeric.
Then do a select on that view where you cast the varchar field.
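A minimal sketch of the two-stage idea, assuming the MyTable schema from the question (the view name NumericRowsOnly is hypothetical); as noted in the accepted answer, the optimizer may still expand the view, so the same caveats apply:
CREATE VIEW dbo.NumericRowsOnly AS
SELECT ID, MyCol
FROM dbo.MyTable
WHERE MyCol NOT LIKE '%[^0-9]%'
GO
-- Second stage: cast against the filtered view
SELECT ID, CAST(MyCol AS bigint) AS MyColAsBigInt
FROM dbo.NumericRowsOnly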
The other thing you could look at is your design of tables to remove the need for the cast.
EDIT
Are some of the numbers larger than bigint?
Are there any spaces, leading, trailing or in the number?
Are there any format characters? Decimal points?