TSQL Arithmetic overflow using BIGINT - sql

Can someone clarify why I get an error when I try to set the variable @a in the example below?
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*31
/*
ERROR:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
*/
What I could figure out so far is that, internally, SQL Server evaluates the multiplication using INT as the intermediate type, and only then casts the result to BIGINT.
However, if I prepend a 1.0 * to my list of numbers, there is no error, so I believe that in this case SQL Server uses a float (or numeric) intermediate result, then casts it to BIGINT:
DECLARE @b BIGINT
SET @b = 1.0 * 7*11*13*17*19*23*29*31
/*
NO ERROR
*/
Frankly, I don't see anything wrong with the code... it's so simple...
[ I am using SQL 2008 ]
[EDIT]
Thanks Nathan for the link.
That's good information I didn't know about, but I still don't understand why I get the error and why I have to resort to "tricks" to get a simple script like this working.
Is it something that I should know how to deal with as a programmer?
Or is this a bug? If so, I will consider this question closed.

When you're doing calculations like this, each individual constant is typed just large enough to hold its value, e.g. numeric(1,0). Check this out:
Caution
When you use the +, -, *, /, or % arithmetic operators to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, the rules that SQL Server applies when it calculates the data type and precision of the expression results differ depending on whether the query is autoparameterized or not. Therefore, similar expressions in queries can sometimes produce different results.
When a query is not autoparameterized, the constant value is first converted to numeric, whose precision is just large enough to hold the value of the constant, before converting to the specified data type. For example, the constant value 1 is converted to numeric(1, 0), and the constant value 250 is converted to numeric(3, 0).
When a query is autoparameterized, the constant value is always converted to numeric(10, 0) before converting to the final data type. When the / operator is involved, not only can the result type's precision differ among similar queries, but the result value can differ also. For example, the result value of an autoparameterized query that includes the expression SELECT CAST (1.0 / 7 AS float) will differ from the result value of the same query that is not autoparameterized, because the results of the autoparameterized query will be truncated to fit into the numeric(10, 0) data type. For more information about parameterized queries, see Simple Parameterization.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
Edit
This isn't a bug in SQL Server. From that same page, it states:
The int data type is the primary integer data type in SQL Server.
and
SQL Server does not automatically promote other integer data types (tinyint, smallint, and int) to bigint.
This is defined behavior. As a programmer, if you have reason to believe that your data will overflow the data type, you need to take precautions to avoid that situation. In this case, simply converting one of those numbers to a BIGINT will solve the problem.
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*CONVERT(BIGINT, 31)
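If you want to see for yourself which type SQL Server infers for an expression, SQL_VARIANT_PROPERTY can report it. A quick diagnostic sketch (column aliases are mine):

```sql
-- Inspect the type SQL Server infers for each expression.
-- 7*11 stays within the INT range, so no overflow occurs here.
SELECT SQL_VARIANT_PROPERTY(7*11, 'BaseType')                  AS int_product,
       SQL_VARIANT_PROPERTY(1.0*7, 'BaseType')                 AS numeric_product,
       SQL_VARIANT_PROPERTY(7*CONVERT(BIGINT, 11), 'BaseType') AS bigint_product;
```

On my understanding, the first expression comes back as int, the second as numeric, and the third as bigint, which matches the behavior described above.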

In the first example SQL Server multiplies a list of INTs together, discovers the result is too big for an INT, and raises the error. In the second example, it notices the 1.0 (a numeric constant, strictly speaking, rather than a float), so it converts all the INTs to that type first and then does the multiplication.
Similarly, you can do this:
DECLARE @a BIGINT,
        @b BIGINT
SET @b = 1
SET @a = @b*7*11*13*17*19*23*29*31
This works fine because it notices there's a BIGINT, so it converts all the INTs to BIGINTs and then does the multiplication.

Related

Msg 245, Level 16, State 1, Line 4 Conversion failed when converting the nvarchar value '239.6' to data type int

I have this query:
SELECT SerialNumber
FROM [ETEL-PRDSQL].[ERP10DBLIVE].[ERP].[SerialNo]
WHERE CustNum IN (2);
It's causing this error:
Msg 245, Level 16, State 1, Line 4
Conversion failed when converting the nvarchar value '239.6' to data type int.
The query works if I compare CustNum with a different value, but it fails when I try CustNum IN (2).
How can I fix this?
You have an nvarchar column named CustNum. The values in this column may contain only digits, but that doesn't make them numbers! You then compare this text column with the integer value 2. Again, the integer value 2 is not the same as the text value '2'. It's also not the same as the floating point value 2.0. These are all different; they have different types, and SQL Server must resolve any such differences before it can compare values.
Based on type precedence rules SQL Server determines it needs to convert the text in the column to the integer, instead of vice versa. Once this determination is made for the query, if you have any data in the text column that is not integer-compatible, the query is going to fail.
It's important to understand this conversion happens separately from the conditional check in the WHERE clause, and is a prerequisite for that check. It's not enough to expect the WHERE condition to evaluate to FALSE for rows that do not convert. This is true even if you don't need the row, because SQL Server can't know you don't need that row until after it attempts the conversion!
In this case, we have the value '239.6'. This value may be numeric, but it is not an integer, nor is it convertible to integer. Therefore the query fails.
In addition to (eventually!) failing the query, this is absolutely awful for performance. SQL Server has to do this conversion for every row in the table... even rows you don't need. This is because SQL Server doesn't know which rows will match the WHERE clause until after it checks the conditional expression, and it needs to do this conversion in order to make that check. Worse still, the new converted value no longer matches your indexes, so any indexes you might have become worthless for this query. That cuts to the core of database performance.
If you don't like it, define your data types better, or try comparing the string with another string:
SELECT SerialNumber
FROM [ETEL-PRDSQL].[ERP10DBLIVE].[ERP].[SerialNo]
WHERE CustNum IN ('2');
The query might also run if you did this:
SELECT SerialNumber
FROM [ETEL-PRDSQL].[ERP10DBLIVE].[ERP].[SerialNo]
WHERE CustNum IN (2.0);
Now the type precedence rules will convert your text to a numeric type instead, and it's possible that will succeed if the rest of the values in the table are compatible. It's also possible this is closer to what you intend... but again, the performance here will be much worse.
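Another option, if you're on SQL Server 2012 or later (an assumption, since the version isn't stated), is TRY_CAST, which yields NULL instead of raising an error for values that don't convert:

```sql
-- TRY_CAST returns NULL (rather than raising Msg 245) for rows
-- whose CustNum text cannot be converted to int, so those rows
-- simply fail the WHERE condition instead of aborting the query.
SELECT SerialNumber
FROM [ETEL-PRDSQL].[ERP10DBLIVE].[ERP].[SerialNo]
WHERE TRY_CAST(CustNum AS int) = 2;
```

Note this still converts every row and defeats your indexes, so the string-to-string comparison above remains the better fix.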

Redshift, casting of a decimal value is not rounding off

I have a Redshift table which has a decimal column of (38, 29), but the original data's maximum integer part is 6 digits and its scale is 12, i.e. DECIMAL(18,12). The table, however, was created with the maximum precision and scale, so all the data in it has 0's at the end of the scale part as padding.
For Example:
12345.123456789112300000000000000000000
All the data in the table is like the above example.
Now I'm retrieving the data from the table using the below query.
select cast(column as decimal(30,6)) from table;
The output I'm getting is
12345.123456
But when I try the below query
select cast(12345.123456789112300000000000000000000 as decimal(30,6)) from table;
The output I'm getting is
12345.123457
I want to know why this is happening: when I cast the column in the table, it is not rounding to the nearest value, it is just truncating.
But when I cast the literal itself, it rounds.
I also want to know how to achieve the second query's result on the table itself.
So this comes down to: when is a cast not a cast? If I cast an integer to an int it does nothing. Casting a varchar to a shorter varchar is nearly as simple, as long as the data fits. Casting a decimal to a lower-scale decimal is also a simple operation, as it is not changing the data type, just an attribute of it (scale). What you desire is that Redshift implicitly ROUNDS the values when you make this conversion, and it does not. (I'll let the database philosophers debate whether this is a bug or not.)
Here's a simple example to highlight this:
drop table if exists goo;
create table goo (rownum int, num decimal(30,6));
insert into goo select 1, 12345.123456789112300000000000000000000::text;
insert into goo select 2, 12345.123456789112300000000000000000000::decimal(38,29);
insert into goo select 3, 12345.123456789112300000000000000000000::double precision;
select rownum, num::text from goo;
In all 3 of these examples there is an implicit cast to the data type of the column 'num' in the table. However, you can see that what ends up in the table differs. Lots of experiments can be set up like this. (Note that I'm casting the result to text to avoid any further precision changes on output.)
The answer in your case is to explicitly ROUND() the value.
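A sketch of that fix, keeping the placeholder names column and table from the question (Redshift's ROUND accepts an explicit scale):

```sql
-- Round to 6 decimal places first, then cast; the cast now only
-- changes the declared scale instead of truncating extra digits.
SELECT CAST(ROUND(column, 6) AS DECIMAL(30,6)) FROM table;
```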

Understanding SQL Server Int Data Types

I have a table with a column called PtObjId of the INT data type.
From my understanding, looking at the Microsoft documentation here, I can only store values up to +/- 2,147,483,647.
If I run this query:
Select top 100 *
from [table]
where [PtObjId] IN (44237141916)
Shouldn't it error out?
Why does this query error out below:
select top 100 *
from [table]
where [PtObjID] IN ('44237141916')
but the top query doesn't error out?
This SQLShack article explains the details of implicit conversions in SQL Server.
One value must implicitly be cast for the comparison. The literal 44237141916 is treated as decimal, which has a higher precedence than int, so the other operand is cast to decimal.
A full list of precedences (and a table of possible conversions) are given in the article, an extract:
10. float
11. real
12. decimal
13. money
14. smallmoney
15. bigint
16. int
17. smallint
18. tinyint
...
25. nvarchar (including nvarchar(max))
(lower number = higher precedence)
In the case of int and nvarchar, the one with higher precedence is int, and this leads to an overflow for 44237141916.
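You can check how SQL Server types that literal yourself; a small sketch using SQL_VARIANT_PROPERTY (the constant exceeds the int range, so per the documentation it should be typed as numeric rather than bigint):

```sql
-- Integer constants larger than 2,147,483,647 are typed as numeric,
-- not bigint. That is why comparing against the int column succeeds
-- (the int side is promoted to numeric), while the quoted nvarchar
-- form is instead converted toward int and overflows.
SELECT SQL_VARIANT_PROPERTY(44237141916, 'BaseType') AS literal_type;
```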

using decimal in where clause - Arithmetic overflow error converting nvarchar to data type numeric

I got a SQL Server error and am not sure how to fix it. I have a column 'NAME' in a view 'Products' with a type of nvarchar(30); the query is generated dynamically in code, so I can't really change it.
I got the 'Arithmetic overflow error converting nvarchar to data type numeric.' for the following query:
select * FROM Products WHERE NAME=12.0
however the following query works fine:
select * FROM Products WHERE NAME=112.0
I am quite confused by the error. I know I should put quotes around the number, but I just want to know why the second query works, and is there any setting that could make the first query work?
update: also
select * FROM Products WHERE NAME=cast('12.0' as decimal(4,2))
doesn't work, but
select * FROM Products WHERE NAME=cast('12.0' as decimal(5,2))
works, any particular reasons?
Many thanks!
SQL Server is trying to convert the values in your table to match the perceived data type of the value coded into your WHERE clause. If you have data values with more numbers (e.g., DECIMAL(5,2)) and you try to convert them to match a value with fewer (e.g., DECIMAL(3,1)), then you will have an overflow.
Consider the following SQL, which will throw an error:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = 12.0
Now try this, which will work:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = CAST(12.0 AS DECIMAL(5,2))
The difference between these is that SQL Server now accounts for cases where the table contains a number with a higher precision and/or scale than the one specified in the WHERE clause.
EDIT: further reading. Books Online states in the data type definition for DECIMAL and NUMERIC that:
In Transact-SQL statements, a constant with a decimal point is
automatically converted into a numeric data value, using the minimum
precision and scale necessary. For example, the constant 12.345 is
converted into a numeric value with a precision of 5 and a scale of 3.
Therefore, when you issue a query with the constant 12.0, it is converted to the data type NUMERIC(3,1), and SQL Server then tries to convert the NVARCHAR values to match.
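You can confirm the inferred precision and scale of such a constant directly; a small sketch using SQL_VARIANT_PROPERTY, which reports type metadata:

```sql
-- 12.0 is typed numeric(3,1): precision 3, scale 1.
-- '123.45' cannot fit in numeric(3,1), hence the overflow error.
SELECT SQL_VARIANT_PROPERTY(12.0, 'Precision') AS prec,
       SQL_VARIANT_PROPERTY(12.0, 'Scale')     AS scale;
```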

Error unable to convert data type nvarchar to float

I have searched both this great forum and googled around but unable to resolve this.
We have two tables (and trust me I have nothing to do with these tables). Both tables have a column called eventId.
However, in one table, data type for eventId is float and in the other table, it is nvarchar.
We are selecting from table1, where eventId is defined as float, and saving that Id into table2, where eventId is defined as nvarchar(50).
As a result of the discrepancy in data types, we are getting "error converting data type nvarchar to float".
Without fooling around with the database, I would like to cast the eventId to get rid of this error.
Any ideas what I am doing wrong with the code below?
SELECT
CAST(CAST(a.event_id AS NVARCHAR(50)) AS FLOAT) event_id_vre,
The problem is most likely because some of the rows have event_id that is empty. There are two ways to go about solving this:
Convert your float to nvarchar, rather than the other way around - This conversion will always succeed. The only problem here is if the textual representations differ - say, the table with float-as-nvarchar uses fewer decimal digits, or
Add a condition to check for empty IDs before the conversion - This may not work if some of the event IDs are non-empty strings, but they are not float-convertible either (e.g. there's a word in the field instead of a number).
The second solution would look like this:
SELECT
case when a.event_id <> ''
then cast(cast(a.event_id as nvarchar(50)) as float)
ELSE 0.0
END AS event_id_vre,
Convert float to nvarchar instead of nvarchar to float. Of course!
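The first solution, going the other direction, could look something like this (assuming the table1 alias a from the question); CONVERT to nvarchar always succeeds, though the float-to-string formatting may differ from the text already stored in table2, and STR() gives more control over decimal places if that matters:

```sql
-- Convert the float id to text rather than the text id to float;
-- this direction cannot fail, it can only format differently.
SELECT CONVERT(NVARCHAR(50), a.event_id) AS event_id_vre
FROM table1 a;
```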