Understanding SQL Server Int Data Types

I have a table with a column called PtObjId of the INT data type.
From my understanding, based on the Microsoft Documentation, an INT column can only store values in the range of +/- 2,147,483,647.
If I run this query:
Select top 100 *
from [table]
where [PtObjId] IN (44237141916)
Shouldn't it error out?
Why does the query below error out:
select top 100 *
from [table]
where [PtObjID] IN ('44237141916')
but the top query doesn't error out?

This SQLShack article explains the details of implicit conversions in SQL Server.
One of the two values must be implicitly cast for the comparison. Because the integer literal 44237141916 is larger than int can hold, SQL Server types it as decimal, which has a higher precedence than int, so the int column is cast to decimal and the comparison succeeds without overflow.
A full list of precedences (and a table of possible conversions) is given in the article; an extract:
10. float
11. real
12. decimal
13. money
14. smallmoney
15. bigint
16. int
17. smallint
18. tinyint
...
25. nvarchar (including nvarchar(max))
(lower number = higher precedence)
In the case of int and nvarchar (the quoted literal), the type with the higher precedence is int, so the string is converted to int, and that conversion overflows for 44237141916.
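A minimal sketch reproducing both behaviors (the table variable and sample value are illustrative, not from the original question):
DECLARE @t TABLE (PtObjId INT);
INSERT INTO @t VALUES (42);

-- The literal exceeds the int range, so SQL Server types it as decimal;
-- PtObjId is promoted to decimal and the comparison simply finds no match.
SELECT * FROM @t WHERE PtObjId IN (44237141916);

-- The quoted literal is a character type; int outranks it, so SQL Server
-- tries to convert the string to INT, and that conversion overflows.
SELECT * FROM @t WHERE PtObjId IN ('44237141916');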

Related

Redshift, casting of a decimal value is not rounding off

I have a Redshift table with a decimal(38,29) column, but the original data's integer part is at most 6 digits and its scale is 12, i.e. it would fit in decimal(18,12). The table, however, was created with the maximum precision and scale, so all the data in it has trailing 0s padding out the scale.
For Example:
12345.123456789112300000000000000000000
All the data in the table is like the above example.
Now I'm retrieving the data from the table using the below query.
select cast(column as decimal(30,6)) from table;
The output I'm getting is
12345.123456
But when I try the below query
select cast(12345.123456789112300000000000000000000 as decimal(30,6)) from table;
The output I'm getting is
12345.123457
I want to know why this is happening: when I cast the column in the table, it is not rounding off, it is just truncating. But when I cast the decimal literal itself, it is not truncating, it is rounding off.
I also want to know how to achieve the second query's result on the table itself.
So this comes down to: when is a cast not a cast? If I cast an integer to an int, it does nothing. Casting a varchar to a shorter varchar is nearly as simple, as long as the data fits. Casting a decimal to a lower-scale decimal is also a simple operation, as it is not changing the data type, just an attribute of it (the scale). What you want is for Redshift to implicitly ROUND the values when you make this conversion, and it does not. (I'll let the database philosophers debate whether this is a bug or not.)
Here's a simple example to highlight this:
drop table if exists goo;
create table goo (rownum int, num decimal(30,6));
-- three inserts of the same value, arriving via different source types
insert into goo select 1, 12345.123456789112300000000000000000000::text;
insert into goo select 2, 12345.123456789112300000000000000000000::decimal(38,29);
insert into goo select 3, 12345.123456789112300000000000000000000::double precision;
select rownum, num::text from goo;
In all 3 of these examples there is an implicit cast to the data type of the column 'num' in the table. However, you can see that what gets into the table differs. Lots of experiments can be set up like this. (Note that I'm casting the result to text to avoid any precision changes the client might apply when displaying the results.)
The answer in your case is to explicitly ROUND() the value.
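A sketch of that fix (the identifiers are placeholders for the actual table and column names):
-- round to the target scale first, then narrow the type
select cast(round(column_name, 6) as decimal(30,6)) from table_name;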

using decimal in where clause - Arithmetic overflow error converting nvarchar to data type numeric

I got a SQL Server error and am not sure how to fix it. I have a column 'NAME' in a view 'Products' with a type of nvarchar(30); the query is generated dynamically in code, so I cannot easily change it.
I got the 'Arithmetic overflow error converting nvarchar to data type numeric.' for the following query:
select * FROM Products WHERE NAME=12.0
however the following query works fine:
select * FROM Products WHERE NAME=112.0
I am quite confused by the error. I know I should put quotes around the number, but I just want to know why the second query works, and whether there is any setting that could make the first query work.
update: also
select * FROM Products WHERE NAME=cast('12.0' as decimal(4,2))
doesn't work, but
select * FROM Products WHERE NAME=cast('12.0' as decimal(5,2))
works. Any particular reason?
Many thanks!
SQL Server is trying to convert the values in your table to match the perceived data type of the value coded into your WHERE clause. If your data contains values with more digits (e.g., DECIMAL(5,2)) and you try to convert them to match a value with fewer (e.g., DECIMAL(3,1)), you will get an overflow.
Consider the following SQL, which will throw an error:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = 12.0
Now try this, which will work:
DECLARE @Products TABLE (NAME NVARCHAR(30))
INSERT INTO @Products VALUES ('123.45')
INSERT INTO @Products VALUES ('12.0')
SELECT *
FROM @Products
WHERE NAME = CAST(12.0 AS DECIMAL(5,2))
The difference between these is that SQL Server now accounts for cases where the table contains a number with a higher precision and/or scale than the one specified in the WHERE clause.
EDIT: further reading. Books Online states in the data type definition for DECIMAL and NUMERIC that:
In Transact-SQL statements, a constant with a decimal point is automatically converted into a numeric data value, using the minimum precision and scale necessary. For example, the constant 12.345 is converted into a numeric value with a precision of 5 and a scale of 3.
Therefore, when you issue a query with the constant 12.0, it is typed as NUMERIC(3,1), and SQL Server then tries to convert the NVARCHAR values to match.
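You can verify the type SQL Server assigns to the literal with SQL_VARIANT_PROPERTY (an illustrative check, not part of the original answer):
SELECT SQL_VARIANT_PROPERTY(12.0, 'BaseType')  AS BaseType,  -- numeric
       SQL_VARIANT_PROPERTY(12.0, 'Precision') AS Prec,      -- 3
       SQL_VARIANT_PROPERTY(12.0, 'Scale')     AS Scale;     -- 1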

SQL Server: data type "rank" in arithmetic operations?

When two values with different data types are combined in an arithmetic operation, SQL Server automatically converts the values into a certain data type. E.g.
DECLARE @d NUMERIC(9,6);
SET @d = 1.0;
SELECT @d/3;
GO
The result is 0.33333333. What is the internal logic behind this conversion? Is there some "rank" between data types (determining in which "direction" the conversion happens)?
See: Data Type Precedence (for SQL-Server 2000) at msdn.microsoft.com
From the same page for SQL-Server 2008:
When an operator combines two expressions of different data types, the rules for data type precedence specify that the data type with the lower precedence is converted to the data type with the higher precedence. If the conversion is not a supported implicit conversion, an error is returned. When both operand expressions have the same data type, the result of the operation has that data type.
SQL Server uses the following precedence order for data types:
user-defined data types (highest)
sql_variant
xml
datetimeoffset
datetime2
datetime
smalldatetime
date
time
float
real
decimal
money
smallmoney
bigint
int
smallint
tinyint
bit
ntext
text
image
timestamp
uniqueidentifier
nvarchar (including nvarchar(max) )
nchar
varchar (including varchar(max) )
char
varbinary (including varbinary(max) )
binary (lowest)
For various details regarding when both operands are char, varchar, binary, or varbinary expressions and they are concatenated or compared and when they are both decimals with different precision or scale, see: Precision, Scale, and Length
The following SO question/answer is also relevant: sql-server-truncates-decimal-points-of-a-newly-created-field-in-a-view
Since both the numerator and the denominator are integers, the result will be an integer. You need to convert one of the two to a decimal value:
Select @i / 3.0
Data Type Conversion (Database Engine)
CAST and CONVERT (Transact-SQL)
Specifically, note the sections on implicit data type conversion. In your example, since all the types involved are the same, there is no conversion. Although the CAST/CONVERT article is about those functions, it also outlines the implicit conversions.
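A small sketch of the precedence rule in action (the variable name is illustrative):
DECLARE @i INT = 1;
SELECT @i / 3;      -- int / int stays int: integer division returns 0
SELECT @i / 3.0;    -- the numeric literal 3.0 outranks int, so @i is promoted: 0.333333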

TSQL Arithmetic overflow using BIGINT

Can someone clarify for me why I get an error when I try to set the variable @a in the example below?
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*31
/*
ERROR:
Msg 8115, Level 16, State 2, Line 1
Arithmetic overflow error converting expression to data type int.
*/
What I could figure out so far is that, internally, SQL Server starts doing the math by evaluating the multiplication and placing the intermediate result into an INT, and only then casts it to BIGINT.
However, if I add a 1.0 * to my list of numbers, there is no error, hence I believe that in this case SQL Server uses float for the intermediate result, then casts it to BIGINT:
DECLARE @b BIGINT
SET @b = 1.0 * 7*11*13*17*19*23*29*31
/*
NO ERROR
*/
Frankly, I don't see anything wrong with the code... it's so simple...
[ I am using SQL 2008 ]
[EDIT]
Thanks Nathan for the link.
That's good information I didn't know about, but I still don't understand why I get the error and why I have to do "tricks" to get a simple script like this working.
Is it something that I should know how to deal with as a programmer?
Or is this a bug? If so, I will consider this question closed.
When you're doing calculations like this, the individual numbers are stored in a type just large enough to hold each of them, e.g. numeric(1,0) for 7. Check this out:
Caution
When you use the +, -, *, /, or % arithmetic operators to perform implicit or explicit conversion of int, smallint, tinyint, or bigint constant values to the float, real, decimal or numeric data types, the rules that SQL Server applies when it calculates the data type and precision of the expression results differ depending on whether the query is autoparameterized or not. Therefore, similar expressions in queries can sometimes produce different results. When a query is not autoparameterized, the constant value is first converted to numeric, whose precision is just large enough to hold the value of the constant, before converting to the specified data type. For example, the constant value 1 is converted to numeric(1, 0), and the constant value 250 is converted to numeric(3, 0).
When a query is autoparameterized, the constant value is always converted to numeric(10, 0) before converting to the final data type. When the / operator is involved, not only can the result type's precision differ among similar queries, but the result value can differ also. For example, the result value of an autoparameterized query that includes the expression SELECT CAST (1.0 / 7 AS float) will differ from the result value of the same query that is not autoparameterized, because the results of the autoparameterized query will be truncated to fit into the numeric(10, 0) data type. For more information about parameterized queries, see Simple Parameterization.
http://msdn.microsoft.com/en-us/library/ms187745.aspx
Edit
This isn't a bug in SQL Server. From that same page, it states:
The int data type is the primary integer data type in SQL Server.
and
SQL Server does not automatically promote other integer data types (tinyint, smallint, and int) to bigint.
This is defined behavior. As a programmer, if you have reason to believe that your data will overflow the data type, you need to take precautions to avoid that situation. In this case, simply converting one of those numbers to a BIGINT will solve the problem.
DECLARE @a BIGINT
SET @a = 7*11*13*17*19*23*29*CONVERT(BIGINT, 31)
In the first example, SQL Server multiplies a list of INTs together, discovers the result is too big to fit in an INT, and raises the error. In the second example, it notices the non-integer constant 1.0 (typed as numeric), so it converts the INTs and performs the multiplication in that type instead.
Similarly, you can do this:
DECLARE @a BIGINT,
        @b BIGINT
SET @b = 1
SET @a = @b*7*11*13*17*19*23*29*31
This works fine because it notices there's a BIGINT, so it converts all the INTs to BIGINTs and then does the multiplication.
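If you want to check the intermediate types yourself, SQL_VARIANT_PROPERTY reports the type of an expression (an illustrative check, not from the original answers):
SELECT SQL_VARIANT_PROPERTY(7 * 11, 'BaseType');                   -- int
SELECT SQL_VARIANT_PROPERTY(1.0 * 7 * 11, 'BaseType');             -- numeric
SELECT SQL_VARIANT_PROPERTY(CONVERT(BIGINT, 7) * 11, 'BaseType');  -- bigint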

SQL Server Inserting Decimal, but selecting Int

I have a table with two decimal(18,0) fields.
I am inserting two decimal values into this table, for example 1.11.
When I select from the table (with no casts), I get 1.
I'm losing all precision and I have no clue why.
insert into TEST values (153, 'test', 'test', 1, 1, 1.11, 1.11)
Select * from TEST returns 1 and 1 instead of 1.11 and 1.11.
Any Ideas?
When you declare a field as decimal(18,0), you are saying that you want 0 digits after the decimal point (a scale of 0). You're going to want to define those columns as decimal(18,2) (or however many digits of precision you desire) in order to maintain a value of 1.11.
Refer to the MSDN page on decimal and numeric types for the grisly details.
Define the scale on DECIMAL every time; otherwise it stores only integer values, not decimal values.
Try changing to type decimal(9,2)
Maybe try creating the columns as
decimal(18,2)
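A minimal illustration of the difference (the table variable is hypothetical):
DECLARE @t TABLE (a DECIMAL(18,0), b DECIMAL(18,2));
INSERT INTO @t VALUES (1.11, 1.11);   -- 1.11 is rounded to 1 to fit scale 0 in column a
SELECT * FROM @t;                     -- returns a = 1, b = 1.11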