SELECT TOP COALESCE and bigint - sql

Ignore the practicality of the following sql query
DECLARE @limit BIGINT
SELECT TOP (COALESCE(@limit, 9223372036854775807))
*
FROM
sometable
It warns that
The number of rows provided for a TOP or FETCH clauses row count parameter must be an integer.
Why doesn't it work but the following works?
SELECT TOP 9223372036854775807
*
FROM
sometable
And COALESCE(@limit, 9223372036854775807) is indeed 9223372036854775807 when @limit is null?
I know that changing COALESCE to ISNULL works but I want to know the reason.

https://technet.microsoft.com/en-us/library/aa223927%28v=sql.80%29.aspx
Specifying bigint Constants
Whole number constants that are outside the range supported by the int
data type continue to be interpreted as numeric, with a scale of 0 and
a precision sufficient to hold the value specified. For example, the
constant 3000000000 is interpreted as numeric. These numeric constants
are implicitly convertible to bigint and can be assigned to bigint
columns and variables:
DECLARE @limit bigint
SELECT SQL_VARIANT_PROPERTY(COALESCE(@limit, 9223372036854775807), 'BaseType')
SELECT SQL_VARIANT_PROPERTY(9223372036854775807, 'BaseType') BaseType
shows that 9223372036854775807 is numeric, so the return value of COALESCE is numeric. Whereas
DECLARE @limit bigint
SELECT SQL_VARIANT_PROPERTY(ISNULL(@limit, 9223372036854775807), 'BaseType')
gives bigint. The difference is that ISNULL returns the data type of the first expression, whereas COALESCE returns the data type of the argument with the highest precedence.
SELECT TOP (cast(COALESCE(@limit, 9223372036854775807) as bigint))
*
FROM
tbl
should work.
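As a quick check (a sketch reusing the SQL_VARIANT_PROPERTY trick above), the cast result now reports bigint, which is why TOP accepts it:
DECLARE @limit bigint
SELECT SQL_VARIANT_PROPERTY(CAST(COALESCE(@limit, 9223372036854775807) AS bigint), 'BaseType') -- bigint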

DECLARE
@x AS VARCHAR(3) = NULL,
@y AS VARCHAR(10) = '1234567890';
SELECT
COALESCE(@x, @y) AS COALESCExy, COALESCE(@y, @x) AS COALESCEyx,
ISNULL(@x, @y) AS ISNULLxy, ISNULL(@y, @x) AS ISNULLyx;
Output:
COALESCExy COALESCEyx ISNULLxy ISNULLyx
---------- ---------- -------- ----------
1234567890 1234567890 123 1234567890
Notice that with COALESCE, regardless of which input is specified first, the type of the output is VARCHAR(10), the one with the higher precedence. However, with ISNULL, the type of the output is determined by the first input. So when the first input is of a VARCHAR(3) data type (the expression aliased as ISNULLxy), the output is VARCHAR(3). As a result, the returned value that originated in the input @y is truncated after three characters. In short, ISNULL does not change the type, but COALESCE can.
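If ISNULL has to be used and truncation is a concern, one option (a sketch, run in the same batch as the DECLARE above) is to cast the first argument to the wider type up front:
SELECT ISNULL(CAST(@x AS VARCHAR(10)), @y) AS ISNULLxy_widened; -- 1234567890, no truncation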

It turns out that 9223372036854775807 is interpreted as numeric, not bigint.
From https://technet.microsoft.com/en-us/library/aa223927(v=sql.80).aspx
Whole number constants that are outside the range supported by the int data type continue to be interpreted as numeric, with a scale of 0 and a precision sufficient to hold the value specified
So we need to cast it to bigint explicitly:
DECLARE @limit BIGINT
SELECT TOP (COALESCE(@limit, CAST(9223372036854775807 AS BIGINT)))
*
FROM
sometable
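Equivalently, since ISNULL takes the data type of its first argument (bigint here), the query also works unchanged with ISNULL, as the question already noted:
DECLARE @limit BIGINT
SELECT TOP (ISNULL(@limit, 9223372036854775807))
*
FROM
sometable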

Related

How can a varchar containing an integer work in calculations?

How come a string can contain an integer? Even if I assume the string stores the numeric value as text, I can still use it in a calculation and get a result. Just to try, I wrote 5 in inverted commas and the calculation still works fine. Not sure how?
declare @x varchar(20)
declare @y int
select @x='5'
select @y=6
select @x+@y
SQL Server -- and all other databases -- convert values among types when the need arises.
In this case, you have + which can be either string concatenation or number addition. Because one argument is an integer, it is interpreted as addition, and SQL Server attempts to convert the string to a number.
If the string cannot be converted, then you will get an error.
I would advise you to do your best to avoid such implicit conversions. Use the correct type when defining values. If you need to store other types in a string, use cast()/convert() . . . or better yet, try_cast()/try_convert():
try_convert(int, @x) + @y
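For example (a small sketch; TRY_CONVERT returns NULL instead of raising an error when the conversion fails):
select try_convert(int, '5') + 6   -- 11
select try_convert(int, 'abc') + 6 -- NULL, no error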
A varchar can contain any character from the code page of the collation you are using. For the purposes of this answer, I'm going to assume you're using something like the collation SQL_Latin1_General_CP1_CI_AS (which doesn't have any "international" characters, like Kanji, Hiragana, etc).
You first declare the variable @x as a varchar(20) and put the varchar value '5' in it. This is not an int, it's a varchar. This is an important distinction, as a varchar and a numerical data type (like an int) behave very differently. For example, '10' has a lower value than '2', whereas the opposite is true for 10 and 2. (This is one reason why using the correct data type is always important.)
Then the second variable you have is @y, which is an int and has the value 6.
Then you have your expression SELECT @x+@y;. This has 2 parts to it. Firstly, as you have 2 datatypes, Data Type Precedence comes into play. int has a higher precedence than varchar, so @x is implicitly converted to an int. Then the expression is calculated, using + as an addition operator (not a concatenation operator). The expression is therefore effectively derived like this:
@x + @y = '5' + 6 = CONVERT(int,'5') + 6 = 5 + 6 = 11
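For contrast, when both operands are character types, + is string concatenation rather than addition (a quick sketch):
declare @x varchar(20) = '5'
select @x + '6' -- '56': both operands are varchar, so + concatenates
select @x + 6   -- 11: @x is implicitly converted to int, so + adds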
SQL Server uses the following precedence order for data types (a quick check follows the list):
user-defined data types (highest)
sql_variant
xml
datetimeoffset
datetime2
datetime
smalldatetime
date
time
float
real
decimal
money
smallmoney
bigint
int
smallint
tinyint
bit
ntext
text
image
timestamp
uniqueidentifier
nvarchar (including nvarchar(max) )
nchar
varchar (including varchar(max) )
char
varbinary (including varbinary(max) )
binary (lowest)
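One way to see the precedence rule in action (a minimal check with SQL_VARIANT_PROPERTY, as used in the first answer on this page):
SELECT SQL_VARIANT_PROPERTY('5' + 6, 'BaseType')               -- int: varchar is converted to the higher-precedence int
SELECT SQL_VARIANT_PROPERTY(1 + CAST(1 AS bigint), 'BaseType') -- bigint: int is converted to the higher-precedence bigint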

difference between varchar and int when doing max()?

Is there a technical difference between these two, when table.column is a varchar or int? When would the results not be the same? I tried a few examples of digit values (e.g. 1, '1', etc.) and results are the same.
-- table.column is int:
select
MAX(table.column) as m
-- table.column is varchar:
select
MAX(CAST(table.column as int)) as m
Both results are the same, because after casting both values end up as int.
If a string that actually holds an integer is converted to int, there is no difference. But if you convert a string holding anything else to int, it gives an error.
declare @str nvarchar(100)
set @str = 'sdsfd fdf fd dfsf'
select cast(@str as int)
Msg 245, Level 16, State 1, Line 3486
Conversion failed when converting the nvarchar value 'sdsfd fdf fd dfsf' to data type int.
For strings that can be converted to integers there is no difference in aggregate functions.
declare @dd table ( id varchar(max), id2 int)
insert into @dd ( id, id2 )
values ( '1', 1 )
, ( '99', 99 )
, ( '52', 52 )
select max(id2) as col, max(cast(id as int)) as col1 from @dd
Result
------------------
col col1
99 99
Thanks to @Zohar for the reminder.
Although it is better to use TRY_CAST for type conversion in SQL Server 2012 and later.
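A variant of the query above (a sketch, reusing the @dd table variable in the same batch); TRY_CAST returns NULL for values that cannot be converted instead of failing the whole query:
select max(id2) as col, max(try_cast(id as int)) as col1 from @dd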
Is there a technical difference between these two
If there's an index on column, the server can use it to cheaply compute the MAX for the "column is an int" version of your query. Not for the other.
When would the results not be the same?
There shouldn't be a difference in the result [1], but as I say above, there may be a considerable difference in the amount of work the server has to do to compute it. Even without the index, all of those conversions require additional code to run.
[1] Assuming all of the strings are convertible to ints.
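To illustrate the indexing point (a sketch with hypothetical table, column and index names, not from the original question):
create table dbo.t (col_int int, col_str varchar(10));
create index ix_t_col_int on dbo.t (col_int);
-- MAX over the indexed int column can be answered from the top of the index
select max(col_int) from dbo.t;
-- MAX over a cast expression has to read and convert every row
select max(cast(col_str as int)) from dbo.t;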

Nvarchar working with logical operators?

Just need your help here.
I have a table T
A (nvarchar) B()
--------------------------
'abcd'
'xyzxcz'
B should hold the length of the entries in A, for which I did
UPDATE T
SET B = LEN(A) -- I know LEN function returns int
But when I checked out the datatype of B using sp_help T, it showed column B as nvarchar.
What's going on?
select A
from T
where B > 100
also returned the correct output?
Why is nvarchar working with logical operators?
Please help.
Check https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-type-conversion-database-engine?view=sql-server-2017, where it says that data types are converted explicitly or implicitly when you move, compare or store a variable. In your case, you are comparing column B with 100, forcing SQL Server to implicitly convert it to an integer type (check the picture about conversions on the same page). As a proof, try altering a row to put some text in column B; after repeating your SELECT with B > 100, SQL Server will throw a conversion error trying to obtain an integer out of your text.
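A sketch of the test this answer suggests (using the T/A/B table from the question):
UPDATE T SET B = 'not a number' WHERE A = 'abcd'
SELECT A
FROM T
WHERE B > 100
-- fails with a conversion error (Msg 245), because 'not a number' cannot be implicitly converted to int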
It works because of implicit conversion between types.
Data type precedence
When an operator combines expressions of different data types, the data type with the lower precedence is first converted to the data type with the higher precedence. If the conversion isn't a supported implicit conversion, an error is returned.
Type precedence:
16. int
...
25. nvarchar (including nvarchar(max) )
In your example:
select A
from T
where B > 100
--nvarchar and int (B is implicitly casted to INT)
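If the column has to stay nvarchar, the comparison can be made explicit and safe with TRY_CAST (a sketch; rows whose B is not numeric return NULL and are filtered out instead of raising an error):
select A
from T
where try_cast(B as int) > 100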
When adding a column to a table in SSMS without specifying a datatype, a "default" datatype is chosen; for me on 2017 Developer it's nchar(10). If you want it to be int, define the column with a datatype of int. In T-SQL it'd be
create table T (
A nvarchar --for me, nvarchar without a size gives nvarchar(1); sp_help reports its length as 2 (bytes)
,B int
);
sp_help T
--to make a specific size; the largest for nvarchar is 4000, or max... max is the replacement for the old ntext.
create table Tmax (
A nvarchar(max)
,B int
);
--understanding nvarchar and varchar for len() and datalength()
select
datalength(N'wibble') datalength_nvarchar -- nvarchar is unicode and uses 2 bytes per char, so 12
,datalength('wibble') datalength_varchar -- varchar uses 1 byte per so 6
,len(N'wibble') len_nvarchar -- count of chars, so 6
,len('wibble') len_varchar -- count of char so still 6
Hope this helps; the question is a bit discombobulated.

Using parameters with max len and checking for null values

I'm trying to get a count for a column to see the max number of characters. I'm getting a warning; I know it doesn't affect anything, but it's more of an annoyance and I would like to eliminate it.
My example is as follows:
Declare @Countthis varchar (255)
select @Countthis = max(len(col1)) from #temp
Print '------- This is the largest count for this column-----' + @Countthis
The warning I receive is:
Warning: Null value is eliminated by an aggregate or other SET operation.
I tried using a CASE statement but I couldn't figure it out. If the value is NULL, just ignore it.
Is this possible?
You can use
Declare @Countthis varchar (255)
select @Countthis = max(len(IsNull(col1,''))) from #temp
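Another option (a sketch) is to filter the NULLs out before the aggregate, which avoids the warning without turning NULLs into zero-length values:
Declare @Countthis varchar (255)
select @Countthis = max(len(col1)) from #temp where col1 is not null
Print '------- This is the largest count for this column-----' + @Countthis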

casting to Integer is not working properly?

Execute these and check the results. Why is it so?
declare @a decimal(8,3) =235.363
declare @b int =1
select case @b
when 1 then cast(@a as int)
when 2 then CAST(@a as decimal(8,3))
end
Result : 235.000
declare @a decimal(8,3) =235.363
declare @b int =1
select case @b
when 1 then cast(@a as int)
--when 2 then CAST(@a as decimal(8,3))
end
Result : 235
declare @a decimal(8,3) =235.363
declare @b int =1
select case @b
when 1 then cast(@a as tinyint)
when 2 then CAST(@a as float)
end
Result : 235
What you see is not what you get.
For the column type, SQL Server picks the correct, wider type (float over tinyint, decimal over int). You can verify that by doing SELECT INTO instead of just SELECT.
It's just the display rules that are different.
When the selected column type is float, you don't see the trailing .000 when there is no fractional part.
For decimal with an explicit precision and scale, such as decimal(8,3), you will see the trailing .000 even if there's no fractional part. If you remove the specifier and only leave decimal as the column type, the .000 will disappear.
All that does not affect the actual column type, which is always the widest one.
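One way to confirm the actual type of the CASE expression (a small check using SQL_VARIANT_PROPERTY, as in the first question on this page):
declare @a decimal(8,3) =235.363
declare @b int =1
select SQL_VARIANT_PROPERTY(case @b
when 1 then cast(@a as int)
when 2 then CAST(@a as decimal(8,3))
end, 'BaseType') -- decimal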
This behaviour is documented in the BOL entry for CASE
Return Types
Returns the highest precedence type from the set of types in
result_expressions and the optional else_result_expression. For more
information, see Data Type Precedence (Transact-SQL).
If you follow the link to data type precedence, you will see that float has higher precedence than decimal, which in turn has higher precedence than tinyint, so this behaviour is expected.
Probably the CASE will cast all the options to the bigger type.
From MSDN:
The data types of input_expression and each when_expression must be
the same or must be an implicit conversion.
http://msdn.microsoft.com/en-us/library/ms181765.aspx
Casting to Integer is not working properly.
Your statement is not correct!
In a CASE statement you can only return one data type, so with your statement you can return either int or decimal(8,3). Since your CASE statement includes decimal(8,3), the int result is implicitly converted to decimal. Please see the examples below; always try to use the same return type in a CASE statement to get the proper, expected result. Thanks.
1.
select case @b
when 1 then CAST(@a as int) -- return type INT
when 2 then CAST(@a as int) -- return type INT
end
2.
select case @b
when 1 then CAST(@a as int) -- return type INT, then converted to decimal(8,3)
when 2 then CAST(@a as decimal(8,3)) -- return type decimal(8,3)
end