I have always used CONVERT (and not CAST), as I assumed the former would recognize types and do an appropriate conversion, whereas the latter simply tries to interpret a stream of bytes differently. But I just learned that CAST = CONVERT for most purposes!
But can someone explain why the following happens? CAST produces different results for the same value (101) when it is represented differently: decimal (101) versus hexadecimal (0x65).
select cast(0x65 as varchar(5))
-----
e
select cast(101 as varchar(5))
-----
101
EDIT:
The query was run from SSMS.
I assume you are using SQL Server (where the confusion between the two functions would make sense).
That is simple. 0x defines a binary constant. 101 is a numeric constant. These are not the same thing.
When you convert a binary constant to a string, SQL Server interprets the bytes as characters. When you convert a number to a string, SQL Server converts the decimal representation.
You can learn more about constants in the documentation.
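A quick way to see this (a minimal sketch; the round trip through binary(1) is only for illustration):
select cast(0x65 as varchar(5))                    -- 'e'  : the byte 0x65 is read as the ASCII character 'e'
select cast(101 as varchar(5))                     -- '101': the number is written out in decimal digits
select cast(cast(101 as binary(1)) as varchar(5))  -- 'e'  : 101 truncated to one byte is 0x65, so this gives 'e' again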
You are trying to convert two completely different kinds of values. As Gordon mentioned, one is a binary representation while the other is numeric.
But you need to note that there are some differences between CAST and CONVERT:
CAST is part of the ANSI-SQL specification; whereas, CONVERT is not. In fact, CONVERT is Microsoft SQL Server implementation specific.
CONVERT differs in that it accepts an optional style parameter which is used for formatting.
Read more here: https://www.essentialsql.com/what-is-the-difference-between-cast-and-convert/
Related
I recently came across a weird case in an ETL process where the results seem unpredictable to me. I read Difference between numeric, float and decimal in SQL Server, but I don't think it's an overflow or decimal precision issue.
Scenario:
Source table "test" in SQL Server 2008 SP3, column a declared as numeric (38,6).
The result is cast first to real, and then to int. The issue doesn't occur if there is a direct cast from numeric to int.
Results of:
SELECT a,CAST(a as real) as real_a,CAST(CAST(a as real) as int) as int_a FROM test;
a: 778881838.810000
real_a: 7.78819E+08
int_a: 778881856
The same experiment, run in SQL Server 2017 (sql fiddle) gives this:
http://sqlfiddle.com/#!18/45aca/2
a: 778881838.81
real_a: 778881860
int_a: 778881856
I can (vaguely) understand the ..19E+08 case, but why is there a +18 difference in the double conversion case? The number seems completely arbitrary to me.
OK, first of all, the result in SQL Server 2017 for real_a is not 778881860. It is 778881856, exactly, just as in SQL Server 2008. How this floating-point value is presented by the client is another matter -- Management Studio shows me 7.788819E+08, sqlcmd produces 7.7888186E+8, and apparently SQL Fiddle uses another library altogether (one I would personally have issue with, seeing as how it obscures significant figures!)
This value is not arbitrary. REAL is a single-precision floating point type that cannot represent 778881838.81 exactly. The closest representable value is 778881856, hence your result (the next lower representable value is 778881792). Without casting to INT, you can see this value using
SELECT STR(CONVERT(REAL, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881856.0000000000000000
Your use of the term "double" makes me think you're confusing this with FLOAT, which is the double-precision floating point type. FLOAT cannot represent this value exactly either, but it comes much closer:
SELECT STR(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 40, 16)
778881838.8099999400000000
Converting this value to an INT yields the (truncated) 778881838. (This truncation is documented and does not happen for conversions to NUMERIC; you'll need to ROUND first before converting if you'd prefer 778881839 instead.)
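For example, a hedged sketch of the ROUND-then-convert approach mentioned above:
SELECT CONVERT(INT, ROUND(CONVERT(FLOAT, CONVERT(NUMERIC(38, 6), 778881838.810000)), 0))
778881839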
An easy example for other people who want to test locally:
DECLARE @test numeric(38,6) = '778881838.810000'
SELECT @test AS [Original], CAST(@test AS real) AS real_a, CAST(CAST(@test AS real) AS int) AS int_a;
Original real_a int_a
778881838.810000 7.788819E+08 778881856
You would likely need someone from Microsoft to explain the way it works inside the SQL engine (and certainly to know why they made that decision), but I'll take a stab at the reasoning:
If the output is in scientific notation on the first cast and then needs to be cast to an int, it sets the int to the minimum value that would result in that scientific notation. It ends in 6 instead of 5 because rounding on 5 does not consistently round up in all cases (alternating tie-breaking, for example).
But, no matter the reason, if precision is important, you should explicitly cast to a numeric data type with a defined precision.
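A small sketch of that advice, reusing the value from the question (the variable name is mine):
DECLARE @a numeric(38, 6) = 778881838.810000;
SELECT CAST(@a AS int);                -- 778881838: a direct numeric-to-int cast just truncates the fraction
SELECT CAST(CAST(@a AS real) AS int);  -- 778881856: the detour through single-precision real loses precision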
When you want to convert from float or real to character data, using the STR string function is usually more useful than CAST( ). This is because STR enables more control over formatting. For more information, see STR (Transact-SQL) and Functions (Transact-SQL).
Please see the links below:
Use STR instead of real
STR example
Use the query below:
SELECT a, STR(a, 38, 6) AS real_a, CAST(CAST(a AS real) AS int) AS int_a FROM test;
Please let me know if you find any issue.
I have a 64-bit integer field in my Postgres database, which is populated with 64 bit integer numbers. (Non) coincidentally, those numbers are actually 8-chars strings in ASCII format, little endian. For example, a number 5208208757389214273 is a numeric representation of a string "ABCDEFGH": it is 0x4847464544434241 in hex, where 0x41 is A, 0x42 is B, 0x43 is C and so forth.
I would like to convert those numbers purely for display purposes - i.e. find a way to leave them as numbers in the database, but be able to see them as strings when querying. Is there any way to do it in SQL? If not in SQL, is there anything I can do on the server side (install extensions, stored procedures, anything at all) which would allow this? This problem is trivially solvable with any script or programming language, but I do not know how to solve it with SQL.
P.S. And just one more time for some of the trigger-happy, duplicate-hammer-wielding bunch - this is not a question of translating a number like 5208208757389214273 to the string "5208208757389214273" (we have a lot of answers on how to do that, but that is not what I am looking for).
Use to_hex() to get a hexadecimal representation of the number. Then use decode() to turn it into a bytea. (Unfortunately I did not find any direct way from bigint to bytea.) Cast that to text and reverse() it, because of the endianness.
reverse(decode(to_hex(5208208757389214273), 'hex')::text)
ABCDEFGH
The bytea_output must be set to 'escape' for this to work properly -- use SET bytea_output = 'escape';.
(Tested on versions 9.4 and 9.6.)
An alternative way to achieve the same result without using SET is the following:
select reverse(encode(decode(to_hex(5208208757389214273),'hex'),'escape'))
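If you want this packaged for display purposes, a minimal sketch of a wrapper function (the name bigint_to_ascii is hypothetical; lpad guards against to_hex() dropping a leading zero, which would leave decode() with an odd number of hex digits):
CREATE FUNCTION bigint_to_ascii(n bigint) RETURNS text AS $$
  -- pad to 16 hex digits, decode to bytea, render as text, then reverse to undo the little-endian byte order
  SELECT reverse(encode(decode(lpad(to_hex(n), 16, '0'), 'hex'), 'escape'));
$$ LANGUAGE sql IMMUTABLE;

SELECT bigint_to_ascii(5208208757389214273);  -- ABCDEFGH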
I have a hex number in string form, for example, 0x47423f34b640c3eb6e2a18a559322d68. When I try to convert this to an int, for comparison with another value that is an int, I run up against the BIGINT size limit. I've tried various methods such as decimal, etc., but I don't seem to be able to get past the 64-bit size limit.
Ideally, I'd like a conversion that converts from a hex string, to an int string, so I can bypass the int limits. However, doing this without using intermediate conversions (to values that are out of range) is causing me some problems.
An alternative solution would be to be able to convert from a decimal string (the other comparison value, which in the example given, is 94719161261466374640962917648041127272), to hex, or to binary, for comparison. I already have a routine that can convert arbitrary length hex strings to binary without overflowing intermediate variables, but I haven't had much luck doing decimal strings to binary without using intermediate variables, so I can't currently compare them that way either.
I already have a c# based solution for this conversion, so I could use SQL CLR or other similar solutions, but I'd much prefer a native SQL method for doing this conversion (decimal string to hex string or binary string, or hex string to decimal string or binary string).
The SQL Server 2008 release updated the CONVERT() function to be able to convert hexadecimal values:
select convert(bigint, convert (varbinary(8), '0x0000010d1858798c', 1))
Result:
1155754654092 (decimal) ( == 0x0000010d1858798c )
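If the hex string is wider than 64 bits, as with the 0x47423f34... example from the question, the same style-1 trick can at least parse it into a varbinary for byte-wise comparison (a sketch; converting the decimal-string side still needs separate handling):
select convert(varbinary(16), '0x47423f34b640c3eb6e2a18a559322d68', 1)
0x47423F34B640C3EB6E2A18A559322D68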
I am joining a field that has single-digit numbers formatted with a leading 0 to another that does not have leading 0's. When I realized this I tested my query out, only to find that it was actually working perfectly. Then I realized what I'd done... I had joined an nvarchar field to an int field. I would have thought SQL Server would have given me an error for this, but apparently it converts the character field to an int field for me.
I realize this is probably not a good practice and I plan to explicitly cast it myself now, but I'm just curious if there are rules for how SQL decides which field to cast in these situations. What's to keep it from casting the int field to a character type instead (in which case my query would no longer work properly)?
There are rules indeed.
CAST and CONVERT (Transact-SQL) to learn what can be converted to what ("Implicit Conversions" section).
Data Type Precedence (Transact-SQL) to learn what will be converted to what unless specifically asked.
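As a concrete illustration of precedence (table and column names here are hypothetical): int outranks nvarchar, so in a join between the two the character side is implicitly converted to int, which is why '01' matches 1.
SELECT *
FROM dbo.Orders o                -- o.StoreCode is int
JOIN dbo.Stores s                -- s.StoreCode is nvarchar(10) with leading zeros
  ON o.StoreCode = s.StoreCode;  -- s.StoreCode is implicitly converted to int;
                                 -- this fails with a conversion error if any s.StoreCode is not numeric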
What is the general guidance on when you should use CAST versus CONVERT? Is there any performance issues related to choosing one versus the other? Is one closer to ANSI-SQL?
CONVERT is SQL Server specific, CAST is ANSI.
CONVERT is more flexible in that you can format dates etc. Other than that, they are pretty much the same. If you don't care about the extended features, use CAST.
EDIT:
As noted by @beruic and @C-F in the comments below, there is possible loss of precision when an implicit conversion is used (that is, one where you use neither CAST nor CONVERT). For further information, see CAST and CONVERT and in particular this graphic: SQL Server Data Type Conversion Chart. With this extra information, the original advice still remains the same: use CAST where possible.
Convert has a style parameter for date to string conversions.
http://msdn.microsoft.com/en-us/library/ms187928.aspx
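For instance (a hedged example; styles 101 and 112 are the documented US and ISO formats):
SELECT CONVERT(varchar(10), GETDATE(), 101);  -- mm/dd/yyyy, e.g. 01/31/2024
SELECT CONVERT(varchar(8),  GETDATE(), 112);  -- yyyymmdd,   e.g. 20240131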
To expand on the above answer copied by Shakti, I have actually been able to measure a performance difference between the two functions.
I was testing performance of variations of the solution to this question and found that the standard deviation and maximum runtimes were larger when using CAST.
*Times in milliseconds, rounded to nearest 1/300th of a second as per the precision of the DateTime type
CAST is standard SQL, but CONVERT is specific to the T-SQL dialect. CONVERT has a small advantage in the case of datetime.
With CAST, you indicate the expression and the target type; with CONVERT, there’s a third argument representing the style for the conversion, which is supported for some conversions, like between character strings and date and time values. For example, CONVERT(DATE, '1/2/2012', 101) converts the literal character string to DATE using style 101 representing the United States standard.
Something no one seems to have noted yet is readability. Having…
CONVERT(SomeType,
SomeReallyLongExpression
+ ThatMayEvenSpan
+ MultipleLines
)
…may be easier to understand than…
CAST(SomeReallyLongExpression
+ ThatMayEvenSpan
+ MultipleLines
AS SomeType
)
CAST follows the ANSI standard, so if portability is a concern it will work on other platforms. CONVERT is specific to SQL Server, but it is a very powerful function: you can specify different styles for dates.
You should also not use CAST to get the text of a hash value. CAST(HASHBYTES('...') AS VARCHAR(32)) is not the same as CONVERT(VARCHAR(32), HASHBYTES('...'), 2). Without the last parameter, the result would be the same as CAST, but not readable text. As far as I know, you cannot specify that last parameter in CAST.
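A small sketch of the difference (MD5 and the input 'abc' are chosen arbitrarily):
SELECT CAST(HASHBYTES('MD5', 'abc') AS varchar(32));      -- the 16 raw bytes reinterpreted as characters: unreadable
SELECT CONVERT(varchar(32), HASHBYTES('MD5', 'abc'), 2);  -- style 2: the hash rendered as readable hex digits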